content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Bob is dead INSIDE his apartment. There is a puddle of water, a ball, and some glass... How did Bob die?
A man in a restaurant asked a waiter for a juice glass, a dinner plate, water, a match, and a lemon wedge. The man poured enough water onto the plate to cover it.
"If you can get the water on the plate into this glass without touching or moving this plate, I will give you Rs1000," the man said. "You can use the match and lemon to do this."
A few minutes later, the waiter walked away with Rs1000 in his pocket.
How did the waiter get the water into the glass? | {"url":"https://www.queryhome.com/puzzle/31001/dead-inside-apartment-there-puddle-water-ball-some-glass-die","timestamp":"2024-11-02T11:00:00Z","content_type":"text/html","content_length":"111257","record_id":"<urn:uuid:ca3fb14b-5e45-4aaa-a243-6ceaa4a3477b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00032.warc.gz"} |
CT.03 / Dynamic Behaviour of Simple Processes +
Dynamic behavior of simple processes refers to how these processes respond to changes in input or disturbance variables over time. The dynamic behavior can be described mathematically using
differential equations or transfer functions, and can be visualized using time-domain or frequency-domain analysis.
Some examples of simple processes with different dynamic behaviors are:
1. First-order systems: A first-order system responds to a change in input with an exponential curve. The output initially increases or decreases rapidly and then gradually approaches a steady-state
value. Examples of first-order systems include tanks, pipes, and filters.
2. Second-order systems: A second-order system responds in a way similar to a first-order system, but the response is more complex because oscillations may be present. If the system is underdamped, the output overshoots the steady-state value before settling towards it, and the oscillations decay slowly or quickly depending on the system’s damping coefficient. Examples of second-order systems include spring–mass–damper assemblies and pendulums.
3. Integrating systems: An integrating system responds to a sustained change in input with a linear increase or decrease (a ramp) in output over time. The output does not settle to a steady-state value while a constant non-zero input is applied; it holds a constant value only once the input returns to zero. Examples of integrating systems include level control systems and integrators in electrical circuits.
4. Dead-time systems: A dead-time system has a delay between a change in input and a change in output. The output starts changing only after a certain time delay, and this delay can cause
instability or oscillations in the system. Examples of dead-time systems include transportation systems and communication networks.
Understanding the dynamic behavior of a process is essential for designing effective control strategies. It allows for the selection of appropriate controllers, tuning of controller parameters, and
prediction of system responses to changes in input or disturbance variables.
Initially, simple processes without a controller are considered and their open-loop behaviour is studied. Let us consider the response of the system to two types of inputs
• A unit step:
• A Dirac unit impulse:
Figure 1.1 [Open-loop block diagram of a process].
The Laplace transform
provided that the product
The response
First-Order Systems
A first-order differential equation of the form
describes a first-order system. The steady-state gain, or asymptotic gain, of the process is denoted by
The block diagram for a first-order system is shown in Figure 1.2.
Figure 1.2 [Block diagram of a first-order system].
When the input
Using the transfer function, the Laplace transform of the output
The forced and natural responses are given by
The asymptotic output, when
The time constant
Many real physical systems exhibit first-order dynamics, such as systems storing mass, energy or momentum, or systems exhibiting resistance to the flow of mass, energy or momentum.
Figure 1.3 [Response of a first-order system (
Table 1.1 [Response of a first-order system to a unit step function expressed in percentage of the asymptotic value].
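To make this concrete, a minimal simulation sketch is given below. It assumes the standard first-order step response y(t) = K·M·(1 − e^(−t/τ)) with illustrative values K = 2, τ = 5 s and step magnitude M = 1; the symbols and numbers are assumptions chosen for the example, not values taken from the text.

```python
import numpy as np

K, tau, M = 2.0, 5.0, 1.0          # assumed gain, time constant [s], step magnitude
t = np.linspace(0.0, 5 * tau, 501)

# First-order step response: y(t) = K * M * (1 - exp(-t / tau))
y = K * M * (1.0 - np.exp(-t / tau))

# Fraction of the asymptotic value K*M reached after 1..5 time constants
for n in range(1, 6):
    print(f"t = {n} tau : {100 * (1.0 - np.exp(-n)):.1f} % of the asymptotic value")
```

The printed percentages (63.2, 86.5, 95.0, 98.2, 99.3) are the values normally tabulated in tables such as Table 1.1.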
Integrating Systems
Processes that only contain the first-order derivative of
The corresponding transfer function is:
When a step input of magnitude
In the time domain, the response of the system
Pure capacitive processes are so-called because they accumulate energy, mass, or electrical charge. A surge tank is an example of a pure capacitive process.
Figure 1.4 [Response of a capacitive system (
Second-Order Systems
A second-order system is described by a second-order differential equation written in the classical form as
with the corresponding transfer function
The notions of natural period of oscillation and of damping factor are related to the damped or undamped oscillators. For
The transfer function of a second-order system is sometimes written as
Several real physical processes exhibit second-order dynamics, among them are:
• Two first-order systems in series.
• Intrinsic second-order systems, e.g. mechanical systems having an acceleration.
• Feedback or closed-loop transfer function of a first-order process with a PI controller.
Note that the transfer function
If the natural period of oscillation
Figure 1.5 [Normalised response of a second-order system to a unit step function for different values of the damping coefficient
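As a numerical companion to Figure 1.5, the sketch below integrates the classical second-order form τ²y″ + 2ζτy′ + y = K·u for a unit step and several damping coefficients ζ, and reports the peak overshoot. The parameter values and the simple explicit Euler scheme are assumptions made for the example.

```python
import numpy as np

def second_order_step(zeta, K=1.0, tau=1.0, t_end=20.0, dt=0.001):
    """Unit-step response of tau^2 y'' + 2 zeta tau y' + y = K u (explicit Euler)."""
    n = int(t_end / dt)
    y, dy = 0.0, 0.0
    out = np.empty(n)
    for i in range(n):
        ddy = (K - 2.0 * zeta * tau * dy - y) / tau**2
        dy += ddy * dt
        y += dy * dt
        out[i] = y
    return out

for zeta in (0.2, 0.5, 1.0, 2.0):
    y = second_order_step(zeta)
    overshoot = max(y.max() - 1.0, 0.0)      # asymptotic value is K = 1
    print(f"zeta = {zeta}: peak overshoot = {100 * overshoot:.1f} %")
```

Underdamped cases (ζ < 1) show the overshoot and oscillations discussed above, while ζ ≥ 1 gives a monotonic approach to the asymptotic value.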
If the input is a step function with magnitude
which can be decomposed into
The overall response consists of the forced and the natural responses
The forced response is equal to
and the overall response is
The forced response is constant and equal to
Figure 1.6 [Response of a second-order system to a unit step input].
With reference to the underdamped response of Fig. 1.6, the following terms are defined:
• Decay ratio =
• Natural period of oscillation is defined for a system with a damping coefficient
• Actual period of oscillation
• Rise time: this is the time necessary to reach the asymptotic value for the first time
• First peak reach time: the time necessary for the response to reach the first peak
• Settling time: time necessary for the response to remain in an interval between
and the envelope of the undamped sinusoidal response is
Figure 1.7 [Normalised amplitude ratio for a sinusoidal input with varying damping coefficient
The time domain response of a second-order system subjected to a sinusoidal input:
and the normalised amplitude ratio is equal to
which is maximum at a frequency
The normalised amplitude ratio has a maximum equal to
Large oscillations are not desired, therefore small damping coefficients | {"url":"http://prizm.studio/knowledgebase/dynamic-behaviour-of-simple-processes/","timestamp":"2024-11-05T05:25:52Z","content_type":"text/html","content_length":"166145","record_id":"<urn:uuid:c4d4a51b-4ef1-42bf-8bd6-0e871bc4fc30>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00497.warc.gz"} |
PurpleTutor - Online Coding Classes for Kids Ages 6-16 | Data Science using Python - Best Course & Certification
Data Science using Python
What is data science using Python?
The Python programming language has been very popular and widely used for all kinds of applications and research. Python has also become one of the most preferred languages for data scientists in the world
today. According to SlashData, 69% of machine learning developers and data scientists are applying Data Science using Python (compared to 24% of them using R). The reason being, Python provides a
large number of libraries to make your Data Science processes easier and more effective.
PurpleTutor offers a great Python for Data Science course, for school as well as junior college students. Our Python for Data science course introduces students to the concepts of Data Science using
Python libraries. In our course, students will explore Python libraries such as NumPy, Pandas, and Matplotlib. Students learn how to use them to perform data analysis and machine learning tasks and
gain exposure to Data Science using Python.
What will you learn in our Python for Data Science course?
Our Python for Data Science course introduces the student to the basics of Data Science and how they can be applied to real-world problems.
We offer the Python for Data Science course to students falling in the following age groups:
Age group: 9-11 years
In this age-group, students will learn –
• Fundamentals of Python programming language, including data types, variables, loops, functions
• How to create and use Google Sheets for storing and summarizing data.
• Data handling and cleaning using libraries like Pandas.
• File handling : management of csv files in Data Science with Python.
• Data visualization using Pandas.
• Handling and solving real-world data problems.
Age group: 12-18 years
In the above courses students will –
• Review fundamentals of Python programming language, including data types, variables, loops, functions
• Explore and apply Python for Data Science libraries which are useful in Data Science such as the math, random, statistics libraries.
• Understand and apply the concepts of Object-Oriented Programming.
• Understand and apply descriptive and inferential statistics.
• Understand and use file handling: management of csv files in Data Science with Python.
• Understand the concept of big data.
• Understand Data handling and cleaning using libraries like Pandas and NumPy.
• Apply Data visualization using libraries like Matplotlib.
• Apply Data pre-processing techniques.
• Understand handling and solving of real-world data problems.
The goal of our Data Science with Python course is to equip students with the skills and knowledge required to perform end-to-end data analysis and modelling tasks. The course complexity varies with
each age group.
What are the benefits of doing our Data Science using Python course?
There are several benefits of doing our Data Science using Python course for kids in the age groups of 9-18 yrs.
• Problem Solving Skills: Taking up our Python for Data Science course teaches kids to think logically and systematically, helping them develop problem-solving skills.
• Coding Practice: The Python for Data Science course we offer provides plenty of coding practice in Python for students as they complete the assigned tasks. This enables them to perfect their
Python coding skills.
• Early Exposure to Cutting-Edge Technology: Data Science using Python is a rapidly evolving field and learning it at an early age can give students a competitive advantage in the future.
• Understanding of Real-World Applications: The course has a wide range of applications, and children can learn about these applications and how they can be used to solve real-world problems.
• Building a Strong Foundation for Future Learning: Data Science concepts build upon each other, so learning them at a young age can help lay a strong foundation for future learning and career growth.
• Improved Critical Thinking and Analytical Skills: Data Science involves analyzing data and drawing conclusions, helping children improve their critical thinking and analytical skills.
In addition, for college students who are weighing career options, selecting our Python for Data Science course could be beneficial in the following ways –
• Career opportunities: Data science using Python is a rapidly growing field with high demand for skilled professionals. A course in Data Science using Python can open up a variety of career paths
in industries such as finance, healthcare, technology, and more.
• Interdisciplinary Skills: Data Science requires a combination of technical, mathematical, and business skills, making it a field that draws from multiple disciplines. Doing a Python for Data
Science course will help college students to understand and excel in related subjects like math, science and statistics.
Overall, our Data Science course can provide a solid foundation for a rewarding and challenging career and the skills to work with data in a meaningful and impactful way.
Course Content
Our Python for Data Science course has been created especially for students from ages of 9 years to 18 years keeping in mind the age-appropriate topics:
Age: 9-11 years
Name of the course – Introduction to Data Science – Young Learners (YL)
While pursuing the above Data Science using Python course, students will explore and understand different types of data and their real-life applications. They will be introduced to the working of
Google Sheets and will learn how to run basic math operations to analyse data and represent it using different types of charts and infographics. During the data analysis module, they will learn the
Python Pandas library commands to read data from the CSV file and create dataframes to analyse data.
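As an illustration of the kind of task described above (not part of the curriculum itself), reading a CSV file into a Pandas DataFrame and summarising it can look like this; the file and column names are made up for the example.

```python
import pandas as pd

df = pd.read_csv("class_survey.csv")   # hypothetical file with columns "name", "age", "score"

print(df.head())                       # first few rows of the DataFrame
print(df["score"].mean())              # a simple summary statistic
df["score"].plot(kind="hist")          # a quick chart of the data
```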
You can explore the content for the Data Science – Python course (YL) here –
Session Concept
1 Introduction to Data and Data Science
2 Introduction to Google sheets
4 Using formulae in Google Sheets
6 Formative Assessment
7 Event Planning
8 Data Visualization
10 Data Representation
12 Data Visualization techniques
13 Data cleanup
15 Introduction to Infographics
16 Creating the Infographic
17 Formative Assessment
19 Introduction to Data Analysis & Python Basics
21 Introduction to Pandas Series
23 Introduction to pandas DataFrames
25 Introduction Pandas Statistical Functions
26 Working with Text Files and .csv Files in Python
28 Pandas Plotting
30 Formative Assessment
To download the detailed Data Science – Python course (YL) content, click here!
Age: 12-15 years
Name of the course – Data Science – Python for Early Achievers (EA)
While pursuing the above Data Science using Python course, students will explore and understand different types of data and their real-life applications. They will be introduced to the working of
Google Sheets and will learn how to use the Python Numpy module to analyse data.
Students will explore the Python Pandas library commands to create dataframes. Using Pandas, students will learn how to read data from the CSV file and use dataframes to analyse data.
Students will learn how to visually represent the data using the methods of the Python Matplotlib library. The data is represented using different types of charts.
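For instance, a bar chart of the kind mentioned above could be produced along the following lines (the data and column names are invented for the example):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({"subject": ["Math", "Science", "English"],
                   "marks": [78, 85, 69]})

df.plot(kind="bar", x="subject", y="marks", legend=False)
plt.ylabel("Marks")
plt.title("Marks by subject")
plt.show()
```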
You can explore the content for the Data Science – Python course(EA) here –
Session Concept
1 Introduction to Python packages
2 Using Python Packages : Pandas
3 Using Python packages – Matplotlib
4 Using Python packages – NumPy
5 Introduction to modules -the statistics module
6 The math module
8 The random module
10 Errors and Error handling
11 Formative Assessment
12 Introduction to Files
13 Working with text files
14 Working with Binary files
15 Classes and Objects
Principles of OOP
19 Storing state of objects using the Pickle module
20 Formative Assessment
21 Understanding data
22 Big Data
23 Statistical analysis of data – Terms and Plotting
24 Statistical analysis of data – Statistical Measures
25 Formative Assessment
26 Exploring the numpy package
27 Operations on numpy arrays
29 Working with file data in numpy
30 Statistical Methods in numpy
31 Exploring the Pandas package – Series
32 Operations on Pandas Dataframes
34 Filtering Dataframes
35 Data Cleaning
36 Formative Assessment
37 Matplotlib – Line Plot
38 Matplotlib – Pie Plot
39 Matplotlib-Bar plot and Histogram
41 Matplotlib-Scatter plot
44 Data Science Project
To download the detailed Data Science – Python course(EA) content, click here!
Age: 15+ years
Name of the course – Data Science – Python for Young Professionals(YP)
While pursuing the above Data Science using Python course, students will explore and understand different types of data and their real-life applications. They will learn how to use the Python Numpy
module to analyze data. Students will explore the Python Pandas library commands to create dataframes. Using Pandas, students will learn how to read data from the CSV file and use dataframes to
analyse data. Students will learn how to visually represent the data using the methods of the Python Matplotlib library. The data is represented using different types of charts.
You can explore the content for the Data Science – Python for Young Professionals (YP) course here:
Session Concept
1 Introduction to Python packages
2 Using Python Packages : Pandas
3 Using Python packages – Matplotlib
4 Using Python packages – NumPy
5 Introduction to modules -the statistics module
6 The math module
8 The random module
10 Errors and Error handling
11 Formative Assessment
12 Introduction to Files
13 Working with text files
14 Working with Binary files
15 Classes and Objects
17 Principles of OOP
19 Storing state of objects using the Pickle module
20 Formative Assessment
21 Understanding data
22 Big Data
23 Statistical analysis of data – Terms and Plotting
24 Statistical analysis of data – Statistical Measures
25 Formative Assessment
26 Exploring the numpy package
27 Operations on numpy arrays
29 Working with file data in numpy
30 Statistical Methods in numpy
31 Exploring the Pandas package – Series
32 Operations on Pandas Dataframes
34 Filtering Dataframes
35 Data Cleaning
36 Formative Assessment
37 Matplotlib – Line Plot
38 Matplotlib – Pie Plot
39 Matplotlib-Bar plot and Histogram
41 Matplotlib-Scatter plot
44 Data Science Project
To download the detailed Data Science – Python for Young Professionals (YP) course content, click here!
Course Duration and Certificate
The Introduction to Data Science with Python for Young Learners (YL: 9-11 years) course consists of 30 sessions of one hour each, therefore the total duration of this course is 30 hours.
The Data Science with Python for Early Achievers (EA: 12-15 years) course consists of 45 sessions of one hour each, with the total duration of the course being 45 hours.
The Data Science with Python for Young Professionals (YP: 15+ years) course consists of 45 sessions of one hour each, with the total duration of the course being 45 hours.
On completion of the course, a certificate is given to the student. The certificate recognises the skills the student learnt, and the level of mastery achieved.
Requirements for the course
• Students need to have the knowledge of core Python programming concepts such as data types, variables, loops, conditionals and functions. Using these concepts, they should be able to write Python
code to perform small tasks.
• It is necessary to have a laptop or computer with a webcam and a stable internet connection to take the course.
Frequently Asked Questions (FAQs)
1. Can I try a free class for coding?
A: Yes. We offer one free demo class. You can book the free class from the booking link.
2. Can I choose my own days and timings for the classes?
A: Yes. The days and timings of the classes are flexible. You can select any time and any day that suits your timetable.
3. How do I know if learning Data Science using Python is easy?
A: The teachers assess the student's level in the demo class and then suggest whether to go ahead with the course online.
4. Is there any certificate given on completion of the Python for data Science course online?
A: The student will get a certificate after completion of the course. The certificate recognises the skills the student learnt, and the level of mastery achieved.
5. What do you require for learning Data Science using Python from PurpleTutor?
A: It is necessary to have a laptop or computer with a webcam and a stable internet connection to take our Python for Data Science course online
6. Do you have assessments during the course?
A. Yes, we assess the student periodically during the progress of the classes and give feedback on the student’s performance. | {"url":"https://purpletutor.com/course/data-science-using-python/","timestamp":"2024-11-12T13:33:21Z","content_type":"text/html","content_length":"82432","record_id":"<urn:uuid:fa3a1d38-4512-4a7b-98b9-68726c5bf67c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00588.warc.gz"} |
Question ID - 57074 | SaraNextGen Top Answer
At what speed is the velocity head of water equal to the pressure head of 40 cm of Hg?
a) b) c) d)
Bernoulli’s equation for a flowing liquid can be written as
Dividing Eq. (i) by
In this expression
It is given that, | {"url":"https://www.saranextgen.com/homeworkhelp/doubts.php?id=57074","timestamp":"2024-11-08T14:39:19Z","content_type":"text/html","content_length":"17714","record_id":"<urn:uuid:98c373f0-e962-4db0-9136-0c91a0cd4eba>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00868.warc.gz"} |
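The derivation above has lost its equations in extraction; a worked sketch under standard assumptions (water as the flowing liquid, mercury with specific gravity 13.6, g ≈ 9.8 m/s²) goes as follows. Setting the velocity head equal to the pressure head expressed in metres of water gives v²/(2g) = 0.40 m × 13.6 = 5.44 m, so v = √(2 × 9.8 × 5.44) ≈ 10.3 m/s. The original answer options are not reproduced here.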
Forex Trading
True Strength Index
This lesson will cover the following
• Explanation and calculation
• How to interpret this indicator
• Trading signals, generated by the indicator
Developed by William Blau and described in Stocks & Commodities Magazine, the True Strength Index (TSI) is a momentum-based oscillator, which incorporates the leading characteristic of a differing
momentum calculation with the lagging characteristic of an averaging function. This creates an indicator, which captures the flow of price action and filters out the noise.
The TSI is calculated in three stages: double smoothed price change, double smoothed absolute price change and the final TSI.
The double smoothing method includes the following steps:
First, calculating the change in price from a given period to another,
Second, calculating a 25-period Exponential Moving Average, based on the price change,
Third, calculating a 13-period Exponential Moving Average, based on the 25-period EMA.
Double Smoothed Price Change:
1. Price Change = Current Price Level less Previous Price Level
2. First Smoothing = 25-period EMA, based on Price Change
3. Second Smoothing = 13-period EMA, based on 25-period EMA
Double Smoothed Absolute Price Change:
1. Absolute Price Change = Absolute Value of (Current Price less Previous Price)
2. First Smoothing = 25-period EMA, based on Absolute Price Change
3. Second Smoothing = 13-period EMA, based on 25-period EMA
True Strength Index formula:
TSI = (Double Smoothed Price Change / Double Smoothed Absolute Price Change) x 100
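A compact pandas version of the calculation described above might look like the sketch below. The column name `close` and the optional signal-line span are assumptions; the 25- and 13-period exponential smoothings follow the description. Treat it as an illustration, not a vetted trading tool.

```python
import pandas as pd

def true_strength_index(close: pd.Series, long_span: int = 25, short_span: int = 13) -> pd.Series:
    """Double-smoothed price change divided by double-smoothed absolute price change, times 100."""
    change = close.diff()

    smoothed = (change.ewm(span=long_span, adjust=False).mean()
                      .ewm(span=short_span, adjust=False).mean())
    abs_smoothed = (change.abs().ewm(span=long_span, adjust=False).mean()
                                .ewm(span=short_span, adjust=False).mean())

    return 100.0 * smoothed / abs_smoothed

# Signal line: an EMA of the TSI itself (the span here is only an example)
# tsi = true_strength_index(df["close"]); signal = tsi.ewm(span=7, adjust=False).mean()
```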
The TSI oscillates between positive and negative territory. If the TSI is positive, this implies bullish momentum. If the TSI is negative, this implies bearish momentum. There are several ways to use
the TSI as a signal provider:
First, crossing the zero line. A signal to buy is generated, when the TSI crosses above 0. A signal to sell is generated, when the TSI crosses below 0.
Second, crossovers between the TSI line and the signal line. The signal line represents an Exponential Moving Average of the TSI line. A signal to buy is generated, when the TSI line crosses above
the signal line. A signal to sell is generated, when the TSI line crosses below the signal line.
In order to reduce whipsaws, a trader may prefer to increase the settings for the TSI or the settings for the chart.
Third, taking advantage of overbought and oversold conditions. An overbought condition occurs, when the TSI is at or above its +25 level. An oversold condition occurs, when the TSI is at or below its
-25 level. Signals are generated when the TSI crosses these extremes. In case the TSI crosses below -25 and then moves back above it, a signal to buy is generated. In case the TSI crosses above +25
and then moves back below it, a signal to sell is generated.
A trader will be provided with faster signals and less lag, if he/she prefers to use shorter periods for the TSI. However, this increases the probability of whipsaws and unreliable signals.
The trader will get fewer whipsaws if he/she prefers to use longer periods for the TSI. However, signals will be lagged and the risk-to-reward ratio will be lower.
Re: [Inkscape-devel] Animation pages in wiki
7 May 2008 7 May '08
10:15 a.m.
On Wednesday, May 7, 2008, 6:48:59 AM, Jonathan-David wrote:
JDS> I don't know svg much, but one way further to animate object
JDS> properties, instead of using the builtin tweening svg capabilities,
JDS> is to have script drivers (like python drivers for Blender IPOs
JDS> (keyframes)): expressions that are shorter than a piece of code, and
JDS> return some value to use. For example, assign to the alpha of some
JDS> object the java script expression: sin(2*3.14*10*_current_frame_number),
JDS> where _current_frame_number is a variable representing the actual
JDS> frame being played in the running svg animation.
SVG isn't frame based (perhaps you are more familiar with SWF?) but a relative time-based expression would be one way to do this.
-- Chris Lilley mailto:chris@...157... Interaction Domain Leader W3C Graphics Activity Lead Co-Chair, W3C Hypertext CG | {"url":"https://lists.inkscape.org/hyperkitty/list/inkscape-devel@lists.inkscape.org/message/JPJPGJH4JHV7MEKVE5UDXZXY566IHNQA/","timestamp":"2024-11-10T22:58:17Z","content_type":"text/html","content_length":"13555","record_id":"<urn:uuid:92b11aaa-fec0-4249-af75-ff5d149d8df4>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00618.warc.gz"} |
Layouts (resize your windows)
Are you keyboard junkie like myself? Have a ton of finder windows open or trying to copy and paste from one document to another... or some other unorthodox reason for needing window management?
Yeah... me too. I've used several of the window management apps, they are all very cool and have a really cool feature set about them, but still can't justify the price? Yeah... me too.
I've converted this Alfred extension from http://projects.jga.me/layouts/#toc3 to a workflow that uses both Hotkeys and Keyword trigger to help manage the topmost window on your screen. Take a look at Greg Allen's write-up to learn how to use it. I only added a couple of hotkeys but feel free to edit to match your needs.
If you already own a window manager like SizeUp take a look at Carlos-Sz - SizeUp workflow
Edited by Ginfuru
Unfortunately, it still suffers from the same flaw that made me stop using it. The other developer clearly abandoned it, but maybe you’ll want to take on the task of fixing it. basically it does not
work properly when the dock is in any position but the bottom.
Personally, I use another app for window management, but if you want to take on this bug, you basically would just need to refactor the calculations depending on the dock’s position. I give some
details on how you can get the position, on the original repo (https://github.com/jgallen23/layouts/issues/4).
Unfortunately, it still suffers from the same flaw that made me stop using it. The other developer clearly abandoned it, but maybe you’ll want to take on the task of fixing it. basically it does
not work properly when the dock is in any position but the bottom.
Personally, I use another app for window management, but if you want to take on this bug, you basically would just need to refactor the calculations depending on the dock’s position. I give some
details on how you can get the position, on the original repo (https://github.com/jgallen23/layouts/issues/4).
Interesting... Oddly enough I've never personally had any issues. I use multiple monitors and my dock sits on the left side, but hidden. All the keyboard triggers work as you'd expect.
Interesting... Oddly enough I've never personally had any issues. I use multiple monitors and my dock sits on the left side, but hidden. All the keyboard triggers work as you'd expect.
That's probably why you haven’t noticed (because it’s hidden). If you notice (at least with the original script), if the dock is at the bottom and visible, resizes take that into account, but when
it’s at the sides, windows go behind it, which gets very annoying, very fast.
Edited by Vítor
thanks for this, i tried it out and have some issues when i disconnect my external monitor. it seems that it still uses the dimensions of my external monitor even when i disconnect it and am only
using my laptop screen. i tried the reset command but it did not help. any ideas?
thanks for this, i tried it out and have some issues when i disconnect my external monitor. it seems that it still uses the dimensions of my external monitor even when i disconnect it and am only
using my laptop screen. i tried the reset command but it did not help. any ideas?
When you have the external hooked up is that the primary screen? I found that if you have the external screen set as the primary monitor I get the same result.
I'll be working through the AS over time hopefully fixing bugs.
When you have the external hooked up is that the primary screen? I found that if you have the external screen set as the primary monitor I get the same result.
I'll be working through the AS over time hopefully fixing bugs.
It indeed is the primary screen. thanks for taking the time to look through it.
• 1 month later...
• 6 months later...
Hello there,
Thanks for trying to fix this workflow
I am using it with a dual monitor setup and if I try to arrange the window it extends it to fill both monitors.
Is there any way to fix that?
Hello there,
Thanks for trying to fix this workflow
I am using it with a dual monitor setup and if I try to arrange the window it extends it to fill both monitors.
Is there any way to fix that?
Just use the about link to get the updated one.
I can not see an 'about link'. I just got it so I am on version 2.1.
My system is Mavericks so maybe is related to that....
• 1 year later...
Is multi-monitor support on the horizon for this workflow? If not, are there any alfred window-management workflows that do offer it?
Not exactly an answer to your question but in case you are interested you can use Phoenix or Slate. Check them out on github.
I'll dig into both Phenoix and Slate and see how I can create a workflow.
Just a clarification.
My suggestion was to just use these programs. No need to create a workflow with them.
Have fun
Thanks for the helpful responses!
Hammerspoon supports multiple monitors and I have a workflow that makes use of Hammerspoon (I can resize the current window with vim like keyboard shortcuts. Very handy). You can get the workflow
here: http://www.alfredforum.com/topic/5334-hammerspoon-workflow/
I do not have any predefined items for multiple monitors because I do not have multiple monitors. But, if anyone that has that type of setup would like to expand this workflow, please do and send me
the changes to keep them updated on Packal!
My multiple monitor setup is really three computers I use together. :smile: | {"url":"https://www.alfredforum.com/topic/1145-layouts-resize-your-windows/","timestamp":"2024-11-10T04:42:16Z","content_type":"text/html","content_length":"207570","record_id":"<urn:uuid:ba874a47-38cc-42cc-9b3b-a1a91d68e5f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00777.warc.gz"} |
Concentration–Time Relationships: Integrated Rate Laws
Learning Objectives
• To gain an understanding of graphical methods used to determine rate laws.
• To gain an understanding of half-life with respect to first-order reactions.
An alternate way to determine a rate law is to monitor the concentration of reactants or products in a single trial over a period of time and compare that to what is expected mathematically for a
first-, second-, or zero-order reaction.
First-Order Reactions
We have seen earlier that the rate law of a generic first-order reaction where A → B can be expressed in terms of the reactant concentration:
Rate of reaction = – [latex] \frac{\Delta \ [A]}{\Delta \ t}\ [/latex] = [latex] \textit{k}[A]{}^{1}\ [/latex]
This form of the rate law is sometimes referred to as the differential rate law. We can perform a mathematical procedure known as an integration to transform the rate law to another useful form known
as the integrated rate law:
ln [latex] \frac{{[A]}_t}{{[A]}_0}\ [/latex]= –[latex] \textit{k}t\ [/latex]
where “ln” is the natural logarithm, [A]_0 is the initial concentration of A, and [A]_t is the concentration of A at a later time t.
The process of integration is beyond the scope of this textbook, but is covered in most calculus textbooks and courses. The most useful aspect of the integrated rate law is that it can be rearranged
to have the general form of a straight line (y = mx + b).
ln [latex] \ {[A]}_t\ [/latex]= –[latex] \textit{k}t + ln {[A]}_0\ [/latex]
(y = mx + b)
Therefore, if we were to graph the natural logarithm of the concentration of a reactant (ln) versus time, a reaction that has a first-order rate law will yield a straight line, while a reaction with
any other order will not yield a straight line (Figure 17.7 “Concentration vs. Time, First-Order Reaction”). The slope of the straight line corresponds to the negative rate constant, –k, and the y
-intercept corresponds to the natural logarithm of the initial concentration.
Figure 17.7. Concentration vs. Time, First-Order Reaction
This graph shows the plot of the natural logarithm of concentration versus time for a first-order reaction.
Example 4
The decomposition of a pollutant in water at 15^oC occurs with a rate constant of 2.39 y^-1, following first-order kinetics. If a local factory spills 6,500 moles of this pollutant into a lake with a
volume of 2,500 L, what will the concentration of pollutant be after two years, assuming the lake temperature remains constant at 15^oC?
We are given the rate constant and time and can determine an initial concentration from the number of moles and volume given.
[latex] \ {[Pollutant]}_0\ [/latex]= [latex] \frac{{\rm 6500\ mol}}{{\rm 2500\ L}}\ [/latex] = 2.6 M
We can substitute this data into the integrated rate law of a first-order equation and solve for the concentration after 2.0 years:
ln [latex] \ {[Pollutant]}_{2\ y}\ [/latex]= –[latex] \textit{k}t + ln {[Pollutant]}_0\ [/latex]
ln [latex] \ {[Pollutant]}_{2\ y}\ [/latex]= –[latex] \ (2.39 y {}^{-1}) (2.0 y) + ln (2.6 M)\ [/latex]
ln [latex] \ {[Pollutant]}_{2\ y}\ [/latex]= –[latex] \ 4.78 + 0.955 = -3.82\ [/latex]
[latex] \ {[Pollutant]}_{2\ y}\ [/latex]= [latex] \ e{}^{-3.82} = 0.022 M\ [/latex]
Second-Order Reactions
The rate for second-order reactions depends either on two reactants raised to the first power or a single reactant raised to the second power. We will examine a reaction that is the latter type: C
→ D. The differential rate law can be written:
Rate of reaction = – [latex] \frac{\Delta \ [C]}{\Delta \ t}\ [/latex] = [latex] \textit{k}[C]{}^{2}\ [/latex]
The integrated rate law can be written in the form of a straight line as:
[latex] \frac{1}{{[C]}_t}\ [/latex] = [latex] \textit{k}t + \frac{1}{{[C]}_0}\ [/latex]
Therefore, if the reaction is second order, a plot of 1/[C]_t versus t will produce a straight line with a slope that corresponds to the rate constant, k, and a y-intercept that corresponds to the inverse of the initial concentration, 1/[C]_0 (Figure 17.8. “1/[C]_t vs. Time, Second-Order Reaction”).
Figure 17.8. 1/[C]t vs. Time, Second-Order Reaction
Zero-Order Reactions
Zero-order reaction rates occur when the rate of reactant disappearance is independent of reactant concentrations. The differential rate law for the hypothetical zero-order reaction E → F could be
written as:
Rate of reaction = – [latex] \frac{\Delta \ [E]}{\Delta \ t}\ [/latex] =[latex] \textit{ k}\ [/latex]
The integrated rate law can be written in the form of a straight line as:
[latex] \ [E]{}_{t}{}_{ }\ [/latex]= –[latex] \textit{k}t + [E]{}_{0}\ [/latex]
Therefore, if the reaction is zero order, a plot of [E] versus t will produce a straight line with a slope that corresponds to the negative of the rate constant, –k, and a y-intercept that corresponds to the initial concentration, [E]_0 (Figure 17.9. “Concentration vs. Time, Zero-Order Reaction”).
Figure 17.9. Concentration vs. Time, Zero-Order Reaction
Graphical Methods for Determining Reaction Order–A Summary
We have just seen that first-, second-, and zero-order reactions all have unique, integrated rate-law equations that allow us to plot them as a straight line (y = mx + b) (Table 17.1 “Integrated Rate
Law Summary”). When presented with experimental concentration–time data, we can determine the order by simply plotting the data in different ways to obtain a straight line.
Table 17.1 Integrated Rate Law Summary
Example 5
The following data were obtained for the reaction 3 A → 2 B:
│Time, s │0 │5 │10 │15 │20 │
│[A], M │0.200│0.0282│0.0156│0.0106│0.008│
Determine the order of the reaction.
We can plot the characteristic kinetic plots of zero-, first-, and second-order reactions to determine which will give a straight line.
│Time, s│[A], mol L^-1 │ln [A]│1/[A], L mol^-1 │
│0 │0.200 │-1.61 │5.00 │
│5 │0.0282 │-3.57 │35.5 │
│10 │0.0156 │-4.16 │64.1 │
│15 │0.0106 │-4.55 │94.3 │
│20 │0.008 │-4.83 │125 │
The reaction is second order since 1/[A]_t versus t gives a straight line.
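This conclusion can also be checked numerically. The sketch below fits a straight line to each of the three candidate plots using the data of Example 5 and compares the goodness of fit; the library calls are standard NumPy, and the approach is simply an automated version of the graphical test.

```python
import numpy as np

t = np.array([0, 5, 10, 15, 20], dtype=float)
A = np.array([0.200, 0.0282, 0.0156, 0.0106, 0.008])

for label, y in (("zero order:   [A]  vs t", A),
                 ("first order:  ln[A] vs t", np.log(A)),
                 ("second order: 1/[A] vs t", 1.0 / A)):
    slope, intercept = np.polyfit(t, y, 1)
    r2 = np.corrcoef(t, y)[0, 1] ** 2
    print(f"{label}   slope = {slope:8.3f}   r^2 = {r2:.4f}")
```

The second-order plot gives r² very close to 1, while the other two fit noticeably worse, matching the graphical conclusion.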
The half-life of a reaction, t_1/2, is the time required for the concentration of a reactant to drop to one-half of its initial concentration.
[latex] \ [A]{}_{t1/2}{}_{ }\ [/latex]= [latex] \frac{1}{2} [A]{}_{0}\ [/latex]
Half-life is typically used to describe first-order reactions and serves as a metric to discuss the relative speeds of reactions. A slower reaction will have a longer half-life, while a faster
reaction will have a shorter half-life.
To determine the half-life of a first-order reaction, we can manipulate the integrated rate law by substituting t_1/2 for t and [A]_t1/2 = ½[A]_0 for [A]_t, then solve for t_1/2:
ln [A]_t = –kt + ln [A]_0 (integrated rate law for a first-order reaction)
ln [latex] \frac{1}{2} [A]{}_{0}\ [/latex]= –[latex] \textit{k} t{}_{1/2 }+ ln {[A]}_0\ [/latex]
ln [latex] \frac{\frac{1}{2}{\rm \ }{{\rm [A]}}_0{\rm \ }}{{[A]}_0}\ [/latex]= –[latex] \textit{k} t{}_{1/2 }\ [/latex]
ln [latex] \frac{1}{2}\ [/latex]= –[latex] \textit{k} t{}_{1/2 }\ [/latex]
[latex] \ t{}_{1/2 }\ [/latex] = – [latex] \frac{{\rm ln\ }\frac{1}{2}\ }{k}\ [/latex] = [latex] \frac{0.693}{k}\ [/latex]
Since the half-life equation of a first-order reaction does not include a reactant concentration term, it does not rely on the concentration of reactant present. In other words, a half-life is
independent of concentration and remains constant throughout the duration of the reaction. Consequently, plots of kinetic data for first-order reactions exhibit a series of regularly spaced t_1/2
intervals (Figure 17.10 “Generic First-Order Reaction Kinetics Plot”).
Figure 17.10. Generic First-Order Reaction Kinetics Plot
This graph shows repeating half-lives on a kinetics plot of a generic first-order reaction.
Example 6
A reaction having a first-order rate has a rate constant of 4.00 x 10^-3 s^-1.
1. Determine the half-life.
2. How long will it take for a sample of reactant at 1.0 M to decrease to 0.25 M?
3. What concentration of the 1.0 M sample of reactant would you expect to be present after it has reacted for 500 s?
1. [latex] \ t{}_{1/2 }\ [/latex] = [latex] \frac{0.693}{k}\ [/latex] = [latex] \frac{0.693}{{\rm 4.00\ x\ }{{\rm 10}}^{-3}{\rm \ }{{\rm s}}^{-1}}\ [/latex] = 173 s
2. A simple way to calculate this is to determine how many half-lives it will take to go from 1.00 M to 0.250 M and use the half-life calculated in part 1.
1 half-life = 0.500 M
2 half-lives = 0.250 M
Therefore, it will take 2 x 173 s = 346 s.
3. We can use the rate-constant value in the integrated rate law to determine the concentration remaining.
ln [latex] \frac{{[A]}_t}{{[A]}_0}\ [/latex]= –[latex] \textit{k}t\ [/latex]
ln [latex] \frac{{[A]}_t}{1.0\ M}\ [/latex]= –[latex] \ (4.00 x 10{}^{-3} s{}^{-1})(500 s)\ [/latex]
ln [latex] \frac{{[A]}_t}{1.0\ M}\ [/latex] = -2
[latex] \frac{{[A]}_t}{1.0\ M}\ [/latex] = [latex] \ e{}^{-2 }\ [/latex]= 0.135
[latex] \ [A]{}_{t}\ [/latex] = 0.14 M
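The numbers in this example are easy to verify with a few lines (the values are taken from the example itself):

```python
import numpy as np

k = 4.00e-3                      # rate constant, s^-1
print(np.log(2) / k)             # half-life, about 173 s

A0 = 1.0                         # initial concentration, M
print(A0 * np.exp(-k * 500))     # concentration after 500 s, about 0.14 M
```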
Key Takeaways
• The reaction rate may be determined by monitoring the concentration of reactants or products in a single trial over a period of time and comparing it to what is expected mathematically for a
first-, second-, or zero-order reaction.
• The half-life of a reaction is the duration of time required for the concentration of a reactant to drop to one-half of its initial concentration. | {"url":"https://courses.lumenlearning.com/suny-introductorychemistry/chapter/concentration-time-relationships-integrated-rate-laws-2/","timestamp":"2024-11-02T15:56:07Z","content_type":"text/html","content_length":"64059","record_id":"<urn:uuid:e1e2384c-ee35-4bf0-9a6e-b6076bd7afd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00410.warc.gz"} |
Motivating a change of variables in $\int \log x / (x^2 + ax + b) dx$
We will consider the improper definite integral ${\int_0^\infty \frac{\log x}{x^2 + ax + b}dx}$ for ${a,b > 0}$ (to guarantee convergence). This can be done in many ways, but the purpose of
this brief note is to motivate a particular way of writing integrals to look for symmetries to exploit while evaluating them.
Before we begin, let us note something special about integrals of the form
$$ \int_0^\infty f(x) \frac{dx}{x}. \tag{1}$$ Under the change of variables ${x \mapsto \frac{1}{x}}$, we see that
$$ \int_0^\infty f(x) \frac{dx}{x} = \int_0^\infty f(1/x) \frac{dx}{x}. \tag{2}$$ And under the change of variables ${x \mapsto \alpha x}$, we see that
$$ \int_0^\infty f(x) \frac{dx}{x} = \int_0^\infty f(\alpha x) \frac{dx}{x}. \tag{3}$$ In other words, the integral is almost invariant under these changes of variables — only the integrand ${f(x)}$
is affected while the bounds of integration and the measure ${\frac{dx}{x}}$ remain unaffected.
In fact, the measure ${\frac{dx}{x}}$ is the Haar measure associated to the line ${\mathbb{R}_+}$, so this integral property is not random. When working with integrals over the positive real line, it
can often be fortuitous to explicitly write the integral against ${\frac{dx}{x}}$ before attempting symmetry arguments.
Here, we rewrite our integral as
$$ \int_0^\infty \frac{\log x}{x + a + \frac{b}{x}} \frac{dx}{x}. \tag{4}$$ The denominator is clearly invariant under the map ${x \mapsto \frac{b}{x}}$, while ${\log x}$ becomes ${\log(\frac{b}{x})
= \log b - \log x}$. Along with the special property above, this means that
$$ \int_0^\infty \frac{\log x}{x^2 + ax + b}dx = \int_0^\infty \frac{\log b - \log x}{x^2 + ax + b} dx. \tag{5}$$ Adding our original integral to both sides, we see that
$$ \int_0^\infty \frac{\log x}{x^2 + ax + b} dx = \frac{\log b}{2} \int_0^\infty \frac{1}{x^2 + ax + b}dx. \tag{6}$$
This now becomes a totally routine integral, albeit not entirely pleasant, to evaluate. Generally, one can complete the square and then either perform an argument by partial fractions or an argument
through trig substitution (alternately, always use partial fractions and allow some complex numbers; or use hyperbolic trig sub; etc.). Let ${c = b - \frac{a^2}{4}}$, which arises naturally when
completing the square in the denominator. If ${c = 0}$, then the change of variables ${x \mapsto x - \frac{a}{2}}$ transforms our integral into
$$ \frac{\log b}{2} \int_{a/2}^\infty \frac{dx}{x^2} = \frac{\log b}{a}. \tag{7}$$
When ${c \neq 0}$, performing the change of variables ${x \mapsto \sqrt{\lvert c \rvert} x - \frac{a}{2}}$ transforms our integral into
$$ \frac{\log b}{2\sqrt{\lvert c \rvert}} \int_{\frac{a}{2\sqrt{\lvert c \rvert}}}^\infty \frac{dx}{x^2 + 1} = \frac{\log b}{2\sqrt{\lvert c \rvert}} \left(\frac{\pi}{2} - \arctan\left(\frac{a}{2\
sqrt{\lvert c \rvert}}\right)\right) \tag{8}$$ when ${c > 0}$, or
$$ \frac{\log b}{2\sqrt{\lvert c \rvert}} \int_{\frac{a}{2\sqrt{\lvert c \rvert}}}^\infty \frac{dx}{x^2 - 1} = \frac{\log b}{4\sqrt{\lvert c \rvert}} \log\frac{a + 2\sqrt{\lvert c \rvert}}{a - 2\sqrt
{\lvert c \rvert}} \tag{9}$$ when ${c < 0}$.
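A quick numerical sanity check of the identity and of the closed forms above can be run with scipy; the sketch below is written directly from the formulas as stated and is not part of the original post.

```python
import numpy as np
from scipy.integrate import quad

def closed_form(a, b):
    c = b - a**2 / 4
    if c == 0:
        return np.log(b) / a
    s = np.sqrt(abs(c))
    if c > 0:
        return np.log(b) / (2 * s) * (np.pi / 2 - np.arctan(a / (2 * s)))
    return np.log(b) / (4 * s) * np.log((a + 2 * s) / (a - 2 * s))

for a, b in [(1.0, 5.0), (2.0, 1.0), (3.0, 2.0)]:
    numeric, _ = quad(lambda x: np.log(x) / (x**2 + a * x + b), 0, np.inf)
    print(a, b, numeric, closed_form(a, b))
```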
Please use plaintext email when commenting. See Plaintext Email and Comments on this site for more. Note also that comments are expected to be open, considerate, and respectful. | {"url":"https://davidlowryduda.com/motivating-a-change-of-variables-in-int-log-x-x2-ax-b-dx/","timestamp":"2024-11-14T04:04:19Z","content_type":"text/html","content_length":"7992","record_id":"<urn:uuid:81455396-449a-4c7f-b1e0-5b3e6c48b38b>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00286.warc.gz"} |
Systems and Means of Informatics
2022, Volume 32, Issue 1, pp 83-93
• M. P. Krivenko
The problem of analyzing a monotone trend is considered. A maximum likelihood estimate of the distribution parameters is constructed when the monotonicity condition is formulated for the values of some function of them. The solution of the corresponding problem is obtained in the form of an algorithm which generalizes the PAV (Pool-Adjacent-Violators) procedure.
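For reference, the classical (unweighted, least-squares) PAV procedure that the article generalizes can be written in a few lines; the sketch below is the textbook version for a nondecreasing fit, not the generalized algorithm of the paper.

```python
def pav(y):
    """Pool-Adjacent-Violators: nondecreasing least-squares fit to the sequence y."""
    blocks = []                       # each block holds [pooled mean, number of points]
    for value in y:
        blocks.append([float(value), 1])
        # pool neighbouring blocks while the nondecreasing constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, n2 = blocks.pop()
            v1, n1 = blocks.pop()
            blocks.append([(v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2])
    fit = []
    for value, n in blocks:
        fit.extend([value] * n)
    return fit

print(pav([1.0, 3.0, 2.0, 2.0, 5.0]))   # [1.0, 2.33..., 2.33..., 2.33..., 5.0]
```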
As an example, the problem of estimating a monotone trend in the ratio of the mathematical expectation to the standard deviation for a sequence of normally distributed quantities is considered. The resulting estimate is based on a count of the number of positive/negative values observed. It is shown that trend testing in this case is equivalent to the analysis of monotone changes in the probability of success in a heterogeneous Bernoulli scheme. Thus, the connection between the parametric and nonparametric approaches in the analysis of nonstationary random sequences is revealed. An example of a real situation where the approach under consideration can be applied is the analysis of random sequences in transformed form: the set of observations is divided into groups, some statistic is calculated for each group, and the resulting sequence of values is treated as a sample from a certain distribution.
[+] References (4)
[+] About this article | {"url":"http://www.ipiran.ru/journal_system/article/08696527220108.html","timestamp":"2024-11-06T05:38:47Z","content_type":"text/html","content_length":"5440","record_id":"<urn:uuid:27b061b9-cda4-42d8-bae9-898ebd66cbf7>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00219.warc.gz"} |
Comments on Section 2.5
Go back to the page of Section 2.5.
Typo: Immediately after introducing the Mapping Complex in Construction, the second bullet seems to end with unfinished sentence to me. Maybe something like "... can be recovered from ..." is
Comment #531 by Kerodon on
Yep. Thanks!
Comment #907 by Elizabeth Goldfinch on
Phrase "by celebrated Dold-Kan correspondence" is missing definite article.
Comment #913 by Kerodon on
Yep. Thanks!
There are also:
• 4 comment(s) on Chapter 2: Examples of $\infty $-Categories
The tag you filled in for the captcha is wrong. You need to write 00ND, in case you are confused. | {"url":"https://kerodon.net/tag/00ND/comments","timestamp":"2024-11-08T19:13:46Z","content_type":"text/html","content_length":"14446","record_id":"<urn:uuid:827d0db3-b9e3-49a7-bb7b-a0c81b11d0b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00420.warc.gz"} |
4.1 Probability Distribution Function (PDF) for a Discrete Random Variable
Use the following information to answer the next five exercises: A company wants to evaluate its attrition rate, or in other words, how long new hires stay with the company. Over the years, the
company has established the following probability distribution:
Let X = the number of years a new hire will stay with the company.
Let P(x) = the probability that a new hire will stay with the company x years.
Complete Table 4.20 using the data provided.
x P(x)
0 .12
1 .18
2 .30
3 .15
5 .10
6 .05
On average, how long would you expect a new hire to stay with the company?
What does the column “P(x)” sum to?
Use the following information to answer the next four exercises: A baker is deciding how many batches of muffins to make to sell in his bakery. He wants to make enough to sell every one and no fewer.
Through observation, the baker has established a probability distribution.
x P(x)
1 .15
2 .35
3 .40
4 .10
Define the random variable X.
What is the probability the baker will sell more than one batch? P(x > 1) = ________
What is the probability the baker will sell exactly one batch? P(x = 1) = ________
On average, how many batches should the baker make?
Use the following information to answer the next two exercises: Ellen has music practice three days a week. She practices for all of the three days 85 percent of the time, two days 8 percent of the
time, one day 4 percent of the time, and no days 3 percent of the time. One week is selected at random.
Define the random variable X.
Construct a probability distribution table for the data.
We know that for a probability distribution function to be discrete, it must have two characteristics. One is that the sum of the probabilities is one. What is the other characteristic?
Use the following information to answer the next five exercises: Javier volunteers in community events each month. He does not do more than five events in a month. He attends exactly five events 35
percent of the time, four events 25 percent of the time, three events 20 percent of the time, two events 10 percent of the time, one event 5 percent of the time, and no events 5 percent of the time.
Define the random variable X.
What values does x take on?
Construct a PDF table.
Find the probability that Javier volunteers for fewer than three events each month. P(x < 3) = ________
Find the probability that Javier volunteers for at least one event each month. P(x > 0) = ________
4.2 Mean or Expected Value and Standard Deviation
Complete the expected value table.
x P(x) x*P(x)
0 .2
1 .2
2 .4
3 .2
Find the expected value from the expected value table.
x P(x) x*P(x)
2 .1 2(.1) = .2
4 .3 4(.3) = 1.2
6 .4 6(.4) = 2.4
8 .2 8(.2) = 1.6
Find the standard deviation.
x P(x) x*P(x) (x – μ)^2P(x)
2 0.1 2(.1) = .2 (2–5.4)^2(.1) = 1.156
4 0.3 4(.3) = 1.2 (4–5.4)^2(.3) = .588
6 0.4 6(.4) = 2.4 (6–5.4)^2(.4) = .144
8 0.2 8(.2) = 1.6 (8–5.4)^2(.2) = 1.352
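For checking work of this kind, the mean and standard deviation of a discrete distribution can be computed directly; the sketch below uses the distribution in the table above (the numerical answers are left to the reader).

```python
import numpy as np

x = np.array([2, 4, 6, 8], dtype=float)
p = np.array([0.1, 0.3, 0.4, 0.2])

mu = np.sum(x * p)                          # expected value
sigma = np.sqrt(np.sum((x - mu) ** 2 * p))  # standard deviation
print(mu, sigma)
```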
Identify the mistake in the probability distribution table.
x P(x) x*P(x)
1 .15 .15
2 .25 .50
3 .30 .90
4 .20 .80
5 .15 .75
Identify the mistake in the probability distribution table.
x P(x) x*P(x)
1 .15 .15
2 .25 .40
3 .25 .65
4 .20 .85
5 .15 1
Use the following information to answer the next five exercises: A physics professor wants to know what percent of physics majors will spend the next several years doing postgraduate research. He has
the following probability distribution:
x P(x) x*P(x)
1 .35
2 .20
3 .15
5 .10
6 .05
Define the random variable X.
Define P(x), or the probability of x.
Find the probability that a physics major will do postgraduate research for four years. P(x = 4) = ________
Find the probability that a physics major will do postgraduate research for at most three years. P(x ≤ 3) = ________
On average, how many years would you expect a physics major to spend doing postgraduate research?
Use the following information to answer the next seven exercises: A ballet instructor is interested in knowing what percent of each year's class will continue on to the next so that she can plan what
classes to offer. Over the years, she has established the following probability distribution:
• Let X = the number of years a student will study ballet with the teacher.
• Let P(x) = the probability that a student will study ballet x years.
Complete Table 4.28 using the data provided.
x P(x) x*P(x)
1 .10
2 .05
3 .10
5 .30
6 .20
7 .10
In words, define the random variable X.
On average, how many years would you expect a child to study ballet with this teacher?
What does the column P(x) sum to and why?
What does the column x*P(x) sum to and why?
You are playing a game by drawing a card from a standard deck and replacing it. If the card is a face card, you win $30. If it is not a face card, you pay $2. There are 12 face cards in a deck of 52
cards. What is the expected value of playing the game?
You are playing a game by drawing a card from a standard deck and replacing it. If the card is a face card, you win $30. If it is not a face card, you pay $2. There are 12 face cards in a deck of 52
cards. Should you play the game?
4.3 Binomial Distribution (Optional)
Use the following information to answer the next eight exercises: Researchers collected data from 203,967 incoming first-time, full-time freshmen from 270 four-year colleges and universities in the
United States. Of those students, 71.3 percent replied that, yes, they agreed with a recent federal law that was passed.
Suppose that you randomly pick eight first-time, full-time freshmen from the survey. You are interested in the number who agreed with that law.
In words, define the random variable X.
X ~ _____(_____,_____)
What values does the random variable X take on?
Construct the probability distribution function (PDF).
On average (μ), how many would you expect to answer yes?
What is the standard deviation (σ)?
What is the probability that at most five of the freshmen reply yes?
What is the probability that at least two of the freshmen reply yes?
4.4 Geometric Distribution (Optional)
Use the following information to answer the next six exercises: Researchers collected data from 203,967 incoming first-time, full-time freshmen from 270 four-year colleges and universities in the
United States. Of those students, 71.3 percent replied that, yes, they agree with a recent law that was passed. Suppose that you randomly select freshman from the study until you find one who replies
yes. You are interested in the number of freshmen you must ask.
In words, define the random variable X.
X ~ _____(_____,_____)
What values does the random variable X take on?
Construct the probability distribution function (PDF). Stop at x = 6.
On average (μ), how many freshmen would you expect to have to ask until you found one who replies yes?
What is the probability that you will need to ask fewer than three freshmen?
4.5 Hypergeometric Distribution (Optional)
Use the following information to answer the next five exercises: Suppose that a group of statistics students is divided into two groups: business majors and non-business majors. There are 16 business
majors in the group and seven non-business majors in the group. A random sample of nine students is taken. We are interested in the number of business majors in the sample.
In words, define the random variable X.
X ~ _____(_____,_____)
What values does X take on?
Find the standard deviation.
On average (μ), how many would you expect to be business majors?
4.6 Poisson Distribution (Optional)
Use the following information to answer the next six exercises: On average, a clothing store gets 120 customers per day.
Assume the event occurs independently in any given day. Define the random variable X.
What values does X take on?
What is the probability of getting 150 customers in one day?
What is the probability of getting 35 customers in the first four hours? Assume the store is open 12 hours each day.
What is the probability that the store will have more than 12 customers in the first hour?
What is the probability that the store will have fewer than 12 customers in the first two hours?
Which type of distribution can the Poisson model be used to approximate? When would you do this?
Use the following information to answer the next six exercises: On average, eight teens in the United States die from motor vehicle injuries per day. As a result, states across the country are
debating raising the driving age.
Assume the event occurs independently in any given day. In words, define the random variable X.
X ~ _____(_____,_____)
What values does X take on?
For the given values of the random variable X, fill in the corresponding probabilities.
Is it likely that there will be no teens killed from motor vehicle injuries on any given day in the United States? Justify your answer numerically.
Is it likely that there will be more than 20 teens killed from motor vehicle injuries on any given day in the United States? Justify your answer numerically. | {"url":"https://texasgateway.org/resource/practice-2?book=79081&binder_id=78231","timestamp":"2024-11-11T07:53:50Z","content_type":"text/html","content_length":"77004","record_id":"<urn:uuid:86116e52-ba3c-4a5c-af09-b0bd151a054b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00463.warc.gz"} |
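And a sketch for the motor-vehicle exercises, with X ~ P(8) deaths per day (illustrative only):

from scipy.stats import poisson
print(poisson.pmf(0, 8))   # P(no teen deaths in a day) = e^-8, roughly 0.0003, so not likely
print(poisson.sf(20, 8))   # P(more than 20 deaths in a day), on the order of 1e-4, so not likely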
Solving radical signs
solving radical signs Related topics: work to algebraic equations
algebra for year 7 printable work
glencoe mathematics answers
common denominator calculator
practice fraction problems for the net
pre algebra binomial
simplifying algebraic fractions
rational expressions calculator
changing difference formula
maths 11 garde taks 2004
graphing coordinate plane worksheets to make pictures
online calculator solve for x
multiplying dividing integers
Author Message
fashxam Posted: Monday 30th of Sep 15:32
Hi dude, can anyone assist me with my assignment in Basic Math? It would be good if you could just give me an idea about the links from where I can get aid on absolute values.
kfir Posted: Wednesday 02nd of Oct 09:37
First of all, let me welcome you to the world of solving radical signs. You need not worry; this subject seems to be difficult because of the many new symbols that it has. Once you learn
the basics, it becomes fun. Algebrator is the most preferred tool amongst beginners and experts. You must buy yourself a copy if you are serious at learning this subject.
From: egypt
Vild Posted: Wednesday 02nd of Oct 14:48
I would just add a note to what has been said above. Algebrator no doubt is the most useful tool one could have. Always use it as a guide and a means to learn and never to copy .
sdokerbellir Posted: Thursday 03rd of Oct 10:41
Sorry guys. But whenever I see those books and piles of classwork, I just can't seem to retain my confidence. Thanks a lot for the advice and I will use it for sure. Where can I get this thing?
nedslictis Posted: Saturday 05th of Oct 07:16
I remember having difficulties with difference of cubes, quadratic inequalities and adding fractions. Algebrator is a truly great piece of math software. I have used it through several
algebra classes - College Algebra, Pre Algebra and Algebra 2. I would simply type in the problem and by clicking on Solve, a step by step solution would appear. The program is highly recommended.
fveingal Posted: Sunday 06th of Oct 16:02
This one is actually quite different. I am recommending it only after trying it myself. You can find the details about the software at https://softmath.com/about-algebra-help.html.
From: Earth
Finite Element Modeling for Numerical Simulation of Multi Step Forming of Wheel Disc and Control of Excessive Thinning
Volume 03, Issue 09 (September 2014)
DOI : 10.17577/IJERTV3IS090094
Prashantkumar S. Hiremath, Shridhar Kurse, Laxminarayana H. V, Vasantha Kumar M, Ravindra J L, 2014, Finite Element Modeling for Numerical Simulation of Multi Step Forming of Wheel Disc and Control
of Excessive Thinning, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT) Volume 03, Issue 09 (September 2014),
• Authors : Prashantkumar S. Hiremath, Shridhar Kurse, Laxminarayana H. V, Vasantha Kumar M, Ravindra J L
• Paper ID : IJERTV3IS090094
• Volume & Issue : Volume 03, Issue 09 (September 2014)
• Published (First Online): 05-09-2014
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Finite Element Modeling for Numerical Simulation of Multi Step Forming of Wheel Disc and Control of Excessive Thinning
Prashantkumar S. Hiremath1,a, Shridhar Kurse2,a, Laxminarayana H. V.3,a, Vasantha Kumar M4,b, Ravindra J L5,b.
1 Post Graduate Student,2 Associate. Professor,3 Professor,4,5Senior Project Manager.
a Dept. of Mechanical Engineering, Dayananda Sagar College of Engineering, Bangalore
b Dept. of Product Design and Development, Altair India Bangalore
Abstract—The aim of this paper is the numerical simulation of the multi-step sheet metal forming process of a wheel disc, and an effort is made to propose an efficient method to optimize the sheet metal stamping process to obtain improved quality of the product. The main topic of the work is the prevention of excessive part thinning and the control of springback phenomena; thus, thinning and springback are the objective functions taken into account. The blank holder force value was considered as the process design variable. The approach proposed in this work is a multi-objective optimization problem consisting of an integration of finite element numerical simulation and the response surface methodology.
Keywords Wheel Disc Forming; Excessive Thinning; HyperForm; Radioss;
1. INTRODUCTION
Metal Stamping is a forming process by plastic deformation of a metal surface carried by punch in a die. The surface is transferred by molecular displacement of matter. To avoid trial and error
tryout procedures, finite element simulation method has been used in sheet metal stamping in a wide range to evaluate the deformation defects and optimize the design.
Nowadays, in the design of complex 3D industrial forming processes, the total compensation (or at least reduction) of springback distortions and excessive thinning is a crucial task. Sheet metal stamping simulation is necessary in order to predict part failure, understand the % thinning, develop an optimized blank, calculate the press tonnage, and reduce product development cost, product development time and the cost of rework. Minimization of excessive thinning and springback distortions helps to minimize these costs. The actual application of any optimization technique should take into account the need to properly deal with all the conflicting goals. Some approaches have aimed to determine a cluster of possible optimal solutions by applying multi-objective techniques to sheet metal stamping optimization.
2. REVIEW OF RELATED RESEARCH
Many research studies have been carried out to analyze the sheet metal forming simulation and to optimize the forming parameters, geometrical shapes and material parameters to improve the forming
quality in the stamping process based on the simulation results of finite element modeling (FEM).
Y. Huang et al. [1] outlined the minimization of the thickness variation in multi-step sheet metal stamping. The aim of this paper was to develop an efficient method to optimize the intermediate tool
surfaces in the multi-step sheet metal stamping process to obtain improved quality of a product at the end of forming. The proposed method is based on the combination of finite element modeling
(FEM) and the response surface method (RSM).
A multi-objective stochastic optimization approach was presented by L. Marretta et al. [2]. The aim of this paper was to develop a design tool for stamping processes, which is able to deal with
the scattering of the final part quality due to the inner variability of such operations.
Two-step method of forming complex shapes from sheet metal was presented by Sergey F. Golovashchenko et al.[3]. A two-step method of forming a part and a method of designing a preformed shape are
being discussed.
T. Cwiekala et al. [4] outlined accurate deep drawing simulation by combining analytical approaches. The aim of this paper was the development of an analytical simulation method for deep drawing.
Fast thickness prediction and blank design in sheet metal forming was presented by B.T. Tang et al.[5]. In this paper, a robust energy-based 3D mesh mapping algorithm is used to obtain the
initial solution and is followed by a reverse deformation method to improve its accuracy. The novel initial solution scheme can consider the material and the process parameters, and thus lead to
fewer Newton Raphson iterations.
3. PROBLEM STATEMENT
The accompanying sketch shown in Figure 1 is a wheel disc of a car; it is intended to manufacture this component using a multi-step forming process. Modeling and simulation of the process to arrive at an acceptable component is essential. The main aim of the paper is to simulate the sheet metal forming stages of the wheel disc and to optimize the necessary forming step.
The main focus is on the prevention of excessive part thinning and the control of springback phenomena; thus, thinning and springback are the objective functions taken into consideration. The blank holder force (BHF) value was considered as the process design variable. Geometric details and material properties are shown in Fig 2 and Table 1, respectively, and the thickness of the wheel disc is 3.5 mm.
Fig 1: Wheel disc model: Isometric view
Fig 2: Geometric details of wheel disc
Table 1: Mechanical properties of wheel disc material
Yield stress, σy (MPa): 521
Ultimate tensile strength (MPa): 610
Plastic strain anisotropic factor, r: 0.8
Strain hardening exponent, n: 0.1
Strain hardening coefficient, K (MPa): 824
Coefficient of friction, µ: 0.125
Modulus of elasticity, E (MPa): 2.07e5
Density, ρ (g/cm3): 7.8
Poisson's ratio, ν: 0.28
True ultimate tensile strength (MPa): 701.5
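The strain hardening coefficient K and exponent n listed above are presumably the parameters of a power-law (Hollomon-type) hardening curve, which is the usual way such a (K, n) pair enters sheet-forming material models: σ = K·ε^n = 824·ε^0.1 MPa, with σ the true stress and ε the true plastic strain.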
4. AIM AND OBJECTIVE
The aim of the work is to perform multi step incremental forming simulation of a wheel disc using HyperForm and to propose an optimization approach in order to control the excessive part thinning
and springback phenomena. Specific objectives of this paper are given below.
□ Validation of finite element model developed by HyperForm using benchmark.
□ Development of finite element model using HyperForm.
□ Simulation of forming stages of wheel disc in order to study the thinning percentage, von Mises stress and formability.
□ Implementation of optimization approach in order to control the excessive part thinning and springback phenomena.
□ Simulation of further forming stages with optimized value of design variable.
The first step in sheet metal forming numerical analysis involves the development and validation of a finite element model. The model developed needs to be validated using a benchmark, which is a standard problem for which a solution exists. The test problem for this work is taken from NAFEMS Introduction to Non Linear Finite Element Analysis, edited by E. Hinton, as reported in reference [6].
Fig 3: Schematic sketch of the deep drawing technique (all dimensions are in mm)
The test problem is illustrated in Fig 3 and shows the deep drawing technique which is used to manufacture drink cans. As the deformation progresses by forcing the punch down, the thin sheet makes contact with the punch and the die radius and draws in at the edges. The equilibrium path is non-linear and higher order strains are induced.
Shell elements are used for the problem; these represent model parts that are relatively two-dimensional, such as sheet metal or a hollow plastic cowl or case. The developed finite element model is shown in Fig 4.
Figure 4: Finite element model of blank and tools
The listed values of Table 2 can be given in HyperForm by invoking the elastic-plastic material model /MAT/LAW43 (HILL_TAB); this law describes the Hill orthotropic material and is applicable only to shell elements.
Table 2: Material properties and material model for the test problem
Table 3: Boundary conditions applied for test problem
Binder force Fb: 0.5 tons
Binder velocity: 3000 mm/sec (negative Z direction)
Punch velocity: 3000 mm/sec (negative Z direction)
The target solution for the benchmark problem is the plot showing punch load versus punch travel in Fig 5, and the induced higher order strain is more than 30%. The results obtained from the numerical simulation, shown in Fig 6 and Fig 7, are in good agreement with the experimental results.
Figure 5: Experimental Result: Punch load v/s Punch travel plot
Figure 6: FEA result: Punch load v/s Punch travel plot
Figure 7: FEA result: Strain contour
6. CASE STUDY
1. Finite element model development
Since the wheel disc has a complicated shape, it cannot be formed in a single stage. The actual stages required to form the wheel disc are identified and shown in Fig 8. The first step is to get the right blank shape; after this, five forming steps follow, depending on how the tools will be designed. Finite element modeling, the assignment of material properties and the boundary conditions have been done using HyperForm.
Fig 8: Forming stages of wheel disc
The mesh generation of the blank needs to be done only in the first forming step, since the blank from previous steps will be used for later forming steps; shell elements are used for our problem. In the current work, mesh generation for the forming tools has been done in each stage as per the dimensions and shape of the part to be formed. The finite element model for stage 2 (cupping / first forming) is shown in Fig 9. The boundary conditions applied are given in Table 4.
Fig 9: Finite Element Model for stage2
Table 4: Boundary conditions applied
Binder force Fb: 235,360 N
Binder velocity: 5000 mm/sec (negative Z direction)
Punch velocity: 5000 mm/sec (negative Z direction)
2. Executing the simulation
Once the preprocessing is done, the simulation is executed in Radioss. During the simulation, the results can be studied to detect problems in the simulation at an early stage. If the simulation of the first step looks good, one can continue with the second step. Doing everything stepwise will save a lot of time, since unexpected problems often arise somewhere in the process.
3. Post-processing of stage2
The final stage in the simulation procedure is to evaluate and analyze the results. This has done in HyperView. Generally in sheet metal forming simulations, we are interested in evaluating
the forming process by detecting cracks and wrinkles that would lead to failure.
Fig 10: Stage2: Percentage thinning contour
Fig 11: Stage2: von Mises stress contour
Fig 12: Stage2: FLD zone contour
Fig 13: Stage2: Forming Limiting diagram
4. Result interpretation of stage2
The maximum percentage thinning observed for stage 2 is 2.9%, as shown in Fig 10. The maximum true stress observed in Fig 11 is under the ultimate true stress of 701.5 MPa. Since there are no elements in the failure region, the forming limit diagrams shown in Fig 12 and Fig 13 show that the component is safe, with compression occurring in the flange region and tension in the cupping region where the material is drawn inside.
5. Forming simulation of stage3 (Reverse Forming)
In this step we will use the formed part of the previous stage as the initial blank. The initial blank for stage 3, shown in Fig 14, is the formed part of stage 2, and the complete finite element model for stage 3 is shown in Fig 15. The simulation methodology and boundary conditions are the same as applied for stage 2.
Fig 14 : Formed part of stage2 or initial blank for stage3
Fig 15: Finite element model for stage3 tools
6. Post-processing of stage3
Fig 16: Percentage thinning contour for stage3
Fig17: FLD zone counter showing elements at failure zone
Fig 18: Formability limiting diagram showing elements at failure zone
Generally, in most industrial applications for steel and its alloys the maximum allowable thinning is 20%. From Fig 16, we can observe that the maximum thinning is 26.198%, and the regions with excessive thinning are prone to crack initiation and finally to the failure of the component. From the FLD plots shown in Fig 17 and Fig 18 we can see the elements in the failure zone. Now our main goal is to prevent the excessive thinning without changing the design of the component.
7. Proposed optimization approach
The main aim of the optimization is the prevention of excessive part thinning, but one more factor we need to keep in mind is the springback effect; therefore, before optimizing the forming process we need to estimate the springback as well. Excessive thinning prevention problems are typically multi-objective ones, since the springback effect has to be managed too. The blank holder force (BHF) value was considered as the process design variable. The proposed optimization is a general method to control excessive thinning and the springback effect.
The proposed approach is a deterministic approach in which process variability due to noise factors is completely neglected; therefore only the blank holder force (BHF) influence was considered. Variations of the coefficient of friction µ and the strain hardening exponent n are neglected, and the values of µ and n are given in Table 1. The deterministic procedure steps are as follows.
1. Design of experiment (DOE) definition
Since one single input parameter was selected, the BHF value is reduced from an initial 24 tons to 4 tons with a constant decrement of 2 tons.
2. FEM simulation and results data collection
A numerical simulation for each DOE point was run, and the thinning percentage and springback results were collected; they are shown in Table 4.3. We already have the thinning percentage, but we need to calculate the springback for the 24-ton BHF and the other DOE points.
Fig 19 : Distance plot for springback calculation
As far as springback is concerned, a CAD environment was utilized in order to provide a good evaluation of springback. In particular, comparisons between the deformed blank after load removal and the final stamped part were developed, trying to measure different springback indicators.
HyperForm gives the distance plot between the die and the formed part as shown in Fig 19; we then need to export the necessary nodal values to Microsoft Excel, and the average value is taken as the springback measure. The same procedure is repeated for all DOE points.
We need to follow the same procedure as for the stage 3 forming simulation to get the thinning percentage and springback values for the remaining DOE points.
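As an illustration of the averaging step just described, a small script of the following kind could replace the spreadsheet step (the file name and column name below are assumptions, since the paper only says the nodal distance values are exported to Excel):

import csv

def mean_springback(csv_path):
    # Average the exported die-to-part nodal distance values (mm) for one DOE point
    with open(csv_path, newline="") as f:
        values = [float(row["distance"]) for row in csv.DictReader(f)]
    return sum(values) / len(values)

# e.g. mean_springback("stage3_bhf24_distances.csv") would give the 0.16 mm entry of Table 4.3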
Table 4.3: FEM simulation outputs
DOE   BHF in tons   %THINNING   SPRINGBACK (mm)
1 24 26.198 0.16
2 22 24.6 0.1
3 20 22.3 0.21
4 18 19.32 0.30
5 16 18.5 0.44
6 14 17.68 0.50
7 12 15.6 0.62
8 10 15.1 0.66
9 8 14.32 0.67
10 6 14.02 0.68
11 4 13.32 0.69
3. Regression model development:
A second-order regression model was developed to formulate the response surface (y) in terms of each objective function, and the regression coefficients were then calculated, indicating the best fit of the approximated curves. Fig 20 and Fig 21 show the fitted curves, and we observe that both curves present monotonic behavior with respect to the BHF changes. A Microsoft Office Excel worksheet was used to generate the curves. In particular, a stepwise regression was developed by progressively eliminating the less statistically significant terms and trying to optimize the correlation index R² (adj), which provided a measure of the model's approximation capability.
y(BHF) = 0.0285x² - 0.1513x + 13.672, where x is the blank holder force (BHF) in tons and y is the predicted % thinning; R² = 0.9918 (99.18%).
A value of R² (adj) of 99.18% was reached, confirming a good approximation of the data.
Fig 20: %Thinning v/s BHF
Fig 21: Spring back v/s BHF
y(BHF) = -0.001x² - 0.0017x + 0.7406, where x is the blank holder force (BHF) in tons and y is the predicted springback (mm); R² = 0.9642 (96.42%).
A value of R² (adj) of 96.42% was reached, confirming a good approximation of the data.
1. Pareto frontier
Once the two objective functions were analytically formulated, it was necessary to identify compromise solutions minimizing simultaneously %thinning and springback. Therefore, a Pareto frontier was constructed, which helps to identify the compromise solution minimizing simultaneously thinning and springback. In fact, every %thinning and springback couple corresponding to a particular BHF value is a Pareto solution: no solution exists that (for given operative conditions, i.e. for a given BHF) for fixed thinning allows a lower springback. Fig 22 shows the obtained Pareto frontier.
The maximum thinning increases as springback decreases in the stamped part. The Pareto frontier is a very useful design tool, since it allows visualizing all the possible compromise solutions. The Pareto frontier makes it possible to predict the best possible value of a given objective function once a value of the other one is fixed: if a thinning of about 19.32% (point S in Fig 22) is desired, the best possible value of springback to be expected is given by the Pareto frontier (0.34 mm in the case of point S).
Fig 22: Pareto frontier
Now the optimal value for the BHF is 18 tons in order to have compromise values of the thinning percentage and the springback. We now use a BHF of 18 tons for the stage 3 simulation and the subsequent forming simulations of the wheel disc.
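To make the DOE-plus-regression step concrete, the following is a minimal sketch (not part of the original study and independent of HyperForm/Radioss) of how the two quadratic response surfaces and the resulting trade-off table could be reproduced from Table 4.3 with off-the-shelf tools; the fitted coefficients will differ slightly from the stepwise Excel regression reported above.

import numpy as np

# DOE points from Table 4.3: blank holder force (tons), % thinning, springback (mm)
bhf        = np.array([24, 22, 20, 18, 16, 14, 12, 10, 8, 6, 4])
thinning   = np.array([26.198, 24.6, 22.3, 19.32, 18.5, 17.68, 15.6, 15.1, 14.32, 14.02, 13.32])
springback = np.array([0.16, 0.10, 0.21, 0.30, 0.44, 0.50, 0.62, 0.66, 0.67, 0.68, 0.69])

# Second-order response surfaces in the single design variable BHF
thin_model   = np.polyfit(bhf, thinning, 2)
spring_model = np.polyfit(bhf, springback, 2)

# Evaluate both objectives over candidate BHF values; since both responses are
# monotonic in BHF, every candidate is a Pareto point and the designer picks the
# compromise (here BHF = 18 t gives roughly 19-20 % thinning and 0.3-0.4 mm springback).
for b in range(4, 26, 2):
    t = np.polyval(thin_model, b)
    s = np.polyval(spring_model, b)
    print("BHF = %2d t  ->  thinning %5.2f %%, springback %4.2f mm" % (b, t, s))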
8. Stage3 simulation results for optimized value of BHF
All the boundary conditions applied will remain same except only BHF ie, 18 Ton applied.
Fig 23: Thinning percentage contour for stage3 (18 Ton)
Fig 26: Formability limiting diagram for stage3 (18 Ton)
1. Result interpretation of stage3
From Fig 23, for a BHF value of 18 tons, the percentage thinning is 19.3%, which is less than that observed for 24 tons. The maximum true stress of 668.7 MPa observed in Fig 24 is less than the ultimate true stress of 701.5 MPa, and from Figure 25 and Fig 26 we can observe that the component is safe.
1. Stage 4: Inner clipping
Forming in Stage4 involves circular clipping of the component at the centre as shown in Fig 27. This is achieved by pre-processing operations. Since clipping is a preprocessing operation,
stresses induced here will be carried over to the stage 5.
Fig 24: von Mises stress contour for stage3 (18 Ton)
Figure 25: FLD zone contour for stage3 (18 Ton)
Figure 27: Inner clipping
2. Stage 5: Raise boring
The raise boring setup is shown in Fig 28; the diameter of the hole at the centre is increased and the extreme edge of the hole is bent to 90°.
Figure 28: Finite element model for stage5
3. Result interpretation of stage5
From the von Mises contour and FLD plots shown in Fig 30 and Fig 32 respectively, we can observe that the component is safe.
Figure 29: Thinning contour for stage5
Fig 30: von Mises stress contour of Raise boring
Fig 31: FLD zone contour for stage5
Fig 32: Forming limiting diagram for stage5
4. Stage6: Flanging
Developed finite element model for flanging is shown in Fig 33.
Figure 33: Finite element model for stage6
Figure 34: Thinning percentage contour for stage6
Figure 35: von Mises stress contour for stage6
Figure 36: FLD zone contour for stage6
Figure 37: Formability limiting diagram for stage6
5. Result interpretation of stage6 (final stage)
A maximum true stress of 608 MPa is observed in Figure 35, which is under the ultimate true stress of 701.5 MPa, and there are also no elements in the failure zone, as observed from the forming limit diagrams shown in Figure 36 and Figure 37; hence the final formed wheel disc is safe.
7. CONCLUSION
Manufacturing complex components in multiple steps, like the wheel disc, demands numerical simulation to arrive at satisfactory tools and to identify the processing parameters involved in the forming process. FE modeling and numerical simulation using commercial FEA software (HyperForm) are demonstrated in the present work.
Multi-step forming simulation gives a clear idea of the excessive thinning, the magnitude of stress and the thickness variation in the formed part at each stage; problems can then be identified at the design stage and the necessary action taken against them.
In this work, excessive thinning was observed in stage 3 (reverse forming) of the wheel disc, and at the same time the springback effect was also considered.
In the present work a multi-objective optimization approach has been proposed in order to prevent the excessive thinning and the springback effect. The proposed optimization approach is an integration of the finite element method and the response surface method. With the optimal value of the design variable the forming steps were continued and the excessive thinning was brought down.
1. Y. Huang, Z.Y. Lo and R. Du, Minimization of the thickness variation in multi-step sheet metal stamping, Journal of Materials Processing Technology, Vol. 177, 2006, pp 84-86.
2. L. Marretta, G. Ingarao and R. Di Lorenzo, Design of sheet stamping operations to control springback and thinning: A multi- objective optimization approach, International Journal of
Mechanical Sciences, Vol. 52, 2010, pp 914-927.
3. Sergey F. Golovashchenko, Nicholas M. Bessonov and Andrey M. Ilinich, Two-step method of forming complex shapes from sheet metal, Journal of Materials Processing Technology, Vol. 211, 2011, pp 875-885.
4. T. Cwiekala, A. Brosius and A.E. Tekkaya, Accurate deep drawing simulation by combining analytical approaches, International Journal of Mechanical Sciences, Vol. 53, 2011, pp 374-386.
5. B.T. Tang, Z. Zhao, X.Y. Lu, Z.Q. Wang, X.W. Zhao, S.Y. Chen, Fast thickness prediction and blank design in sheet metal forming based on an enhanced inverse analysis method, International
Journal of Mechanical Sciences, Vol. 49, 2007, pp 1018-1028.
6. NAFEMS, Introduction to Non Linear Finite Element Analysis, edited by E. Hinton.
7. ALTAIR HyperWorks manufacturing solutions help guide.
8. ALTAIR Radioss help guide.
Springer Proceedings in Mathematics & Statistics serien - tales.dk
The aim of the workshop was to bring together researchers from two fields of probability theory: random matrix theory and the theory of iterated random functions.
This volume presents five surveys with extensivebibliographies and six original contributions on set optimization and its applicationsin mathematical finance and game theory.
This volume arose from the Third Annual Workshop on Inverse Problems, held in Stockholm on May 2-6, 2012. The proceedings present new analytical developments and numerical methods for solutions of
inverse and ill-posed problems, which consistently pose complex challenges to the development of effective numerical methods.
Geometrization and symmetries are meant in their widest sense, i.e., representation theory, algebraic geometry, infinite-dimensional Lie algebras and groups, superalgebras and supergroups, groups and
quantum groups, noncommutative geometry, symmetries of linear and nonlinear PDE, special functions, and others.
Applications to spaces of continuous functions, topological Abelian groups, linear topological equivalence and to the separable quotient problem are included and are presented as open problems. Each
chapter presents a lot of worthwhile and important recent theorems with an abstract discussing the material in the chapter.
Based on the workshop of the same name, this proceedings volume presents selected research investigating the mathematics of collective phenomena emerging from quantum theory at observable scales.
Research articles are devoted to broad complex systems and models such as qualitative theory of dynamical systems, theory of games, circle diffeomorphisms, piecewise smooth circle maps, nonlinear
parabolic systems, quadtratic dynamical systems, billiards, and intermittent maps.
This volume resulted from presentations given at the international "Brainstorming Workshop on New Developments in Discrete Mechanics, Geometric Integration and Lie-Butcher Series", that took place at
the Instituto de Ciencias Matematicas (ICMAT) in Madrid, Spain.
Based on the third International Conference on Symmetries, Differential Equations and Applications (SDEA-III), this proceedings volume highlights recent important advances and trends in the
applications of Lie groups, including a broad area of topics in interdisciplinary studies, ranging from mathematical physics to financial mathematics. The selected and peer-reviewed contributions
gathered here cover Lie theory and symmetry methods in differential equations, Lie algebras and Lie pseudogroups, super-symmetry and super-integrability, representation theory of Lie algebras,
classification problems, conservation laws, and geometrical methods. The SDEA III, held in honour of the Centenary of Noether's Theorem, proven by the prominent German mathematician Emmy Noether, at Istanbul Technical University in August 2017, provided a productive forum for academic researchers, both junior and senior, and students to discuss and share the latest developments in the theory and applications of Lie symmetry groups. This work has an interdisciplinary appeal and will be a valuable read for researchers in mathematics, mechanics, physics, engineering, medicine and finance.
This book features papers presented during a special session on algebra, functional analysis, complex analysis, and pluripotential theory.
Thanks to the accessible style used, readers only need a basic command of Calculus.This book will appeal to scientists, teachers, and graduate students in Mathematics, in particular Mathematical
Analysis, Probability and Statistics, Numerical Analysis and Mathematical Physics.
- In Honor of Paul A. M. Dirac, CART 2014, Tallahassee, Florida, December 15-17
This book, intended to commemorate the work of Paul Dirac, highlights new developments in the main directions of Clifford analysis.
The book is intended for all those who are interested in application problems related to dynamical systems. model of a kinetic energy recuperation system for city buses; experimental evaluation of
mathematical and artificial neural network modeling for energy storage systems;
- Toronto, Canada, June, 2016, and Kozhikode, India, August, 2016
This volume contains proceedings of two conferences held in Toronto (Canada) and Kozhikode (India) in 2016 in honor of the 60th birthday of Professor Kumar Murty. The meetings were focused on several
aspects of number theory: The theory of automorphic forms and their associated L-functions Arithmetic geometry, with special emphasis on algebraic cycles, Shimura varieties, and explicit methods in
the theory of abelian varieties The emerging applications of number theory in information technology Kumar Murty has been a substantial influence in these topics, and the two conferences were aimed
at honoring his many contributions to number theory, arithmetic geometry, and information technology.
This volume, whose contributors include leading researchers in their field, covers a wide range of topics surrounding Integrable Systems, from theoretical developments to applications. Comprising a
unique collection of research articles and surveys, the book aims to serve as a bridge between the various areas of Mathematics related to Integrable Systems and Mathematical Physics.Recommended for
postgraduate students and early career researchers who aim to acquire knowledge in this area in preparation for further research, this book is also suitable for established researchers aiming to get
up to speed with recent developments in the area, and may very well be used as a guide for further study.
This book highlights the latest advances in stochastic processes, probability theory, mathematical statistics, engineering mathematics and algebraic structures, focusing on mathematical models,
structures, concepts, problems and computational methods and algorithms important in modern technology, engineering and natural sciences applications.It comprises selected, high-quality, refereed
contributions from various large research communities in modern stochastic processes, algebraic structures and their interplay and applications. The chapters cover both theory and applications,
illustrated by numerous figures, schemes, algorithms, tables and research results to help readers understand the material and develop new mathematical methods, concepts and computing applications in
the future. Presenting new methods and results, reviews of cutting-edge research, and open problems and directions for future research, the book serves as a source of inspiration for a broad spectrum
of researchers and research students in probability theory and mathematical statistics, applied algebraic structures, applied mathematics and other areas of mathematics and applications of
mathematics.The book is based on selected contributions presented at the International Conference on "Stochastic Processes and Algebraic Structures - From Theory Towards Applications" (SPAS2017) to
mark Professor Dmitrii Silvestrov's 70th birthday and his 50 years of fruitful service to mathematics, education and international cooperation, which was held at Mälardalen University in Västerås and
Stockholm University, Sweden, in October 2017.
This book is the first volume of proceedings from the joint conference X International Symposium "Quantum Theory and Symmetries" (QTS-X) and XII International Workshop "Lie Theory and Its
Applications in Physics" (LT-XII), held on 19-25 June 2017 in Varna, Bulgaria.
This book presents the proceedings of the international conference Particle Systems and Partial Differential Equations V, which was held at the University of Minho, Braga, Portugal, from the 28th to
30th November 2016. It includes papers on mathematical problems motivated by various applications in physics, engineering, economics, chemistry, and biology. The purpose of the conference was to
bring together prominent researchers working in the fields of particle systems and partial differential equations, providing a venue for them to present their latest findings and discuss their areas
of expertise. Further, it was intended to introduce a vast and varied public, including young researchers, to the subject of interacting particle systems, its underlying motivation, and its relation
to partial differential equations. The book appeals to probabilists, analysts and also to mathematicians in general whose work focuses on topics in mathematical physics, stochastic processes and
differential equations, as well as to physicists working in the area of statistical mechanics and kinetic theory.
- On the Occasion of Shun-ichi Amari's 80th Birthday, IGAIA IV Liblice, Czech Republic, June 2016
The book gathers contributions from the fourth conference on Information Geometry and its Applications, which was held on June 12-17, 2016, at Liblice Castle, Czech Republic on the occasion of
Shun-ichi Amari's 80th birthday and was organized by the Czech Academy of Sciences' Institute of Information Theory and Automation.
This volume presents the latest advances and trends in nonparametric statistics, and gathers selected and peer-reviewed contributions from the 3rd Conference of the International Society for
Nonparametric Statistics (ISNPS), held in Avignon, France on June 11-16, 2016.
- PIMS Summer School and Workshop, July 27-August 5, 2016
The second was a combination of a summer school and workshop on the subject of "Geometric Methods in the Representation Theory of Finite Groups" and took place at the Pacific Institute for the
Mathematical Sciences at the University of British Columbia in Vancouver from July 27 to August 5, 2016.
It focuses on the theoretical, applied, and computational aspects of hyperbolic partial differential equations (systems of hyperbolic conservation laws, wave equations, etc.) and of related
mathematical models (PDEs of mixed type, kinetic equations, nonlocal or/and discrete models) found in the field of applied sciences.
- In Honor of Krishna Alladi's 60th Birthday, University of Florida, Gainesville, March 2016
Gathered from the 2016 Gainesville Number Theory Conference honoring Krishna Alladi on his 60th birthday, these proceedings present recent research in number theory.
Developed from the Second International Congress on Actuarial Science and Quantitative Finance, this volume showcases the latest progress in all theoretical and empirical aspects of actuarial science
and quantitative finance.
This book presents statistical processes for health care delivery and covers new ideas, methods and technologies used to improve health care organizations. | {"url":"https://tales.dk/boeger/serier/springer-proceedings-in-mathematics-statistics/","timestamp":"2024-11-04T18:58:25Z","content_type":"text/html","content_length":"117560","record_id":"<urn:uuid:655654f2-cac4-426a-8e18-0625be6e693f>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00337.warc.gz"} |
Distributional collision resistance beyond one-way functions
Distributional collision resistance is a relaxation of collision resistance that only requires that it is hard to sample a collision (x, y) where x is uniformly random and y is uniformly random
conditioned on colliding with x. The notion lies between one-wayness and collision resistance, but its exact power is still not well-understood. On one hand, distributional collision resistant hash
functions cannot be built from one-way functions in a black-box way, which may suggest that they are stronger. On the other hand, so far, they have not yielded any applications beyond one-way
functions. Assuming distributional collision resistant hash functions, we construct constant-round statistically hiding commitment scheme. Such commitments are not known based on one-way functions,
and are impossible to obtain from one-way functions in a black-box way. Our construction relies on the reduction from inaccessible entropy generators to statistically hiding commitments by Haitner et
al. (STOC ’09). In the converse direction, we show that two-message statistically hiding commitments imply distributional collision resistance, thereby establishing a loose equivalence between the
two notions. A corollary of the first result is that constant-round statistically hiding commitments are implied by average-case hardness in the class SZK (which is known to imply distributional
collision resistance). This implication seems to be folklore, but to the best of our knowledge has not been proven explicitly. We provide yet another proof of this implication, which is arguably more
direct than the one going through distributional collision resistance.
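In slightly more formal terms (a rough rendering of the notion described above; exact quantifiers and parameters vary between papers): for a hash function h, let COL_h denote the distribution over pairs (x, y) in which x is uniform and y is uniform over the preimage set h^{-1}(h(x)). A family H is distributional collision resistant if, for every probabilistic polynomial-time sampler A, the output distribution of A(h) has non-negligible statistical distance from COL_h when h is drawn from H. Plain collision resistance instead requires that no efficient algorithm outputs any colliding pair at all, which is why the distributional notion is the weaker of the two while, as noted above, still lying above one-wayness.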
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 11478 LNCS
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Conference 38th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Eurocrypt 2019
Country/Territory Germany
City Darmstadt
Period 19/05/19 → 23/05/19
Funders Funder number
Alon Young Faculty Fellowship
European Union’s Horizon 2020 research and innovation program 742754
Air Force Office of Scientific Research FA9550-15-1-0262
Horizon 2020 Framework Programme
Blavatnik Family Foundation
Iowa Science Foundation
European Research Council 638121
Israel Science Foundation 18/484
Horizon 2020
Pioneer acceleration and variation of light speed: experimental situation
The situation with respect to the experiments is presented of a recently proposed model that gives an explanation of the Pioneer anomalous acceleration $a_{\rm P}$. The model is based on an idea
already discovered by Einstein in 1907: the light speed depends on the gravitational potential $\Phi$, so that it is larger the higher $\Phi$ is. The potential due to all the mass and energy in the
universe increases in time because of its expansion, which has the consequence that light must be slowly accelerating. Moreover it turns out that the observational effects of a universal adiabatic
acceleration of light $a_\ell =a_{\rm P}$ and of an extra acceleration towards the Sun $a_{\rm P}$ of a spaceship would be the same: a blue shift increasing linearly in time, precisely what was
observed. The phenomenon would be due to a cosmological acceleration of the proper time of bodies with respect to the coordinate time. It is shown that it agrees with the experimental tests of
special relativity and the weak equivalence principle if the cosmological variation of the fine structure constant is zero or very small, as it seems now.
arXiv e-prints
Pub Date:
February 2004
□ General Relativity and Quantum Cosmology
33 pages, no figures | {"url":"https://ui.adsabs.harvard.edu/abs/2004gr.qc.....2120R/abstract","timestamp":"2024-11-12T20:53:37Z","content_type":"text/html","content_length":"37729","record_id":"<urn:uuid:be64be2d-5724-464a-9856-cf595a815d8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00775.warc.gz"} |
Boreal Math: Final Thoughts
Our grade 6 students have recently finished wrestling with the math problem:
"Are there enough trees in Canada's Boreal Forest to be the lungs of the earth?"
The question came from research that one of the Grade 6 teachers, Erin Couillard, found which stated that the northern boreal forest now produces more oxygen than any other forest in the world.
During the planning stage for this problem, our 2 grade 6 math/science teachers, Erin and Emily Brown, decided to structure it in a different way than they had previously.
Building on the ideas of Dan Meyer, the problem was introduced to the students with less formal structure than the teachers normally would have planned. This allowed the students to be more involved in the "formulation" of the problem than just the "computation" of the numbers. (Great video by Dan Meyer on the topic.) The openness of the problem allowed multiple entry points into the problem - since the math was only introduced after some brainstorming and problem solving by the students.
Due to this change in structure, far more students found a place to start with the problem. According to Erin Couillard, this is the first time she has presented a math problem to her students where all students knew how to get started on some piece of the problem. While normally there are a few students who approach the teacher for help, not knowing where or how to start, this was not the case with this problem.
Erin has also commented that she witnessed more sustained engagement with the problem - longer than she had seen before. Students spent over a week on the problem - and often self-organized into
small working groups, depending on the specific part of the problem they were wrestling with.
Why it worked:
In thinking through this problem, the teachers believed that a few elements increased the student buy-in and engagement:
Context Matters:
1. The problem was rooted in a context the students were already engaged
in and knowledgeable about. The class had already been studying the Boreal Forest as a science topic - and they were very familiar with the scientific and environmental aspects of the topic. This
deep background knowledge meant that students were able to understand the assumptions needed to work with the mathematics of the question. Students quickly raised questions such as:
• Are we talking about deciduous or coniferous trees?
• What about the difference in oxygen production between small trees and large trees?
• Don't different sized people use different amounts of oxygen?
• Isn't the population constantly changing?
• Isn't the size of the Boreal Forest changing?
• What about areas of the Boreal forest with lakes or rivers?
Erin has commented that these questions made the math more realistic and authentic. The students quickly realized that they were carrying out calculations on moving targets - and understood why
estimations and a critical understanding of information sources was important. During the 'classroom discussions' video - you can see how Erin embeds the importance of website credibility into the
mathematical discussions. This approach allowed students to generate the necessary assumptions, which they were asked to comment on when creating their final podcast explanations.
For Erin, this experience can reconfirmed how important the context for a math problem can be, as opposed to what often happens in a math classroom - the parachuting in of a disconnected word problem
. The students were fluent with the language, units and 'topography' of the problem - giving them a connectedness to the problem that helped with their engagement.
Space versus Structure:
2. The opening up of the problem allowed the students to focus on the problem solving first, and the calculations second. When the problem was presented to the students, there was some superfluous
background knowledge, and some information missing. This meant that all students were able to get started somewhere - even if it was just trying to find the current human population.
This opening up of the problem also meant that some students quickly bumped up against their own gaps in understanding or misconceptions of mathematics. Erin feels that a more structured problem,
similar to what she has given in the past, does not allow students the opportunity to determine which calculation is necessary - it's often laid out in the problem. The multiple ways to solve this
problem meant that student understanding of mathematic concepts became easily visible. Erin was able to easily see which students were struggling – and spend her time intervening with those students.
The Right Problem:
In the teacher interview Erin comments that this experience has changed everything for her as a math teacher. She no longer wants to teach skills in isolation - but will continue to look for rich
problems that will allow her to embed skill development in authentic and engaging contexts.
Additional Media:
This video captures some of the classroom discussions that occurred during the problem. In it you can see students starting with brainstorming necessary questions – and then working through the
various sub-questions that were needed to solve the question.
This second video is an interview conducted with Erin Couillard after the students had completed the problem. Emily Brown and Darrell Lonsberry (CSS Principal) are also present during the interview.
Watching this gives you a sense of the Erin’s thoughts about the problem – and where she wants to go next with her students.
There were two versions of the handout that got the students started. The teachers started with Version 1 – where they had planned to give the students all the sub-questions that they needed to
solve. Version 2 – the one that was used in the classroom – has all the sub-questions removed, and just supplied the students with some (but not all) of the background information needed to solve the
problem. You'll find both versions below: | {"url":"https://calgaryscienceschool.blogspot.com/2011/01/boreal-math-final-thoughts.html","timestamp":"2024-11-07T09:40:32Z","content_type":"application/xhtml+xml","content_length":"89555","record_id":"<urn:uuid:e5caac59-a0a4-47e7-b540-5fa875b71795>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00585.warc.gz"} |
Geoff McVittie - MATLAB Central
Geoff McVittie
Last seen: 2 months ago |  Active since 2014
Followers: 0 Following: 0
Contributions: 0 Questions, 5 Answers, 3 Files, 0 Problems, 32 Solutions
Sum all integers from 1 to 2^n
Given the number x, y must be the summation of all integers from 1 to 2^x. For instance if x=2 then y must be 1+2+3+4=10.
10 years ago
Bullseye Matrix
Given n (always odd), return output a that has concentric rings of the numbers 1 through (n+1)/2 around the center point. Exampl...
10 years ago
Pascal's Triangle
Given an integer n >= 0, generate the length n+1 row vector representing the n-th row of <http://en.wikipedia.org/wiki/Pascals_t...
10 years ago
Is this triangle right-angled?
Given three positive numbers a, b, c, where c is the largest number, return *true* if the triangle with sides a, b and c is righ...
10 years ago
Remove the vowels
Remove all the vowels in the given phrase. Example: Input s1 = 'Jack and Jill went up the hill' Output s2 is 'Jck nd Jll wn...
10 years ago
Remove any row in which a NaN appears
Given the matrix A, return B in which all the rows that have one or more <http://www.mathworks.com/help/techdoc/ref/nan.html NaN...
10 years ago
Most nonzero elements in row
Given the matrix a, return the index r of the row with the most nonzero elements. Assume there will always be exactly one row th...
10 years ago
Add two numbers
Given a and b, return the sum a+b in c.
10 years ago
Given a circular pizza with radius _z_ and thickness _a_, return the pizza's volume. [ _z_ is first input argument.] Non-scor...
10 years ago
A pangram, or holoalphabetic sentence, is a sentence using every letter of the alphabet at least once. Example: Input s ...
10 years ago
Cell joiner
You are given a cell array of strings and a string delimiter. You need to produce one string which is composed of each string fr...
10 years ago
Reverse the vector
Reverse the vector elements. Example: Input x = [1,2,3,4,5,6,7,8,9] Output y = [9,8,7,6,5,4,3,2,1]
10 years ago
Elapsed Time
Given two date strings d1 and d2 of the form yyyy/mm/dd HH:MM:SS (assume hours HH is in 24 hour mode), determine how much time, ...
10 years ago
Finding Perfect Squares
Given a vector of numbers, return true if one of the numbers is a square of one of the other numbers. Otherwise return false. E...
10 years ago
Roll the Dice!
*Description* Return two random integers between 1 and 6, inclusive, to simulate rolling 2 dice. *Example* [x1,x2] =...
10 years ago
Return the 3n+1 sequence for n
A Collatz sequence is the sequence where, for a given number n, the next number in the sequence is either n/2 if the number is e...
10 years ago
Summing digits
Given n, find the sum of the digits that make up 2^n. Example: Input n = 7 Output b = 11 since 2^7 = 128, and 1 + ...
10 years ago
Make a checkerboard matrix
Given an integer n, make an n-by-n matrix made up of alternating ones and zeros as shown below. The a(1,1) should be 1. Example...
10 years ago
Create times-tables
At one time or another, we all had to memorize boring times tables. 5 times 5 is 25. 5 times 6 is 30. 12 times 12 is way more th...
10 years ago
Fibonacci sequence
Calculate the nth Fibonacci number. Given n, return f where f = fib(n) and f(1) = 1, f(2) = 1, f(3) = 2, ... Examples: Inpu...
10 years ago
Who Has the Most Change?
You have a matrix for which each row is a person and the columns represent the number of quarters, nickels, dimes, and pennies t...
10 years ago
Determine whether a vector is monotonically increasing
Return true if the elements of the input vector increase monotonically (i.e. each element is larger than the previous). Return f...
10 years ago
Triangle Numbers
Triangle numbers are the sums of successive integers. So 6 is a triangle number because 6 = 1 + 2 + 3 which can be displa...
10 years ago
Swap the first and last columns
Flip the outermost columns of matrix A, so that the first column becomes the last and the last column becomes the first. All oth...
10 years ago
Find the sum of all the numbers of the input vector
Find the sum of all the numbers of the input vector x. Examples: Input x = [1 2 3 5] Output y is 11 Input x ...
10 years ago
Make the vector [1 2 3 4 5 6 7 8 9 10]
In MATLAB, you create a vector by enclosing the elements in square brackets like so: x = [1 2 3 4] Commas are optional, s...
10 years ago
Select every other element of a vector
Write a function which returns every other element of the vector passed in. That is, it returns the all odd-numbered elements, s...
10 years ago | {"url":"https://www.mathworks.com/matlabcentral/profile/authors/5297866?detail=cody","timestamp":"2024-11-03T15:17:50Z","content_type":"text/html","content_length":"119745","record_id":"<urn:uuid:90f3bbab-b5e6-4d6e-9aff-b2ff7f446873>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00010.warc.gz"} |
Select a figure from the optics which will continue the same series a
Convert bit, byte, KB, MB, GB and TB
This page uses the traditional definition where one kilobyte is 1024 bytes, one megabyte is 1024 kilobytes, and so on. If you want to convert the prefixes as they are defined by the International System of Units (SI), where each step is worth 1000 instead of 1024, you will have to use an SI prefix converter tool instead.
The bit is the smallest unit of storage on a computer. The word is derived from the term binary digit, which means a digit that has two possible values, 0 or 1.
A computer can normally not operate directly on chunks of data that are smaller than one byte. Historically the number of bits in a byte could vary depending on the computer hardware, but nowadays it
is as good as always a synonym for 8 bits which is also the assumption on this page.
The meaning of kilobyte, megabyte, gigabyte and terabyte are not always obvious. In other fields kilo normally means 1000 but the computer world has traditionally used a different definition where it
means 1024. This has to do with computers having an easier time working with numbers that are powers of two (1024 is 2^10). To avoid confusion the units are sometimes called kibibyte (KiB), mebibyte
(MiB), gibibyte (GiB) and tebibyte (TiB). | {"url":"https://onlinetoolz.net/bitbyte","timestamp":"2024-11-09T15:28:19Z","content_type":"text/html","content_length":"8451","record_id":"<urn:uuid:7f88f65b-41ad-435c-a308-25afdddb20cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00804.warc.gz"} |
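As a sketch of the convention this page uses (1 byte = 8 bits, and each step up the scale worth 1024), a conversion could be implemented like this (illustrative only; the site does not publish its own code):

def to_binary_units(n_bytes):
    # Format a byte count using the traditional 1024-based steps (KB, MB, GB, TB)
    units = ["bytes", "KB", "MB", "GB", "TB"]
    value = float(n_bytes)
    for unit in units:
        if value < 1024 or unit == "TB":
            return "%.2f %s" % (value, unit)
        value /= 1024

# e.g. to_binary_units(5 * 1024 * 1024) returns '5.00 MB'; divide a bit count by 8 first to get bytes.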
Gcse maths function machines explained
gcse maths function machines explained
Author Message
aboemo Posted: Saturday 30th of Dec 19:35
Hey, Yesterday I began solving my math assignment on the topic Basic Math. I am currently not able to complete the same since I am not familiar with the basics of algebra
formulas, like denominators and geometry. Would it be possible for anyone to assist me with this?
espinxh Posted: Sunday 31st of Dec 11:51
This is a common problem; don’t let it get to you. You will get adjusted with gcse maths function machines explained in a couple of weeks. Till then you can use Algebrator to
help you with your assignments.
LifiIcPoin Posted: Monday 01st of Jan 08:41
Algebrator is a nice thing. I have used it a lot. I tried solving the questions myself, at least once before using the software. If I couldn’t solve the question then I used the
software to give me the solution. I then used to compare both the answers and correct my errors.
sonanqdlon Posted: Monday 01st of Jan 15:10
Friends , Thanks a lot for the responses that you have offered. I just had a look at the Algebrator available at: https://linear-equation.com/solution-of-the-equations.html. The
best part that I liked was the money back guarantee that they are extending there. I went ahead and purchased Algebrator. It is really easy to handle and proves to be a
noteworthy tool for Algebra 2.
3Di Posted: Wednesday 03rd of Jan 08:53
This is the site you are looking for: https://linear-equation.com/solving-polynomial-equations.html. They guarantee a secure money-back policy. So you have nothing to lose.
Go ahead and Good Luck!
Jrahan Posted: Friday 05th of Jan 07:31
I remember having often faced difficulties with conversion of units, perpendicular lines and side-angle-side similarity. A really great piece of math program is Algebrator
software. By simply typing in a problem from workbook a step by step solution would appear by a click on Solve. I have used it through many algebra classes – College Algebra,
Intermediate algebra and Basic Math. I greatly recommend the program. | {"url":"https://linear-equation.com/linear-equation-graph/difference-of-squares/gcse-maths-function-machines.html","timestamp":"2024-11-09T11:07:37Z","content_type":"text/html","content_length":"91803","record_id":"<urn:uuid:5d7b0b24-d52f-419f-ace8-263952463f7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00856.warc.gz"}
Galvagni Figures & Reid Figures for Hexominoes
A hexomino is a figure made of six squares joined edge to edge. A Galvagni figure is a figure that can be tiled by a polyform in more than one way—a kind of self-compatibility figure. A Reid figure
is a Galvagni figure without holes.
Most of the hexomino Galvagni and Reid figures were found by Michael Reid of the University of Central Florida and Erich Friedman of Stetson University. Corey Plover found the 12-tile Galvagni figure.
See also Galvagni & Reid Figures for Pentominoes, Galvagni & Reid Figures for Heptominoes, and Galvagni & Reid Figures for Octominoes.
Galvagni Figures
[Figures omitted: Galvagni figures of 2, 3, 4, 6, 12, 26, and 44 tiles, and mirror variants.]
Reid Figures
Level Galvagni Figures
Level polyominoes may be reflected orthogonally and rotated 180° but not 90°.
For a discussion of restricted-symmetry polyominoes see Alexandre Owen Muñiz's A Polyformist's Toolkit.
Holeless Variants
Last revised 2019-04-29.
Col. George Sicherman | {"url":"http://www.recmath.org/PolyCur/ngal/n6gal.html","timestamp":"2024-11-03T09:31:02Z","content_type":"text/html","content_length":"2681","record_id":"<urn:uuid:74c3d2d5-a519-4a14-a4f9-c4c9ccfb1f96>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00554.warc.gz"}
Pretty squiggles: graphing almost periodic functions
Here’s an image that came out of something I was working on this morning. I thought it might make an interesting border somewhere.
The blue line is sin(x), the green line 0.7 sin(φ x), and the red line is their sum. Here φ is the golden ratio (1 + √5)/2. Even though the blue and green curves are both periodic, their sum is not
because the ratio of their frequencies is irrational. So you could make this image as long as you’d like and the red curve would never exactly repeat.
Update: See Almost periodic functions
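For anyone who wants to reproduce the figure, here is a short matplotlib sketch of the three curves described above; the colors match the description, but the x-range and styling of the original image are guesses.

```python
import numpy as np
import matplotlib.pyplot as plt

phi = (1 + np.sqrt(5)) / 2        # golden ratio
x = np.linspace(0, 40, 2000)

blue = np.sin(x)                  # sin(x)
green = 0.7 * np.sin(phi * x)     # 0.7 sin(phi x)
red = blue + green                # their sum, which never exactly repeats

plt.plot(x, blue, "b", label="sin(x)")
plt.plot(x, green, "g", label="0.7 sin(phi x)")
plt.plot(x, red, "r", label="sum")
plt.legend()
plt.show()
```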
11 thoughts on “Pretty squiggles”
1. Little known fact: If you play the red curve through speakers you will hear the voice of God
2. Very interesting although I can’t say the “border” appeals to me aesthetically.
A few other perhaps surprising facts about sums of periodic functions:
(a) The sum of two periodic functions can be a periodic function with a frequency higher than both constituents.
(b) It is possible to write the function f(x) = x as the sum of two periodic functions.
(c) It is impossible to write x² as the sum of two periodic functions.
3. @IJ (a) seems straight forward but (b) is blowing my mind a little. I assume the periodic functions aren’t continuous?
4. @Steven, that’s right. In fact, the functions are so strange you need the Axiom of Choice to assert their existence.
5. Do you have a reference? It seems to me any periodic function defined everywhere must be bounded, and the sum of two bounded functions must be bounded… What am I missing?
6. I don’t have a solid reference handy but take a look here:
7. As far as I know, the fact that actual computers have finite precision implies that the ratio of the constituents frequency will be rational, given that each frequency will be a multiple of some
negative power of two or ten. Therefore, their sum will in fact be periodic. Irrational numbers can only be represented symbolically in computers.
8. @I. J. Kennedy, what is it that means you can write f(x) = x as the sum of two periodic functions, but not f(x) = x²? Is it to do with x being uniformly continuous but x² not?
@John D. Cook, presumably the red line comes arbitrarily close to being periodic, in some sense, in the same way that rational combinations of 1 and sqrt(2) can come arbitrarily close to being
9. I feel like I should be able to figure this out by myself, but just to confirm — when you say “the curve will never exactly repeat,” are you simply saying it’s not (globally) periodic, or that
there exists no two non-empty intervals that match exactly?
10. The math is beyond me (as are most of the other comments here!) but I attempted to implement a SAS-language version of the squiggles. Thanks for the fun exercise — and let me know if I’ve got it
11. http://i1091.photobucket.com/albums/i395/mapperx/11SUNANDJDC01_zpsa0815d74.png
I DABBLE IN THE ART OF CONFORMAL MAPPING
AND USED YOUR SHIP IN SOME WORK. | {"url":"https://www.johndcook.com/blog/2013/02/09/pretty-squiggles/","timestamp":"2024-11-13T12:37:07Z","content_type":"text/html","content_length":"66317","record_id":"<urn:uuid:bf503a42-f999-49c7-9d05-422321cbdf71>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00057.warc.gz"} |
Thoughts On Sensible Programs For Thinkster Math Cost
You and your little ones can have loads of fun with maths! Table Under Stress – a multiplication game where a fire lights a bomb if time runs out. Euclidean geometry is a branch of mathematics that deals with the properties and relations of points, lines, and planes. Players give a number while the person leading the game applies a mystery rule and tells the players what the new number is.
However, Aristotle also noted that a focus on quantity alone might not distinguish mathematics from sciences like physics; in his view, abstraction and studying quantity as a property « separable in thought » from actual instances set mathematics apart.

Along with a condensed list for download at the end of this list, listed below are 15 useful and mostly-free math websites for teachers and 5 you'll be able to share with students. For example, students in lower grades will play two cards, subtracting the lower number from the upper.
The Latest On Necessary Aspects In Thinkster Math
Students should pass the ball clockwise around the circle, and the one who started with it must answer the question before receiving it once more. Students at one school district mastered 68% more math skills on average after they used Prodigy Math.

Dux Math – This game has players click on the number that corresponds to the solution to an addition or subtraction equation. We may also answer some frequently asked questions regarding the history of mathematics and highlight important figures who have advanced our understanding of it today.

Topics include algebra 1 and 2, geometry, and trigonometry. Solve Math – a game which asks players to create the formula which arrives at a given solution to a problem. Math Playground is a brightly coloured and fun website that kids are sure to enjoy exploring.
A Look At Quick Plans In Thinkster Math
Children identify fractions to split up wiggly worm sandwiches in this silly math game. A FREE downloadable games and activities pack, including 20 home learning maths activities for KS2 pupils to complete on their own or with a partner.

Click here to download a condensed list of helpful math websites for teachers and students, which you can keep on your desk for quick reference. Math Game for Kids – a multiple choice game where players have up to 3 seconds to choose the correct answer.

Mathball Roll – a physics-based game which asks players to roll balls toward even or odd containers based on the number on the ball. This website is different from the other websites for kids' math learning because it is a visual program that does not make use of words. | {"url":"https://www.praveena.fr/thoughts-on-sensible-programs-for-thinkster-math-cost/","timestamp":"2024-11-08T17:12:21Z","content_type":"text/html","content_length":"48367","record_id":"<urn:uuid:ba8c68cf-903d-4a43-9d92-1a36404273c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00405.warc.gz"}
Thermal Simulation Predictions of Fracture Toughness
Local Approach Predictions of Fracture Toughness Behaviour in Real Multi Pass Welds using Thermal Simulation Specimens
Annette D. Karstensen ^(1), Anthony Horn ^(2) and Martin Goldthorpe ^(1)
^(1)TWI, Cambridge, UK
^(2)Corus, UK
Paper presented at 2nd International Symposium on High Strength Steel (PRESS), Stiklestad, Verdal, Norway, 23-24 April, 2002
This paper is concerned with using the Beremin Model to predict cleavage failure within the grain coarsened heat affected zone (GCHAZ) of a multi-pass weld. The GCHAZ and the intercritically reheated
grain coarsened heat affected zone (ICGCHAZ) are considered to be the most important microstructures of welds with regard to cleavage fracture. The material parameters of the Beremin Model are
measured using small-scale single edge notch bend (SENB) specimens. These specimens are thermally cycled to simulate the thermal history of the welding process in order to reproduce the tensile and
toughness properties of the GCHAZ. The material parameters are subsequently used to predict cleavage fracture in SENB specimens of real multi-pass welds. The predicted CTOD values are slightly lower
than those obtained in the experiments. At lower failure probabilities the prediction is reasonable, but at higher probabilities the under-estimate is more significant.
1. Introduction
Micromechanical or Local Approach models such as the Beremin Model ^[1] can be used to predict the conditions required for cleavage fracture in structural components. These models use a combination
of finite element analysis and the results of small-scale experimental tests. One advantage of the Beremin Model, compared with predictions made using conventional fracture mechanics, is that the
predictions are probabilistic rather than deterministic. A further advantage is that the model does not suffer restrictions encountered by conventional fracture mechanics with regard to the transfer
of results between the small-scale specimen and the large-scale structure. For example, differences in plastic constraint between different cracked structures, leading to differences in apparent
values of conventional fracture toughness, are fully taken into account using the Beremin Model. This means that highly pessimistic assessments using conventional fracture mechanics can be avoided
for defects situated in regions of low constraint.
2. The Beremin model of cleavage failure
In the Beremin Model^[1] it is assumed that, for a particular load applied to a structure, the cumulative probability of failure by cleavage is described by a Weibull distribution as follows:

$$P_f = 1 - \exp\left[-\left(\frac{\sigma_w}{\sigma_u}\right)^m\right] \qquad [1]$$

where σ_w is the Weibull stress defined below, and σ_u and m are known as the Weibull parameters that are assumed to be characteristic of the material and independent of temperature. The stress σ_u is the scale parameter of the Weibull distribution, and m is the shape parameter describing the scatter of the distribution.

The Weibull stress σ_w is defined by

$$\sigma_w = \left[\frac{1}{V_o}\int_{V_p} \sigma_1^{\,m}\,\mathrm{d}V_p\right]^{1/m} \qquad [2]$$

The above integration is taken over the volume of the plastic zone V_p (usually associated with a stress raising feature such as a notch or a crack). V_o is a material volume of such a size that there is a finite probability that it contains an existing micro-crack that can trigger cleavage fracture. V_o is usually set equal to an arbitrary size of (100 µm)^3 since its precise value does not affect the predicted results. σ_1 is the maximum principal stress acting on the volume element dV_p.

In a finite element analysis the calculation of Weibull stress in Eq.[2] is actually carried out as follows:

$$\sigma_w = \left[\sum_{j=1}^{n} \frac{\Delta V_j}{V_o}\left(\sigma_1^{\,j}\right)^{m}\right]^{1/m} \qquad [3]$$

where the summation is taken over the n finite element integration stations within the instantaneous plastic zone, ΔV_j is the volume associated with integration station j and σ_1^j is the maximum principal stress there.
3. Method of determining the Weibull parameters
In order to make predictions of cleavage failure in engineering structures, it is necessary to determine the Weibull parameters of the constituent material, or materials, most likely to fail by
cleavage. This is usually accomplished by carrying out cleavage tests using a relatively large number of notched or pre-cracked specimens of the material to obtain precise values of the Weibull
parameters. Tests should be done at a sufficiently low temperature in order to obtain pure cleavage fracture, avoiding any pre-cleavage ductile tearing. Accompanying finite element simulations of
these tests are also needed, as described in more detail below.
As noted earlier, this work is concerned with the GCHAZ of a multi-pass weld. Cleavage tests are therefore carried out on several SENB specimens that have previously undergone a thermal simulation of
the welding process. A finite element analysis of the test geometry is subsequently undertaken in order to calculate the Weibull stress at the point of failure of each specimen. Weibull stresses are
calculated for each test using a range of trial values of m in Eq.[3]. The appropriate Weibull parameters of the material are then determined using the maximum likelihood method described more fully in Ref.^[2].
In the present work the CTOD is used as the linking parameter. The value is calculated for the test results and the finite element solution using the equations given in BS 7448:Part 1: 1991. ^[3] In
this way, Weibull stresses at the point of failure of each test are based partly on the measured load and partly on the crack mouth opening displacement.
4. Parent plate material
The parent material is a quenched and tempered grade 450EMZ bainitic steel in the form of 50mm thick plate. This is a structural steel used in offshore applications. It is produced to meet the
requirements of BS 7191:1991 and has a specified minimum yield strength of 450N/mm ^2. The main chemical components are given in Table 1.
Table 1 Chemical composition of grade 450EMZ bainitic steel
C Si Mn P S Cr Mo
0.10 0.28 1.20 0.011 0.002 0.02 0.14
5. Small-scale thermal simulation test specimens
5.1. Thermal simulation and specimen preparation
Fifty-two single edge notch bend (SENB) 11x11 mm specimens are machined from the parent plate. The GCHAZ microstructure is simulated using a Gleeble 1500 thermal simulator. The thermal cycle is
programmed to give a heating rate of 470°C/s, a peak temperature of 1350°C and a heat input of 3.5 kJ/mm for a hold time of 0.3s. After thermal simulation the specimens are ground to a cross section
of 10x10 mm, notched in the rolling direction and pre-cracked by fatigue.
5.2. Cleavage fracture toughness tests
The cleavage test results are shown in terms of CTOD versus temperature in Fig.1. Three of the specimens show non-linear δ_u type behaviour. One specimen exhibits pop-in. The remaining specimens show predominantly linear behaviour prior to failure by cleavage, giving valid values of δ_c, the critical CTOD at the onset of brittle crack extension with ductile tearing Δa less than 0.2 mm, according to BS 7448:Part 1.
5.3. Finite element modelling
The SENB specimen is modelled using version 5.8-8 of the finite element program ABAQUS.^[4] By taking advantage of two planes of symmetry, only one quarter of the
specimen is modelled using the three-dimensional finite element mesh shown in Fig.2. Eight-noded, first order brick elements of type C3D8H are used throughout the mesh. Six rows of elements are used
to represent the half thickness of the specimen. The elements are thinner near the free surface to represent the loss of constraint due to plane stress conditions.
The crack is modelled as a very narrow notch with a semi-circular tip to accommodate the effect of blunting. The mesh refinement increases near the crack tip to improve the accuracy of results and
therefore the accuracy of the Weibull stress. The first element in front of the crack has a size of 37µm.
Fig.2. The finite element mesh of the GCHAZ thermal simulation 10x10mm SENB specimens: complete mesh (top) and mesh in crack tip region (below)
Appropriate boundary conditions are applied to restrain the specimen and properly represent the two planes of symmetry. The loading is applied by means of prescribed nodal displacements.
A large strain, elastic-plastic incremental analysis is carried out of the loading of the specimen at each of the three test temperatures of -80°C, -100°C and -120°C. Figure 3 shows the true stress
versus true plastic strain behaviour used at these three temperatures. These curves are interpolated from tensile tests carried out for a range of test temperatures.
Approximately 120 increments of prescribed displacement loading are used to reach the maximum level of deformation measured at failure during the associated tests. By modelling a knife edge 2mm above
the specimen surface ( Fig.2), the crack mouth opening displacement (CMOD) is determined for each increment. The CTOD is then calculated from the numerical clip gauge records according to BS
7448:Part 1, as for the actual tests. The highest prescribed displacement applied in the analyses gives a value of CTOD just beyond the maximum measured in the cleavage tests. At each load increment
results are written to file for later post-processing.
5.4. Determination of Weibull parameters
A purpose-written computer program is used to post-process the three finite element analyses. Results for the Weibull stress at each load increment are calculated using Eq.[3] for a range of trial values of the Weibull parameter m. In Eq.[3] the maximum principal stress, σ_1^j, is taken to be the highest value achieved during the previous history of loading of each finite element integration station contained in the zone of plastic deformation around the crack front. Only two of the three parameters of the Beremin Model (the shape parameter m, the scaling parameter σ_u, and the material volume V_o) are independent, thus V_o is arbitrarily taken to be (100 µm)^3.

A set of Weibull stresses at the point of failure of each cleavage test is determined by linking the value of CTOD, measured at the point of cleavage failure during the test, with the corresponding CTOD calculated in the analysis at the same temperature. This involves the interpolation of results between load increments. The resulting three sets of Weibull stresses at failure (corresponding to the three test temperatures) are combined into one set and ranked according to the value of the Weibull stress. This statistical sample of Weibull stress is then used in conjunction with the maximum likelihood method to determine, by iteration, the values of m and σ_u that best describe the experimental sample of Weibull stress. The process is described fully in Ref.^[2]. It usually involves two or three iterations to arrive at the final values of m and σ_u. The resulting values of m and σ_u are given in Table 2.
Table 2 Results for Weibull parameters of the Beremin Model for grain coarsened heat affected zone material resulting from cleavage tests on small-scale thermal simulation SENB specimens.
m     σ_u (N/mm²)     V_o (µm³)
14    3150            100³
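The maximum likelihood step behind Table 2 can be sketched numerically as follows. This is a generic two-parameter Weibull fit (shape m and scale σ_u, with the location fixed at zero) applied to a sample of Weibull stresses at failure; it is not the specific iterative scheme of Ref.^[2], and the sample values below are placeholders.

```python
import numpy as np
from scipy.stats import weibull_min

# Placeholder sample of Weibull stresses at failure (N/mm^2), one per test
sigma_w_fail = np.array([2850.0, 2920.0, 3010.0, 3080.0, 3150.0,
                         3190.0, 3260.0, 3340.0, 3410.0, 3520.0])

# Maximum likelihood fit of the two-parameter Weibull distribution
m_hat, _, sigma_u_hat = weibull_min.fit(sigma_w_fail, floc=0)
print(f"m = {m_hat:.1f}, sigma_u = {sigma_u_hat:.0f} N/mm^2")
```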
6. Multi-pass weld test specimens
6.1. Welding procedure
Full-scale submerged arc butt welds are produced. Special weld procedures, designed to maximise the amount of GCHAZ, are used for the welding.
The weld preparation used is a half 'K' weld. This gives a vertical edge to the weld into which the pre-crack is machined, thus maximising the length of the target GCHAZ microstructure ahead of the
crack tip.
6.2. Testing of SENB specimens
A total of 13 full-thickness, 50x50mm, SENB fracture toughness specimens are machined from the welded plate. After final machining, the thickness of the specimens actually ranges from 47.4mm to 50mm. Each specimen is surface notched to place the crack in the GCHAZ of the vertical fusion boundary. The specimens are tested according to BS 7448:Part 1. The ratio of fatigue crack depth to section width (a/W) varies from about 0.24 to 0.31. One specimen is tested at -70°C, two at -100°C, eight at -130°C, one at -160°C and one at -190°C.
After the tests the broken specimens are sectioned and scanning electron micrographs taken to determine the point of cleavage initiation and the subsequent fracture. These investigations are
described fully in Ref ^[6] .
6.3. Finite element modelling
The geometry of the weld, including the width of the HAZ, is measured on the macro section. These measurements are used to construct finite element models of the SENB specimen. Slightly different
models are set up for each temperature in order to match as closely as possible the average ratio of crack depth to width at the temperature.
By taking advantage of the plane of symmetry through the mid-thickness, only one half thickness of the specimen is modelled. Figure 5 shows the mesh. Eight-noded, first order brick elements of type
C3D8H are used throughout the mesh. Six rows of elements are used to represent the half thickness of the specimen. The elements are thinner near the free surface to better represent the loss of
The crack is modelled as a very narrow notch with a semi-circular tip to accommodate the blunting that occurs there. The mesh refinement is increased near the crack tip to improve the accuracy of
results there for Weibull stress. This is done by constructing a box of size 1.25x1.25mm (in plan view) around the crack tip, within which the elements are heavily focused towards the crack tip. The
first element in front of the crack has a size of 5.3µm. The region modelled with GCHAZ properties extends the full thickness of the mesh (along the whole half crack front modelled), about 1.9mm
ahead of the crack front and 0.95mm on either side of the plane of the crack. As noted below, this region dominates the cleavage fracture behaviour.
Appropriate boundary conditions are applied to restrain the specimen and properly represent the two planes of symmetry. The loading is applied by means of prescribed nodal displacements. The maximum
displacement applied ensures that a failure probability of at least 90% is achieved in all analyses (see the next sub-section).
A large strain, elastic-plastic incremental analysis is carried out of the loading of the specimen at each of the five test temperatures of -70°C, -100°C, -130°C, -160°C and -190°C. True stress
versus plastic strain curves are obtained at all temperatures and as an example Figure 6 shows the true stress versus true plastic strain behaviour used for the parent, weld and GCHAZ at -130°C.
A knife edge is modelled 2mm above the specimen surface to allow the CMOD to be determined at each load increment. The CTOD is calculated from the numerical clip gauge records according to BS
7448:Part 2; ^[7] the same as the actual tests. At each load increment results are written to file for later post-processing.
6.4. Determining the probability of cleavage failure
A purpose-written computer program is used to post-process the finite element analyses. Results for the predicted failure probability at each load increment are calculated by means of Eq.[3] followed by Eq.[1], using the values of m, σ_u and V_o given in Table 2. The volume integral in Eq.[3] is carried out only within the GCHAZ region included in the model ahead of the crack front. This limited region of integration is acceptable, since the principal stresses in this crack tip region provide the dominant contribution to the Weibull stress through Eq.[3]. Including the stresses outside the crack tip neighbourhood makes no significant difference to the result.
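A minimal sketch of how the predicted failure-probability curves of Fig.7 can be read off at the 10% and 90% levels is given below; the (CTOD, P_f) pairs stand in for the post-processed load-increment results and are placeholder values, not output from the present analyses.

```python
import numpy as np

# Placeholder load-increment results: CTOD (mm) and predicted failure
# probability from Eqs [3] and [1] at each increment (illustrative only)
ctod = np.array([0.01, 0.02, 0.04, 0.08, 0.15, 0.25, 0.40])
p_f = np.array([0.01, 0.05, 0.15, 0.40, 0.70, 0.88, 0.97])

# CTOD values at which the predicted failure probability reaches 10% and 90%
for target in (0.10, 0.90):
    print(f"P_f = {target:.0%}: CTOD ≈ {np.interp(target, p_f, ctod):.3f} mm")
```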
7. Results and discussion
Figure 7 compares the predictions with the test results in terms of CTOD versus temperature.
The solid symbols in Fig.7 show the results of the SENB tests that fail by cleavage. The two curves link the values of CTOD that should give cleavage failure probabilities of 10% and 90% according to
the finite element predictions using the Beremin Model. The curve showing the 10% level of predicted failure probability gives a reasonable, though slight under-estimate, of the experimental results
for CTOD, since none of the 13 test results lie beneath it. The 90% level of predicted failure probability gives some under-estimate of the test results. Three results out of the 13 lie above the
curve, rather than the one or two that might be expected. This under-estimate of CTOD is caused by the over-estimate of failure probability for a given CTOD.
The over-estimate of failure probability, particularly at the 90% level can occur for a number of different reasons:
1. Imprecise values of the Weibull parameters of the weld GCHAZ caused, possibly, by scatter in the experimental results due to variation of the tensile properties not taken into account in the finite
element analyses.
2. The use of a uniform 1.9mm wide 'box' of GCHAZ material ahead of the crack. These worst case values of inherent toughness relevant to the GCHAZ might be applied over a volume that is actually
larger than occurs in practice, and will give the greatest over-predictions of failure at the highest CTOD, as witnessed in the present study.
3. The presence of small amounts of crack tip ductile damage and possible tearing in the tests with the highest CTOD. If present, this damage is not taken into account in the finite element analyses
of the weld SENB specimens.
Reason i) cannot be readily eliminated or properly taken into account. However, it is possible that the inherent scatter in tensile properties, within the GCHAZ in particular, is taken somewhat into
account in the Weibull parameters. That is, variations in hardness and so yield strength are manifest as variations in inherent toughness and so a reduction in the value of m. Reason ii) can be
remedied by determining the Weibull stress by integration of Eq.[3] over a larger volume about the crack tip. However, Weibull parameters for the parent and weld should strictly be used as well as
GCHAZ. Finally, item iii) can be taken into account by using a more comprehensive constitutive model of material behaviour that includes ductile damage, but this was outside the scope of the present
8. Summary and conclusions
This study reported here uses the Beremin Model to predict cleavage failure within the grain coarsened heat affected zone of a multi-pass weld.
The following conclusions are reached:
1. The predicted 10% level of failure probability gives a reasonable, though slight under-estimate of the experimental results for CTOD in the real weld specimens at the various temperatures.
2. The 90% level of predicted failure probability gives some under-estimate of the test results. Three results out of the 13 lie above the curve, rather than the one or two that might be expected.
3. The predictions have shown benefits in using specimens made from simulated GCHAZ microstructure to obtain fracture toughness values for real multi-pass welds, as evident from the agreement between the actual test results and predictions.
9. References | {"url":"https://www.twi-global.com/technical-knowledge/published-papers/local-approach-predictions-of-fracture-toughness-behaviour-in-real-multi-pass-welds-using-thermal-simulation-specimens","timestamp":"2024-11-03T18:59:47Z","content_type":"text/html","content_length":"164066","record_id":"<urn:uuid:847deb65-81f2-4eba-b86f-af396858900f>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00890.warc.gz"} |
Ab Initio Calculation of Fluid Properties for Precision Metrology
Recent advances regarding the interplay between ab initio calculations and metrology are reviewed, with particular emphasis on gas-based techniques used for temperature and pressure measurements.
Since roughly 2010, several thermophysical quantities – in particular, virial and transport coefficients – can be computed from first principles without uncontrolled approximations and with
rigorously propagated uncertainties. In the case of helium, computational results have accuracies that exceed the best experimental data by at least one order of magnitude and are suitable to be used
in primary metrology. The availability of ab initio virial and transport coefficients contributed to the recent SI definition of temperature by facilitating measurements of the Boltzmann constant
with unprecedented accuracy. Presently, they enable the development of primary standards of thermodynamic temperature in the range 2.5–552 K and pressure up to 7 MPa using acoustic gas thermometry,
dielectric constant gas thermometry, and refractive index gas thermometry. These approaches will be reviewed, highlighting the effect of first-principles data on their accuracy. The recent advances
in electronic structure calculations that enabled highly accurate solutions for the many-body interaction potentials and polarizabilities of atoms – particularly helium – will be described, together
with the subsequent computational methods, most often based on quantum statistical mechanics and its path-integral formulation, that provide thermophysical properties and their uncertainties. Similar
approaches for molecular systems, and their applications, are briefly discussed. Current limitations and expected future lines of research are assessed.
1. Introduction
On May 20, 2019, the base SI units of mass (kilogram), electric current (ampere), temperature (kelvin) and amount of substance (mole) were redefined by assigning fixed values to fundamental constants
of nature: the Planck constant, the electron charge, the Boltzmann constant, and the Avogadro constant, respectively.^1–3 By decoupling the base units from specific material artifacts, this new
redefinition is expected to lead to improved scientific instruments, reducing the degradation in accuracy when measuring quantities at larger or smaller magnitudes than a predefined unit standard.
Additionally, the most accurate experimental technique available at each scale can be used to implement a primary standard, resulting in easier calibrations, increased accuracies of measuring
devices, and further technological advancements.
At many conditions, gas-based techniques provide unparalleled performance for primary measurements of temperature and pressure. These involve acoustic, dielectric, or refractivity measurements,
because frequency and electromagnetic measurements can be made with very high accuracy. A model, typically expressed as the ideal-gas behavior with a series of corrections in powers of density, is
used to relate the measured quantity to the temperature or pressure; in the case of dielectric or refractivity measurements, one set of corrections relates the measured quantity to the gas density
and the familiar virial expansion is used to relate the density to the pressure and temperature.
These gas-based methods have been greatly facilitated in recent years by the ability to perform ab initio calculations of the thermophysical properties (such as the polarizability and the density,
dielectric, and refractivity virial coefficients) of the working gases with no uncontrolled approximations and rigorously defined uncertainties. These calculated properties often have much smaller
uncertainties than the best experimental determinations, especially when the gas considered is helium. These techniques have been successfully applied for pressures up to 7 MPa and for thermodynamic
temperatures in the range (2.5–552) K (with extension to 1000 K or more progressing^4).
These achievements have been facilitated by the increase in supercomputing power and advances in numerical techniques for electronic structure calculations. For example, state-of-the-art calculations
for up to three He atoms even include relativistic and quantum electrodynamics effects. In particular, these numerical investigations produce pair and three-body potentials, as well as single-atom,
pair, and three-body polarizabilities, with unprecedented accuracy.
Building on these results, the exact quantum statistical mechanics formulation enabled rigorous calculations of the coefficients appearing in the density (virial) expansion of the equation of state,
the speed of sound, the dielectric constant, and the refractive index. The path-integral Monte Carlo (PIMC) method has been shown to provide sufficient accuracy for these quantities. As a
consequence, it has been possible to devise a fully first-principles chain of calculations with rigorous uncertainty propagation to compute virial coefficients of helium gas.
As a result of these endeavors, since about 2010 thermophysical properties of gaseous helium have been known from theory with an accuracy that in most cases surpasses that of the most precise
experimental determinations. Currently, the uncertainties of the ab initio second and third virial coefficients of helium are at least one order of magnitude smaller than the experimental ones. The
situation is similar for the density dependence of the speed of sound, the dielectric constant, and the refractive index, where it leads to improved accuracy in Acoustic Gas Thermometry (AGT),
Dielectric Constant Gas Thermometry (DCGT), and Refractive Index Gas Thermometry (RIGT), respectively.
Section 2 describes these gas-based experimental techniques for temperature and pressure measurement, including their operating principles, temperature and pressure ranges, recent technological
improvements, and the sources of uncertainty. We highlight the ways in which theoretical knowledge, in the form of ab initio polarizabilities and virial coefficients, has improved these measurements
by reducing significant components of the uncertainty.
First-principles calculations of virial coefficients involve two steps: the ab initio electronic structure calculation of interatomic potentials and/or polarizabilities, followed by the solution of
the exact quantum statistical equations describing virial coefficients.
We therefore present in Sec. 3 a critical review of the state of the art of non-relativistic, relativistic, and quantum electrodynamic electronic structure calculations, with particular emphasis on
the determination of uncertainties. Our primary focus will be on helium – which is currently the only substance for which computations can be performed that consistently exceed the accuracy of the
best experiments – but other noble gases will be briefly covered due to their importance in metrology. For the sake of completeness, we will recall the hierarchy of physical theories involved in
quantum chemical calculations, with particular emphasis on the Full Configuration Interaction (FCI) approach, which is exact within a given orbital basis set and is currently feasible for systems
with up to ten electrons. Relativistic and quantum electrodynamic effects (expressed as expansions in powers of the fine-structure constant) have been crucial for achieving the extremely low
uncertainty of the latest helium calculations, and are also progressively important in describing larger atoms (notably, neon and argon). Additionally, the evaluation of electronic polarizabilities
and magnetic susceptibilities will be discussed. All of these theoretical advances will be exemplified for the case of helium, where we will present the current state of the art regarding interaction
potentials and many-body polarizabilities.
Knowledge of interaction potentials and polarizabilities enables calculation of the coefficients appearing in the virial expansion of the equation of state, the speed of sound, the dielectric
constant, and the refractive index, which are crucial ingredients in the uncertainty budgets of AGT, DCGT, and RIGT. In the past 15 years, the path-integral approach to quantum statistical mechanics
has been successfully applied in calculating virial coefficients without uncontrolled approximations. The main features of this method are reviewed in Sec. 4, with particular attention to the
question of uncertainty propagation from the potentials and the polarizabilities. In the case of pair properties, an alternative method based on the solution of the Schrödinger equation is available
and provides mutual validation of the path-integral results, as well as enabling the calculation of transport properties. Most of this review is focused on thermodynamic properties, but ab initio
calculations also provide viscosity and thermal conductivity. We briefly review how this leads to improvements in flow-rate measurements.
Although most efforts have been devoted to noble gases, highly accurate theoretical calculations are also available for molecular systems and have the potential to enable a similar paradigm shift in
some metrological applications. We describe in Sec. 5 the present situation in the first-principles calculation of molecular properties, and point out a few areas where computational contributions
are expected to have an increasing impact in the near future, namely humidity metrology, measurements of very low pressures, and atmospheric science. We end our review in Sec. 6, where future
perspectives and an overview of the status of highly accurate ab initio property calculations will be presented.
2. Primary Metrology and Thermophysical Properties
2.1. Paradigm reversal in temperature metrology
Traditionally, accurate measurements of temperature-dependent thermophysical properties of gases [such as: second density virial coefficient B(T), viscosity η(T), thermal conductivity λ(T)] have been
used to determine parameters in evermore-refined models for interatomic and intermolecular potentials. This tradition/paradigm can be traced back to the 18th century when “… Bernoulli had proposed
that in Boyle’s law the specific volume v be replaced by (v − b), where b was thought to be the volume of the molecules.”^5 During the past 25 years, the accuracy of the calculated thermophysical
properties of the noble gases (particularly helium) has increased dramatically. An example is shown in Fig. 1, which shows how the accuracy of the second virial coefficient B(T) of ^4He improved with
time. The data plotted are for temperatures near T[Ne]. (T[Ne] ≡ 24.5561 K is the defined temperature of the triple point of neon on the international temperature scale, ITS-90.^6) Since the year
2012, the uncertainty of B(T[Ne]), as calculated ab initio, has been smaller than the uncertainty of the best measurements of B(T[Ne]).
The paradigm reversal (replacing measured thermophysical properties of helium with calculated thermophysical properties) applies to zero-density values of the viscosity η(T), thermal conductivity λ(T
), and ^3He–^4He mutual diffusion coefficient as well as to the density and acoustic virial coefficients, relative dielectric permittivity (dielectric constant) ɛ[r](p, T), relative magnetic
permittivity μ[r](p, T), and refractive index $n(p,T)=\sqrt{\varepsilon_r\mu_r}$. For many of these properties, the values calculated for helium are standards that are used to calibrate apparatus that measures the same
properties for other gases.
The paradigm reversals for ɛ[r](p, T) and n(p, T) have been combined with technical advances in the measurement of ɛ[r](p, T) and n(p, T) to develop novel pressure standards. One standard operating
at optical frequencies and low pressures (100 Pa $≤p≤$ 100 kPa) is more accurate than manometers based on liquid columns (see Sec. 2.3.1 and Ref. 18). Other standards operating at audio and microwave
frequencies and higher pressures (100 kPa $≤p≤$ 7 MPa) have enabled exacting tests of mechanical pressure generators based on the dimensions of a rotating piston in a cylinder (see Sec. 2.3.2 and
Refs. 19 and 20). At still higher pressures (up to 40 MPa), the values of helium’s density calculated from the virial equation of state (VEOS) have been used to calibrate magnetic suspension
densimeters.^21 A more accurate high-pressure scale may result. In Sec. 2.5, we will comment on ab initio calculations of transport properties and their contribution to improved flow metrology.
During the past 25 years, the accurate calculations of the thermophysical properties of the noble gases have strongly interacted with gas-based measurements of the thermodynamic temperature T. To put
this in context, we compare in Fig. 2 the evolution of “consensus” temperature metrology with “thermodynamic” temperature metrology.^22
In Fig. 2, the squares represent estimates of the relative uncertainties u[r](T[scale]) of the consensus temperature scales disseminated by National Metrology Institutes (NMIs). We plot the values of
u[r](T[scale]) near the boiling point of water at intervals of roughly 20 years. Most of the points are at years when the NMIs agreed to disseminate a new consensus scale that was either a better
approximation of thermodynamic temperatures and/or an extension of the consensus scale to higher and lower temperatures. The most recent scale is the “International Temperature Scale of 1990”
(ITS-90), and temperatures measured using ITS-90 are denoted T[90].^6 The data underlying ITS-90 are constant-volume gas thermometry (CVGT) and spectral radiation thermometry linked to CVGT.^23 The
pre-1990 CVGT was based on the ideal-gas equation of state, as corrected by virial coefficients either taken from the experimental literature or measured during the CVGT. Post-1990 thermometry,
together with ab initio calculations of virial coefficients, revealed that the authors of ITS-90 were unaware that errors in ITS-90 exceeded their expanded (k = 2) uncertainty by roughly a factor of
two. (See Fig. 3 and the discussion at the end of Sec. 2.2.1).
In Fig. 2, the circles represent the relative uncertainty of determinations of the Boltzmann constant u[r](k[B]). To determine k[B], one measures the mean energy k[B]T per degree-of-freedom of a
system in thermal equilibrium at the thermodynamic temperature T. During the interval 1960–2019, the thermodynamic temperature of the triple point of water was defined as T[TPW] ≡ 273.16 K, exactly.
Thus, measurements of k[B]T that were conducted near T[TPW] had a negligible uncertainty from T and u[r](k[B]T) was an excellent proxy for u[r](T), the uncertainty of measurements of T under the most
favorable conditions.
As displayed in Fig. 2, u[r](T[scale]) decreased from $∼10$ to $∼2$ ppm (1 ppm ≡ 1 part in 10^6) during the 20th century. Also during the 20th century, the relative uncertainty u[r](k[B]) decreased
from $∼20000$ to $∼2$ ppm. Thus, u[r](k[B]) ≫ u[r](T[scale]) for most of the 20th century, even though k[B] was a “fundamental” constant and, therefore, a worthy challenge for metrology.
Between 1973 and 2017, AGT measurements decreased the uncertainty of u[r](k[B]) 100-fold from $∼40$ to $∼0.4$ ppm.^24,25 By 2017, DCGT achieved the uncertainty u[r](k[B]) = 1.9 ppm and Johnson noise
thermometry achieved u[r](k[B]) = 2.7 ppm.^25
In 1995, Aziz et al.^7 argued that the values of the thermal conductivity λ(T), viscosity η(T), and second density virial coefficient B(T) of helium, as calculated using ab initio input, were more
accurate than the best available measurements of these quantities. Subsequently, helium-based AGT measurements of k[B] relied on ab initio values of λ(T) to account for the thermoacoustic boundary
layer. Just before the Boltzmann constant was defined in 2019, the lowest-uncertainty measurements of k[B] used either the ab initio value of thermal conductivity of helium λ[He](273.16 K) or the
value of λ[Ar](273.16 K) that was deduced from ratio measurements using λ[He](273.16 K) as a standard.^26,27
In 2019, the unit of temperature, the kelvin, was redefined by assigning the fixed numerical value 1.380649 × 10^−23 to the Boltzmann constant, k[B], when k[B] is expressed in the unit J K^−1.^2,3
Thus, the Boltzmann constant can no longer be measured. However, the thermodynamic temperature of the triple point of water now has an uncertainty of a few parts in 10^7, although the best current
value is still 273.16 K.^20
As discussed in the next section, the techniques for measuring thermodynamic temperatures are evolving rapidly. They are becoming more accurate and easier to implement. We anticipate NMIs will
disseminate thermodynamic temperatures instead of ITS-90 at temperatures below 25 K. This would not be possible without the accurate ab initio values of the thermophysical properties of helium.
2.2. Gas thermometry
2.2.1. Acoustic gas thermometry
During the past two decades, AGT has emerged as the most accurate primary thermometry technique over the temperature range 7–552 K, achieving uncertainties as low as 10^−6T. AGT experiments were
instrumental in measuring the Boltzmann constant for the redefinition of the kelvin,^28 and have revealed small, systematic errors in the ITS-90.^20,23 The construction of ITS-90 and the definition
of k[B] force T[90] and T to be essentially equal at the temperature of the triple point of water T[TPW]; however, the derivative dT[90]/dT ≈ 1.0001 at T[TPW]. Figure 3 provides evidence that ITS-90
has errors of $∼25×10−6T$ near water’s boiling point and ∼−35 × 10^−6T near 173 K. This section is necessarily brief; for an in-depth review of AGT, the reader is referred to Ref. 29.
The underlying principle of AGT is the relationship between thermodynamic temperature, T, and the thermodynamic speed of sound, u, in a gas:

$$u^2(p,T) = \frac{\gamma_0 k_B T}{m}\left[1 + \beta_a(T)\,p + \gamma_a(T)\,p^2 + \cdots\right], \qquad (1)$$

where k_B = R/N_A is the Boltzmann constant, R is the molar gas constant, N_A is the Avogadro constant, m is the average molecular mass of the gas, γ_0 is the limiting low-pressure value of c_p/c_v, where c_p and c_v are the isobaric and isochoric heat capacities, respectively (this ratio is exactly 5/3 for a monatomic gas), p is the gas pressure, and β_a(T), γ_a(T), … are the temperature-dependent acoustic virial coefficients. Helium-4 or argon gas is typically used, as these are considerably less expensive than other noble gases and available in ultra-pure forms, although xenon has also been used.
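To make the single-pressure use of the expansion concrete, here is a minimal numerical sketch that inverts Eq. (1) for T given a measured u² at one pressure. The constants k_B and N_A are the exact SI values and γ_0 = 5/3 for a monatomic gas; the ^4He molar mass is approximate, and the acoustic virial coefficients and pressure are hypothetical placeholders rather than ab initio values.

```python
from scipy.optimize import brentq

K_B = 1.380649e-23          # J/K, exact (SI definition)
N_A = 6.02214076e23         # 1/mol, exact (SI definition)
GAMMA0 = 5.0 / 3.0          # c_p/c_v for a monatomic ideal gas
M_HE4 = 4.002602e-3 / N_A   # kg per atom: approximate molar mass of 4He over N_A

def u2_model(T, p, beta_a, gamma_a):
    """Speed of sound squared from the acoustic virial expansion, Eq. (1)."""
    return (GAMMA0 * K_B * T / M_HE4) * (1.0 + beta_a * p + gamma_a * p**2)

def temperature_from_u2(u2_meas, p, beta_a, gamma_a):
    """Single-pressure absolute AGT: solve Eq. (1) for T."""
    return brentq(lambda T: u2_model(T, p, beta_a, gamma_a) - u2_meas, 1.0, 1000.0)

# Hypothetical illustration: synthesize a "measurement" and recover T
p = 100e3                        # Pa
beta_a, gamma_a = 2.0e-9, 0.0    # 1/Pa and 1/Pa^2, placeholder values
u2_meas = u2_model(273.16, p, beta_a, gamma_a)
print(f"T = {temperature_from_u2(u2_meas, p, beta_a, gamma_a):.4f} K")
```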
Most modern realizations of primary AGT determine the speed of sound from the resonance frequencies of the acoustic normal modes in a cavity resonator of fixed and stable dimensions. Resonators have
been manufactured from copper, aluminum, and stainless steel, with internal volumes between 0.5 and 3 l. Cavity shapes have either been spherical, quasi-spherical (with smooth, deliberate deviations
from sphericity), or cylindrical. The use of diamond turning to produce quasi-spherical resonators (QSRs) with extremely accurate forms (∼1 μm) and smooth surfaces (average surface roughness on the
order of 3 nm) has significantly improved performance.^32 In spherical geometries, the best results are obtained from the radially symmetric acoustic modes, since these possess high quality factors
and are relatively insensitive to imperfections in the cavity shape. In cylindrical geometries, the longitudinal plane-wave modes are typically favored.
Two distinct methods of primary AGT exist: absolute and relative. In the absolute method, T is determined by using the defined value of k_B and by fitting Eq. (1) to measurements of u²(p) to obtain T, β_a(T), γ_a(T), …. Alternatively, when accurate ab initio values of β_a(T), γ_a(T), … are available, T can be determined from Eq. (1) using a measurement of u²(p) at a single pressure. The terms k_B and γ_0 are known exactly; m must be determined by an auxiliary experiment; and u² is calculated from the radial acoustic mode frequencies, f_a, of the QSR:

$$u = \frac{2\pi\,(f_a + \Delta f_a)}{z_a}\left(\frac{3V}{4\pi}\right)^{1/3},$$

where z_a are the acoustic eigenvalues, Δf_a is the sum of the acoustic corrections, and V is the cavity volume. If the longitudinal mode frequencies of a cylindrical cavity are used, the term proportional to V^{1/3} is replaced with a multiple of the cylinder length.
Improvements in QSR volume measurements are perhaps the most significant innovation in AGT in the last two decades, and were driven by efforts to redetermine the Boltzmann constant for the
redefinition of the kelvin. Modern AGT systems measure the frequency, f_m, of microwave resonances in the cavity, which are related to the volume through the equation

$$V = \frac{4\pi}{3}\left[\frac{c\,z_m}{2\pi\,n\,(f_m + \Delta f_m)}\right]^{3},$$

where c is the speed of light in vacuum, n is the refractive index of the gas in the cavity, Δf_m is the sum of the electromagnetic corrections, and z_m are the microwave eigenvalues. The microwave modes do not occur in isolation, being at least three-fold degenerate in perfectly spherical cavities. The smooth deformations of the QSR shape lift these
degeneracies, enabling accurate measurement of the individual mode frequencies. A key theoretical result is that (to first order) the mean frequency of these mode groups is unaffected by
volume-preserving shape deformations.
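A minimal numerical sketch of this dimensional-metrology step follows, assuming the idealized spherical-cavity relations written above with the corrections Δf set to zero; the eigenvalues and frequencies are placeholder values rather than data from any particular resonator.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s (exact)

def volume_from_microwave(f_triplet_hz, z_m, n_gas=1.0):
    """Estimate V from the mean frequency of a (degeneracy-lifted) microwave
    mode group, for an idealized sphere with corrections neglected."""
    f_mean = np.mean(f_triplet_hz)
    radius = C * z_m / (2.0 * np.pi * n_gas * f_mean)
    return (4.0 / 3.0) * np.pi * radius**3

def speed_of_sound(f_a_hz, z_a, volume_m3):
    """Radial acoustic mode in the same cavity: u = 2*pi*f_a*a/z_a."""
    radius = (3.0 * volume_m3 / (4.0 * np.pi)) ** (1.0 / 3.0)
    return 2.0 * np.pi * f_a_hz * radius / z_a

# Placeholder measurements (illustrative only)
f_triplet = [2.4531e9, 2.4534e9, 2.4537e9]           # Hz, nearly degenerate triplet
V = volume_from_microwave(f_triplet, z_m=2.744)      # placeholder eigenvalue
u = speed_of_sound(13.10e3, z_a=4.493, volume_m3=V)  # placeholder radial mode
print(f"V = {V*1e3:.3f} L, u = {u:.1f} m/s")
```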
In diamond-turned QSRs, the relative uncertainty in V from the microwave method can be less than 1 × 10^−6.^34 This was made possible by improvements in theory,^35 resonator shape accuracy, and
studies of small perturbations due to probes.^34 Recently, it has been demonstrated that comparable uncertainties can be achieved with low-cost microwave equipment.^36,37 Accurate microwave
dimensional measurements have also been performed in cylindrical acoustic resonators.^38
Relative primary AGT measures thermodynamic temperature ratios:

T/T[ref] = lim[p→0] [w^2(T, p)/w^2(T[ref], p)],

where w(T[ref], p) is the measured speed of sound at a known reference temperature T[ref]. Most AGT determinations of (T − T[90]) use the relative method. The main advantages are that the molecular mass term, γ[0]k[B]/m, cancels in the ratio, and that only the relative volume V(T)/V(T[ref]) need be measured. Also, many small perturbations to the acoustic and microwave frequencies (e.g., due to shape deformations) either fully or partially cancel in the ratio. As a result, excellent results can be obtained using resonators with modest form accuracies that would be unsuited to absolute AGT. The disadvantages are that relative AGT propagates underlying errors and uncertainty in T[ref], and can require the apparatus to operate over a wide temperature range when no suitable reference points are nearby.
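A minimal sketch of this ratio method is shown below; the isotherm data are synthetic (not measurements), all acoustic corrections are assumed to have been applied already, and a low-order polynomial fit supplies the zero-pressure intercepts.

```python
# Minimal sketch of relative AGT: the zero-pressure intercepts of w^2(p)
# on two isotherms give T/T_ref directly, so m and the cavity volume cancel.
# The data below are synthetic placeholders, not measurements.
import numpy as np

def zero_pressure_intercept(pressures, w2_values, order=2):
    """Fit w^2(p) to a low-order polynomial and return the p -> 0 intercept."""
    coeffs = np.polyfit(pressures, w2_values, order)
    return coeffs[-1]                      # constant term of the fit

# Synthetic isotherms (Pa, m^2/s^2); a real experiment would use many modes.
p = np.array([50e3, 100e3, 200e3, 300e3, 400e3])
w2_ref = 945_700.0 * (1 + 9.8e-9 * p)      # "measured" at T_ref = 273.16 K
w2_unk = 1_038_700.0 * (1 + 9.0e-9 * p)    # "measured" at the unknown T

T_ref = 273.16                             # K, e.g., the triple point of water
T = T_ref * zero_pressure_intercept(p, w2_unk) / zero_pressure_intercept(p, w2_ref)
print(f"T = {T:.4f} K")                    # close to 300 K for these synthetic data
```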
In both absolute and relative primary AGT, maintaining gas purity is of critical importance. Impurities will shift the average molecular mass of the gas, and hence the speed of sound, by an amount
that depends on the mass contrast between the bulk gas and impurity. For example, the speed of sound in helium is ∼16 times more sensitive to water vapor than it is in argon. Impurities can either be
present in the gas source or arise from outgassing or leaks in the apparatus itself.
Relative AGT requires only that m remain unchanged between the measurements at T and T[ref]. Temperature dependence in m can arise through several mechanisms: impurities such as water, hydrocarbons,
or heavy noble gases can be condensed out at low temperatures; higher temperatures (>500 K) cause significant outgassing from the walls of steel resonators.^39 Gas purity is vastly improved by
maintaining a flow of gas (typically <50 μmol/s) through the resonator and supply manifold.
Absolute AGT has more stringent requirements on gas purity than relative AGT. To determine an accurate value for m, both the isotopic abundance of the gas and any residual impurities must be
quantified. Reactive impurities, including water, can be removed from the source gas using gas purifiers, and noble gas impurities can be removed from helium using a cold trap.^26 The isotopic ratios
^36Ar/^40Ar and ^38Ar/^40Ar in argon, and ^3He/^4He in helium, have been determined by mass spectrometry, and vary significantly from source to source.^40 Alternatively, isotopically pure ^40Ar gas
can be used, although this is only available in small quantities and at great expense.^41
The low uncertainty of the AGT technique arises from the excellent agreement between acoustic theory and experiment. The simplicity of Eq. (2) hides a number of temperature-, pressure-, and
mode-dependent corrections that constitute the term Δf[a]. The largest of these are the thermoacoustic boundary layer corrections, which arise from an irreversible heat exchange between the
oscillating gas and resonator walls.^41,42 This effect both lowers the frequency of the acoustic resonances and broadens them; a valuable cross-check of experiment and theory can be made by comparing
the predicted and measured resonance widths. The radial-mode boundary layer correction in QSRs is approximately proportional to the square root of the gas thermal conductivity – in cylinders, the gas
viscosity also features in the correction.^43 For most temperature ranges, the uncertainty in these parameters can be considered negligibly small for both helium and argon due to improved ab initio
calculations (see Sec. 2.5).
AGT measurements are typically conducted on isotherms in a pressure range between 25 and 500 kPa, with the optimum pressure range depending on several factors such as the type of gas, temperature,
and particular details of the apparatus.^44 At low pressures, the accuracy in determining f[a] is compromised by weak acoustic signals, interference from neighboring modes due to resonance
broadening, and the need to account for details of the interaction of the gas with the resonator’s walls.^45 At high pressures, higher-order virial terms are required to account for molecular
interactions, and the elastic recoil of the resonator walls becomes increasingly significant. The shell recoil effect, which shifts f[a] in proportion to gas density,^46 is difficult to predict in
real resonators^47,48 because of the complex mechanical properties of the joint(s) formed when the cavity resonator is assembled.
For this and other reasons, it is not common practice to use Eq. (1) to determine T from w; instead, the measured data are fitted to low-order polynomials that account for the virial coefficients and
perturbations that are proportional to pressure. Isotherm measurements have the advantage of data redundancy and reduced uncertainty, but are very slow to execute, with each pressure point taking
several hours. Single-state AGT,^49 which utilizes low-uncertainty ab initio calculations of β[a] and γ[a] in helium, offers a much faster means of primary thermometry.
Figure 3 compares AGT measurements from five countries with ITS-90. The AGT data indicate that ITS-90 has an error of $∼25×10−6T$ near water’s boiling point and ∼−35 × 10^−6T near 173 K. Near T[TPW],
the derivative dT[90]/dT ≈ 1 + 1.0 × 10^−4. This implies that heat-capacity measurements made using ITS-90 will generate values of the heat capacity that are 0.01% larger than the true heat capacity.
However, we are not aware of heat capacity measurement uncertainties as low as 0.01%.
Prior to the AGT publications shown in Fig. 3, Astrov et al. corrected an estimate used in their CVGT. They had used measurements of the linear thermal expansion of a metal sample to estimate the
thermal expansion of the volume of their CVGT “bulb.” Using additional expansion measurements, Astrov et al. corrected their T − T[90] results. They now agree, within combined uncertainties, with the
AGT data.^56 (Because AGT uses microwave resonances to measure the cavity’s volume in situ, it is not subject to errors from auxiliary measurements of thermal expansion.)
2.2.2. Dielectric constant gas thermometry
DCGT, developed in the 1970s in the U.K.
and later improved by PTB,
is now a well-established method of primary thermometry. The basic idea of DCGT is to replace the density in the equation of state of a gas by the relative permittivity (dielectric constant) ɛ[r] and to measure it by the relative capacitance changes at constant temperature:

[C(p) − C(0)]/C(0) ≈ (ɛ[r] − 1) + ɛ[r]κ[eff]p.

In this relation, C(p) is the capacitance of the capacitor at pressure p, C(0) that at p = 0 Pa, and κ[eff] is the effective isothermal compressibility which accounts for the dimensional change of the capacitor due to the gas pressure. In the low-pressure (ideal gas) limit, the working equation can be simply derived by combining the classical ideal-gas law and the Clausius–Mossotti equation, (ɛ[r] − 1)/(ɛ[r] + 2) = A[ɛ]ρ:

p = (RT/A[ɛ]) (ɛ[r] − 1)/(ɛ[r] + 2),

with the molar polarizability A[ɛ]. For a real gas in a general formulation including electric fields, both input equations are power series:

p = ρRT[1 + B(T)ρ + C(T)ρ^2 + D(T)ρ^3 + ⋯],    (7)

(ɛ[r] − 1)/(ɛ[r] + 2) = A[ɛ]ρ[1 + b(T)ρ + c(T)ρ^2 + d(T)ρ^3 + ⋯],

where B(T), C(T), and D(T) are the second, third, and fourth density virial coefficients, respectively, ρ is the molar density, and b(T), c(T), and d(T) describe the density dependence of the molar polarizability. In the literature, the quantities b, c, d and A[ɛ]b, A[ɛ]c, A[ɛ]d are both called the second, third, and fourth dielectric virial coefficient, respectively. The form used in the expansion above comes from the tradition of DCGT of factoring out A[ɛ], so that b, c, and d have the same units as B, C, and D. Conversely, ab initio calculations naturally provide the quantities A[ɛ]b, A[ɛ]c, and A[ɛ]d.
The DCGT working equation is obtained by eliminating the density using Eq. (7) and the Clausius–Mossotti expansion, and by substituting ɛ[r] with the relative capacitance change according to the first relation above. This leads to a power expansion in terms of Ξ = Δɛ[r]/(Δɛ[r] + 3) = (ɛ[r] − 1)/(ɛ[r] + 2):

p = (RT/A[ɛ]) Ξ (1 + ⋯).    (10)
The higher-order terms contain combinations of both the dielectric and density virial coefficients and the compressibility. Equation (10) up to the fourth order can be found in Ref. 61.
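The following minimal sketch shows the classical isotherm evaluation under simplifying assumptions: the κ[eff] correction is taken as already applied so that ɛ[r](p) is known directly, the synthetic isotherm is generated in the ideal-gas limit, and only the leading fit coefficient is used to obtain T.

```python
# Minimal sketch of classical (isotherm) DCGT evaluation, assuming the
# kappa_eff correction has already been applied so that eps_r is known at
# each pressure; the data below are synthetic, not measurements.
import numpy as np

R     = 8.314462618          # J/(mol K)
A_EPS = 0.5175e-6            # m^3/mol, molar polarizability of He (approx.)

def dcgt_temperature(pressures, eps_r, order=2):
    """Fit p = a1*Xi + a2*Xi^2 + ... with Xi = (eps_r-1)/(eps_r+2);
    in the zero-density limit a1 = R*T/A_eps, which yields T."""
    xi = (eps_r - 1.0) / (eps_r + 2.0)
    powers = np.vstack([xi ** k for k in range(1, order + 1)]).T
    coeffs, *_ = np.linalg.lstsq(powers, pressures, rcond=None)
    return coeffs[0] * A_EPS / R

# Synthetic isotherm at 100 ... 700 kPa generated from the ideal-gas limit.
p = np.linspace(1e5, 7e5, 7)
T_true = 273.16
xi_true = A_EPS * p / (R * T_true)                 # Clausius-Mossotti, ideal gas
eps_r = (1.0 + 2.0 * xi_true) / (1.0 - xi_true)
print(f"T = {dcgt_temperature(p, eps_r):.4f} K")   # recovers ~273.16 K
```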
DCGT works as a primary thermometer if the molar polarizability A[ɛ] and virial coefficients contained in Eq. (10) are known from fundamental principles or independent measurements with sufficient
accuracy. The effective compressibility κ[eff] is also required. For classical DCGT, where isotherms are measured and the data are extrapolated to zero pressure via least-squares fitting, only A[ɛ]
and κ[eff] are mandatory. This was the way thermodynamic temperature was determined for decades.^57,59,60 Consequently, in classical DCGT, ab initio data on virial coefficients serve as a consistency
check or conversely DCGT is used for determination of virial coefficients to check theory.^61 Since the theoretical calculations of the virial coefficients for helium improved drastically, it is now
possible to use higher-order virial coefficients from theory to reduce the number of fitting coefficients or even to use the working equation directly without fitting and to determine temperature at
each pressure point via the rearranged working equation. Recently, all three approaches have been tested and compared.^62 The point-by-point evaluation, in particular, represents a paradigm shift; at the moment it is possible only for helium, where the uncertainty of the ab initio calculations, especially of the second density virial coefficient, is small enough. Nevertheless, for other gases not only the virial coefficients but also the molar polarizabilities determined via DCGT have uncertainties comparable to or smaller than those of ab initio calculations.^63 This is a field where theory can still improve, and such work has already started with calculations of A[ɛ] for neon^64,65 and for argon.^66
DCGT was operated in the temperature range from 2.5 K to about 273 K using helium-3, helium-4, neon,^67 and argon.^68 All noble gases have the advantage that the molar polarizability is independent
of temperature at a level of precision far beyond that of state-of-the-art experiments.^69
Besides the use of dielectric measurements in primary thermometry, accurate determinations of polarizability and virial coefficients of noble gases and molecules using gas-filled capacitors have a
much longer tradition. These setups, very similar to DCGT, use thermodynamic temperature as one of the input parameters. A complete overview of measurements cannot be given here. Already a very broad
overview of existing data, partly at radio frequencies, was summarized by NBS in the 1950s.^70 In the following decades,^71 different institutes with changing teams performed measurements until the
early 1990s.^72 In the year 2000, NIST started measurements on gases using capacitors resulting in the most accurate values for the measured molecules.^73,74 Very recently, PTB established a setup
for separate measurement of dielectric and density virial coefficients using a combination of Burnett expansion techniques and DCGT.^75 The focus of this setup is the determination of properties of
energy gases such as hydrogen-methane mixtures in the context of the transition to renewable energy. The setup will also provide lower-uncertainty tests of the ab initio calculations of the
dielectric and density virial coefficients of the noble gases.
For primary thermometry, most significant recent improvements in DCGT have been achieved by independent determination of κ[eff] using resonant ultrasound spectroscopy around 0°C and an optimal
choice of capacitor materials.^76 For the Boltzmann experiment, with measurement pressures of up to 7 MPa, tungsten carbide was the ideal choice, while at low temperatures beryllium copper was used together with an extrapolation method. Relative uncertainties for κ[eff], expressed in terms of temperature, at the level of 1 ppm near 0 °C have been achieved. Equally important are the improvements in pressure
measurement. In contrast to AGT, where pressure is a second-order effect, in DCGT ɛ[r] is directly linked to pressure. Therefore, the relative uncertainty in pressure is transferred to a relative
uncertainty in temperature. The major steps here are discussed in Sec. 2.3.2 regarding the mechanical pressure standard developed at PTB in the framework of the Boltzmann constant determination.^77
These systems with relative uncertainties on the level of 1 ppm at pressures up to 7 MPa have been used to calibrate commercially available systems for pressures up to 0.3 MPa with relative
uncertainties between 3 and 4 ppm. The dominant uncertainty component in DCGT measurements is the standard deviation of the capacitance measurement. The typical relative uncertainty in terms of
temperature connected to this component is on the order of 5 ppm for the low temperature range but was reduced to the 1 ppm level in the case of the Boltzmann experiment at about 0°C.^78 Finally,
one problem in DCGT using helium is its very small molar polarizability compared to all other gases and molecules. Therefore, special care must be taken concerning impurities; an especially severe issue is contamination with water.
The polarizability of water at frequencies of capacitance bridges and microwave resonators (see Sec. 5.1.2) is about a factor of 160 larger than that of helium. At cryogenic temperatures, water
contamination in the gas phase is naturally reduced by freeze-out, but especially at room temperature the entire measuring setup, as well as the gas-purifying system, must be carefully designed. Furthermore, contamination with other noble gases must be treated carefully because they cannot be removed by getters and filters. Ideally, a mass spectrometer should be used for the detection of noble
gas impurities to allow for an upper estimate of the uncertainty due to gas purity. In summary, with DCGT in the low temperature range from 4 to 25 K uncertainties near 0.2 mK for thermodynamic
temperature are achievable. At around 0°C, the smallest uncertainty for DCGT was achieved during the determination of the Boltzmann constant.^78 Converted to an uncertainty for thermodynamic
temperature, this becomes about 0.5 mK.
In the intermediate range, the uncertainties are larger (between 1 and 2 mK at 200 K^68). The main restriction of the present low-temperature setup is the limited pressure range at intermediate
temperatures. A measurement of high-pressure isotherms in this range is planned. Together with improved ab initio calculations for the second virial coefficients of argon and neon, a single-state
version of DCGT might be possible, in analogy with single-state AGT. This could result in a significant reduction in both uncertainty and measurement time.
2.2.3. Refractive index gas thermometry
Both DCGT and RIGT are versions of polarizing gas thermometry. Both rely on virial-like expansions of either the dielectric constant ɛ[r] or of the refractive index n in powers of the molar density ρ, that is, the Clausius–Mossotti expansion in the case of DCGT, and the Lorentz–Lorenz equation

(n^2 − 1)/(n^2 + 2) = (A[ɛ] + A[μ])ρ[1 + B[R](T)ρ + C[R](T)ρ^2 + ⋯]

in the case of RIGT. In the limit of zero frequency, A[μ]/A[ɛ] ≈ −1.53 × 10^−5 for He, with comparably small values for the other noble gases. Except for the small magnetic-permeability term A[μ] (which is well-known from theory for helium), low-frequency measurements of ɛ[r] and of n^2 are analyzed using the same ab initio constants. RIGT determines the thermodynamic temperature T by combining measurements of the pressure p and of the refractive index n with the density virial equation of state, Eq. (7), and the Lorentz–Lorenz equation. The density is eliminated from both equations, either numerically or by iteration, to obtain

T = [p(A[ɛ] + A[μ])/R] (n^2 + 2)/(n^2 − 1) [1 + higher-order terms].    (12)
The constants B, B[R], C, C[R], etc. that appear in the higher-order terms of Eq. (12) are obtained either from theory or from fitting measurements of n^2(p) on isotherms. [DCGT determines T using a
version of Eq. (12) in which ɛ[r] replaces n^2.]
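A minimal sketch of such a point-by-point evaluation of Eq. (12) is given below; the values of B and B[R] are illustrative placeholders rather than ab initio results, and the single data point is synthetic.

```python
# Minimal sketch of a point-by-point RIGT evaluation (Eq. (12)): the molar
# density is obtained from the Lorentz-Lorenz expansion and then inserted
# into the virial equation of state. Virial values below are placeholders.
R     = 8.314462618          # J/(mol K)
A_EPS = 0.5175e-6            # m^3/mol, He molar polarizability (approx.)
A_MU  = -7.9e-12             # m^3/mol, He magnetic term (approx., tiny)
B     = 1.19e-5              # m^3/mol, density virial coefficient (illustrative)
B_R   = -1.0e-12             # m^6/mol^2, refractivity virial (illustrative)

def density_from_n2(n2, n_iter=20):
    """Invert (n^2-1)/(n^2+2) = (A_eps+A_mu)*rho*(1 + B_R*rho) for rho."""
    lorentz = (n2 - 1.0) / (n2 + 2.0)
    rho = lorentz / (A_EPS + A_MU)                  # first guess
    for _ in range(n_iter):
        rho = lorentz / ((A_EPS + A_MU) * (1.0 + B_R * rho))
    return rho

def rigt_temperature(pressure, n2):
    rho = density_from_n2(n2)
    return pressure / (R * rho * (1.0 + B * rho))   # truncated VEOS

# Synthetic single point generated near 100 kPa and 150 K, then inverted back.
rho_true = 80.17             # mol/m^3, approximately 100 kPa / (R * 150 K)
n2 = (1.0 + 2.0 * (A_EPS + A_MU) * rho_true) / (1.0 - (A_EPS + A_MU) * rho_true)
p  = R * 150.0 * rho_true * (1.0 + B * rho_true)
print(f"T = {rigt_temperature(p, n2):.4f} K")       # ~150 K
```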
Here, we focus on RIGT conducted at microwave frequencies as developed by Schmidt et al.^81 and as recently reviewed by Rourke et al.^82 These authors determined n from measurements of the microwave
resonance frequencies f[m] of a gas-filled, metal-walled, quasi-spherical cavity. Typical frequencies ranged from 2.5 to 13 GHz; for this range, the frequency dependence of n in the noble gases is
negligible. As discussed in Sec. 2.3.1, RIGT has also been realized at optical frequencies in the context of pressure standards.^83 For helium, the corrections of A[ɛ] and B[R] from zero frequency to
optical frequencies have been calculated ab initio.^79,84
A working equation for measuring n is

n(p, T) ≈ [⟨f[m](0) + g(0)⟩ / ⟨f[m](p) + g(p)⟩] (1 + κ[eff]p),

where f[m](0) and f[m](p) are the microwave frequencies of the evacuated and gas-filled cavity, the brackets "⟨⟩" indicate averaging over the frequencies of a nearly degenerate microwave multiplet, and g accounts for the penetration of the microwave fields into the cavity's walls. Usually, g is determined from measurements of the half-widths of the resonances; its contribution to uncertainties is small. The term κ[eff]p accounts for the temperature-dependent change of the cavity's volume in response to the gas pressure p. Often, the uncertainty of κ[eff] is the largest contributor to the uncertainty of RIGT. To make this explicit, we manipulate Eq. (12) and the relation above to obtain:

T ≈ [p(A[ɛ] + A[μ])/R] (n^2 + 2)/(n^2 − 1) [1 + 2κ[eff]p/(n^2 − 1) + ⋯],    (14)

where the term 2κ[eff]p/(n^2 − 1) ≈ 0.007 for a copper-walled cavity immersed in helium near T[TPW]. (This estimate assumes that the cavity's walls are homogeneous and isotropic; therefore, κ[eff] = κ[T]/3, where κ[T] is the isothermal compressibility of copper.) Thus, a relative uncertainty u[r](κ[eff]) = 0.01 contributes the relative uncertainty u[r](T) = 70 × 10^−6 to a RIGT determination of T. In the approximation n ≈ 1, this uncertainty contribution is a function of κ[eff](T)RT/A[ɛ], but it is not a function of the pressures measured on an isotherm. Because κ[eff](T)RT/A[ɛ] decreases with decreasing temperature, RIGT is more attractive at cryogenic temperatures than near or above T[TPW].
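The propagation of u[r](κ[eff]) can be made concrete with a short worked example that uses round, illustrative values (κ[T] of copper ≈ 7 × 10^−12 Pa^−1 and helium at 100 kPa near T[TPW], for which n^2 − 1 ≈ 6.8 × 10^−5):

```latex
\begin{aligned}
\frac{2\kappa_{\mathrm{eff}}\,p}{n^2-1}
  &\approx \frac{2\,\bigl(7\times10^{-12}\,\mathrm{Pa^{-1}}/3\bigr)\times 10^{5}\,\mathrm{Pa}}{6.8\times10^{-5}}
  \approx 0.007,\\[4pt]
u_r(T)\big|_{\kappa_{\mathrm{eff}}}
  &\approx 0.007\times u_r(\kappa_{\mathrm{eff}})
  = 0.007\times 0.01 = 7\times10^{-5} .
\end{aligned}
```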
Recently, two independent groups explored a two-gas method for measuring κ[eff] of assembled RIGT resonators.^17,85 Ideally, two-gas measurements would replace measurements of κ[T] of samples of the
material comprising the resonator’s wall and also models for the cavity’s deformation under pressure. Both groups relied on new, accurately measured and/or calculated values of the density and
refractivity virial coefficients of neon or argon.^61,79 Using helium and argon, Rourke determined κ[eff] at T[TPW] with the remarkably low uncertainty u[r](κ[eff]) = 9.6 × 10^−4.^85 Madonna Ripa et
al. combined helium and neon data to reduce the uncertainty contribution from κ[eff] to their determinations of T at the triple points of O[2] (≈54 K), Ar (≈84 K), and Xe (≈161 K).^17 They
reported “partial success” and suggested that a revised apparatus using both gases and operating at higher pressures (p > 500 kPa) would obtain lower-uncertainty determinations of T. They also noted
that the two-gas method requires twice as much RIGT data, accurate pressure measurements, and dimensional stability between gas fillings.
Rourke’s review of RIGT^82 noted five groups implementing RIGT using microwave technology. In contrast, we are aware of only one group (at PTB) implementing DCGT.^62 The relative popularity of RIGT
results from the commercial availability of vector analyzers that can measure microwave frequency ratios with resolutions of 10^−9. To our knowledge, using commercially available capacitance bridges,
the best attainable capacitance ratio resolution is 70 × 10^−9.^86 To attain higher resolution for DCGT, PTB developed a unique bridge that measures capacitance ratios with a resolution of order 10^−8 in a 1 s averaging time. To achieve this specification, the PTB bridge must operate at 1 kHz and both the standard (evacuated) capacitor and the unknown (gas-filled) capacitor must have identical
construction and be located in the same thermostat.^87
Figure 4 illustrates the several strategies being explored for acquiring RIGT data. Absolute RIGT acquires many (p, n) data on an isotherm and determines T via Eq. (14). This method requires
state-of-the-art, absolute pressure measurements; therefore, the pressure gradient between the gas-filled cavity and the manometer (normally at ambient temperature) is required.^88 Uncertainty
budgets for absolute RIGT can be found in Refs. 17 and 85.
Relative RIGT (rRIGT) comes in several flavors, each designed to simplify some aspect of absolute RIGT. Each flavor requires measurements on at least two isotherms: (1) a reference isotherm T[ref] for which the thermodynamic temperature is already well known, and (2) an unknown isotherm T that will be determined. As suggested in the lower panel of Fig. 4, one flavor of rRIGT determines T by determining the low-pressure limit of the ratio of slopes:

T/T[ref] = lim[p→0] [(dn^2/dp)|T[ref] / (dn^2/dp)|T].    (15)

When both T and T[ref] are low temperatures, where the pressure deformation of the cavity is small, this strategy circumvents the problem of accurately determining κ[eff].
Single-pressure RIGT (spRIGT) measures (p, n, T) and (p, n, T[ref]) and determines T from T/T[ref] ≈ [n^2(T[ref]) − 1]/[n^2(T) − 1]. This strategy entirely avoids accurate pressure measurements; instead, the
pressure in the cavity is required to be identical when n is measured at T and T[ref] and the pressure (actually, the density of the gas) must be sufficiently low that an approximate pressure is
adequate for making the virial corrections. This strategy was used by Gao et al. for RIGT between the triple point of neon (T[ref] ≈ 24.5 K) and 5 K.^89 After establishing T[ref] by acoustic
thermometry, they claimed the uncertainties of this implementation of RIGT were smaller than the uncertainties of ITS-90.^90
When constant-frequency RIGT (cfRIGT) is implemented, the pressure in the cavity is changed to keep the refractive index constant as the temperature is changed from T[ref] to T. In this case, T/T
[ref] ≈ p(T, n)/p(T[ref], n).^91 This scheme minimizes the frequency-dependent effects of the coaxial cables on the microwave determination of T/T[ref].
To economically search for measurement or modeling errors, one can obtain three redundant values of T/T[ref] by measuring microwave frequencies at four judiciously chosen values of (p, n). Two
measurements are made on the isotherm T[ref] at the values (p[1], n[1]) and (p[2], n[2]). Two other measurements are on the isotherm T at (p[2], n[3]) and (p[3], n[1]). spRIGT connects the points (p
[2], n[2]) and (p[2], n[3]). cfRIGT connects the points (p[1], n[1]) and (p[3], n[1]). All four points are used to approximately implement rRIGT via Eq. (15).
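A minimal numerical sketch of this four-point scheme is given below; the synthetic points are generated in the low-density limit n^2 = 1 + 3A[ɛ]ρ, so the three estimates of T/T[ref] coincide exactly for these idealized data, whereas real data would differ at the level of the neglected virial terms and measurement errors.

```python
# Minimal sketch of the four-point redundancy scheme; synthetic points are
# generated in the low-density limit n^2 = 1 + 3*A_eps*rho, so spRIGT, cfRIGT,
# and rRIGT all return the same ratio for these idealized data.
A_EPS = 0.5175e-6            # m^3/mol, He (approx.)
R     = 8.314462618          # J/(mol K)

def n_squared(p, T):
    return 1.0 + 3.0 * A_EPS * p / (R * T)      # low-density limit

T_ref, T_true = 24.5561, 20.0                   # K (Ne triple point; unknown T)
p1, p2 = 40e3, 80e3                             # Pa, on the reference isotherm
n1_sq, n2_sq = n_squared(p1, T_ref), n_squared(p2, T_ref)
n3_sq = n_squared(p2, T_true)                   # unknown isotherm, pressure p2
p3 = p1 * T_true / T_ref                        # pressure giving n1 at T_true

ratio_sp = (n2_sq - 1.0) / (n3_sq - 1.0)        # spRIGT: same pressure p2
ratio_cf = p3 / p1                              # cfRIGT: same refractive index
slope_ref = (n2_sq - n1_sq) / (p2 - p1)         # d(n^2)/dp on each isotherm
slope_unk = (n3_sq - n1_sq) / (p2 - p3)
ratio_r = slope_ref / slope_unk                 # rRIGT via Eq. (15)

for name, r in (("spRIGT", ratio_sp), ("cfRIGT", ratio_cf), ("rRIGT", ratio_r)):
    print(f"{name}: T = {T_ref * r:.4f} K")
```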
Compared with other forms of gas thermometry, relative RIGT has significant advantages at low temperatures. We have already emphasized the availability of microwave network analyzers and the
possibility of avoiding state-of-the art pressure measurements. By measuring several microwave resonance frequencies at each state, certain imperfections of the measurements and modeling can be
detected. Comparisons of the frequencies of transverse electric (TE) and transverse magnetic (TM) microwave modes might detect the presence of dielectric films such as oxides, oil deposits, or
adsorbed water on the cavity’s walls.^92 Because relative RIGT relies on microwave frequency ratios, the precise shape of the cavity is unimportant. Cavity shapes other than quasispheres may be
advantageous in particular applications.
RIGT is simpler and more rugged than relative AGT (rAGT) because RIGT requires neither delicate acoustic transducers nor acoustic ducts. However, RIGT is unlikely to replace rAGT at ambient and
higher temperatures because RIGT is more sensitive to the cavity’s dimensions than rAGT by the factor 1/(ɛ[r] − 1), which typically ranges from 200 to 20000. Furthermore, microwave RIGT is
especially sensitive to polar impurities. Adding 1 ppm (mole fraction) of water vapor to dilute argon gas at 293 K will increase the dielectric constant of the gas by 18 ppm and increase the square
of the speed of sound by 0.12 ppm. If the water vapor were undetected, these changes would reduce argon’s apparent RIGT temperature by 18 ppm and increase argon’s apparent rAGT temperature by 0.12
ppm. For helium, the corresponding temperatures are reduced by 145 ppm and 4 ppm.
2.2.4. Constant volume gas thermometry
The website of the International Bureau of Weights and Measures includes a document (“Mise en pratique…”) that indicates how the SI base unit, the kelvin, may be realized in practice using four
different versions of gas thermometry.^93 Surprisingly, this document omits CVGT, the version of gas thermometry that was the primary basis of ITS-90. In this section, we briefly describe the
operation of a particular realization of CVGT and the inconsistent results it generated. This may explain why CVGT was omitted from the Mise en pratique. We mention the post-1990 theoretical and
experimental developments that suggest an updated realization of CVGT might generate very accurate realizations of the kelvin.
CVGT at NBS/NIST began in 1928 and concluded in 1990. We denote the most-recent realization of NBS/NIST's relative CVGT by "CV[NIST90]." The heart of CV[NIST90] was a metal-walled, cylindrical cavity ("gas bulb"; V ≈ 407 cm^3) attached to a "dead space" comprised of a capillary leading from the bulb to a constant-volume valve at ambient temperature. The valve separated the gas bulb from a pressure-measurement system. A typical temperature measurement using CV[NIST90] began by admitting N ≈ 0.0023 mol of helium into the gas bulb at a measured reference pressure (p[ref] ≈ 13 kPa) and a measured reference temperature (T[ref] ≈ 273 K). Then, the valve was closed to seal the helium in the gas bulb and dead space. The bulb was moved into a furnace that was maintained at the unknown temperature T to be determined by CVGT. After the gas bulb equilibrated, the valve was opened to measure the pressure p again. The temperature ratio T/T[ref] was determined by applying the virial equation at each temperature:

pV = NRT[1 + B(T)(N/V) + ⋯].    (16)

Thus T/T[ref] is determined, in leading order, by the three ratios: p/p[ref], V/V[ref], and N[ref]/N. For CV[NIST90], N/N[ref] ≠ 1 because a tiny quantity of helium flows from the bulb into the capillary when the bulb is moved into the furnace. This quantity was calculated using the measured temperature distribution along the capillary. For CV[NIST90], V/V[ref] was calculated using auxiliary measurements of the linear thermal expansion of samples of the platinum–rhodium alloy comprising the gas bulb. These samples had been cut out of the gas bulb after completing all the CVGT measurements.
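A minimal sketch of the leading-order data reduction implied by Eq. (16) follows; the dead-space, thermomolecular-pressure, and creep corrections discussed in the next paragraph are ignored, B is an illustrative constant, and the numbers are hypothetical (loosely patterned on the bulb described above).

```python
# Minimal sketch of relative CVGT per Eq. (16). Dead-space, thermomolecular,
# and creep corrections are ignored; B is an illustrative constant.
R    = 8.314462618           # J/(mol K)
B_HE = 1.2e-5                # m^3/mol, illustrative

def bulb_temperature(p, V, N):
    """Solve pV = NRT(1 + B*N/V) for T (single virial correction)."""
    rho = N / V
    return p * V / (N * R * (1.0 + B_HE * rho))

def cvgt_temperature(state, state_ref, T_ref):
    """Relative method: scale by T_ref so that common scale errors cancel."""
    return T_ref * bulb_temperature(*state) / bulb_temperature(*state_ref)

# Hypothetical states (p in Pa, V in m^3, N in mol); reference near 273.16 K.
T = cvgt_temperature(state=(32.2e3, 4.075e-4, 2.297e-3),
                     state_ref=(13.0e3, 4.070e-4, 2.300e-3),
                     T_ref=273.16)
print(f"T = {T:.2f} K")
```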
The simplicity of Eq. (16) hides the many complications of CVGT. We mention three examples. (1) During pressure measurements, helium outside the gas bulb was maintained at the same pressure as the
helium inside the gas bulb. (2) Thermo-molecular and hydrostatic pressure gradients in the capillary were taken into account. (3) At high temperatures, creep in the gas bulb’s volume was detected by
time-dependent pressure changes; the pressure was extrapolated back in time to its value when the bulb was placed in the furnace.
We denote the second most recent realization of NBS/NIST’s relative CVGT by “CV[NBS76].”^96 Both CV[NIST90] and CV[NBS76] shared apparatus and many procedures. However, Ref. 94 lists 11 significant
changes. Here, we mention only one. CV[NIST90]’s two cylindrical gas bulbs had been fabricated entirely from sheets of (80 wt.% Pt + 20 wt.% Rh) alloy. The sides and bottom of CV[NBS76]’s gas bulb
were fabricated from the same alloy; however, the top of the bulb was inadvertently fabricated from (88 wt.% Pt + 12 wt.% Rh) alloy. Perhaps the slight differences in thermal expansions of these
alloys led to an anomalous thermal expansion of the volume of CV[NBS76]’s gas bulb.
Unfortunately, the results from CV[NIST90] and CV[NBS76] were mutually inconsistent, given their claimed uncertainties, in the range of temperature overlap (505 K ≤ T ≤ 730 K). An approximate expression for the
differences is: T[NIST90] − T[NBS76] ≈ 0.090 × (T/K − 400) mK. This inconsistency was not explained by the authors of CV[NIST90] nor by the authors of CV[NBS76]. Furthermore, the authors did not
assert the more recent CV[NIST90] results were more accurate than the earlier CV[NBS76] results. The working group that developed ITS-90 had no other data, from NIST or elsewhere, that were suitable
for resolving the inconsistency. Therefore, the working group required ITS-90 to be the average of T[NIST90] and T[NBS76] in the overlap range.^97
In the range 2.5–308 K, ITS-90 relied, in part, on another realization of CVGT that had a troubled history. Astrov et al. deduced the thermal expansion of their copper gas bulb’s volume from
measurements of the linear thermal expansion of copper samples taken from the block used to manufacture their bulb.^98 However, the thermal expansion data were inconsistent with other data for
copper. Astrov’s group repeated the thermal expansion measurements using another (better) dilatometer. The more recent expansion data, published in 1995, changed the values of T by more than 50 × 10^−6 T in the range 130 K < T < 180 K, where the uncertainties had been estimated as ≤26 × 10^−6 T.^56
Recently, a working group of the Consultative Committee for Thermometry reviewed primary thermometry below 335 K.^20 Astrov’s revised CVGT values are close to the current consensus, which is
primarily based on AGT and DCGT. The working group retained three other low-temperature realizations of CVGT. Post-1990 AGT measurements of T − T[90] near 470 and 552 K indicate that CV[NIST90] is
indeed more accurate than CV[NBS76].^50 Despite the fact that CVGT was the primary basis for the ITS-90, the Mise en pratique does not include CVGT. We speculate that no temperature metrology group
is pursuing CVGT because: (1) CVGT is complex, (2) Astrov et al.’s thermal expansion problem, (3) unexplained problems with NBS/NIST’s CVGT, and (4) rapid advances in other versions of gas thermometry.
We now ask: is CVGT a viable method of primary thermometry today? The gas bulb of a modern CVGT would incorporate feedthroughs to enable measuring microwave resonance frequencies of the bulb’s
cavity. The resonance frequencies would determine the bulb’s volumetric thermal expansion, thereby avoiding auxiliary measurements of linear thermal expansion and also avoiding the assumption of
isotropic expansion. If the bulb incorporated a valve and a differential-pressure-sensing diaphragm, the dead-space corrections would vanish. (The diaphragm’s motion could be detected using optical
interferometry.) Today, the ab initio values of B(T) would reduce the uncertainty component from B(T) to near zero. A contemporary CVGT could operate at ∼5× higher helium densities than published
experiments without generating significant uncertainties from either the virial coefficients or from pressure-ratio measurements. The higher density, together with simultaneous pressure and microwave
measurements, might enable separation of the bulb’s creep from contamination by outgassing. Most outgassing contaminants affect helium’s dielectric constant, refractivity, and speed of sound much
more than they affect helium’s pressure, an advantage of CVGT. However, CVGT inherently uses fixed aliquots of gas. Therefore, CVGT cannot benefit from flowing gas techniques that have been used, for
example, in high-temperature AGT.^50 In summary, contemporary CVGT could be competitive with other forms of primary gas thermometry, with a possible exception at the highest temperatures, where
flowing gas might be required to maintain gas purity.
2.3. Pressure metrology
Traditionally, standards based on the realization of the mechanical definition of pressure, the normal force applied per unit area onto the surface of an artifact, include pressure balances and
liquid column manometers. The combined overall pressure working range of these instruments extends over seven orders of magnitude, roughly between 10 Pa and 100 MPa. Liquid column manometers achieve
their best performance, with relative standard uncertainty as low as 2.5 ppm, near their upper working limit at a few hundred kPa.^99 With a few notable exceptions, the typical relative standard
uncertainty of pressure balances spans between nearly 1 × 10^−3 at 10 Pa, the lowest end of their utilization range, down to 2–3 ppm in the range between 100 kPa and 3 MPa.^99,100 One such exception
is the remarkable achievement of a relative standard uncertainty as low as 0.9 ppm for the determination of helium pressures up to 7 MPa,^77 though this achievement required the extensive dimensional
characterization, and the cross-float comparison, of the effective areas of six piston–cylinder sets manufactured to extraordinarily tight specifications, with a research effort lasting several
years. In spite of this outstanding result, the accurate characterization of pressure balances is challenging, due to the complexity of the dimensional characterization of the cross-sectional area of
piston–cylinder assemblies, which includes finite-element modeling of their deformation under pressure.^101,102 International comparisons periodically provide realistic estimates of the average
uncertainty of realization of primary standards among NMIs. In 1999, a comparison of primary mechanical pressure standards in the range 0.62 MPa < p < 6.8 MPa, involving five NMIs leading in pressure
metrology exchanging a selected piston–cylinder set, was completed.^103 The resulting differences ΔA[eff] ≡ 10^6(A[eff]/⟨A[eff]⟩ − 1) of the effective area A[eff] of the piston from the reference
value ⟨A[eff]⟩ spanned beyond their combined uncertainties with such significant spread to show that the pressure standards realized by different NMIs were mutually inconsistent.
These inconsistencies strengthened the motivation for the development of standards realizing a thermodynamic definition of pressure by the experimental determination of a physical property of a gas
having a calculable thermodynamic dependence on density, combined with accurate thermometry. This possibility was initially proposed in 1998 by Moldover,^104 who envisaged, already at that time, the
potential of first-principles calculation to accurately predict the thermodynamics and electromagnetic properties of helium and the maturity of experiments determining the dielectric constant using
calculable capacitors. The metrological performance of thermodynamic pressure standards has continuously improved over the last two decades to become increasingly competitive in terms of accuracy,
providing important alternatives that may test the exactness of the mechanical standards discussed above and eventually replace some of them. Also, due to their reduced complexity and bulkiness,
simplified versions of thermodynamics-based standards may be more flexibly adapted to specific technological and scientific applications of pressure metrology. The best-performing recent realizations
of gas-based pressure standards include measurements of the dielectric constant using capacitors and of the refractive index at microwave and optical frequencies, respectively using resonant cavities
and Fabry–Pérot refractometers. In Secs. 2.3.1 and 2.3.2, we separately discuss the most notable of these developments depending on the pressure range of their application.
2.3.1. Low pressure standards (100 Pa to 100 kPa)
In the low vacuum regime, several experimental methods are available which may provide alternative routes for traceability to the pascal. For the cases involving optical measurements, these methods
include: (1) refractometry (interferometry), implemented in various configurations that employ single or multiple cavities or cells with fixed or variable path lengths; (2) line-absorption methods.
The achievements and perspectives of all these methods were recently reviewed.^105
At present, Fabry–Pérot refractometry with fixed length optical cavities (FLOC) has demonstrated the lowest uncertainty for the realization of pressure standards near atmospheric pressure and down to
100 Pa. In principle, the uncertainty of this method is limited by several optical and mechanical effects, most importantly by the change in the length of the cavity due to compression by the test
gas, with the same sensitivity to the imperfect estimate of the compressibility κ[T] that affects RIGT. However, this major uncertainty contribution may be drastically reduced, though not completely
eliminated, by measuring the pressure-induced length change of a second reference FLOC monolithically built on the same spacer, which is kept continuously evacuated. In 2015, a dual-cavity FLOC
achieved an extremely accurate determination of the refractive index of nitrogen at λ = 632.9908 nm, T = 302.9190 K and 100.0000 kPa by reference to the pressure realized by a primary standard
mercury manometer, and using refractive index measurements in helium to determine the compressibility.^106 A comparison of the pressures determined by the nitrogen refractometer with the mercury
manometer below the primary calibration point at 100 kPa down to 100 Pa showed relative differences within 10 ppm. A direct comparison between laser refractometry with nitrogen and a mercury
manometer was realized one year later also at NIST.^18 The comparison showed relative differences between these instruments within 10 ppm over the range between 100 Pa and 180 kPa. The laser
refractometer outperforms the precision and repeatability of the liquid manometer and demonstrates a pressure transfer standard below 1 kPa that is more accurate than its current primary realization.
Such remarkably low uncertainty also favorably compares to the best dimensional characterization and modeling of non-rotating piston–cylinder assemblies.^107
In 2017, more accurate measurements in helium and nitrogen were performed between 320 and 420 kPa using a triple-cell heterodyne interferometer referenced to a carefully calibrated piston gauge,
showing relative differences within 5 ppm with uncertainties on the order of 10 ppm.^83 Some pressure distortion errors affecting FLOC might in principle be eliminated by refractive index measurement
with a variable length optical cavity (VLOC). The realization of this technique requires extremely challenging dimensional measurements, with displacements on the order of 15 cm that must be
determined with picometer uncertainty.^108 Gas modulation techniques, with the measuring cavity frequently and repeatedly switched between a filled and evacuated condition, have been recently
developed,^109,110 aiming at the reduction of the effects of dimensional instabilities and other short- and long-term fluctuations that affect Fabry–Pérot refractometers. A novel realization of an
optical pressure standard, based on a multi-reflection interferometry technique, has also been recently developed, demonstrating the possible realization of the pascal with a relative standard
uncertainty of 10 ppm between 10 and 120 kPa.^111 Optical refractometry for pressure measurement is also being pursued at other NMIs.^112,113
With accurate pressure measurement, these optical methods can yield the thermodynamic temperature, becoming another approach to RIGT. This was demonstrated in Ref. 83, where a refractometer was used
to measure the Boltzmann constant (albeit with higher uncertainty than AGT and DCGT measurements) prior to the 2019 SI redefinition.
At microwave frequencies, the realization of a low-pressure standard requires a substantial enhancement in frequency resolution. Recently, it was demonstrated by Gambette et al. that by coating the
internal surface of a copper cavity with a layer of niobium, and working at temperatures below 9 K where niobium becomes superconducting, pressures between 500 Pa and 20 kPa can be realized very
precisely.^114,115 The overall relative standard uncertainty of this method is currently 0.04%, with the largest contribution from non-state-of-the-art thermometry, which is likely to be
substantially reduced in future work.
2.3.2. Intermediate pressure standards (0.1–7 MPa)
Contrary to what was initially envisaged, the first realization of a thermodynamic pressure standard was obtained not by capacitance measurements but by using a microwave resonant cavity working in the GHz
frequency range, i.e., by a RIGT method. A main motivation for this choice was the development of quasi-spherical microwave resonators, whose internal triaxial ellipsoidal shape slightly deviates
from that of a perfect sphere.^92 This particular geometry resolved the intrinsic degeneracy of microwave modes, allowing enhanced precision in the determination of resonance frequencies.
By 2007, Schmidt et al.^81 demonstrated a pressure standard based on the measurement of the refractive index of helium to achieve overall relative pressure uncertainty u[r](p) within 9 × 10^−6
between 0.8 and 7 MPa. At the upper limit of the pressure range, the uncertainty was dominated by the uncertainty of the isothermal compressibility κ[T] of maraging steel, which was determined using
resonance ultrasound spectroscopy (RUS).^76 Recently, Gaiser et al.^19 realized Moldover’s original proposal of a capacitance pressure standard using DCGT techniques that they had refined during
their measurements of the Boltzmann constant. They achieved the remarkably low uncertainty u[r](p) = 4.4 × 10^−6 near 7 MPa. Recently, the same experimental data were re-analyzed to take advantage of
the increased accuracy of the ab initio calculation of the second density virial coefficient B of He,^11 reducing the overall uncertainty of the capacitance pressure standard to u[r](p) = 2.2 × 10^−6.
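A minimal sketch of how such a capacitance-based standard converts a measured dielectric constant into a pressure at a known temperature is shown below; it assumes the κ[eff] correction has already been applied, neglects the (very small) dielectric virial coefficients of helium, and uses approximate, illustrative values of A[ɛ] and B.

```python
# Minimal sketch of a DCGT-based pressure determination at known T. The
# kappa_eff correction is assumed to have been applied already, the dielectric
# virial coefficients of helium are neglected, and A_eps and B are
# illustrative (approximate) values rather than state-of-the-art inputs.
R     = 8.314462618          # J/(mol K)
A_EPS = 0.5175e-6            # m^3/mol, He (approx.)
B_HE  = 1.19e-5              # m^3/mol, illustrative

def density_from_eps(eps_r):
    """Clausius-Mossotti, neglecting dielectric virials: xi = A_eps * rho."""
    return (eps_r - 1.0) / ((eps_r + 2.0) * A_EPS)

def dcgt_pressure(eps_r, T):
    rho = density_from_eps(eps_r)
    return R * T * rho * (1.0 + B_HE * rho)      # truncated VEOS

# A dielectric constant typical of helium at a few MPa near 273.16 K.
print(f"p = {dcgt_pressure(1.004, 273.16) / 1e6:.3f} MPa")
```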
At pressures below 1 MPa, the uncertainty of the realization of a pressure standard based on DCGT or RIGT with helium is limited by the resolution of relative capacitance or frequency measurements.
This limit would be immediately reduced by up to one order of magnitude by using, instead of helium, a more polarizable gas like neon or argon. However, while a significant improvement of the
interaction potential, and hence of the ab initio calculated B, has recently been achieved for neon^117 and for Ar,^118 it is not likely that the best available calculations of the molar
polarizability A[ɛ] of neon^65 or argon^66 can be improved sufficiently to replace experiment in the near future. However, an experimental estimate of A[ɛ] of both neon and argon was obtained by
comparative DCGT measurements relative to helium, with relative uncertainty of 2.4 ppm,^63 and may now be used for the realization of pressure standards with other apparatus. For similar purposes,
the ratio of the refractivity of several monatomic and molecular gases, namely Ne, Ar, Xe, N[2], CO[2], and N[2]O, to the refractivity of helium was determined at T = 293.15 K, λ ∼ 633 nm, with
standard uncertainty within 16 × 10^−6, using interferometry.^119 At pressures higher than a few MPa, the imperfect determination of the deformation of the cavity under pressure would impact the
overall uncertainty of a pressure standard based on RIGT or DCGT. One way to overcome this limit would be to measure the refractivity of two gases at identical values of an unknown pressure using a
single apparatus at a known temperature. If the refractivity of both gases were known (either from ab initio calculations or reference measurements), the two measurements would determine both the
effective compressibility κ[T] of the apparatus and the unknown pressure. The same strategy is also applied to increase the upper pressure range where refractometry methods like FLOC can be applied,
though use of helium for the determination of distortion effects requires correcting for diffusion within the glasses used for the construction of these apparatuses.^120
2.4. High pressures and equation of state
Up to this point, we have considered interactions between temperature and pressure standards and the rigorously calculated, low-density properties of the noble gases including the polarizability and
second and third density and dielectric virial coefficients. We now compare ab initio calculations with measurements at pressures above 7 MPa and at correspondingly higher densities. The literature
includes temperature-dependent values of 6 density virial coefficients of helium,^121 7 acoustic virial coefficients of krypton,^122 and 6 density virial coefficients of argon.^123 These calculations
used the best ab initio two-body and nonadditive three-body potentials that were available at the time of publication. Many-body non-additive potentials involving four or more bodies, which are
needed for the exact calculation of virial coefficients from the fourth onwards, are not available and are generally neglected, resulting in an uncontrolled approximation. Here, we compare
measurements of the density of helium ρ[meas](p, T) with values calculated ab initio. This comparison avoids fitting ρ[meas](p, T) to the VEOS because such fits yield highly correlated values for the
separate virial coefficients, each with large uncertainties. Later in this section, we comment on comparisons using speed-of-sound data.
Measurements of gas densities with uncertainties below 0.1% are expensive and rare because they are not required for chemical and mechanical engineering. The uncertainties of most process models are
dominated by imperfect models of equipment (heat exchangers, compressors, distillation columns, etc.) and/or imperfect knowledge of the composition of feedstocks and products. An example of a
demanding application of gas density and composition measurements is custody transfer of natural gas as it flows through large pipelines near ambient temperature and at high pressures (e.g., 7 MPa).
An international comparison among NMIs achieved a k = 2 volumetric flow uncertainty of only 0.22%.^124 In this context, density and composition measurements with uncertainties of order 0.1% are
satisfactory for converting volumetric flows into mass flows and heating values.
In Fig. 5, the remarkable data of McLinden and Lösch-Will are used to test the ab initio VEOS of helium in the ranges 1 MPa < p < 38 MPa and 223 K < T < 323 K.^125 These data were acquired using a
magnetic suspension densimeter. A weigh scale determined the buoyant forces on two “sinkers” immersed in the helium. The data are precise, well-documented, and traced to SI standards with a claimed,
k = 2, density uncertainty of 0.015% + 0.001 kg/m^3 at the temperature extremes and at the highest density. These features attracted previous comparisons with theory.^21,121,126
For the present comparison, where recently published theoretical values of the virial coefficients are used, we converted the measured temperatures from the ITS-90 to thermodynamic temperatures using
Ref. 20 and we converted the measured mass densities to molar densities using the defined value of the universal gas constant and the molar mass for McLinden and Lösch-Will’s helium sample. At
densities below ∼4000 mol/m^3, the uncertainties and the values of (ρ[meas]/ρ[calc] − 1) diverge on isotherms as ρ^−1 and/or p^−1. (See Fig. 5.) These low-density divergences result from
time-dependent drifts in the zeros of the densimeter and/or pressure transducer. Because the divergences contain more information about the apparatus than about helium’s VEOS, we do not discuss them.
At densities above 4000 mol/m^3, we compared the ρ[meas](p, T) data of McLinden and Lösch-Will with the values of ρ[calc](p, T) that are implicitly defined by the truncated VEOS:

p = RTρ[calc][1 + B(T)ρ[calc] + C(T)ρ[calc]^2 + D(T)ρ[calc]^3].    (17)

The fully quantum-mechanical values of B(T), C(T), and D(T) (the latter computed neglecting four-body interactions) were taken from the respective ab initio calculations. The top panel of Fig. 5 shows that the differences trend downward as the densities increase above about 4000 mol/m^3. This trend, as a function of pressure, was noted previously, together with the suggestion "there may have been a small error in the calibration for the sinkers…." However, the trend (Fig. 5, top) plotted as a function of density suggests that ρ[meas]/ρ[calc] is sensitive to some of the truncated virial coefficients. The truncation suggestion is confirmed by the middle panel of Fig. 5, which includes in ρ[calc](p, T) the two additional terms E(T)ρ^4 and F(T)ρ^5 calculated semi-classically. Additional terms [e.g., the term in G(T)] are less than 1.3 ppm, too small to be visible in Fig. 5.
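To make the comparison procedure concrete, the following minimal sketch obtains ρ[calc](p, T) implicitly from Eq. (17) by root finding; the constant values of B, C, and D are rough, illustrative numbers for helium, whereas the comparison in Fig. 5 uses temperature-dependent ab initio coefficients.

```python
# Minimal sketch of evaluating rho_calc(p, T) implicitly from Eq. (17).
# B, C, D are rough illustrative constants; the actual comparison uses
# temperature-dependent ab initio virial coefficients.
from scipy.optimize import brentq

R = 8.314462618                            # J/(mol K)
B, C, D = 1.16e-5, 1.1e-10, 6.0e-16        # m^3/mol, m^6/mol^2, m^9/mol^3

def pressure(rho, T):
    return R * T * rho * (1.0 + B * rho + C * rho**2 + D * rho**3)

def rho_calc(p, T):
    """Bracketed root finding for the molar density defined by Eq. (17)."""
    return brentq(lambda rho: pressure(rho, T) - p, 1e-3, 5.0e4)

# Highest-density corner of the McLinden and Loesch-Will data set.
print(f"rho_calc(38 MPa, 223 K) = {rho_calc(38e6, 223.0):.1f} mol/m^3")
```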
The claimed k = 2 uncertainty of ρ[meas] is 150 ppm;^125 the span of the upper panels of Fig. 5 is ±150 ppm. The dashed curves (- -) in the middle panel of Fig. 5 represent upper bounds to the
uncertainty of ρ[calc](T, ρ) at 223 K. For these upper bounds, we used the k = 2 uncertainties of the virial coefficients U(B), U(C), … provided by their authors. In Eq. (17) we replaced B with B + U
(B); we replaced C with C + U(C), etc. The uncertainties of ρ[calc](T, ρ) are smaller at higher temperatures. We conclude that ρ[calc] agrees with ρ[meas] well within combined uncertainties.
At densities above ∼4000 mol/m^3, the differences (ρ[meas]/ρ[calc] − 1) are nearly independent of the density; however, the measured densities are, on average, 34 ppm larger than their expected values ρ[calc]. These offsets are well within the claimed measurement uncertainties (k = 2, ∼150 ppm). However, as shown in the lower panel of Fig. 5, the offsets have both a random and a systematic dependence on
the temperature. The systematic temperature dependence can be treated as a correction to the calibration of the sinkers’ densities ρ[sinker](p, T). Such a correction does not remove the spread (±14
ppm) among the four isotherms at 273 K. Possible causes of this spread are changes between runs of temperature (±3.8 mK) and/or of impurity content (e.g., ±2.3 ppm of N[2]). In any case, the offsets
are smaller than the claimed uncertainties of ρ[meas](p, T).
Moldover and McLinden^21 extended McLinden and Lösch-Will’s data^125 to 500 K. The extended data are a less-stringent test of the VEOS than Fig. 5 because they span the same pressure range (p < 38
MPa) at higher temperatures; therefore, they span a smaller density range. If McLinden’s data could be extended to lower temperatures with comparable uncertainties, they would test helium’s VEOS in
greater detail and they might reach a regime where U(ρ[meas]) < U(ρ[calc]). Schultz and Kofke conducted much more detailed tests of McLinden and Lösch-Will’s data.^121 We agree with their conclusion
that the data are consistent with the VEOS calculated ab initio.
It may be possible to significantly reduce the uncertainty of ρ[meas] by improving magnetic suspension densimeters, as suggested by Kayukawa et al.^128 They fabricated sinkers from single crystals of
silicon and germanium because these materials have outstanding isotropy, stability, and well-known physical properties. Also, they refined the model and the functioning of their magnetic suspension
so that it was independent of the magnetic properties of the fluid under study at the level of 1 ppm. They measured the density of a liquid near ambient temperature and pressure with a claimed k = 1
relative uncertainty of 5.4 × 10^−6. To date, they have not demonstrated this uncertainty far from ambient temperature and pressure. Even if ρ[meas] achieved such low uncertainties, tests of the VEOS
would have to solve problems arising from impure gas samples and imperfect temperature and pressure measurements.
Alternative methods of measuring equations of state have been reviewed by McLinden.^129 Several methods require filling a container of known volume V[cont](p[0], T[0]) with a known quantity of gas
and then measuring the pressure as the temperature is changed. These methods resemble the CVGT method discussed in Sec. 2.2.4. Like CVGT, they require accurate values of V[cont](p, T); however,
unlike CVGT, testing a VEOS requires much higher pressures. Determining V[cont](p, T) over wide ranges is complex because: (1) containers comprised of metal alloys have anisotropic elastic and
thermal expansions; (2) containers have seals and joints or welds which have complicated mechanical properties; (3) alloys creep and/or anneal under thermal and mechanical stresses. In summary,
volumetric methods are unlikely to replace Archimedes-type densimeters because V[cont](p, T) is an assembled object subjected to complicated stresses; in contrast, the densimeter’s sinkers are single
objects subjected to hydrostatic pressure.
Remarkably, the Burnett method^130 of measuring the equation of state requires neither determining V[cont](p, T) nor measuring quantities of gas. This method uses two pressure vessels with stable
volumes V[a] and V[b]. On each isotherm, gas is admitted into V[a] and the pressure is measured. The gas is allowed to expand so that it fills both V[a] and V[b] and the pressure is measured again. V
[b] is evacuated and the process is repeated several times. The measured pressures on each isotherm are fitted to the VEOS and an apparatus parameter: the volume ratio at zero pressure (V[a,0] + V[b
,0])/V[a,0]. The pressure dependences of V[a] and V[b] must also be known. Usually, they are estimated from elastic constants and models of the pressure vessels; therefore, precise estimates
encounter complications of estimating V[cont](p, T). Perhaps this explains the large scatter in Burnett determinations of D(T).^126 A fairly recent Burnett measurement of the equations of state of
nitrogen and hydrogen (353–473 K; 1–100 MPa) claimed k = 2 uncertainties of ρ[meas] ranging from 0.07% to 0.24%.^131
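A minimal sketch of the Burnett data reduction under idealized assumptions (noise-free pressures, rigid vessels, a single virial coefficient) is shown below; the cell constant N[cell] = (V[a,0] + V[b,0])/V[a,0] and B are recovered jointly by least squares from a synthetic pressure sequence.

```python
# Minimal sketch of Burnett-method data reduction with synthetic, noise-free
# pressures and rigid vessels; the cell constant and B are fitted jointly.
import numpy as np
from scipy.optimize import curve_fit

R, T = 8.314462618, 300.0                  # J/(mol K), K

def burnett_pressures(i, rho0, ncell, b_virial):
    """Pressure after i expansions: rho_i = rho0 / ncell**i, truncated VEOS."""
    rho = rho0 * ncell ** (-i)
    return R * T * rho * (1.0 + b_virial * rho)

runs = np.arange(8, dtype=float)
p_meas = burnett_pressures(runs, rho0=2000.0, ncell=1.50, b_virial=-5.0e-6)

popt, _ = curve_fit(burnett_pressures, runs, p_meas,
                    p0=[1800.0, 1.45, -1.0e-6])
print(f"Ncell = {popt[1]:.6f}, B = {popt[2]:.3e} m^3/mol")
```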
In addition to ρ[meas], measurements of the squared speed of sound w^2(p, T) in gases have been used to critically test either the VEOS^122 of Eq. (7) or its acoustic analog, Eq. (1). Accurate values
of w^2(p, T) in gases are readily available. At the low gas pressures used for acoustic thermometry, the relative expanded uncertainties U[r](w^2(p, T)) measured using quasi-spherical cavity
resonators are a few parts in 10^6 and are dominated by thermometry problems and/or impurities. However, uncertainties grow approximately linearly in pressure because of imperfect models of the
recoil of the cavity’s walls in response to the resonating gas. In one study of argon, U[r](w^2) ≈ 1.2 × 10^−4 (p/20 MPa) except near the critical point.^132 At pressures above ∼5 to ∼10 MPa,
pulse-echo techniques achieve uncertainties comparable to or smaller than resonance techniques.^122,133 Remarkably, w^2 from the two techniques agreed within 60–200 ppm within a range of overlap
(argon, 250–400 K, ∼10 to ∼20 MPa^133).
It is more complex to compare w[meas]^2(p, T) to a calculated VEOS than to compare ρ[meas] to the same VEOS. To calculate the n-th acoustic virial coefficient from the n-th density virial coefficient,
one also needs the first and second temperature derivatives of the n-th virial coefficient as well as all the lower-order density virial coefficients and their temperature derivatives. There are
several routes to conduct such a comparison, which are completely equivalent. First, the temperature derivatives of the density virial coefficients can be calculated from ab initio potentials using,
e.g., the Mayer sampling Monte Carlo (MC) method. Second, the temperature derivatives can be obtained from fits of the theoretically calculated temperature-dependent density virial coefficients.
Third, the virial equation of state can be transformed by thermodynamic identities into an acoustic virial equation of state or it can be integrated to formulate a Helmholtz energy equation, from
which the speed of sound can be calculated. Speeds of sound calculated by either of the two resulting equations contain contributions from terms with higher acoustic virial coefficients than those
used in the density virial equation of state, i.e., it can be expected that the region of convergence of this virial equation of state for the speed of sound extends to higher pressures than that of
the acoustic virial equation of state with virial coefficients derived directly from density virial coefficients. These terms describe contributions of configurations of particles which are contained
in the low-order density virial coefficients to the higher-order acoustic virial coefficients. Fourth, densities can be calculated from w[meas]^2(p, T) by the method of thermodynamic integration^134 and directly compared to the density virial equation of state. As initial conditions for the integration, the density and heat capacity on an isobar must be known. There are subtleties to integrating w[meas]^2(p, T).^135 In the first method the uncertainties of the virial coefficients and their temperature derivatives follow from the MC simulation and can be propagated into an uncertainty of the
acoustic virial equation of state, while in the other methods the uncertainty of the density virial coefficients or the experimental speeds of sound can be propagated into the acoustic virial
equation of state or calculated densities, respectively.
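As an illustration of the second route, the sketch below obtains the second acoustic virial coefficient from a fitted B(T) by numerical differentiation, using the standard monatomic-gas relation β[a] = 2B + 2(γ[0] − 1)T B′ + [(γ[0] − 1)^2/γ[0]] T^2 B″; the B(T) function is an illustrative stand-in, not an ab initio fit.

```python
# Minimal sketch: second acoustic virial coefficient beta_a(T) from B(T) and
# its first two temperature derivatives (central finite differences).
# B_of_T below is an illustrative, helium-like stand-in, not an ab initio fit.
GAMMA0 = 5.0 / 3.0

def B_of_T(T):
    """Illustrative B(T) in m^3/mol."""
    return 1.0e-5 + 8.0e-4 / T - 0.11 / T ** 2

def beta_a(T, h=0.5):
    B   = B_of_T(T)
    dB  = (B_of_T(T + h) - B_of_T(T - h)) / (2.0 * h)
    d2B = (B_of_T(T + h) - 2.0 * B + B_of_T(T - h)) / h ** 2
    g = GAMMA0 - 1.0
    return 2.0 * B + 2.0 * g * T * dB + (g ** 2 / GAMMA0) * T ** 2 * d2B

# Dividing beta_a by R*T converts it to the pressure-based coefficient of Eq. (1).
for T in (100.0, 200.0, 300.0):
    print(f"T = {T:5.1f} K, beta_a = {beta_a(T):.3e} m^3/mol")
```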
For helium, Gokul et al.^136 calculated the acoustic virial coefficients through the seventh order by the second method outlined above from density virial coefficients. They used the second density
virial coefficients reported by Czachorowski et al.,^11 which are based on the pair potential reported in the same work. The higher virial coefficients were taken from the work of Schultz and Kofke.^
121 They are based on the pair potential of Przybytek et al.^137 and the three-body potential of Cencek et al.^138 Uncertainties in the density virial coefficients were propagated into uncertainties
in the acoustic virial coefficients by the MC method recommended in Supplement 1 to the “Guide to the Expression of Uncertainty in Measurement.”^139 Gokul et al.^136 formulated the acoustic virial
equation of state as an expansion in terms of density or pressure. The uncertainty of speeds of sound calculated with the acoustic VEOS was estimated from the uncertainty of the acoustic virial coefficients.
The density expansion of Gokul et al. was compared to the experimental data of Gammon,^140 Kortbeek et al.,^141 and Plumb and Cataland.^142 The data of Gammon were measured with a variable-path
interferometer operating at 0.5 MHz. They cover the temperature range between 98 and 423 K with pressures up to 15 MPa, and according to the author have an uncertainty of 0.003% of w^2. For these
data, we estimated the expanded (k = 2) relative uncertainty U[r](w^2) = 0.00009 by adding uncertainties of 0.003% (for the distance between the crystals), 0.001% (for the precision), and 0.005%
(for sample impurities and/or temperature errors, based on the inconsistencies among the 14 isotherms). Gammon’s data agree with the acoustic virial equation of state within 0.01% with a few
exceptions. The data of Kortbeek et al. were measured with a double-path-length pulse-echo technique, cover the temperature range from 98 to 298 K at pressures between 100 MPa and 1 GPa, and,
according to the authors, have an uncertainty of 0.08%. They deviate from the acoustic VEOS between a few tenths of a percent at 100 MPa up to about 4% at 298 K and 1 GPa. These rather large
deviations are due to the fact that the acoustic virial equation of state is not converged at such high pressures. The measurements of Plumb and Cataland cover the low temperature range between 2.3
and 20 K at pressures up to 150 kPa. They agree with the acoustic virial equation of state of Gokul et al. to within 0.05% except at the lowest measured pressures of about 1.5 kPa, where the
deviations reach up to 0.18%. Gokul et al. also assessed the pressure range in which the acoustic VEOS is more accurate than the available experimental data for the speed of sound. At low pressures,
they observed that speeds of sound calculated with the acoustic VEOS are more accurate than the experimental data of Gammon. Gokul et al. further noticed that speeds of sound calculated with the
acoustic VEOS are more accurate than the experimental data of Kortbeek et al. up to about 300 MPa depending on temperature. At higher pressures, they considered the experimental data of Kortbeek et
al. to be more accurate than the computed virial equation of state. This conclusion appears to be too optimistic in light of the low uncertainty of 0.08% in the experimental data and the rather large
deviations of up to 2% from the virial equation of state below 300 MPa.
Gokul et al. also examined the convergence behavior of the acoustic virial equation of state more closely for the expansions in density and pressure. They considered a virial equation of state
converged if the value of the speed of sound calculated with it agrees with all higher orders of the expansion within a certain tolerance. They observed that the pressure range in which the expansion
in density converges is extended when the tolerance is increased. However, the expansion in pressure hits a pressure limit in the supercritical region, above which increasing the tolerance does not
extend the region of convergence farther. Above this pressure limit, the expansion in pressure completely fails. Recently, Wedler and Trusler measured the speed of sound in supercritical helium with
a dual-path pulse-echo technique in the temperature range between 273 and 373 K up to 100 MPa with an expanded uncertainty (k = 2) of 0.02%–0.04%.^143 Their data agree with the seventh-order acoustic
VEOS in density of Gokul et al.^136 with a few exceptions in the whole range of the measurements within 0.025%, which shows that this form of the VEOS is converged in the region of the measurements.
The first calculation of the third virial coefficient of argon using a first-principles three-body potential was performed by Mas et al.^144 using the empirical pair potential developed by Aziz.^145
The results agreed almost to within combined uncertainties with the third virial coefficients extracted from experimental data (with theoretical constraints) by Dymond and Alder.^146 Jäger et al.
calculated density virial coefficients up to seventh order for argon with their pair and nonadditive three-body potentials.^123 The calculated virial coefficients were fitted by polynomials in
temperature. The seventh-order VEOS was compared with the very accurate (p, ρ, T) data of Gilgen et al.,^147 which were measured with a magnetic suspension densimeter. These data are characterized by
a relative uncertainty (k = 2) in density of 0.02%. Pressures calculated with the theoretical virial equation of state agree with these data at the highest temperature of the measurements, 340 K,
within 0.01%.
In further work, Jäger^148 used thermodynamic identities to calculate several properties of argon including the speed of sound from the virial equation of state and compared the results with the
accurate experimental data of Estrada-Alexanders and Trusler^132 and Meier and Kabelac.^133 The data of Estrada-Alexanders and Trusler^132 were measured with a spherical resonator and cover the
temperature range between 110 and 450 K at pressures up to 19 MPa, while the data of Meier and Kabelac were measured with a dual-path-length pulse-echo technique and cover the temperature range
between 200 and 420 K with pressures between 9 and 100 MPa. The expanded (k = 2) uncertainty of these datasets was estimated to be 0.001%–0.007% and 0.011%–0.036%, respectively. At 300 and 400 K, the
calculated speeds of sound agree with both experimental datasets up to 100 MPa within 0.04% and 0.08%, respectively. At the near-critical temperature 146 K and supercritical temperature 250 K, the
deviations of the calculated values from the experimental data of Ref. 132 increase with pressure from essentially zero in the ideal-gas limit to about 0.3% at 3.7 MPa and about 0.02% at 12.2 MPa, respectively.
In another paper, Jäger et al. presented calculations of the second and third density virial coefficient of krypton.^149 They developed a very accurate pair potential for the krypton dimer, and
nonadditive three-body interactions were described by an ab initio extended Axilrod–Teller–Muto potential, which was fitted to quantum chemical calculations of the interaction energy of equilateral
triangle configurations of three krypton atoms. El Hawary et al.^122 calculated density virial coefficients from the fourth to the eighth using the pair potential and extended Axilrod–Teller–Muto
potential of Jäger et al. The calculated virial coefficients were fitted to polynomials in temperature, and the virial equation of state was integrated to formulate it as a fundamental equation of
state in terms of the Helmholtz energy. Furthermore, El Hawary et al. measured the speed of sound in liquid and supercritical krypton between 200 and 420 K at pressures from 6.1 to 100 MPa with an
uncertainty (k = 2) of 0.005%–0.018%. At 240, 320, and 420 K, the seventh-order and eighth-order virial equations of state agree with each other within 0.02% up to 7, 17, and 38 MPa, respectively. In
the region where the virial equation of state is sufficiently converged, the calculated speeds of sound are systematically about 0.08% lower than the experimental data. This small difference is
probably due to the uncertainty of the pair potential and the simplified treatment of nonadditive three-body interactions with the extended Axilrod–Teller–Muto model.
At high density in the supercritical region where the virial equation of state does not converge and in the liquid region, thermodynamic properties can be calculated by MC or molecular-dynamics (MD)
simulations.^150 Since the generation of Markov chains in MC simulations avoids some of the numerical errors of algorithms used to integrate the equations of motion in MD simulations, MC simulations
are the preferred method for calculating accurate values of thermodynamic properties. In statistical mechanics, there are eight basic ensembles in which MC or MD simulations of fluids can be performed;^151 each is characterized by a thermodynamic potential, three independent variables, and a weight factor that describes the distribution of systems in the ensemble. Ströker et al.^152 pointed out that the NpT ensemble, in which the number of particles, the pressure, and the temperature are the independent variables, is best suited for the calculation
of thermodynamic properties because only ensemble averages involving the enthalpy and volume, but no derivatives of the potential energy with respect to volume, appear in the equations for
thermodynamic properties. This means that no derivatives of the potentials are needed in a simulation.
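The convenience of the NpT ensemble can be made concrete with the standard fluctuation formulas: the residual heat capacity, isothermal compressibility, and thermal expansivity follow from averages and (co)variances of the instantaneous configurational enthalpy and volume alone, so no volume derivatives of the potential energy are required. A minimal post-processing sketch, in which the sampled data are synthetic placeholders standing in for actual simulation output:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def npt_fluctuation_properties(H, V, T):
    """Residual thermodynamic properties from NpT samples of the instantaneous
    configurational enthalpy H = U + pV [J] and volume V [m^3] at temperature T.
    Only means and (co)variances of H and V enter the expressions."""
    H = np.asarray(H)
    V = np.asarray(V)
    dH = H - H.mean()
    dV = V - V.mean()
    cp_res = np.mean(dH * dH) / (kB * T**2)              # residual isobaric heat capacity
    kappa_T = np.mean(dV * dV) / (kB * T * V.mean())     # isothermal compressibility
    alpha_p = np.mean(dV * dH) / (kB * T**2 * V.mean())  # isobaric thermal expansivity
    return cp_res, kappa_T, alpha_p

# Synthetic, correlated samples used purely to exercise the function.
rng = np.random.default_rng(1)
V = 1.0e-26 * (1.0 + 0.01 * rng.standard_normal(100_000))
H = 1.0e-19 * (1.0 + 0.02 * rng.standard_normal(100_000)) + 5.0e6 * (V - V.mean())
print(npt_fluctuation_properties(H, V, T=300.0))
```

Ideal-gas (kinetic) contributions are added analytically afterwards, and quantities such as the speed of sound then follow from thermodynamic identities.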
The argon calculations of Mas et al.^144 described above were later extended by performing NVT, NpT, and Gibbs ensemble MC simulations^153 along the vapor–liquid coexistence curve. The parameters of
the critical point agreed with experiments within 0.8% or better.^154
Ströker et al.^152 carried out semiclassical MC simulations of thermodynamic properties of argon in the NpT ensemble at the subcritical isotherm 100 K and the supercritical isotherm 300 K at
pressures up to 100 MPa. The interactions between argon atoms were described by the pair potential of Jäger et al.^155 and the nonadditive three-body potential of Jäger et al.^123 Quantum effects
were accounted for by the Feynman–Hibbs corrections to the pair potential. Calculated densities agree with the accurate data measured by Gilgen et al.^147 and Klimeck et al.^156 within less than
0.01%, while calculated speeds of sound agree within less than 0.1% with the accurate experimental data of Estrada-Alexanders and Trusler^132 at low pressure in the supercritical region and Meier and
Kabelac^133 at high pressure in the liquid and supercritical region.
Ströker et al.^157 also performed MC simulations for liquid and supercritical krypton. They employed the accurate pair potential and an extended Axilrod–Teller–Muto potential of Jäger et al.^149 to
account for nonadditive three-body interactions. Quantum effects were again accounted for semiclassically. Since the potential models for krypton are not as accurate as those for argon, the
deviations of the results for the density and speed of sound from experimental data were larger than for argon, about 0.2% and 0.36%, respectively.
2.5. Transport properties and flow metrology
In this section, we describe the impact of the ab initio calculations of the zero-density limit of helium’s thermal conductivity λ[He] and viscosity η[He]. First, we mention the impact of λ[He] and λ
[Ar] on temperature metrology. Then, we describe how accurate values of η[He] have been used as standards to reduce the uncertainty of viscosity measurements of many gases by a factor of 10. We
conclude by briefly considering the impact of accurate viscosity data on metering process gases, for example, during the manufacture of semiconductor chips.
As discussed in Sec. 2.2.1, AGT requires accurate values of λ of the working gas at low densities to account for the effect of the thermo-acoustic boundary layer on the measured resonance
frequencies. For example, in 2010, Gavioso et al. used helium at ∼410 kPa in a single-state AGT determination of the Boltzmann constant k[B] prior to its exact definition in 2019.^158 They reported that
a relative standard uncertainty u[r](λ[He]) = 0.015 generated a relative standard uncertainty of the Boltzmann constant u[r](k[B]) = (1–3) × 10^−6.
Today, an uncertainty of (1–3) × 10^−6 would be the largest contributor to a state-of-the-art determination of the thermodynamic temperature T near 273 K. At low temperatures, the uncertainty of
measured values of λ[He] is much larger. Below 20 K, the λ[He] data span a range on the order of ±6%.^159 This large an uncertainty would lead to u[r](T) > 10^−5 for acoustic determinations of T.
Fortunately, the values of λ[He] calculated ab initio have extraordinarily small uncertainties, e.g., u[r](λ[He]) = 9.6 × 10^−6 at 273 K and u[r](λ[He]) = 7.3 × 10^−5 at 10 K.^10 In essence, the
calculated values of λ[He] removed u[r](λ[He]) from the uncertainty budgets of acoustic thermometers based on helium-filled quasi-spherical cavities.
Cylindrical, argon-filled cavities are being developed for high-temperature acoustic thermometry.^4,55 These projects require low-uncertainty values of both λ[Ar] and η[Ar]. Low-uncertainty values of
η[Ar] were generated from accurate measurements of the ratios η[Ar]/η[He] in the range 200–653 K and the ab initio values of η[He]. Then λ[Ar](T) was obtained by combining the ratio-deduced values of
η[Ar](T) with values of the Prandtl number Pr[Ar] calculated from model pair potentials. (Pr = C[p]η/λ, where C[p] is the constant-pressure heat capacity per mass. For the noble gases, Pr is only
weakly sensitive to the potential.)^160–162 The measured ratios η[Ar]/η[He] were consistent, within a few tenths of a percent, with highly accurate measurements made with an oscillating-disk
viscometer^163 and with calculations of η[Ar] based on ab initio Ar–Ar potentials.^164 Thus, the needs of argon-based acoustic thermometry are now met at all useable temperatures. To put this
achievement in context, we note that measuring the thermal conductivity of dilute gases is difficult, even for noble gases near ambient temperature and pressure. Evidence for this appears in Lemmon
and Jacobsen’s correlation of the “best” measurements of λ[Ar] and η[Ar] near ambient temperature (270–370 K) and pressure.^165 The average absolute deviations of selected measurements from their
correlation ranged from 0.24% to 1.0%. Lemmon and Jacobsen estimated the uncertainty of the correlated values of λ[Ar] was 2% and the uncertainty of η[Ar] was 0.5%. (With the benefit of ab initio
calculations and ratio measurements, we now know their correlation overestimated λ[Ar] by 0.54% at 270 K and by 0.45% at 370 K.)
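The chain from the ab initio helium viscosity to the argon thermal conductivity amounts to two multiplications: η[Ar] from the measured ratio, then λ[Ar] = c[p] η[Ar]/Pr[Ar]. The numbers below are illustrative placeholders rather than the published values; only the defining relation of the Prandtl number and the monatomic ideal-gas c[p] = (5/2)R/M are taken as given.

```python
R = 8.314462618   # molar gas constant, J/(mol K)
M_Ar = 39.948e-3  # molar mass of argon, kg/mol

# Illustrative placeholder inputs (not the published values):
eta_He_calc = 19.8e-6  # Pa s, ab initio helium viscosity at the temperature of interest
ratio_Ar_He = 1.14     # measured viscosity ratio eta_Ar / eta_He
Pr_Ar = 0.667          # argon Prandtl number from a model pair potential

eta_Ar = ratio_Ar_He * eta_He_calc   # ratio-deduced argon viscosity
cp_Ar = 2.5 * R / M_Ar               # monatomic ideal-gas cp per unit mass
lambda_Ar = cp_Ar * eta_Ar / Pr_Ar   # thermal conductivity from Pr = cp * eta / lambda

print(f"eta_Ar = {eta_Ar:.3e} Pa s, lambda_Ar = {lambda_Ar:.4f} W/(m K)")
```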
In 2012, Berg and Moldover reviewed measurements of the viscosity of 11 dilute gases near 25°C.^166 These measurements were made using 18 different instruments that used five different operating
principles and produced 235 independent viscosity ratios during the years 1959–2012. Using the ab initio value of η[He] at 25°C as a reference, the viscosities of the ten other gases (Ne, Ar, Kr,
Xe, H[2], N[2], CH[4], C[2]H[6], C[3]H[8], SF[6]) were determined with low uncertainties u[r](η) ranging from 0.00027 to 0.00036. These ratio-derived uncertainties are less than 1/10 the
uncertainties claimed for absolute viscosity measurements, such as the measurements of η[Ar] correlated by Lemmon and Jacobsen.^165 Now, any one of these gases can be used to calibrate a viscometer
within these uncertainties. Such ratio-based calibrations have reduced uncertainties of η for many other gases^167 and they have been extended to a wide range of temperatures.^168 During their study
of viscosity ratios, Berg and Moldover observed that the viscosity ratios determined using one instrument (a magnetically suspended, rotating cylinder) were anomalous. Their observation led to an
improved theory of the instrument, thereby illustrating the power of combining a reliable standard η[He](T) with precise ratio measurements.^169
Accurate measurements of gas flows are required for tightly controlling manufacturing processes (e.g., delivery of gases to semiconductor wafers for doping). In general, gas flow meters are
calibrated using a benign, surrogate gas over a range of flows and pressures, but only near ambient temperature. However, calibrated meters are often used to measure/control flows of reactive process
gases [e.g., Ga(CH[3])[3], WF[6]] under conditions differing from the calibration conditions. An accurate transition between gases and conditions can be made using laminar flow meters for which there is a physical model (similar to the model of a capillary tube). Also needed are data for the virial coefficients of the process gas and its viscosity ratio with respect to the calibration gas. Thus, there is a need for viscosity-ratio data for many difficult-to-measure gases over a moderate range of densities. The acquisition of such data would be facilitated by a reliable model for the density dependence of the viscosity of surrogate gases such as SF[6].
The initial density expansion of the viscosity has the form η/η[0] = 1 + η[1]ρ, where the low-density limit of the viscosity η[0] depends entirely on pair interactions and the virial-like coefficient
η[1] depends on the interactions among two and three molecules. Unfortunately, unlike the density and dielectric virial coefficients and η[0], no rigorous theory exists for η[1](T). An approximate
theory was developed by Rainwater and Friend,^171,172 who presented quantitative results based on the Lennard-Jones potential. It was later extended with more accurate pair potentials for noble
gases.^173 While the results from the Rainwater–Friend model are in reasonable agreement with the limited experimental data available for the initial density dependence of the viscosity for noble
gases,^173 the error introduced by its approximations is unknown. We note that it is a classical theory, which introduces another source of error for light gases (such as helium) where quantum
effects might be important, even at ambient temperatures.
3. Ab Initio Electronic Structure Calculations
3.1. Methodology of electronic structure calculations
In principle, solutions of the equations of relativistic quantum mechanics, possibly including quantum electrodynamics (QED) corrections, can predict all properties of matter to a precision
sufficient for thermal metrology applications. In practice, if the goal is to match or exceed the accuracy of experiments, the range of systems reduces to few-particle ones. The first quantum
mechanical calculations challenging experimental measurements for molecules appeared only in the 1960s (e.g., Ref. 174), while the first calculations relevant to metrology were published in the
mid-1990s.^7,175,176 Currently, the branches of metrology discussed in this review are becoming increasingly dependent on theoretical input, as discussed in Sec. 2.
Theory improvements leading to results with decreased uncertainties proceed along three main, essentially orthogonal directions: level of physics, truncation of many-electron expansions, and basis
set size. There exists an extended hierarchy of approaches in each direction. For the first direction, there exists a set of progressively more accurate physical theories that can be used in
calculations relevant for metrology, from Schrödinger’s quantum mechanics for electrons’ motion in the field of nuclei fixed in space to relativistic quantum mechanics and to QED. The second
direction is relevant for any many-electron system: one has to choose a truncation of the expansion of the many-electron wave function in terms of virtual-excitation operators at the double, triple,
quadruple, etc. level or, equivalently in methods that use explicitly correlated bases (depending explicitly on interelectronic distances), to take into account only correlations of two, three, four,
etc. electrons simultaneously. Third, for any given theory and many-electron expansion level, there are several methods of solving quantum equations specific for this level; in particular different
types of basis sets are used to expand wave functions, resulting in different magnitudes of uncertainties from such calculations.
The lowest theory level is Schrödinger’s quantum mechanics for electrons moving in the field of nuclei fixed in space, i.e., quantum mechanics in the Born–Oppenheimer (BO) approximation. At the next
level, one usually first accounts for the relativistic effects. Post-BO treatment of the Schrödinger equation can be limited to computations of the so-called diagonal adiabatic correction, which is
the simplest method of accounting for couplings of electronic and nuclear motions, or it can fully include nonadiabatic effects, i.e., account for the complete couplings of these two types of motion.
The highest level of theory applied in calculations relevant to metrology is QED, and it can be implemented at several approximations labeled by powers of the fine-structure constant α.
The many-electron expansion starts at the independent-particle model, i.e., at the Hartree–Fock (HF) approximation, but this level is never used alone in calculations for metrology purposes. For
systems with a few electrons (the current practical limit is about 10), one can use the FCI expansion that potentially provides exact solutions of Schrödinger’s equation (provided the orbital basis
set is close to completeness). In FCI, the wave function for an N-electron system is represented as a linear combination of Slater determinants constructed from "excitations" of the ground-state HF determinant $|\Phi_0\rangle$ up to N-tuple excitations,
$$|\Psi_{\mathrm{FCI}}\rangle = c_0|\Phi_0\rangle + \sum_{i,a} c_i^a\,|\Phi_i^a\rangle + \sum_{i<j,\,a<b} c_{ij}^{ab}\,|\Phi_{ij}^{ab}\rangle + \cdots,$$
where $|\Phi_i^a\rangle$ represents a singly excited Slater determinant formed by replacing spinorbital $\phi_i$ by $\phi_a$. Similarly, $|\Phi_{ij}^{ab}\rangle$ represents a doubly excited Slater determinant formed by replacing spinorbitals $\phi_i$ and $\phi_j$ by $\phi_a$ and $\phi_b$, and so on for higher excited determinants. The linear coefficients (CI amplitudes) are computed using the Rayleigh–Ritz variational principle. While the FCI method is conceptually straightforward, the computation time it requires scales with the number of electrons N as N!, and therefore it is computationally very costly. One can limit the expansion
to a subset of excitations (for example, retaining only single and double excitations leads to a method denoted CISD), but truncated expansions are not size extensive. This means that the CISD energy
computed for very large separations between two monomers (atoms or molecules) is not equal to the sum of monomers’ energies computed at the CISD level. Only FCI is free of this problem. Thus,
truncated CI expansions are not appropriate for calculations of interaction energies.
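The lack of size extensivity is easy to demonstrate numerically. The sketch below uses the PySCF package with an arbitrary small basis set and a 100 bohr separation (both illustrative choices, unrelated to the calculations reviewed here): the CISD energy of two essentially noninteracting helium atoms differs from twice the CISD energy of one atom, whereas FCI reproduces the sum.

```python
from pyscf import gto, scf, ci, fci

basis = "cc-pvdz"  # small, illustrative basis set

# One helium atom
atom = gto.M(atom="He 0 0 0", basis=basis)
mf_atom = scf.RHF(atom).run()
e_cisd_atom = ci.CISD(mf_atom).run().e_tot
e_fci_atom = fci.FCI(mf_atom).kernel()[0]

# Two helium atoms 100 bohr apart (essentially noninteracting)
dimer = gto.M(atom="He 0 0 0; He 0 0 100", unit="Bohr", basis=basis)
mf_dimer = scf.RHF(dimer).run()
e_cisd_dimer = ci.CISD(mf_dimer).run().e_tot
e_fci_dimer = fci.FCI(mf_dimer).kernel()[0]

print("CISD size-extensivity error:", e_cisd_dimer - 2 * e_cisd_atom)  # clearly nonzero
print("FCI  size-extensivity error:", e_fci_dimer - 2 * e_fci_atom)    # ~0
```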
Another potentially exact approach is to expand the wave function in an explicitly correlated all-electron basis set. The set most often used in metrology-related applications is the basis set of
explicitly correlated Gaussian (ECG) functions. If basis functions involve all electrons (AEs), expansions in this basis approximate solutions of Schrödinger’s equation in the BO approximation. For
He[2], a four-electron system, the expansion can be written as
$$\Psi = \hat{\mathcal{A}}\Big[\Theta\,\hat{P}\sum_{k\ge 0} c_k\,\phi_k(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3,\mathbf{r}_4)\Big],$$
where $\hat{\mathcal{A}}$ is the four-electron antisymmetrizer, $\Theta$ is the standard four-electron singlet spin function, $\hat{P}=1+\hat{\imath}$ is the point-group symmetry projector, with $\hat{\imath}$ inverting the wave function through the geometrical center, $c_k$ are variational parameters, and $\phi_k$, k > 0, are ECG basis functions. The function $\phi_0$ is the product of ECG functions for the two helium atoms. The explicit form of the $\phi_k$, k > 0, functions is
$$\phi_k = \exp\Big(-\sum_{i=1}^{4}\alpha_{ki}\,|\mathbf{r}_i-\mathbf{A}_{ki}|^2 - \sum_{i<j}\beta_{kij}\,r_{ij}^2\Big),$$
where $r_{ij}=|\mathbf{r}_i-\mathbf{r}_j|$, and the exponents $\alpha_{ki}$, $\beta_{kij}$ and Gaussian centers $\mathbf{A}_{ki}$ are nonlinear variational parameters. For a given set of nonlinear parameters, the linear parameters are obtained using the Rayleigh–Ritz variational method. The simplest way to optimize the nonlinear ones is to use the steepest-descent method, recalculating the linear parameters in each step of this method. In actual applications, significantly more advanced optimization methods are used.
Currently, the standard approach to account for electron correlation effects is the coupled cluster (CC) method with single, double, and noniterative triple excitations [CCSD(T)]. To reduce
uncertainties of CCSD(T), one can use the CC methods that include full triple, T, noniterative quadruple, (Q), and full quadruple, Q, excitations. The CC method represents the wave function in an
exponential form
$$\Psi_{\mathrm{CC}} = e^{\hat{T}}|\Phi_0\rangle,$$
where the operator $\hat{T}$ is the sum of excitation operators
$$\hat{T} = \hat{T}_1 + \hat{T}_2 + \hat{T}_3 + \cdots.$$
The operators $\hat{T}_n$ can be written in terms of pairs of creation $a_a^{\dagger}$ and annihilation $a_i$ operators replacing the occupied spinorbitals $\phi_i$ by the virtual ones $\phi_a$. For the two lowest ranks, we have
$$\hat{T}_1 = \sum_{i,a} t_i^a\, a_a^{\dagger} a_i, \qquad \hat{T}_2 = \tfrac{1}{4}\sum_{i,j,a,b} t_{ij}^{ab}\, a_a^{\dagger} a_b^{\dagger} a_j a_i.$$
The excitation operators $\hat{T}_1$, $\hat{T}_2$, etc. acting on the ground-state determinant produce the same excited determinants as those appearing in the FCI expansion. For example,
$$\hat{T}_1|\Phi_0\rangle = \sum_{i,a} t_i^a\,|\Phi_i^a\rangle.$$
However, the amplitudes $t$ are different from the CI amplitudes $c$. The former amplitudes are obtained by using the exponential expansion in the Schrödinger equation and projecting this equation with the subsequent excited determinants. Since the resulting set of equations is nonlinear, the solution is obtained in an iterative way. If all the excitation operators are kept in $\hat{T}$, the method is equivalent to the FCI method, but this expansion is almost always truncated. The simplest CC approach is that of CC doubles (CCD), in which $\hat{T}$ is truncated to $\hat{T}=\hat{T}_2$. The simplest extension of this model is obtained by including also single excitations (CCSD), i.e., $\hat{T}=\hat{T}_1+\hat{T}_2$. The CCSD method is most often used with orbital basis sets, but can also be used with ECGs, which are then used to expand the two-electron functions resulting from the action of $\hat{T}_2$ and are called in this context Gaussian-type geminals (GTGs). Higher-rank approximations are CCSDT, with $\hat{T}=\hat{T}_1+\hat{T}_2+\hat{T}_3$, and CCSDTQ, with $\hat{T}=\hat{T}_1+\hat{T}_2+\hat{T}_3+\hat{T}_4$. An approximation to CCSDT is a method denoted as CCSD(T), where the coefficients $t$ of single and double excitations are computed iteratively while those for triple excitations are evaluated using perturbation theory. A similar approximation, denoted CCSDT(Q), can be made for the CCSDTQ method. In contrast to the truncated CI expansions, the truncated CC expansions are always size extensive. This results from the fact that the exponential ansatz can be factored for large separations between subsystems into a product of exponential operators for subsystems. The CC method is applied to interatomic or intermolecular interactions in the
supermolecular fashion, i.e., subtracting monomers’ total energies from the total energy of a cluster. Due to size extensivity, the resulting potential-energy surface (PES) dissociates correctly.
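A minimal sketch of the supermolecular route with PySCF is shown below; the basis set is an arbitrary small choice and no counterpoise correction for basis-set superposition error is applied, so the resulting number is only qualitatively meaningful and is not one of the benchmark values discussed in this review.

```python
from pyscf import gto, scf, cc

HARTREE_TO_K = 315775.02  # E_h expressed in kelvin (E_h / k_B)

def ccsd_t_total_energy(atom_spec, basis="aug-cc-pvdz"):
    """Total CCSD(T) energy (hartree) for the given geometry."""
    mol = gto.M(atom=atom_spec, unit="Bohr", basis=basis)
    mf = scf.RHF(mol).run()
    mycc = cc.CCSD(mf).run()
    return mycc.e_tot + mycc.ccsd_t()  # CCSD energy plus noniterative (T) correction

e_dimer = ccsd_t_total_energy("He 0 0 0; He 0 0 5.6")  # near the van der Waals minimum
e_atom = ccsd_t_total_energy("He 0 0 0")
e_int = e_dimer - 2 * e_atom  # supermolecular interaction energy

print(f"CCSD(T) interaction energy at R = 5.6 bohr: {e_int * HARTREE_TO_K:.2f} K")
```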
Another option for computing interaction energies at theory levels similar to truncated CC is symmetry-adapted perturbation theory (SAPT).
The basic assumption of SAPT is the partitioning of the total Hamiltonian $H$ of a cluster into the sum of the Hamiltonians of separated monomers $H_X$ and of the perturbation operator $V$ that collects Coulomb interactions of the electrons and nuclei of a given monomer with those of the other monomers:
$$H = \sum_X H_X + V.$$
The solution of the zeroth-order problem, i.e., of the Schrödinger equation with $H_0=\sum_X H_X$, is then the product of the wave functions of free, noninteracting monomers. This product is not fully antisymmetric since permutations of electrons between different monomers do not result only in a change of the sign of the wave function, i.e., $\Phi_0$ does not satisfy Pauli's exclusion principle. For large intermonomer separations $R$, one can ignore this problem and use the Rayleigh–Schrödinger perturbation theory (RSPT), the simplest form of intermolecular perturbation theory. Unfortunately, RSPT leads to unphysical behavior of the interaction energy at short $R$ as it fails to predict the existence of the repulsive walls on potential-energy surfaces. This failure is the result of the lack of correct symmetry of the wave function under exchanges of electrons between interacting monomers. Thus, to describe interactions everywhere in the intermonomer configuration space, one has to perform symmetry adaptation, i.e., antisymmetrization, and this is the origin of the phrase "symmetry-adapted." There are several ways to do it, but the simplest is to (anti)symmetrize the wave functions of the RSPT method. This leads to the symmetrized Rayleigh–Schrödinger (SRS) approach, which is the only SAPT method used in practice. For a dimer, the interaction energy is then expressed as the following series in powers of $V$:
$$E_{\mathrm{int}} = E^{(1)}_{\mathrm{elst}} + E^{(1)}_{\mathrm{exch}} + E^{(2)}_{\mathrm{ind}} + E^{(2)}_{\mathrm{exch\text{-}ind}} + E^{(2)}_{\mathrm{disp}} + E^{(2)}_{\mathrm{exch\text{-}disp}} + \cdots,$$
where the superscripts denote the powers of $V$ (orders of perturbation theory) and different terms of the same order can be identified as resulting from different physical interactions: electrostatic (elst), exchange (exch), induction (ind), and dispersion (disp). When SAPT is applied to many-electron systems, monomers can be described at various levels of electronic structure theory: from the HF level to the FCI level. This leads to a hierarchy of SAPT levels of approximations depending on treatment of intramonomer electron correlation. If the monomers are approximated at an order $n$ of many-body perturbation theory (MBPT) with the Møller–Plesset (MP) partition of the Hamiltonians $H_X$, denoted as MP$n$, we can write
$$E_{\mathrm{int}} \approx \sum_{k\ge 1}\,\sum_{l=0}^{n} E^{(kl)},$$
where $k$ denotes the order in $V$ and $l$ the order in the intramonomer MP fluctuation potentials; this relation becomes an equality when $n \rightarrow \infty$.
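Low-order SAPT calculations of this kind are available in standard packages. A minimal sketch using Psi4 at the SAPT0 level (the lowest rung of the hierarchy, far below the FCI-level monomer treatment used for helium in the work discussed here); the geometry and basis set are illustrative only:

```python
import psi4

psi4.set_memory("2 GB")
dimer = psi4.geometry("""
0 1
He 0.0 0.0 0.0
--
0 1
He 0.0 0.0 3.0
units angstrom
""")

psi4.set_options({"basis": "aug-cc-pvdz", "scf_type": "df"})
# SAPT0 returns the interaction energy decomposed into electrostatic, exchange,
# induction, and dispersion contributions (plus their exchange counterparts).
e_int = psi4.energy("sapt0")
print("SAPT0 interaction energy (hartree):", e_int)
```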
The third direction determining the accuracy of electronic structure calculations involves the size of the basis sets used to expand wave functions. In the CC and CI approaches, the standard
technique is to use products of orbital (one-electron) basis sets. Many such basis sets are available; the ones most often used in metrology-related calculations are the correlation consistent (cc)
basis sets introduced by Dunning.^183 These basis sets are denoted by cc-pVXZ: cc polarized Valence (i.e., optimized using a frozen-core approximation), X-Zeta, where X = D, T, Q, 5, … is the
so-called cardinal number determining the maximum angular momentum of orbitals. Such basis sets can be augmented by an additional set of diffuse functions and are then denoted as aug-cc-pVXZ, or two
such sets: daug-cc-pVXZ. Another option is to use explicitly correlated basis sets in the CC method or to expand the whole many-electron wave function in such a basis set. Explicitly correlated basis
sets provide a much faster convergence than products of orbital basis sets, but in most cases require optimizations of a large number of nonlinear parameters.
In order to achieve some target size of uncertainties, one has to choose a proper level in each of the three hierarchies defined earlier. For example, it is possible to perform an FCI calculation for
a ten-electron system such as Ne. However, since FCI calculations scale factorially with the number of orbitals, only very small basis sets can be used, resulting in a large uncertainty of the
results. Consequently, a better strategy is to use the CCSD(T) method which allows applications of the largest orbital bases available for a system like Ne[2]. The computed interaction energy will be
accurate to about four significant digits relative to the CCSD(T) limit, but will have a fairly large error, of the order of 1%–2%, with respect to the exact interaction energy at the
non-relativistic BO level. In contrast, FCI calculations for Ne[2] employing the smallest sensible basis set of augmented double-zeta size, which would be extremely difficult to perform, would have
an error of the order of 40% (such calculations might still be useful in hybrid approaches discussed below).
The orbital basis sets consist of families of bases of varying size. One usually carries out calculations in two or more such basis sets and then performs approximate extrapolations to the complete
basis set (CBS) limit. In addition to the standard extrapolations, which assume the X^−3 decay of errors, extrapolations using very accurate ECG results can be performed.^184–186 CCSD(T)/CBS results
may have sufficiently small uncertainties to make calculations of relativistic and diagonal adiabatic corrections necessary, i.e., these corrections may be of the same order of magnitude as the
uncertainties of the CCSD(T)/CBS results. To reduce the errors resulting from the truncation of the many-electron expansion, one can follow CCSD(T)/CBS calculations by CCSDT(Q) or FCI ones in smaller
basis sets. These effects are then included in an incremental way, i.e., by adding the difference between FCI and CCSD(T) energies computed in the same (small) basis set.
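The standard two-point extrapolation assumes that the correlation-energy error decays as X^−3 with the cardinal number X, so that E_X = E_CBS + A/X^3. A minimal sketch with placeholder input energies (not actual computed values):

```python
def cbs_two_point(e_small, e_large, x_small, x_large):
    """Two-point complete-basis-set extrapolation of correlation energies,
    assuming E_X = E_CBS + A / X**3."""
    return (x_large**3 * e_large - x_small**3 * e_small) / (x_large**3 - x_small**3)

# Placeholder CCSD(T) correlation energies (hartree) in X = 4 and X = 5 basis sets.
e_corr_q, e_corr_5 = -0.072345, -0.073012
print("Estimated CBS correlation energy:", cbs_two_point(e_corr_q, e_corr_5, 4, 5))
```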
Accurate solutions of quantum equations are followed by estimates of uncertainties, absolutely necessary for metrology purposes. The latter step is often more time-consuming than the former. One
should emphasize that theoretical estimates of uncertainties are different from statistical estimates of uncertainties of measurements and in particular one cannot assign a rigorous confidence level
to them, although for purposes of metrology one usually assumes that theoretical uncertainties are equivalent to k = 2 expanded uncertainties (95% confidence level).
A theoretical estimate of uncertainty consists of several elements. The most rigorous and reliable estimates are those of basis set truncation errors derived from the observed patterns of convergence
in basis set. Much more difficult are estimates of uncertainties resulting from truncations of many-electron expansions. Such estimates can sometimes be made by performing higher-level calculations
at a single point on a PES, but one most often uses analogy to similar systems for which higher-level calculations have been performed. The same approach can be used to estimate the neglected
physical effects, for example, to estimate the uncertainty due to relativistic effects.
Solutions of the electronic Schrödinger equation for a given nuclear configuration of a dimer or a larger cluster, providing accurate quantum mechanical descriptions of such systems, are only the
first step in theoretical work of relevance to metrology, as most measured quantities discussed in this review are either bulk properties or response properties of atoms and molecules. In the former
case, i.e., to predict properties of gases or liquids relevant for metrology, one needs to know energies of such systems for a large number of configurations, i.e., for different geometries of clusters.
This issue is approached by using the many-body expansion, where here the bodies are atoms or molecules forming the cluster, starting from two-body (pair) interactions, followed by three-body
(pairwise nonadditive) interactions. The approach can be continued to higher-level many-body interactions, but so far this has not been done. The ab initio energies are usually fitted to analytic
forms only for the two- and three-body interactions.
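Once pair and three-body potentials have been fitted, the truncated many-body expansion is evaluated by summing them over all pairs and triples of a configuration. A minimal sketch with placeholder potential functions standing in for the fitted ab initio surfaces:

```python
import itertools
import numpy as np

def cluster_energy(positions, u2, u3):
    """Interaction energy of an atomic cluster from the many-body expansion
    truncated after three-body terms.
    positions        : (N, 3) array of Cartesian coordinates
    u2(r)            : pair potential as a function of the pair distance
    u3(r12, r13, r23): nonadditive three-body potential of the triangle sides"""
    pos = np.asarray(positions, dtype=float)
    dist = lambda i, j: np.linalg.norm(pos[i] - pos[j])
    n = len(pos)
    e2 = sum(u2(dist(i, j)) for i, j in itertools.combinations(range(n), 2))
    e3 = sum(u3(dist(i, j), dist(i, k), dist(j, k))
             for i, j, k in itertools.combinations(range(n), 3))
    return e2 + e3

# Placeholder potentials: a Lennard-Jones pair term and a zero three-body term.
u2 = lambda r: 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
u3 = lambda r12, r13, r23: 0.0
print(cluster_energy([[0, 0, 0], [0, 0, 1.12], [0, 1.12, 0]], u2, u3))
```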
In addition to energies, metrology applications often require knowledge of accurate values of various properties of atoms and molecules, mainly the static and dynamic polarizabilities and magnetic
susceptibilities. These quantities can be computed as analytic energy derivatives with respect to appropriate perturbations. Properties of a single atom or molecule change in condensed phases and the
so-called interaction-induced corrections to properties of isolated atoms or molecules are of interest to metrology.
As already mentioned above, although Schrödinger’s quantum mechanics at the BO level provides the bulk of the physical values of interest to metrology, computations of various effects beyond this
level are often needed to reduce uncertainties of these properties to the magnitude needed for metrology standards. We will refer to these as post-BO effects. It should be stressed that we really
have in mind here the post-nonrelativistic-BO level, since both the relativistic and QED corrections for molecules are usually computed using the BO approximation. One goes beyond this approximation
when computing adiabatic and nonadiabatic corrections. Any reasonably detailed description of methodologies used in post-BO calculations would be too voluminous for the present review. Therefore, we
refer the reader to the original papers, in particular Refs. 10, 11, 84, 137, 177, and 187–192.
Systems of interest to thermodynamics-based precision metrology are mainly noble-gas atoms and their clusters, and this section will be restricted to such systems, with the majority of text devoted
to helium. Apart from being the substance whose behavior is closest to the ideal gas, it is also the only system where theory can currently provide values of physical quantities that are generally
more accurate than the measured ones. Nevertheless, neon and argon are also of significant interest since they may be used in secondary standards to improve instrument sensitivity or ease of use.
Although for most properties computations for neon and argon have larger uncertainties than the best measurements, such results are still useful as independent checks of experimental work and to
guide extrapolation beyond the measured range.
3.1.1. Importance of explicitly correlated basis sets
The current theoretical results for helium owe their very small uncertainties mostly to the use of explicitly correlated basis sets. The calculations involving helium atoms are probably one of the
best examples where an important science problem was solved using these bases. To clearly show where this field would be without the use of such basis sets, we discuss in this subsection numerical
comparisons of ECG and orbital calculations for He[2], performed recently in Ref. 193. The majority of molecular electronic structure calculations are carried out using orbital basis sets. This means
that many-electron wave functions are expanded in products of orbitals. The simplest example is the CI method discussed earlier, where the wave function is a linear combination of Slater determinants
built of orbitals that are usually obtained by solutions of HF equations. However, expansions in orbital products converge slowly due to the difficulty of reproducing the electron cusps in wave functions.
A way around this difficulty is to use bases that depend explicitly on r[12] = |r[2] − r[1]|, the distance between electrons. Bases of this type are called explicitly correlated. For few-electron
systems, such bases are mostly used to directly expand the N-electron wave functions of the nonrelativistic BO approximation. The explicitly correlated bases are also often used for many-electron
systems within a perturbative or CC approach.^194,195 For two-electron systems, one uses Hylleraas bases^196 with polynomial-only dependence on r[12] or, recently more frequently, expansions in
purely exponential functions of r[12], called Slater geminals.^84,192 Bases combining both types of dependence on r[12] are also sometimes employed.^197 For more than two electrons, integrals needed
for such bases become very expensive and bases involving Gaussian correlation factors $e^{-\gamma r_{ij}^2}$, i.e., ECG bases, are mostly used. For a review of the ECG approach, see Refs. 194 and 198.
Since expansions in explicitly correlated bases of the type described above approach solutions of the Schrödinger equation, the equivalent orbital calculations should be performed at the FCI level.
As already mentioned, FCI calculations scale as N! with the number of electrons and therefore are the most expensive of all orbital calculations. Even for He[2], FCI calculations cannot be performed
using the largest available orbital basis sets. Therefore, the optimal orbital-based strategy is a hybrid one consisting of performing calculations in the largest basis sets at a lower level of
theory, for example, at the CCSD(T) level, and adding to these results FCI corrections computed in smaller basis sets.
The BO energies computed in ECG basis sets in Ref. 177 established a new accuracy benchmark for the helium dimer; see the description of these calculations in Sec. 3.3.1. These ECG interaction
energies were compared in Ref. 193 to those computed in orbital bases at the hybrid CCSD(T) plus FCI level. The largest available basis sets were applied. For most points, the CCSD(T)+ΔFCI approach
gives errors nearly two orders of magnitude larger than the ECG estimated uncertainties. For a couple of points, the CCSD(T)+ΔFCI results are fairly close to the ECG results, but this is mainly due
to the former method overestimating the magnitude of the interaction energy at small R and underestimating at large R. Since these points are near the van der Waals minimum, some previous evaluations
of the performance of orbital methods restricted to this region might have been too optimistic. When the whole range of R is considered, CCSD(T)+ΔFCI is no match for the ECG approach. One should also
realize that any improvements of accuracy of the CCSD(T)+ΔFCI approach would require a huge effort; in particular, one would have to develop quadruple-precision versions of all needed orbital
electronic structure codes.
3.2. Helium atom polarizability
One of the properties of helium required by precision measurement standards^199–201 is the helium atom polarizability, both static and dynamic (frequency-dependent). Nonrelativistic calculations of
the static polarizability date back to the 1930s and reached an accuracy of 0.1 ppb in 1996 calculations using Hylleraas basis sets.^202 However, the relativistic correction, which is proportional to
α^2, could be expected to contribute at the 60 ppm level relative to the total polarizability. Unfortunately, the values of these corrections published before 2001 differed significantly from one
another. These discrepancies were resolved by accurate calculations of Refs. 187 (using GTGs) and 203 (using Slater geminals) with uncertainties of 20 ppb relative to the total polarizability. This
work used the Breit–Pauli operator,^204 whose expectation values were computed with the ground-state wave function for the nonrelativistic Hamiltonian.
The authors of Ref. 203 also computed the QED correction of order α^3, which turned out to be significant, amounting to about 20 ppm relative to the total polarizability. However, a part of the α^3
QED correction to the polarizability, resulting from the so-called Bethe logarithm, was only roughly estimated due to the very difficult to compute second electric-field derivative of a
second-order-type perturbation theory expression involving the logarithm of the Hamiltonian. The first complete calculation of Bethe-logarithm contribution to helium polarizability was reported in
Ref. 188. As such a calculation had never been done before, the algorithms and their numerical implementations had to be developed from scratch. The term containing the electric-field derivative of
Bethe’s logarithm turned out to be unexpectedly small, representing only about 0.6% of the total α^3 QED correction. Thus, this correction still makes a contribution of about 20 ppm to the total
Further improvement of the accuracy of helium’s static polarizability was achieved in Ref. 192, which concentrated on the second derivative of the Bethe logarithm with respect to the electric field.
This quantity can be obtained in a couple of ways, with completely different algorithms. The goal was to achieve agreement between two such approaches and also with Ref. 188. This goal was met,
providing a reliable cross-validation for both approaches. The results of Ref. 192, providing currently the most accurate theoretical determination of the polarizability of helium, are shown in Table
1. The value of the α^3 QED contribution computed in Ref. 188 differs from the current one by only 0.03% or 7 ppb relative to the total polarizability. This error is much smaller than the current
uncertainty of the α^4 QED contribution, which is estimated to amount to 0.1 ppm; see Table 1.
TABLE 1. Contributions to the static polarizability of the helium atom (in atomic units), as computed in Ref. 192.

Contribution                                              Value (Ref. 192)
Nonrelativistic                                           1.383 809 986 4
α^2 relativistic                                          −0.000 080 359 9
α^2/m relativistic recoil                                 −0.000 000 093 5(1)
α^3 QED, excluding $\partial_\epsilon^2 \ln k_0$ term     0.000 030 473 8
$\partial_\epsilon^2 \ln k_0$ term                        0.000 000 182 2
α^3/m QED recoil                                          0.000 000 011 12(1)
α^4 QED                                                   0.000 000 56(14)
Finite nuclear size                                       0.000 000 021 7(1)
Total                                                     1.383 760 78(14)
Calculations of Refs. 187 and 188 were extended to frequency-dependent polarizabilities.^84,191 This polarizability was expanded in inverse powers of the wavelength λ up to λ^−8. Different levels of
theory were used for each power of λ: up to α^4 for the static term, α^2 for inverse powers 2 through 6 (only even powers contribute), and nonrelativistic for 8. The dynamic polarizability at the
He–Ne laser wavelength of 632.9908 nm had an uncertainty of 0.1 ppm. This uncertainty results entirely from the uncertainty of the static polarizability. The latter was reduced compared to Ref. 188
mainly because the work of Ref. 205 has shown that the error of the so-called one-loop approximation used to evaluate the α^4 terms is smaller than previously expected, amounting to only about 5%
when applied to the excitation energies of helium. Another small change in the static polarizability was due to a slightly improved value of the Bethe-logarithm contribution; see also Ref. 192.
The polarizabilities computed in Refs. 84, 187, 188, and 191 had uncertainties orders of magnitude smaller than the best experimental results. However, recently a new, very accurate measurement of
this quantity was published.^63 The measured value of the molar polarizability, 0.5172544(10) cm^3/mol, is consistent with the theoretical molar polarizability computed from the atomic one listed
in Table 1 and equal to 0.51725408(5) cm^3/mol. The difference between the two values is less than one-third of the combined uncertainty, while the experimental uncertainty is 20 times larger than the theoretical one.
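The comparison between the atomic value of Table 1 and the measured molar polarizability is a unit conversion, A_ε = (4π/3) N_A a_0^3 α with α in atomic units. A short numerical check (CODATA constants; the result reproduces the theoretical value quoted above):

```python
import math

N_A = 6.02214076e23        # Avogadro constant, 1/mol
a0_cm = 5.29177210903e-9   # Bohr radius, cm
alpha_au = 1.38376078      # helium static polarizability from Table 1, atomic units

A_eps = (4.0 * math.pi / 3.0) * N_A * a0_cm**3 * alpha_au  # molar polarizability
print(f"A_eps = {A_eps:.6f} cm^3/mol")  # approximately 0.517254 cm^3/mol
```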
When a helium atom is in a gas or condensed phase, one can expect that its polarizability changes due to interactions with other atoms. More precisely, the polarizability of a helium cluster is not
equal to the sum of polarizabilities of helium atoms. This change is often referred to as collision-induced polarizability and for atoms is a function of interatomic distance. For a pair of helium
atoms, reliable values of this quantity were computed in Ref. 206, reconciling previously published inconsistent calculations. The results of Ref. 206 were used to compute the second^79 and third^207
dielectric virial coefficients of helium. Very recently, the collision-induced three-body polarizability of helium was computed.^208
Due to inversion symmetry, a system consisting of one or two helium atoms cannot have a dipole moment in the BO approximation. However, configurations of three or more atoms may have a non-zero
dipole moment, which in turn influences the value of the third dielectric virial coefficient.^207 Presently, the only ab initio description of the three-body dipole moment of noble gases is the one
developed by Li and Hunt.^209 However, the results of Ref. 209 apply only at large separations, and do not have associated uncertainties. A dipole-moment surface for the helium trimer with rigorously
defined uncertainty is currently being developed.^210
3.3. Helium dimer potential
3.3.1. BO level
The interest in the helium dimer potential is nearly as old as quantum mechanics. In 1928, Slater^211 developed the first potential for this system, which gave the interaction energy of −8.8 K at the
internuclear distance R = 5.6 bohrs (1 bohr ≈ 52.917 721 09 pm). There is a wide range of helium dimer potentials available in the literature; see Ref. 212 for a comparison of bound-state calculations
using a large number of potentials. Figure 6 illustrates the remarkable progress in accuracy of predictions achieved since 1979. Empirical potentials dominated the field until the end of the 1980s;
the two most widely used ones, HFDHE2^213 and HFD-B,^214 were developed by Aziz et al. The first really successful ab initio one was the LM-2 potential (published only in a tabular form) developed by
Liu and McLean.^215 Those authors performed CI calculations and, by analyzing the configuration space and basis set convergence, obtained extrapolated interaction energies with estimated
uncertainties. Although these estimates were rather crude and do not embrace the current best values for most values of R, cf. Fig. 6, they are reasonable.
Aziz and Slaman^216 used the HFD-B functional form with refitted parameters to “mimic” the behavior of the LM-2 potentials, of the unpublished ab initio data computed by Vos et al.,^217 and of the
small-R Green's-function MC (GFMC) data^218 to obtain potentials denoted as LM2M1 and LM2M2, differing by assuming, respectively, the smallest and the largest well depth of the LM-2 potential as
determined by the estimates of uncertainty. The parameters of these potentials were not fitted directly to ab initio data, but chosen by trial and error to reproduce both theoretical data and
measured quantities to within their error bars. The LM2M2 potential was considered to be the best helium potential until the mid-1990s, when purely ab initio calculations took the lead. Among the
latter ones, the TTY potential developed by Tang et al.^219 has a remarkably simple analytical form based on perturbation theory. The HFD-B3-FCI1 potential was obtained by Aziz et al.,^7 who used the
HFD-B functional form with its original parameters adjusted so that the new potential runs nearly through the ab initio data points. These points were GFMC results of Ref. 218 and the FCI results of
van Mourik and van Lenthe.^220 No uncertainties were assigned to HFD-B3-FCI1, and Fig. 6 shows that it was about as accurate as LM2M2.
The SAPT96 potential^175,176 opens an era of helium potentials based mostly on calculations with explicitly correlated functions. It was the first fully first-principles He[2] potential with a
systematic estimation of uncertainties. The potential was obtained using a two-level incremental strategy. The leading SAPT corrections (the complete first-order and the bulk of the second-order
interaction energies) were computed using GTG basis sets. The GTG-based variant of SAPT was developed in Refs. 221–225. Higher-order SAPT corrections were computed using the general SAPT program
based on orbital expansions.^226–230 Large orbital basis sets including up to g-symmetry functions and midbond functions (placed between the nuclei)^231 were used. The remaining many-electron effects
were computed using both SAPT based on FCI-level monomers, with summations to a very high order of perturbation theory (using He[2]-specific codes), and supermolecular FCI calculations in small
orbital basis sets. It is interesting to note that the actual errors of the SAPT96 potential relative to the current best results turned out to be completely dominated by the residual orbital (rather
than GTG) contributions. For instance, at R = 5.6 bohrs, the orbital part constitutes only −1.81 K out of −11.00 K, but its error was −0.05 K out of the total SAPT96 error of −0.06 K. The factor of 2
underestimation of the uncertainties seen in Fig. 6 for R = 5.6 bohrs was entirely due to this issue. SAPT96 is about as accurate as LM2, except for large R where it is more accurate, with SAPT96
overestimating and LM2 underestimating the magnitude of interaction energy. With an added retardation correction, SAPT96 was used (under the name SAPT2) by Janzen and Aziz^232 to calculate properties
of helium and found to be the most accurate helium potential at that time.
In 1999, van Mourik and Dunning^233 calculated CCSD(T) energies in basis sets up to daug-cc-pV6Z, CCSDT − CCSD(T) differences in the daug-cc-pVQZ basis set, and FCI − CCSDT differences in the
daug-cc-pVTZ basis set. The CCSD(T) energies were CBS-extrapolated and then refined by adding a correction equal to the R-interpolated differences between highly accurate CCSD(T)-R12 results
(available at a few distances in Ref. 234) and the obtained CBS limits. The CC-R12 methods are analogous to CC-GTG methods, but the explicit correlation factor enters linearly.^195 As seen in Fig. 6,
the results of Ref. 233 were more accurate than any previously published ones, but no estimates of uncertainties were provided and the computed interaction energies were not fitted.
Supermolecular ECG-based calculations for He[2] started to appear in the late 1990s,^235,236 and were initially aimed at providing upper bounds to the interaction energies (by subtracting essentially
exact monomer energies), as the authors did not attempt to extrapolate their results to the CBS limits. Another application of explicitly correlated functions to the He–He interaction was a series of
papers by Gdanitz,^237–239 who used the multireference averaged coupled-pair functional method with linear r[12] factors, r[12]-MR-ACPF. The extrapolated results from the last paper of the series,
Ref. 239 (denoted “Gdanitz01” in Fig. 6), were among the most accurate results available at that time. However, the reported uncertainties were strongly underestimated at shorter distances (as much
as 5 times at 5.6 bohrs and 17 times at 4.0 bohrs).
Another important series of papers was published by Anderson et al.,^240–242 who reported quantum MC energies with progressively reduced statistical uncertainties. Although these results were
obtained only for a few internuclear distances, they represented very valuable benchmarks for mainstream electronic structure methods. In fact, until the publication of the CCSAPT07 potential,^185
the result from Ref. 242, −10.998(5) K (see “Anderson04” in Fig. 6), was the most accurate value available at 5.6 bohrs.
In Refs. 243 and 244, a hybrid supermolecular ECG/orbital method was applied to the helium dimer. The bulk of the correlation effect on the interaction energy, at the CCSD level, was evaluated using
GTG functions and the method developed in Refs. 245–256. The nonlinear parameters were optimized at the MP2 level. The effects of noniterative triple excitations [the “(T)” contribution], i.e., the
differences between CCSD(T) and CCSD energies, were calculated using large orbital basis sets (up to aug-cc-pV6Z with bond functions and daug-cc-pV6Z) and extrapolated to the CBS limits. Finally, the
FCI corrections [differences between FCI and CCSD(T) energies] were obtained in basis sets up to aug-cc-pV5Z with bond functions and daug-cc-pV5Z, and also extrapolated. Results for three distances
were reported in Ref. 244 (see “Cencek04” in Fig. 6).
Hurly and Mehl (HM) analyzed the best existing ab initio data for the helium dimer and created a new potential^9 representing a compromise based on uncertainties of existing data and their mutual
agreement (for instance, as can be seen in Fig. 6, the result from Ref. 244 was used at R = 7.0 bohrs). The diagonal adiabatic corrections from Ref. 257 were added to the final potential, which was
then used to calculate the second virial coefficient, viscosity, and thermal conductivity of helium. HM recommended that the values of these thermophysical properties should serve as standards.
The CCSAPT07 potential^185 based on the hybrid GTG/orbital method, published in 2007, was a significant improvement over the previous complete potential of this type, i.e., the SAPT96 potential.^175,176 CCSAPT07 combined three different computational techniques, according to the criterion of the lowest uncertainty available for a given internuclear distance. Variational four-electron ECG
calculations were used for R ≤ 3.0 bohrs and SAPT+FCI was employed for R > 6.5 bohrs. At intermediate distances, the hybrid supermolecular method developed in Refs. 243 and 244 and described above
provided the highest accuracy. Compared to Refs. 243 and 244, several computational improvements were introduced,^184 resulting in significantly reduced uncertainties. The SAPT calculations^185 of
CCSAPT07 followed the SAPT96 recipe, but also with larger basis sets and some computational improvements. The uncertainties of this potential were smaller than some effects that are neglected at the
nonrelativistic BO level. Calculations of these effects will be discussed in Sec. 3.3.2.
Another highly accurate potential, by Hellmann, Bich, and Vogel (HBV),^258 appeared at almost the same time as CCSAPT07. Those authors used very large basis sets (up to daug-cc-pV8Z with added bond
functions at the CCSD level, and gradually smaller bases for higher levels of theory up to FCI) followed by CBS extrapolations. After augmenting the HBV potential with adiabatic, approximate
relativistic, and retardation corrections, the authors used it to calculate thermophysical properties of helium.^259 However, the uncertainties of the HBV potential were not estimated, which
restricts its usefulness. A direct accuracy comparison between the pure BO component of HBV and CCSAPT07 is now possible because of the much higher accuracy of the present-day benchmark energies,^177
and we performed such an analysis using the values reported in the last column of Table 3 in Ref. 258. Out of 11 distances for which all three energies are available, the largest relative error (with
respect to the results of Ref. 177), equal to 0.90%, occurs for the CCSAPT07 energy at 5.0 bohrs, while the error of the HBV energy at this distance is 0.48%. If one excludes this distance, which is
close to where the helium potential crosses zero, and calculates the average relative error at the remaining distances, one obtains 0.007% for CCSAPT07 and 0.011% for HBV. Therefore, both potentials
exhibit a similar accuracy and represent a significant improvement over all previously published helium dimer potentials.
The current most accurate nonrelativistic BO potential for the helium dimer (labeled as “Przybytek17” in Fig. 6) was published in Ref. 177; see Ref. 193 for details of these calculations. The
significant improvement over all previous potentials was achieved by a combination of three factors. First, a pure ECG approach was used, i.e., with all four electrons explicitly correlated and no
contributions calculated with orbital methods. Indeed, the residual errors of the older hybrid ECG/orbital potentials, SAPT96^175,176 and CCSAPT07,^185 were dominated by insufficient basis set
saturation of the relatively small orbital contributions. Second, the use of the monomer contraction method,^189,260 i.e., the use of the product of helium-atom wave functions as one of the
functions in the basis set, dramatically improved the energy convergence with respect to the ECG expansion size. Furthermore, a replacement of the simple product of monomer wave functions by a more
compact sum of four-electron functions optimized for two noninteracting helium atoms^206,261 reduced the computational cost at the nonlinear optimization stage. Third, a near-complete optimization of
nonlinear parameters in large basis set expansions was possible due to this reduced cost and due to other improvements of the optimization algorithm.
3.3.2. Physical effects beyond the nonrelativistic BO level
With the small uncertainties of the CCSAPT07 BO potential, it became clear that a further reduction of uncertainties required inclusion of post-BO effects. The first calculation of all relevant such effects for the whole potential-energy curve was presented in Ref. 137 and was later improved in the works discussed below. Some post-BO effects for the whole curve had been included even earlier, but that work omitted non-negligible two-electron terms in the α^2 relativistic and α^3 QED corrections. The helium dimer potentials that include post-BO effects contain the diagonal adiabatic correction, relativistic corrections (computed earlier, but for the minimum separation only), the QED correction, and the retardation effect (a long-range QED correction).
In these works, all the post-BO corrections were computed in the supermolecular way as differences of expectation values of the appropriate operators with the dimer and monomer wave functions, except at the CCSD(T) level, see below. The nuclear kinetic energy operator was used for the adiabatic correction, the Breit–Pauli operator for the relativistic correction, and the α^3 QED operator for the QED correction. In the latter case, one approximation was made in the operator: in the term containing the Bethe logarithm ln k[0], where the sum runs over the nuclei, the value of ln k[0] should be computed for each internuclear distance R, but instead a constant value was taken, equal to the value of ln k[0] for the helium atom. This is an excellent approximation since ln k[0] depends very weakly on R. Calculations for two interacting ground-state hydrogen atoms have shown that ln k[0] changes by less than 1.15% when R varies from 1.4 bohrs, the distance of the potential minimum, to infinity, where it assumes the atomic value. For H[2], this R-dependence is important since its inclusion changes the dissociation energy by 0.004 cm^−1, while the uncertainty of this quantity is 0.001 cm^−1. This inclusion changes the value of the QED term by 1.8%. The same relative change for He[2] would result only in a 0.00002 K contribution to the interaction energy at the minimum of the potential, negligible compared to the uncertainties coming from other sources.
All post-BO corrections were computed using both four-electron ECG basis sets and orbital basis sets (except for the so-called Araki–Sucher part of the QED operator where only ECG functions were
used). The calculations with smaller uncertainties were selected for the final potential. Orbital calculations were performed using a combination of CCSD(T) and FCI approaches or FCI alone. For the
adiabatic correction, only FCI was used. The calculations of the average values of the operators listed above with ECG and FCI wave functions are straightforward (although regularization techniques
have to be used for singular operators). However, the CCSD(T) wave function needed to compute expectation values is not available (not defined) and instead the CCSD(T) linear response method was
used.^264 The retardation effects of long-range electromagnetic interactions were computed from the Casimir–Polder formula^265 by subtracting the retardation part of the α^2 relativistic and α^3 QED
The calculations of Ref. 177 significantly improved the accuracy of the helium dimer potential, with uncertainties reduced by an order of magnitude compared to those of Refs. 137 and 185. As already
discussed, the main improvement was due to the use of larger and better optimized ECG wave functions at the nonrelativistic BO level of theory for all R ≤ 9 bohrs. Accuracy of the adiabatic and
relativistic corrections was also improved by using larger basis sets than in Refs. 10 and 137. A major theoretical advance was the calculation of the properties of the very weak bound state of He[2]
(the so-called halo state) with full inclusion of nonadiabatic effects.
The accuracy of relativistic and QED contributions was further improved in Ref. 11. The contributions to the interaction energy at the van der Waals minimum are presented in Table 2. Clearly, with
the uncertainty of the BO contribution of 0.00020 K, all the included post-BO contributions are relevant, except for the retardation contribution, but this contribution does become important at very
large separations.^190 One can also see that uncertainties of the adiabatic, relativistic, and QED terms are almost negligible compared to the uncertainty of the BO term. The potential of Ref. 11 was
used to compute the second virial coefficient and the second acoustic virial coefficient of helium.
TABLE 2. Contributions to the interaction energy of the helium dimer at the van der Waals minimum (values and uncertainties in K).

Contribution      Value           Uncertainty
V[BO]            −11.000 71       0.000 20
V[ad]             −0.008 904 8    0.000 009 7
V[rel]             0.015 391 1    0.000 015 4
V[QED]            −0.001 332 7    0.000 001 8
V[ret]             0.000 012      ⋯
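As a quick arithmetic illustration of the discussion of Table 2 above, the short sketch below sums the tabulated contributions and compares the magnitude of each post-BO term with the 0.000 20 K uncertainty of the BO term. Combining the uncertainties in quadrature is an assumption made here for illustration only.

```python
# Sum the contributions of Table 2 (values in K) and compare each post-BO term
# with the uncertainty of the BO term. The quadrature combination below is an
# assumption for this illustration, not a prescription taken from the text.
import math

terms = {                        # value (K), uncertainty (K); V[ret]: none given
    "V[BO]":  (-11.000_71,    0.000_20),
    "V[ad]":  ( -0.008_904_8, 0.000_009_7),
    "V[rel]": (  0.015_391_1, 0.000_015_4),
    "V[QED]": ( -0.001_332_7, 0.000_001_8),
    "V[ret]": (  0.000_012,   0.0),
}

u_bo = terms["V[BO]"][1]
for name, (value, _) in terms.items():
    if name != "V[BO]":
        print(f"{name}: |value| / u(V[BO]) = {abs(value) / u_bo:8.2f}")

total = sum(v for v, _ in terms.values())
u_tot = math.sqrt(sum(u**2 for _, u in terms.values()))
print(f"well depth ~ {total:.5f} K  +/- {u_tot:.5f} K")
```

Running this shows that the adiabatic, relativistic, and QED terms exceed the BO uncertainty by large factors, while the retardation term falls well below it, consistent with the statement above that all included post-BO contributions except retardation are relevant at the minimum.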
3.4. Nonadditive helium potentials
In any fluid, the total interaction energy includes terms beyond pairwise-additive interactions between monomers. These so-called nonadditive contributions begin with three-body nonadditive terms
defined as the part of the trimer interaction energy that cannot be recovered by the sum of two-body interactions. The additive and nonadditive interactions form a series called the many-body
expansion of interaction energy. Fortunately, for all fluids consisting of monomers interacting via noncovalent forces, this expansion converges very fast and usually it is sufficient to limit
calculation to two- and three-body terms. For a review of the many-body expansion, see Ref. 266. For metrology, the three-body potential is needed to calculate the third virial coefficient.
A pairwise-nonadditive potential for helium was developed in Ref. 267 and improved in Ref. 138. In the earlier work, two independent potentials were obtained. One was based on three-body SAPT^268–272
and the other on the supermolecular CCSD(T) approach. Orbital basis sets up to aug-cc-pV5Z were used. The two potentials were in very good agreement. In Ref. 138, the CCSD(T) potential was improved
by calculating the FCI correction in an incremental approach and increasing the number of grid points, with CCSD(T) values taken from Ref. 267 except for the new grid points. Near the minimum of the
total potential, the three-body contribution is only −0.0885 K, which should be compared to the total interaction energy of about −33 K, but the three-body contribution is much larger than the
uncertainty resulting from the two-body term, which is 0.0006 K. The uncertainty of the three-body term at the minimum of the total potential was estimated to be 0.002 K.
Recently, the three-body potential for helium was further improved^273 by adding the relativistic and adiabatic corrections, as well as using a new set of correlation-consistent basis sets
specifically developed for helium atoms.^177 An improved functional form was also used to analytically represent the potential at large distances. In particular, new terms were developed for the case
when two atoms remain close while the third is progressively more distant. These refinements resulted in a reduction of the uncertainty by a factor of about 5 overall. In particular, the uncertainty
at the minimum was reduced to 0.5 mK, a factor of 4 smaller than that of the previous work.^138
3.5. Heavier noble-gas atoms
While theory is superior to experiment for the helium atom and helium clusters, this is not the case for neon, and even less so for argon. The simple reason is the number of electrons per atom: 2,
10, and 18, respectively. While for the helium atom and small helium clusters N-electron explicitly correlated bases can reach ppm or smaller uncertainties, and FCI calculations can be performed
in fairly large bases, for neon neither type of calculation can be performed in bases large enough to get meaningful results. To quantify this statement, let us examine the most accurate calculations
for the neon dimer,^117 see Table 3. The calculations at the CCSD(T) level of theory were performed in the largest available basis sets: modified daug-cc-pV8Z with bond functions. The uncertainty of
the interaction energy obtained in this way is about 200 ppm, which is only ten times larger than the 20 ppm uncertainty of the He[2] BO interaction energy. However, uncertainties coming from some
excitations of higher rank are significantly larger: the pentuple excitation contribution, Δ(P), increases the uncertainty of the total value of interaction energy to about 1000 ppm. The increase of
uncertainties is due to the use of smaller and smaller basis sets as the number of excitations increases: at the CCSDTQ(P) level of theory only the daug-cc-pVDZ basis set could be used. Furthermore,
based on the results in Table 3, it is not possible to estimate the uncertainty resulting from neglecting excitations beyond (P). The lower part of Table 3 shows the convergence in the rank of
excitation. One can see that while the contribution of the triple excitations is very substantial, a 29% increase in the magnitude of interaction energy relative to the CCSD level, the contribution
of quadruple excitations is 57 times smaller than that of triple ones. However, the contribution of pentuple excitations breaks this fairly fast convergence: it is of similar magnitude to that of the
quadruple excitations. Note that one cannot blame the noniterative character of the pentuple excitations, as for lower-rank excitations the iterated and noniterated values are fairly similar. One may
ask if the value of the pentuple contribution computed in Ref. 117 could be a numerical artifact resulting from the use of a rather small basis set. This issue was investigated in Ref. 117, and the
results computed in the aug-cc-pVDZ and aug-cc-pVTZ basis sets were 0.0227 and 0.1113 K, respectively. While these results may indicate that even the first digit of the pentuple excitation contribution may be
uncertain, they also indicate that the order of magnitude will likely remain the same when going to larger basis sets. This would indicate that for Ne[2] the coupled-cluster expansion converges very
slowly, whereas for other closed-shell systems investigated in the literature CCSDTQ(P) agrees with FCI very well, indicating that effects of higher excitations are negligible. Unfortunately, FCI
calculations would be extremely difficult to perform for Ne[2] even in the aug-cc-pVDZ basis set.
TABLE 3. Convergence of the Ne[2] interaction energy with the rank of coupled-cluster excitations (values and uncertainties in K). The upper part lists the contributions entering the final value; the lower part shows the convergence with excitation rank.

Contribution            Value       Uncertainty
CCSD(T)               −41.3301      0.0100
CCSDT − CCSD(T)        −0.5730      0.0115
CCSDT(Q) − CCSDT       −0.1602      0.0112
CCSDTQ − CCSDT(Q)      −0.0043      0.0009
CCSDTQ(P) − CCSDTQ      0.1179      0.0589

CCSD                  −32.5355
CCSDT − CCSD           −9.4437
CCSDTQ − CCSDT         −0.1645
CCSDTQ(P) − CCSDTQ      0.1179
Similar calculations at the limits of the available technology were reported for Ar[2] in Ref. 274 (see also earlier calculations^275 with accurate treatment at very small values of interatomic
distances R). The value of the interaction energy obtained at the van der Waals minimum is −142.86 K and its uncertainty was estimated at 0.46 K. This uncertainty, representing 3000 ppm (0.3%) of the
computed well depth, does not include an estimate of effects beyond CCSDTQ. The results of Ref. 117 for Ne[2] indicate, however, that the post-CCSDTQ contribution may not be negligible.
The first first-principles three-body potential for argon was developed in Ref. 269 using three-body SAPT. It was then used to compute the third virial coefficient of argon^144 and to simulate
vapor–liquid equilibria.^153 An improved three-body potential for argon was developed in Ref. 276 using the CCSDT(Q) level of theory and including core correlation and relativistic effects.
Uncertainties of the potential were estimated. The authors of Ref. 276 also computed the third virial coefficient, obtaining good overall agreement with experimental data. In particular, in some
regions of temperature, theoretical values exhibited smaller uncertainties than experiment and comparisons with theory allowed evaluation of different experiments. When the experimental data were
refitted by a new model that included an approximate fourth virial coefficient,^123 the agreement with theory improved, which can be considered to be a validation of the new model. The work of Ref.
276 shows that despite limitations of accuracy, for some properties of argon theory may provide information relevant for metrology and its accuracy may be competitive with experimental accuracy.
3.6. Magnetic susceptibility
Magnetic susceptibilities of noble gas atoms are relevant for RIGT; see Eq. (11). In general, the magnetic susceptibility is several orders of magnitude smaller than its electric counterpart (hence,
A[μ] is several orders of magnitude smaller than A[ɛ]). This means that only modest accuracy for the magnetic susceptibility, on the order of 0.1% or even 1%, is sufficient for it to make a
negligible contribution to the uncertainty budget of current or planned refractivity-based thermodynamic metrology. Calculations at the BO level are therefore probably sufficient, but it is still
desirable to compute additional effects, at least at lowest order, to verify that they are relatively small.
The first comprehensive calculation of the magnetic susceptibility of the helium atom was performed by Bruch and Weinhold.^277 They added corrections for relativistic effects and nuclear motion to an
existing high-accuracy calculation at the BO level. However, their calculation included only some of the relativistic corrections that enter at lowest order. Recently, Puchalski et al.^80 presented a
definitive calculation of all effects through order α^4, along with a more accurately computed value for the nonrelativistic BO limit using Slater geminals. They obtained agreement within mutual
uncertainties with the calculations of Bruch and Weinhold for individual terms,^277 but included some terms that had been omitted in the previous work. When converted from the atomic units used in
the paper, the final result for ^4He corresponds to A[μ] = −7.92128(13) × 10^−6 cm^3 mol^−1. The relative uncertainty of this result, primarily due to neglected QED effects that enter at the α^5
level, was conservatively estimated at 16 ppm. This is far more than sufficient for any conceivable application of refractivity for temperature or pressure measurement, and the agreement with
previous work encourages confidence in the result.
As with other properties, the greater number of electrons renders the calculation of magnetic susceptibility much more difficult for neon and especially for argon. The current state-of-the-art
calculations for neon^64 and argon^66 were performed only at the nonrelativistic BO level, with a rough uncertainty estimate for neglected relativistic effects based on the magnitude of those effects
for the electric polarizability. The estimated uncertainty of this calculated quantity was ∼0.2% for neon^64 and 1% for argon.^66 The limited experimental information for the magnetic susceptibility
is discussed in Sec. 4.5.3.
4. From Electronic Structure to Thermophysical Properties
Virial expansions are exact results from quantum statistical mechanics which enable a systematically improvable evaluation of various thermophysical properties as a power series in density starting
from the ideal-gas reference system. The coefficients appearing in the N-th term of the series can be computed from the knowledge of the interaction of clusters of N particles.
In the case of the equation of state – i.e., the expansion of the pressure p as a function of the density ρ – one obtains, together with the expansion itself, rigorous expressions for the virial coefficients B(T), C(T), etc., which turn out to be functions of temperature only and are given by Eqs. (38)–(40) in terms of the partition function Q[N](T) of a system of N particles evaluated in the canonical ensemble. These partition functions can be calculated once the interaction potential U[N](x[1], …, x[N]) among the N particles is known; the potential is generally expressed as the many-body expansion U[N] = Σ U[2] + Σ U[3] + ⋯, where the sums run over all pairs and triplets of atoms, U[2] is the pair potential, U[3] is the non-additive contribution to the three-body potential, and so on. Here we have specialized to the case of atomic systems, which is the principal topic of this review; in this case x[i] represents the position of the i-th atom. In the case of molecules, which we will discuss in Sec. 5, the various potentials appearing in the expansion depend also on the coordinates that describe the intramolecular configuration of each molecule; in particular, a single-body (intramolecular) potential will also appear in the expansion. The potentials and their uncertainties can be computed from first principles using the methods described in Sec. 3.

The most general expression for Q[N](T) in quantum statistical mechanics is given by Eq. (43), in which the primed sum runs over a complete set of states |i⟩ of the N-body Hamiltonian H[N] with the proper symmetry upon particle exchange due to the bosonic or fermionic nature of the particles involved. Equation (44) is an equivalent expression in which the sum over states has no restriction on the symmetry and the permutation operators generate all permutations of the particles in the Hilbert space, including the sign of the permutation in the case of fermions. The latter expression is the most convenient one when discussing the path-integral MC approach for the calculation of virial coefficients.
The non-relativistic N-body Hamiltonian is conveniently written as H[N] = Σ_i p_i^2/(2 m_i) + U[N] = K[N] + U[N], where p_i and m_i are the momentum operator and the mass of the i-th particle and the second equality defines the N-body kinetic energy K[N].
Virial expansions of the same form have been derived for several other quantities measured by the gas-based devices described in Sec. 2: the speed of sound, the dielectric constant, and the index of refraction. The acoustic virial coefficients appearing in the speed-of-sound expansion are given in terms of the density virial coefficients and their temperature derivatives.

The density expansion of the dielectric constant is generally given as a generalization of the Clausius–Mossotti equation in one of two equivalent forms. Until recently, derivations of the coefficients appearing in these equations would agree on the expression for the second dielectric virial coefficient, B[ɛ], but differ in the case of the higher-order coefficients. A systematic review of the dielectric expansion established the correct expressions, in which the atomic polarizability enters together with functions given by expressions similar to those for the partition functions Q[N], with the interaction Hamiltonian of the constituent particles extended by two terms describing the interaction of the dipole moment and of the electronic polarizability of the system with an external electric field; the relevant derivatives are to be evaluated at zero field. The two additional terms in the Hamiltonian involve the (non-additive) dipole moments and the (non-additive) electronic polarizabilities of a system of N particles. In the case of atoms of a single species, the one- and two-body dipole moments are both zero, but a system of three atoms has, in general, a nonzero dipole moment.

An expression analogous to the Clausius–Mossotti equation was derived by Lorentz and Lorenz for the refractive index n. The Lorentz–Lorenz equation is relevant to those experiments where the refractive index is measured by optical methods. In this case, the refractive virial coefficients are functions of the angular frequency ω of the electromagnetic radiation as well as of the temperature. Usually, the frequency dependence is approximated by a power-law expansion whose leading frequency-dependent term depends on the interaction-induced Cauchy moment.
4.1. Classical limit
Although the focus of this review is on calculations with no uncontrolled approximation, let us briefly discuss the classical limit of the approach we have outlined. Classical expressions can be
computed relatively easily, and provide a useful high-temperature check for the more involved calculations described below.
Since quantum exchange effects are absent in classical mechanics, the only term that remains in Eq. (44) is the one corresponding to the identity permutation, giving rise to the “correct Boltzmann counting” factor of 1/N! in the partition functions.

In the same limit, the kinetic term K[N] of the Hamiltonian commutes with the potential energy U[N] and with the other operators appearing in the partition function. Its contribution can be integrated exactly, resulting in a factor of 1/Λ^3 for each particle, where Λ is the thermal de Broglie wavelength of the atoms under consideration. Putting all of this together, one obtains Q[N] = Z[N]/(N! Λ^{3N}), where the configurational integral Z[N] has the same form as the quantum expression but contains only the Boltzmann factor exp(−βU[N]) of the potential energy, integrated over the space of all the coordinates needed to describe a system of N atoms, i.e., the Cartesian coordinates x[1], …, x[N]. Since the system is translationally invariant, the integration produces a factor of the volume V, with the understanding that one particle, usually labeled as 1, is fixed at the origin of the coordinate system. Using rotational invariance, one can further reduce the integration to the scalar distances r[12] = |x[2] − x[1]| and r[13] = |x[3] − x[1]| and the angle θ between these two vectors (the polar angle of x[3] − x[1] in spherical coordinates).

Using these results and the definitions of the virial coefficients, one obtains the classical expressions for the second density, acoustic, and dielectric virial coefficients; for example, the second density virial coefficient takes the familiar form B_cl(T) = −2π N_A ∫_0^∞ [exp(−βU[2](r)) − 1] r^2 dr (written here per mole), while the classical B[ɛ] involves in addition the average of the interaction-induced pair polarizability. The classical expression for B[R] is analogous to that for B[ɛ], with the polarizability increment replaced by the corresponding Cauchy moment. In the same way, one can derive expressions for the classical limit of the third density, acoustic, and dielectric virial coefficients. After some lengthy, but straightforward, evaluation, they reduce to integrals over r[12], r[13], and θ of the corresponding Boltzmann factors. The classical expression for the third acoustic virial coefficient is more involved and is not reproduced here.
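To make the classical limit concrete, here is a minimal numerical sketch of the classical second virial coefficient evaluated by direct quadrature. The Lennard-Jones pair potential and its roughly helium-like parameters are placeholders for the ab initio potentials discussed in Sec. 3, and the molar normalization is an assumption; the sketch only illustrates the structure of the classical formula.

```python
# Sketch of the classical second virial coefficient
#   B_cl(T) = -2*pi*N_A * Integral_0^inf (exp(-u(r)/kT) - 1) r^2 dr,
# evaluated by simple trapezoidal quadrature. The Lennard-Jones pair potential
# below is a placeholder for the ab initio potential; its (roughly helium-like)
# parameters are illustrative only.
import numpy as np

K_B = 1.380649e-23          # J/K
N_A = 6.02214076e23         # 1/mol
EPS = 10.9 * K_B            # placeholder well depth (~10.9 K)
SIG = 2.64e-10              # placeholder size parameter, m

def u_pair(r):
    """Lennard-Jones stand-in for the pair potential, in J."""
    x = (SIG / r) ** 6
    return 4.0 * EPS * (x * x - x)

def b2_classical(T, r_max=5e-9, n=200_000):
    r = np.linspace(1e-11, r_max, n)
    f = (np.exp(-u_pair(r) / (K_B * T)) - 1.0) * r**2
    return -2.0 * np.pi * N_A * np.sum(0.5 * (f[1:] + f[:-1])) * (r[1] - r[0])

for T in (100.0, 273.16, 1000.0):
    print(f"T = {T:7.2f} K   B_cl = {1e6 * b2_classical(T):8.3f} cm^3/mol")
```

In the metrological calculations discussed in this review, the same integral structure is of course evaluated with the ab initio potential and with quantum (or at least semiclassical) corrections, as described next.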
4.2. Quantum calculation of virial coefficients
The classical approach can be expected to be valid when Λ/σ ≪ 1, where σ is the size of the hard-core repulsive region of atoms (which is around 6 bohrs for the noble gases); this implies that the
classical formulas will be asymptotically valid for high temperatures and heavy atoms. However, in the case of helium this approximation is too drastic even at room temperature.
The inclusion of quantum effects in the calculation of virial coefficients (density, acoustic, or dielectric) requires evaluating the N-body partition functions Q[N] of Eq. (43) in a quantum
framework. A straightforward approach would be to consider in Eq. (43) the eigenstates |i⟩ of the N-body Hamiltonian, H[N]|i⟩ = E[i]|i⟩, so that Eq. (43) becomes a simple sum. To the best of our
knowledge, this method has been demonstrated to date only in the case of the second dielectric virial coefficient.^79
In the case of Q[2] (which enables the calculation of virial coefficients of order 2), a very fruitful approach dating back to the late 1930s is to rewrite it as the sum of three terms: one depending on the bound-state energies, one depending on the phase shifts of the scattering states, and one depending on the bosonic or fermionic nature of the atoms involved. In the resulting expression, B(T) is written in terms of the reduced mass of the pair of atoms considered, the energies of the bound states with relative angular momentum ℓ, the scattering phase shifts δ[ℓ](E), and a spin-statistical factor of the form 1 + (−1)^ℓ/(2I + 1), where I is the nuclear spin in the case of identical atoms (the case of different atoms can be recovered by letting I → ∞). The phase shift δ[ℓ](E) refers to two particles with relative energy E and angular momentum ℓ; absolute phase shifts are continuous functions of E that tend, in the limit E → 0, to π times the number of bound states at angular momentum ℓ. With the advent of electronic computers, the use of these expressions enabled the calculation of accurate numerical values, and it is still the most efficient way to compute the second virial coefficient of atomic species.

One important benefit of this method is that once the energies of all the bound states have been computed and the phase shifts are known for a sufficiently high number of total angular momenta and scattering energies, the values of Q[2](T) and its derivatives, and hence B(T), can easily be computed at all temperatures; knowledge of the collision-induced pair polarizability also enables the calculation of B[ɛ](T). Additionally, transport properties such as the viscosity and the thermal conductivity – see Sec. 4.6 below – can be computed in a straightforward manner.
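As an illustration of the scattering input entering this approach, the sketch below computes a single phase shift δ_ℓ(E) by Numerov integration of the radial Schrödinger equation and matching to free spherical waves at two large radii. The Lennard-Jones potential in reduced units (ħ²/2μ = ε = σ = 1), the grid, and the matching radii are illustrative assumptions, not the settings used in the works cited above.

```python
# Sketch: phase shift delta_l(E) for a model pair potential via Numerov
# integration of the radial equation u'' = -g(r) u, with
#   g(r) = 2*mu/hbar^2 * (E - V(r)) - l*(l+1)/r^2 .
# Reduced units: hbar^2/(2*mu) = 1; a Lennard-Jones potential with
# eps = sigma = 1 stands in for the ab initio potential (illustrative only).
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def v_lj(r):
    return 4.0 * (r**-12 - r**-6)

def phase_shift(E, l=0, r_start=0.5, r_a=14.0, r_b=15.0, h=1e-3):
    k = np.sqrt(E)
    g = lambda r: 2.0 * (E - v_lj(r)) - l * (l + 1) / r**2
    r = r_start
    u_prev, u_curr = 0.0, 1e-10          # u ~ 0 deep inside the repulsive core
    u_a = r_a_actual = None
    while r < r_b:                        # Numerov propagation outward
        c_prev = 1.0 + h * h * g(r - h) / 12.0
        c_curr = 2.0 * (1.0 - 5.0 * h * h * g(r) / 12.0)
        c_next = 1.0 + h * h * g(r + h) / 12.0
        u_prev, u_curr = u_curr, (c_curr * u_curr - c_prev * u_prev) / c_next
        r += h
        if u_a is None and r >= r_a:      # first matching point
            r_a_actual, u_a = r, u_curr
    r_b_actual, u_b = r, u_curr           # second matching point
    # Match to u(r) ~ A*r*[cos(d)*j_l(kr) - sin(d)*y_l(kr)]; delta is mod pi.
    K = (u_a * r_b_actual) / (u_b * r_a_actual)
    num = K * spherical_jn(l, k * r_b_actual) - spherical_jn(l, k * r_a_actual)
    den = K * spherical_yn(l, k * r_b_actual) - spherical_yn(l, k * r_a_actual)
    return np.arctan2(num, den)

for E in (0.1, 0.5, 1.0, 2.0):
    print(f"E = {E:4.2f}   delta_0 = {phase_shift(E):+.4f} rad")
```

A production calculation repeats this for many energies and angular momenta, extracts bound-state energies separately, and then performs the thermal sums and integrals mentioned above.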
Unfortunately, this approach cannot be easily extended to higher-order coefficients. Some attempts in this direction were made in the 1960s,^292,293 but all of them required the introduction of some
uncontrolled approximations and did not take into account the non-additive parts of the many-body potential.
4.2.1. Path integral approach
At the same time, the path-integral approach to quantum statistical mechanics was shown by Fosdick and Jordan to provide a systematic way to compute virial coefficients of any order without any uncontrolled approximation. The path-integral formulation is based on a controlled approximation of the exponential of the N-body Hamiltonian, Eq. (70).
Equation (70) is the Li–Broughton expansion of the exponential of the sum,^296 which was independently discovered by Kono et al.^297 based on an initial idea by Takahashi and Imada.^298 It can be
shown that Eq. (70) becomes an exact equality in the case P → ∞, although in practice satisfactory convergence is reached for a finite value of the parameter P. Actually, Eq. (70) becomes an equality
in the P → ∞ limit also when the correction term O is omitted in Eq. (70) (this is the original Trotter–Suzuki approach),^299,300 although in this case convergence requires higher values of P; this approach is called
the primitive approximation,^281 and, for the sake of simplicity, will be used throughout this review.
The path-integral approach is obtained by using Eq. (70) in Eq. (44) and inserting P − 1 additional completeness relations between the P factors in Eq. (70). Additionally, one uses as a complete set the (generalized) position eigenstates |x^(1)⟩, where we have included a superscript (1) for later convenience. In this case, the sum over states in Eq. (44) becomes an integral over the 3N coordinates x^(1), and the P − 1 completeness relations introduce the additional sets of coordinates x^(k), k = 2, …, P. Notice that in this case the effect of the permutation operators is to exchange atomic coordinates in the rightmost ket; for example, the permutation of particles 1 and 2 (assumed to be bosons) exchanges the coordinates of these two particles in that ket.

Let us first proceed assuming the identity permutation (that is, we are considering Boltzmann statistics; this approximation is essentially exact for T ≳ 10 K even in the case of helium) and the case of density virials of pure species, so that the Hamiltonian is the one given above. The potential-energy operators U[N] (and, if needed, the dipole-moment and polarizability operators) are diagonal in the position basis. The matrix elements of the exponential of the kinetic-energy operators can be calculated exactly; for a particle of mass m and imaginary-time step τ = β/P they are proportional to exp[−m(x − x′)^2/(2ħ^2τ)], so that Q[N] can be written as a 3NP-dimensional integral with the understanding that x^(P+1) ≡ x^(1). These expressions, which correspond exactly (in the P → ∞ limit) to the original quantum statistical formulation, can be interpreted as the partition function of an equivalent classical system of ring polymers.

For each of the original N particles of coordinates x[i]^(1), one has introduced P − 1 copies of coordinates x[i]^(k), which are connected via harmonic potentials. The equivalent classical system is then made of N ring polymers of P monomers each. As shown by Eq. (76), these polymers interact with the original potential averaged over all the monomers. It can be shown that the functions describing the internal ring-polymer coordinates represent probability distributions. Although they are not Gaussian probabilities, because of the ring-polymer closure condition x^(P+1) = x^(1), they can be sampled exactly using an interpolation formula due to Lévy (also known as “the Brownian bridge”). The harmonic intra-polymer interaction, which ultimately comes from the kinetic-energy term of the quantum Hamiltonian, has the effect that the average “size” of the ring polymer corresponding to each particle is of the order of the de Broglie thermal wavelength Λ, thus taking into account quantum diffraction (that is, the Heisenberg uncertainty principle).
In order to compute the functions Z[N] (and, hence, the virial coefficients), it is convenient to separate the NP vector coordinates x[i]^(k) as follows: first of all, we notice that the energy of the equivalent classical system is invariant under an overall rigid rotation or rigid translation. We can use the latter property to extract a factor of V and at the same time pin one of the coordinates – conventionally the first monomer of particle 1, that is x[1]^(1) – at the origin of the coordinate system. The rotational invariance can be taken into account by assuming that the first monomer of one particle (particle 2, say) lies along the x axis of the coordinate system and that the first monomer of another particle (particle 3) lies in the xy plane. This convention brings about a factor of 4π when N = 2 (corresponding to the integration over the two polar angles describing x[2]^(1)) and a factor of 8π^2 (that is, the integration over the two polar angles describing x[2]^(1) and the azimuthal angle of x[3]^(1)) when N ≥ 3. The remaining 3NP − 6 coordinates (or 3NP − 5 in the case of N = 2) can be conveniently divided into
1. The coordinates of the first bead of all the particles, that is r[12] = |x[2]^(1) − x[1]^(1)| and, for N ≥ 3, r[13] = |x[3]^(1) − x[1]^(1)| and cos θ[23], as well as the x[i]^(1) themselves for N ≥ 4, where θ[23] is the angle between the positions of particles 2 and 3 in the xy plane.
2. The relative coordinates Δr[i]^(k) (k = 1, …, P − 1).
Since these distributions depend only on the relative coordinates Δr[i]^(k), one can rewrite the partition functions Z[N] in a form in which the Boltzmann factor of the potential energy is averaged over the internal configurations of the ring polymers. Finally, using these expressions and the definition of the virial coefficients, one obtains Eqs. (83)–(86), which are very similar to the classical expressions reported in Sec. 4.1. The path-integral expressions are obtained from the classical expressions by evaluating potentials and polarizabilities as averages over the ring-polymer beads [see Eq. (76)] and by averaging the resulting expressions over the configurations of the ring polymers, as evidenced by the angular brackets in Eqs. (83)–(86). Analogous substitutions yield the path-integral expressions for the remaining coefficients. Explicit expressions for the third acoustic virial coefficient in the path-integral formulation are quite cumbersome and are not reproduced here; they can be found in the literature.
It is important to notice that in the case of C(T) the terms coming from Z[2]^2 in Eq. (39) actually involve averages over four ring polymers, since these two terms involve two particles each and have to be treated as independent, lest spurious correlations be introduced in the calculation of the ⟨⋯⟩ average. In fact, in the last term of Eq. (85) two of these polymers are used to compute exp[−βŪ[2](r[12])] − 1 and the other two to compute exp[−βŪ[2](r[13])] − 1. Similar considerations also apply when calculating γ[a] and C[ɛ] using path integrals.

Quantum effects are taken into account by averaging over the ring-polymer configurations, and at the same time evaluating the interaction energy as an average over the monomers, as in Eq. (76). We recall that in Eqs. (83) and (85) the radial variables r[ij] = |x[i]^(1) − x[j]^(1)| are the distances between the first monomers of particles i and j. In the classical limit, the size of the ring polymers shrinks to zero, so that one recovers the results of Sec. 4.1.
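A minimal sketch of how the ring-polymer average enters the second virial coefficient in practice is given below: the internal coordinates of two free ring polymers are sampled exactly with the Lévy (Brownian-bridge) construction, the pair potential is averaged over the beads, and the resulting Boltzmann factor is integrated over the separation of the first beads. The Lennard-Jones potential with helium-like parameters is a placeholder for the ab initio pair potential, Boltzmann statistics is assumed (no exchange), and the bead number, grid, and sample size are illustrative only.

```python
# Minimal path-integral sketch of the quantum second virial coefficient for a
# pair of identical atoms obeying Boltzmann statistics,
#   B(T) ~ -2*pi*N_A * Integral [ <exp(-beta*Ubar2(r))> - 1 ] r^2 dr ,
# where Ubar2 is the pair potential averaged over the P beads of two free
# ring polymers whose first beads are held a distance r apart.
import numpy as np

K_B, HBAR, N_A = 1.380649e-23, 1.054571817e-34, 6.02214076e23
MASS = 4.002602e-3 / N_A            # ~ helium-4 atomic mass, kg
EPS, SIG = 10.9 * K_B, 2.64e-10     # placeholder LJ parameters

def u_pair(r):
    x = (SIG / r) ** 6
    return 4.0 * EPS * (x * x - x)

def ring_polymer(P, beta, rng):
    """P x 3 bead positions of a free ring polymer pinned at the origin."""
    step_var = HBAR**2 * beta / (MASS * P)      # one-step variance per axis
    beads = np.zeros((P, 3))
    for k in range(1, P):
        j = P - k + 1                           # steps left to return to 0
        mean = beads[k - 1] * (j - 1) / j       # Levy / Brownian-bridge rule
        var = step_var * (j - 1) / j
        beads[k] = mean + rng.normal(scale=np.sqrt(var), size=3)
    return beads

def b2_quantum(T, P=32, n_samples=400, seed=1):
    rng = np.random.default_rng(seed)
    beta = 1.0 / (K_B * T)
    r_grid = np.linspace(0.05 * SIG, 12 * SIG, 120)
    mean_boltz = np.zeros_like(r_grid)
    for _ in range(n_samples):
        delta = ring_polymer(P, beta, rng) - ring_polymer(P, beta, rng)
        for i, r in enumerate(r_grid):
            rk = np.linalg.norm(delta + np.array([0.0, 0.0, r]), axis=1)
            mean_boltz[i] += np.exp(-beta * np.mean(u_pair(rk)))
    mean_boltz /= n_samples
    f = (mean_boltz - 1.0) * r_grid**2
    dr = r_grid[1] - r_grid[0]
    return -2.0 * np.pi * N_A * np.sum(0.5 * (f[1:] + f[:-1])) * dr

print(f"B(50 K) ~ {1e6 * b2_quantum(50.0):.2f} cm^3/mol  (placeholder potential)")
```

In the classical limit the ring polymers shrink to points and the sketch reduces to the classical quadrature shown in Sec. 4.1; production calculations additionally include the exchange contribution and use far larger sample sizes.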
It is worth noting that one can find several semi-classical approximations of the exact path-integral expressions of Eqs. (83)–(86). In general, they can be obtained by expanding the full
quantum-mechanical results in powers of ℏ^2, where the first term is the classical one. This approach was pioneered by Wigner and Kirkwood^304,305 and subsequently developed by Feynman and Hibbs,^280
who put forward the idea of estimating semiclassical values by using the classical expressions with suitably modified (and temperature-dependent) potentials. Although the Feynman–Hibbs approach
considered systems with pair potentials only, a systematic derivation of semiclassical expressions in the case of three-body interactions has been developed by Yokota.^306 Even if semiclassical
approaches introduce uncontrolled approximations, they are quite effective in the case of heavier atoms such as argon at high temperatures and provide a useful check for the fully quantum calculations.
4.2.2. Exchange effects
The bosonic or fermionic nature of the particles enters in those terms of Eq. (44) where the permutation operator is different from the identity. In the case of the equivalent classical system, the main effect of the permutation operators is that the condition of closed ring polymers, that is x[i]^(P+1) = x[i]^(1), is no longer valid. For a general permutation, bead P + 1 of particle i coincides with the first bead of the particle exchanged with i under the action of the permutation. This is equivalent to saying that some of the ring polymers coalesce into larger polymers, depending on the specific permutation that is being considered in the sum of Eq. (44). These larger ring polymers are still described by probability distributions similar to those of the Boltzmann case. As an illustrative example, let us see how the probability distribution for the internal coordinates of particles 1 and 2 is modified in the presence of exchange for bosons of spin 0. Concatenating the beads of particles 1 and 2 into a single chain of 2P beads (the chain closes on itself after 2P steps because we are considering the permutation involving only particles 1 and 2), the kinetic-energy terms that would give rise to the two separate ring-polymer distributions can be rewritten as the probability distribution of a single ring polymer of 2P monomers describing a particle of mass m/2 at the same temperature. In the case of the second virial coefficient, where this is the only exchange term present, this contribution is just a simple average over the larger polymer.
In addition to this, the various terms in the sum over permutations of Eq. (44) also acquire factors depending on the number of nuclear spin states of the particles, that is factors of 1/(2I + 1) for
a nuclear spin I. A detailed derivation of these factors is reported in Refs. 79 and 126.
4.3. Uncertainty propagation
As is apparent from their definition, the calculation of virial coefficients depends on the knowledge of few-body properties of atoms, namely interaction potentials, polarizabilities, and dipole
moments. In a completely ab initio calculation of virial coefficients, these quantities – as seen in Sec. 3 – are determined by electronic-structure calculations and are provided with a full
uncertainty estimation. In this section, we will show how this uncertainty can be propagated to the uncertainty in virial coefficients, using the third virial coefficient C(T) as an example.
The first approach consists of calculating values of C(T) using perturbed pair and three-body potentials, i.e., potentials shifted up and down by their uncertainty bands, where we have assumed that the uncertainties in the potentials – N = 2 or N = 3 in the case of the pair and three-body potential, respectively – are given as expanded (k = 2) uncertainties. Assuming that a (k = 2) perturbation of the potential results in a (k = 2) perturbation of the virial coefficient, one fourth of the absolute value of the difference between the two perturbed results is interpreted as a standard (k = 1) uncertainty. The overall standard uncertainty in C(T) due to the uncertainty in the potentials is then obtained as a sum in quadrature of the pair and three-body contributions.
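A schematic implementation of this "perturbed potential" prescription for the (classical, for simplicity) second virial coefficient is sketched below. The pair potential, its expanded (k = 2) uncertainty band, and all parameters are invented placeholders; only the structure of the estimate – one fourth of the difference between the results obtained with the potential shifted up and down – follows the recipe described above, and only the pair term is shown (the three-body contribution would be added in quadrature).

```python
# Sketch of the "perturbed potential" uncertainty estimate: recompute a
# classical B(T) with the pair potential shifted by its expanded (k = 2)
# uncertainty band and take one fourth of the difference as the standard
# (k = 1) uncertainty. Potential and uncertainty band are illustrative only.
import numpy as np

K_B, N_A = 1.380649e-23, 6.02214076e23
EPS, SIG = 10.9 * K_B, 2.64e-10                       # placeholder parameters

def b2_classical(T, u_of_r, r_max=5e-9, n=100_000):
    r = np.linspace(1e-11, r_max, n)
    f = (np.exp(-u_of_r(r) / (K_B * T)) - 1.0) * r**2
    return -2.0 * np.pi * N_A * np.sum(0.5 * (f[1:] + f[:-1])) * (r[1] - r[0])

u     = lambda r: 4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)
du_k2 = lambda r: 2e-3 * EPS * np.exp(-((r - SIG) / SIG) ** 2)   # toy k=2 band

T = 273.16
b_plus  = b2_classical(T, lambda r: u(r) + du_k2(r))
b_minus = b2_classical(T, lambda r: u(r) - du_k2(r))
u_std   = abs(b_plus - b_minus) / 4.0                 # k=1 uncertainty estimate
print(f"u(B) ~ {1e6 * u_std:.4f} cm^3/mol at T = {T} K")
```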
Although this approach was used in early calculations of the virial coefficients,
it is unsatisfactory for several reasons. First of all, it considers only rigid shifts of the potentials, while in principle the actual potential can be closer to the upper bound for some
configurations and closer to the lower bound for others. Secondly, the uncertainty is obtained as a difference of quantities which are themselves computed with some statistical uncertainty. This requires very long runs to make sure that the difference is not influenced by the statistical error in the calculation of the virial coefficients themselves.
A more satisfactory approach is obtained by considering that the virial coefficients are functions of the temperature T as well as functionals of the potentials. A variation of the potential will then produce a corresponding variation in the value of the virial coefficient, obtained by integrating the functional derivative of the coefficient with respect to the potential against the potential uncertainty; taking the absolute value of the functional derivative in this integral corresponds to the conservative choice of assuming that all the variations contribute with the same (positive) sign to the final uncertainty. We note in passing that in the case of the second virial coefficient B(T), the two prescriptions produce the same result. The evaluation of this estimate requires the functional derivative of C(T) with respect to the pair and three-body potentials. As a first approximation, one can use the classical expression for C(T) (possibly augmented by semiclassical corrections).
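For the classical second virial coefficient this functional derivative is available in closed form, which makes the idea easy to illustrate: since B_cl(T) = −2πN_A ∫ (e^{−βu(r)} − 1) r² dr, one has δB/δu(r) = 2πN_A β e^{−βu(r)} r², and a conservative propagated uncertainty follows by integrating its absolute value against the (k = 1) potential uncertainty. The toy potential and uncertainty band below are the same illustrative placeholders used in the previous sketch (with the k = 1 band equal to half the k = 2 band).

```python
# Sketch of the functional-derivative route for the classical second virial
# coefficient: dB/du(r) = 2*pi*N_A*beta*exp(-beta*u(r))*r^2, integrated
# against the (k = 1) potential uncertainty du(r). Illustrative inputs only.
import numpy as np

K_B, N_A = 1.380649e-23, 6.02214076e23
EPS, SIG = 10.9 * K_B, 2.64e-10
u     = lambda r: 4 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)
du_k1 = lambda r: 1e-3 * EPS * np.exp(-((r - SIG) / SIG) ** 2)   # toy k=1 band

def u_of_b2(T, r_max=5e-9, n=100_000):
    beta = 1.0 / (K_B * T)
    r = np.linspace(1e-11, r_max, n)
    kernel = 2.0 * np.pi * N_A * beta * np.exp(-beta * u(r)) * r**2  # |dB/du|
    f = kernel * du_k1(r)
    return np.sum(0.5 * (f[1:] + f[:-1])) * (r[1] - r[0])

print(f"u(B) ~ {1e6 * u_of_b2(273.16):.4f} cm^3/mol at 273.16 K")
```

With these placeholder inputs the perturbed-potential and functional-derivative prescriptions agree to first order, as expected for the second virial coefficient; the advantage of the functional-derivative route appears for higher-order coefficients and noisy path-integral averages.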
More accurate results (especially at low temperatures) are obtained by functional differentiation of the path-integral expressions themselves, which yields analogous formulas in which the classical Boltzmann factors are replaced by their averages over the ring-polymer configurations. The same approach can be used in the calculation of the propagated uncertainties for dielectric virial coefficients.

In actual practice, these expressions enable rigorous estimation of the uncertainty propagated from the potentials with a much smaller computational effort than that needed to compute virial coefficients. Additionally, the a priori knowledge of a lower bound on the uncertainty and its temperature dependence facilitates the process of finding the optimal set of parameters for the path-integral simulations (cutoff distance, number of beads P, number of MC integration points) in order to make the statistical uncertainty of the calculation a minor contributor to the total uncertainty.
4.4. Mayer sampling and the virial equation of state
Equations (38)–(40) show that the expressions for the virial coefficients become more involved when the order is increased. Although these expressions can be systematically derived using
computer-algebra systems, their subsequent implementation in classical or quantum frameworks becomes more and more time-consuming. Taking also into account the limited availability of ab initio
many-body potentials (at the time of this writing, these are limited to three bodies and have been developed only for a small set of atoms and molecules), it might seem that a fully ab initio
calculation of the equation of state using virial expansions could not be feasible. Nevertheless, it is observed that the largest contributions to the value of the virial coefficients come from the
many-body potentials of lower orders, as already discussed in Sec. 2.4. As a consequence, even if only pair and three-body potentials are available, a calculation of higher-order virial coefficients
can provide useful and reasonably accurate representations of the equation of state.^309,310
A very efficient procedure to perform this task is based on the diagrammatic approach by Ursell^311 and Mayer,^312,313 who showed how the various terms contributing to the virial coefficients can be
related to simpler cluster integrals that can be cataloged using a diagrammatic form. The contributions from the diagrams can be added very efficiently using MC sampling methods.^314 Although the
number of diagrams increases exponentially with the order of the virial coefficient, it has been shown that calculations can be kept within a manageable size up to virial coefficients of order 16,^
315–317 resulting in equations of state with very good accuracy up to the binodal (condensation) density.^310
Mayer sampling methods, originally developed for monatomic systems, have been extended to molecules^318 and therefore can also be used to perform path-integral calculations of density^121,319 and
acoustic virial coefficients.^136 This approach provides an independent validation of the framework outlined in this review. Virial coefficients calculated using both approaches are found to be
compatible within mutual uncertainties.^303
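The Mayer-sampling idea can be illustrated on the simplest possible case, the classical second virial coefficient of a Lennard-Jones model in reduced units: configurations are sampled with a weight proportional to the absolute value of the target Mayer function, and the result is obtained as a ratio to a hard-sphere reference whose cluster integral is known analytically. The potential, the reference diameter, and the run length below are illustrative assumptions; production Mayer-sampling calculations use the ab initio (possibly path-integral) integrands and much more sophisticated sampling.

```python
# Minimal Mayer-sampling sketch for the classical second virial coefficient:
# sample the pair separation with weight |f_target(r)|, where
# f = exp(-beta*u) - 1 is the Mayer function, and obtain B2 as a ratio to a
# hard-sphere reference with known B2. Reduced units, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
BETA, SIG_HS = 1.0, 0.8                      # kT = eps; reference diameter

def f_target(rvec):                          # LJ Mayer function, eps = sig = 1
    r = np.linalg.norm(rvec)
    x = r ** -6
    return np.exp(-BETA * 4.0 * (x * x - x)) - 1.0

def f_ref(rvec):                             # hard-sphere Mayer function
    return -1.0 if np.linalg.norm(rvec) < SIG_HS else 0.0

B2_REF = 2.0 * np.pi * SIG_HS**3 / 3.0       # hard-sphere B2 (analytic)

def mayer_sampling(n_steps=200_000, max_disp=0.4):
    x = np.array([1.1, 0.0, 0.0])            # start where |f_target| > 0
    w = abs(f_target(x))
    num = den = 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-max_disp, max_disp, size=3)
        w_new = abs(f_target(x_new))
        if w_new > 0 and rng.random() < w_new / w:   # Metropolis on |f_target|
            x, w = x_new, w_new
        num += f_target(x) / w               # estimates int f_t / int |f_t|
        den += f_ref(x) / w                  # estimates int f_ref / int |f_t|
    return B2_REF * num / den                # B2_target in units of sigma^3

print(f"B2 ~ {mayer_sampling():.3f} sigma^3 at kT = eps (reduced units)")
```

Because only the ratio of the target and reference cluster integrals is estimated, the (unknown) normalization of the sampling weight cancels, which is the key feature that makes the method scale to high-order coefficients.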
4.5. Numerical results for virial coefficients
As seen in Sec. 4.2, a fully first-principles calculation of virial coefficients requires the knowledge of many-body potentials and, in the case of dielectric properties, polarizabilities, which can
be obtained by ab initio electronic structure calculations. Currently, as discussed in Sec. 3, the only system for which these calculations can be made without uncontrolled approximations is helium.
Much effort has been devoted to produce high-quality potentials from first principles. At the time of writing, the most accurate pair potential is the one developed by Czachorowski et al.,^11 which
includes relativistic and QED effects. This potential was developed using exactly the same approach as the potential of Ref. 177, the only difference being that the relativistic and QED corrections
were computed using a larger basis set. As a consequence of including the adiabatic corrections and recoil terms, slightly different pair potentials are available for the ^4He–^4He, ^3He–^3He, and ^
4He–^3He interactions.
Recently, a new three-body potential for ^4He, including relativistic effects, has been developed,^273 resulting in a significant increase of accuracy with respect to the previous non-relativistic
potential (see Sec. 3.4).^138 In the case of dielectric properties, the single-atom polarizability has been calculated with outstanding accuracy.^192 The most accurate pair-induced polarizability
currently available is that of Cencek et al.^206 and, recently, fully ab initio calculations of the three-body polarizability^208 and dipole moment^210 have been performed, enabling a calculation of
the third dielectric virial coefficient with well-defined uncertainties completely from first principles.^210
In the case of neon, the most recent pair potentials and polarizabilities have been computed by Hellmann and coworkers.^65,117 Parametrizations of three-body potentials have appeared in the
literature,^320 but no first-principles calculations have been published so far.
Due to its easy accessibility and large measurement effects, argon has been the subject of many theoretical studies. However, the large number of electrons prevents calculations of potentials and
polarizabilities with the same accuracy as the lighter noble gases, and some uncontrolled approximations are still necessary. The most accurate pair potential so far has been developed by Lang et al.
,^118 while a three-body potential with well-characterized uncertainties was computed and characterized by Cencek and co-workers.^276 Regarding dielectric properties, the most accurate pair
polarizability is the one developed by Vogel et al.^164 In the case of neon and argon, no three-body polarizabilities are available. Calculations have been performed using the superposition
approximation^321,322 for the three-body polarizability. Although the results of these calculations compare well with the available experimental data, their uncertainty is to a large extent unknown.
We report in Table 4 the most up-to-date references regarding ab initio calculations of virial coefficients. This table to some extent serves as an update to the table of recommended data presented
by Rourke.^323
TABLE 4. Most up-to-date references for ab initio calculations of virial coefficients and related properties of helium, neon, and argon.

Property   Helium                   Neon                      Argon
B          Reference 11             Reference 117             Reference 118
C          References 273 and 303   Reference 324             Reference 276
D          Reference 126            Reference 324             Reference 123
β[a]       Reference 11             Reference 117             Reference 118
γ[a]       References 273 and 303   ⋯                         Reference 325
A[ɛ]       Reference 192            Reference 65              Reference 66
B[ɛ]       Reference 79             Reference 117             References 79 and 164
C[ɛ]       References 208 and 210   Reference 207             Reference 207
A[μ]^a     Reference 80             Reference 64^b            Reference 66^b
B[R]       Reference 79             References 79 and 117^c   Reference 79
η          Reference 10             Reference 117             Reference 118
λ          Reference 10             Reference 117             Reference 118
Improvement in progress; see Ref. 326.
Best values can be obtained by applying the frequency dependence of Ref. 79 to B[ɛ] calculated from Ref. 117.
4.5.1. Density virial coefficients
The most accurate ab initio values of the second virial coefficients of helium for both isotopes are those computed by Czachorowski et al.^11 In order to visualize the recent progress in this field,
we report in Fig. 7 the evolution of the theoretical uncertainty of B(T) in the past 20 years. Theoretical and computational improvements enabled a reduction of two orders of magnitude in the
relative uncertainty, which is presently on the order of 10^−4 at low temperatures (<10 K) and decreases to less than 10^−5 at higher temperatures. In general, the current theoretical
uncertainties of B(T) are more than one order of magnitude smaller than the best experimental determinations.
Figure 8 shows the development of the uncertainty in the calculations of C(T) for helium in the past 12 years, starting from the first calculation with fully characterized uncertainties from 2011,^
127 whose results were independently confirmed a year later using the Mayer sampling approach.^319 One can clearly see that the subsequent improvement of the pair potential resulted in a reduction of
the uncertainty at the lowest temperatures (T ≲ 50 K), while the uncertainty at the highest temperatures is dominated by the propagated uncertainty from the three-body potential. Recent improvements
resulted in a further reduction of the uncertainty by a factor of ∼5 across the whole temperature range 10–3000 K. The current theoretical uncertainty in C(T) is a few parts in 10^4 at high
temperature, and increases to a few parts per 10^3 below 50 K. At temperatures below ∼10 K, the theoretical uncertainty budget is dominated by the propagated uncertainty from the pair potential.
Although no well-characterized four-body potential has yet been published for helium, several groups have performed calculations of the fourth virial coefficient, D(T). While initially the effect of the four-body potential was neglected,^319 more recent work tried to estimate its contribution using known asymptotic values.^126 These results are in good agreement with the limited experimental data available.
In the case of neon, the most recent calculations for B(T) with a pair potential having well-characterized uncertainties^117 resulted in a relative uncertainty at T = 273.16 K of u[r](B) = 2 × 10^−3.
As expected, this is larger than the corresponding uncertainty for helium, due to the fact that electronic structure calculations for the heavier atoms are much more computationally demanding.
Unfortunately, the three-body potential for neon is only approximately known at the moment. To the best of our knowledge, no first-principles calculation is available in the literature, and only a
semi-empirical parametrization is currently known.^320 As a consequence, no ab initio calculation of higher-order coefficients has been performed to date and only approximate values are known.^324
The pair potential of argon is well characterized and has been calculated independently by two groups,^155,274 and hence thermophysical properties at the pair level are well characterized.^29,164,325
The relative uncertainty of B(T) at T = 273.16 K is u[r] ∼ 0.6%. The pair potential has recently been improved by including relativistic effects, but the uncertainty of the resulting second virial
coefficients is still larger than for the best experimental determinations.^118
The three-body potential for argon has also been computed independently by two groups^123,276 and its uncertainty has been rigorously assessed. Therefore, the third virial coefficient of argon is
also known with rigorously propagated uncertainties. The relative uncertainty is on the order of u[r] ∼ 1% at T = 273.16 K and increases up to u[r] ∼ 6% at T = 80 K. Analogously to the other noble
gases, the four-body (and higher) non-additive contribution to the potential energy of argon is not known from first principles. Nevertheless, higher-order virial coefficients for argon, up to the
seventh, have been computed based on pair and three-body potentials.^123
4.5.2. Acoustic virial coefficients
The situation regarding first-principles calculations of acoustic virial coefficients closely follows that of the density virials. In the usual approach using phase shifts, the calculation of B(T)
also provides the temperature derivatives needed to compute β[a](T), and therefore very accurate values for the second acoustic virials for helium,^11 neon,^117 and argon^29,118,164 can be found in
the papers where the pair potential and B(T) calculations are reported.
In the case of the third acoustic virial coefficient, the situation is similar. The most accurate values of γ[a] for helium isotopes are reported in Refs. 273 and 303, which are in very good
agreement with the values obtained independently using the Mayer sampling approach.^136 The current relative uncertainty in γ[a] for helium from ab initio calculations is u[r] ∼ 0.02% − 0.2% across
the temperature range from 10 to 1000 K.^303
As already mentioned, the lack of an accurate three-body potential for neon has prevented a fully first-principles calculation of the third virial coefficient, and hence no ab initio values of γ[a]
are currently available for neon.
Regarding argon, ab initio acoustic virial coefficients up to the fourth, together with a thorough analysis of their associated uncertainties, have been reported by Wiebke et al.^325 The uncertainty
of γ[a] at T = 273.16 K is ∼1.4%.
4.5.3. Dielectric and refractivity virial coefficients
The first dielectric virial coefficient A[ɛ] for helium has been computed in Ref. 192 with an accuracy exceeding the best experimental determination. In the case of neon and argon, the most accurate
theoretical results are less accurate than the best experimental determination.^63 The most accurate computed value for neon can be found in Ref. 65, and a calculation for argon, including the
frequency dependence needed for refractivity estimates, has recently appeared.^66
Magnetic susceptibilities computed from first principles and the corresponding quantities A[μ] that are used in RIGT are available for helium,^80 neon,^64 and argon.^66 Work in progress will
significantly reduce the uncertainties from theory for neon and argon.^326 As noted by Rourke,^323 there are some discrepancies between the ab initio calculations of the susceptibilities and the
experimental values often cited from Barter et al.;^328 the discrepancies are many times larger than the stated uncertainties in the theoretical calculations. This may be due to errors in the
1930s-era argon data used in Barter’s calibration; error in the theoretical value seems unlikely at least for helium, where there is independent verification as discussed in Sec. 3.6. It is
noteworthy that the large discrepancy between theory and Barter’s experiments is in the opposite direction for helium than it is for neon and argon, suggesting that Barter might have had an
experimental problem specific to helium. A modern experimental determination of A[μ] for helium and argon (perhaps involving measuring the ratio of the two) would be highly desirable. Even a 1%
uncertainty for this measurement would be good enough to resolve the existing discrepancies, which are on the order of 7%.
First-principles calculations of B[ɛ](T) for helium have been available for a long time.^284 Reference values from the latest pair potential and polarizability can be found in Ref. 79. These results
have been independently confirmed (except at the lowest temperatures) by semiclassical calculations.^329 Due to the recent development in three-body polarizabilities^208 and dipole-moment surfaces,^
210 ab initio values of C[ɛ](T) with well-defined uncertainties are also available for both helium isotopes.^210 These values agree with the limited experimental data available, but have much
smaller uncertainties.
In the case of neon, the most accurate ab initio B[ɛ](T) has been computed by Hellmann and co-workers,^117 who also reported well-characterized uncertainties. The results are in very good agreement
with DCGT measurements. The third dielectric virial coefficient of neon is only approximately known from ab initio calculations, since the contributions from the three-body polarizability and
dipole-moment surfaces can only be estimated with several uncontrolled approximations.^207
Regarding argon, the second dielectric virial coefficient has been computed using a fully ab initio procedure in Refs. 79, 164, and 329. Analogously to neon, the lack of ab initio three-body surfaces
for the polarizability and dipole moment has prevented a fully first-principles calculation of C[ɛ](T) for argon. Approximate values were reported in Ref. 207.
Calculations of the second refractivity virial coefficient, B[R], for helium, neon, and argon were performed by Garberoglio and Harvey^79 using the best pair potentials and Cauchy moments available
at the time, although in many cases a rigorous uncertainty propagation was not possible. In the case of neon, the subsequent improved B[ɛ] from Hellmann et al.^117 can be combined with the
frequency-dependent correction from Ref. 79 to provide improved values of B[R].
4.6. Transport properties
When the thermodynamic equilibrium of a gas is perturbed, dynamic processes will tend to restore it. The actual response depends on the specific kind of induced non-homogeneity: density variations
will give rise to diffusive processes, relative motions will be damped by internal friction, and temperature gradients will result in heat flowing through the system.
The kinetic theory of gases^330 provides a theoretical framework to analyze non-equilibrium behavior and transport properties of gases, determining how the flux of matter, momentum, or heat depends
on the spatial variation of density, velocity, or temperature. The most accurate description is based on the Boltzmann equation, which describes the evolution of the state of a fluid where
simultaneous interactions of three or more particles are neglected; hence, it is valid in the low-density regime only. Despite this limited scope, additional approximations are needed to make the
kinetic equations manageable, for example by limiting the strength of the inhomogeneities to the linear or quadratic regime, which are situations that find widespread application.
In the following, we will briefly review the theory and the main computational results regarding heat and momentum transport in monatomic fluids, and how the relevant quantities – viscosity and
thermal conductivity – can be calculated from first principles. In the low-density and linear regime, the shear viscosity η and the thermal conductivity λ describe the linear relation between momentum and temperature inhomogeneities and the resulting internal friction and heat flow:

Π = p 1 − η [∇v + (∇v)^T − (2/3)(∇ · v) 1],   q = −λ ∇T,

where Π is the pressure tensor, p the isotropic pressure, v the macroscopic velocity, q the heat flux, and T the temperature. Kinetic theory shows how to compute η and λ from the details of the microscopic interaction between atoms. To this end, it is useful to define

Ω^(ℓ,s)(T) = (k[B]T/2πμ)^(1/2) ∫_0^∞ e^(−γ^2) γ^(2s+3) Q^(ℓ)(E) dγ,   with E = γ^2 k[B]T,

and the transport cross sections

Q^(ℓ)(E) = 2π ∫_0^π (1 − cos^ℓ χ) σ(χ, E) sin χ dχ,

where σ(χ, E) is the differential cross section for two particles with energy E in the scattering reference frame (E = μg^2/2, where μ = m/2 is the reduced mass and g the modulus of the relative velocity). The quantities Ω^(ℓ,s)(T) defined above are known as collision integrals. These expressions are valid when the cross section is calculated either in the classical or in the quantum regime; in the latter case one must further consider the fermionic or bosonic nature of the interacting atoms.

The viscosity and thermal conductivity are then given by

η = [5 k[B]T / 8 Ω^(2,2)(T)] f[η]^(n),   λ = [75 k[B]^2 T / 32 m Ω^(2,2)(T)] f[λ]^(n),

where f[η]^(n) and f[λ]^(n) are factors of order 1 that depend on the specific order n of the approximation involved, which in turn involves collision integrals of higher order. In the quantum case, collision integrals cannot be computed using path-integral MC methods, but their value depends on the scattering phase shifts, from which the cross sections Q^(ℓ)(E) are obtained as sums over partial waves; explicit expressions for f[η]^(n) and f[λ]^(n) can be found in the literature for n = 3 and n = 5, respectively.
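The chain from cross section to collision integral to transport coefficient sketched above can be checked numerically in a few lines. The Python sketch below is purely illustrative and is not code from any of the works cited here: it assumes a hard-sphere cross section, an invented diameter D, and the mass of a ^4He atom as the example species, evaluates Ω^(2,2)(T) by quadrature, and recovers the textbook hard-sphere result η = (5/16)(π m k[B] T)^(1/2)/(π D^2).

import math
from scipy.integrate import quad

K_B = 1.380649e-23      # Boltzmann constant, J/K
M = 6.6464731e-27       # mass of a 4He atom, kg (example species)
D = 2.6e-10             # hard-sphere diameter, m (illustrative value)

def q2_hard_sphere(energy):
    """Transport cross section Q^(2)(E) for hard spheres: (2/3)*pi*D**2, independent of E."""
    return (2.0 / 3.0) * math.pi * D**2

def omega_22(temperature, q2):
    """Omega^(2,2)(T) = sqrt(kT/(2*pi*mu)) * integral_0^inf exp(-g^2) g^7 Q^(2)(E) dg, with E = g^2 kT."""
    mu = M / 2.0                                   # reduced mass of two identical atoms
    prefactor = math.sqrt(K_B * temperature / (2.0 * math.pi * mu))
    integrand = lambda g: math.exp(-g * g) * g**7 * q2(g * g * K_B * temperature)
    value, _ = quad(integrand, 0.0, 50.0)          # integrand is negligible beyond g ~ 10
    return prefactor * value

def viscosity_first_order(temperature):
    """First Chapman-Enskog approximation: eta_1 = 5 kT / (8 Omega^(2,2))."""
    return 5.0 * K_B * temperature / (8.0 * omega_22(temperature, q2_hard_sphere))

T = 300.0
eta_numeric = viscosity_first_order(T)
eta_analytic = (5.0 / 16.0) * math.sqrt(math.pi * M * K_B * T) / (math.pi * D**2)
print(f"numerical  eta_1 = {eta_numeric:.6e} Pa s")
print(f"analytical eta_1 = {eta_analytic:.6e} Pa s")   # the two agree to quadrature accuracy

For a realistic monatomic gas, q2_hard_sphere would be replaced by the energy-dependent cross section obtained (classically or from phase shifts) for the ab initio pair potential, and the higher-order factors f[η]^(n) would be included.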
As pointed out in Sec. 2.5, the accuracy of ab initio calculations of transport properties for helium vastly exceeds that of experiments. We report in Fig. 9 the evolution of the relative uncertainty
in the theoretical calculation of η[He] in the past 20 years. The most recent theoretical values, which have an accuracy that is more than enough for several metrological applications, can be found
in Ref. 10. It is worth noting that a more accurate pair potential has been published in the meantime,^11 although no corresponding calculation of transport properties has yet been published.
In the case of neon, the best theoretical estimates of transport coefficients are given in Ref. 117, while for argon they can be found in Ref. 118. For both gases, the best experimental results are
obtained from ratio measurements using the ab initio value of the viscosity or thermal conductivity of helium.
5. Molecular Systems
While the focus of this review is on noble gases, which are the fluids of choice for most ab initio-based primary temperature and pressure metrology, first-principles thermophysical properties for
molecular species can also be of interest and make significant contributions. Three of the most promising areas are humidity metrology, low-pressure metrology, and atmospheric physics.
There are two main factors that make rigorous ab initio calculations of properties much more difficult for molecules than for monatomic species. The first is the increased dimensionality, where
interactions depend not only on distance but on the relative orientations of the molecules. This not only complicates the development of potential-energy surfaces between molecules, but also makes
the calculation of properties such as virial coefficients a sampling problem in many dimensions. Second, for rigorous calculations the internal degrees of freedom of the molecule must be considered,
because properties of interest (such as the mean polarizability) depend on the molecular geometry and a distribution of geometries is sampled for each quantum state of the molecule. In some cases it
may be adequate to assume a rigid molecule, but at a minimum an estimate of the uncertainty introduced by this assumption is needed, even though it might be difficult to compute.
In this section, we will describe the calculation of single-molecule quantities and quantities involving two or more molecules, along with their use to calculate properties of interest for metrology.
Particular attention will be given to methods for addressing the challenges specific to molecular species. Finally, we will discuss some metrological applications that use properties of molecular gases.
5.1. Single-molecule calculations
5.1.1. Intramolecular potentials
In order to compute values of a property of a molecule averaged over nuclear motions, it is necessary to have a PES for the molecule. Such surfaces can be developed with ab initio calculations, and
they can often be refined if accurate spectroscopic measurements are available. Development of the intramolecular potential is relatively straightforward for diatomic molecules such as H[2], N[2],
and CO because the potential is one-dimensional, but the dimensionality and complexity increases quickly with the number of atoms. Surfaces of sufficiently high quality for most purposes have been
developed for the triatomic molecules H[2]O^335 and CO[2].^336 These intramolecular potential-energy surfaces are also needed in order to sample configurations when considering molecular flexibility
for pair calculations as described in Sec. 5.2.2. Except for few-electron diatomic species and two-electron triatomics, pure ab initio surfaces are not accurate enough to provide rovibrational
spectra competitive with experiments, and the most accurate molecular surfaces are always semiempirical.
5.1.2. Electromagnetic properties
In contrast to noble gases, molecular species have multipole moments in the BO approximation (dipole, quadrupole, etc.). The most significant for metrology is the electric dipole moment. Rigorous
ab initio calculation of the dipole moment for a molecule such as H[2]O requires the development of a surface in which the dipole moment vector is given as a function of atomic coordinates, along
with the single-molecule PES. The dipole moment for a given rovibrational state can then be computed as the expectation value averaged over the wave function of that state. Because the population of
states changes with temperature, the average dipole moment will also change (slightly) with temperature; this has been analyzed for H[2]O and its isotopologues by Garberoglio et al.^337
The polarizability is another important quantity, both in the static limit for capacitance-based metrology and at higher frequencies for metrology based on optical refractivity. Unlike a noble gas
whose polarizability at a given frequency is a single number, the polarizability of a molecule is a tensor that reflects the variation with direction of the applied field and of the molecular axes.
However, the quantity of interest for metrology is the mean polarizability, defined as 1/3 of the trace of the polarizability tensor.
Polarizability reflects the response of the electrons to an electric field. It can be computed ab initio in a relatively straightforward way. While for monatomic species (and homonuclear diatomic
species) the electronic polarizability is the only contribution, more complicated molecules have an additional contribution in the static limit and at low frequencies; this is usually called the
vibrational polarizability. It can be thought of as the electric field distorting the molecule (and therefore its charge distribution) by pushing the negatively and positively charged parts of the
molecule in opposite directions.
The molecular dipole moment and polarizability are defined as the first- and second-order response to an externally applied electric field E[0], respectively. They can be computed by numerical
differentiation of the molecular energy computed in the BO approximation as a function of E[0], or by perturbation theory. Although in principle these two approaches should give the same result, in
practice some differences are observed. For atomic systems, the results from perturbation theory are found to be more accurate than numerical differentiation and are generally preferred.^206 In the
case of water, numerical differentiation is considered more accurate for dipole-moment calculations.^338
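A minimal sketch of the finite-field (numerical-differentiation) route is given below. The callable bo_energy standing in for an electronic-structure calculation is hypothetical, the field is assumed to be applied along a single axis, and the step size must be small enough to stay in the linear-response regime; the toy model energy and its parameters are invented purely to check the formulas.

def finite_field_response(bo_energy, step=1e-4):
    """Estimate the dipole component mu and polarizability component alpha along the field axis
    from the Born-Oppenheimer energy E(F), using central differences (atomic units):
    mu    = -dE/dF    ~ -(E(+h) - E(-h)) / (2h)
    alpha = -d2E/dF2  ~ -(E(+h) - 2E(0) + E(-h)) / h**2
    """
    e_plus = bo_energy(+step)
    e_zero = bo_energy(0.0)
    e_minus = bo_energy(-step)
    mu = -(e_plus - e_minus) / (2.0 * step)
    alpha = -(e_plus - 2.0 * e_zero + e_minus) / step**2
    return mu, alpha

# Toy quadratic model E(F) = E0 - mu*F - 0.5*alpha*F**2 with invented parameters
mu_model, alpha_model = 0.73, 9.6
model = lambda f: -76.4 - mu_model * f - 0.5 * alpha_model * f**2
print(finite_field_response(model))   # recovers (0.73, 9.6) up to floating-point rounding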
Once intramolecular potential-energy surfaces, polarizability surfaces, and dipole-moment surfaces are available, one can calculate the temperature-dependent electromagnetic response of a molecule,
that is the first dielectric virial coefficient A[ɛ] [see Eq. (8)], which is generally given by two contributions:^337 the first is proportional to the rovibrational and thermal average of the
electronic polarizability surface, while the second depends on the squared modulus of the transition matrix element of the dipole-moment surface. Additionally, one can separate the contribution from
the dipole-moment transition matrix elements into those transitions where the vibrational state of the molecule changes and those for which the vibrational state of the molecule does not change, but
the rotational state does: these two components of the dipole-moment contribution to the molecular polarizability are known as vibrational and rotational polarizabilities, respectively.^339
For small molecules (two or three atoms), one can solve directly the many-body Schrödinger equation for nuclear motion^340 (e.g., using the efficient discrete-variable representation^341 of the
few-body Hamiltonian^342) and then perform the appropriate rovibrational and thermal averages to obtain A[ɛ]. It has recently been shown that the path-integral approach outlined in Sec. 4 can be
successfully used to compute the first dielectric virial coefficient of water.^337 It can possibly be generalized to larger molecules, where the direct solution of the many-body Schrödinger equation
becomes very demanding in terms of computational power.
In the case of water, computational results using the most accurate intramolecular potential-energy surface,^335 polarizability surface,^343 and dipole-moment surface^338 are within 0.1% of the
experimental value for the static dipole moment,^344 although the theoretical surfaces for water do not yet have rigorously assigned uncertainties.
5.1.3. Spectroscopy
It is now possible, especially for molecules containing only two or three atoms, to compute the positions and intensities of spectroscopic lines ab initio. The calculation of line positions requires
only the single-molecule PES. The more important quantity for thermodynamic metrology, however, is the intensity of specific lines. This requires both the PES and a surface for the dipole moment as a
function of the coordinates. Accurate ab initio dipole-moment surfaces have been developed for H[2]O,^338 CO[2],^345,346 and CO.^347 The possible use in pressure metrology of intensities calculated
from the surfaces for CO and CO[2] will be discussed in Sec. 5.4.
5.2. Calculations for molecular clusters
5.2.1. Interaction potentials
The development of interaction potentials for molecular gases is more difficult than for atomic ones due to the additional degrees of freedom, but much of the description in Sec. 3 is still
applicable. A common approximation when developing intermolecular pair potentials is to treat the molecules as rigid rotors, which reduces the dimensionality considerably. For example, the PES of a
pair of flexible water molecules has 12 degrees of freedom. By freezing the four OH bond lengths and the two HOH bond angles, only six degrees of freedom, usually taken to be the center-of-mass
separation and five angles describing the mutual orientation, remain. To minimize the consequences of freezing the intramolecular degrees of freedom, the zero-point vibrationally averaged structures
of the monomers are often used instead of the corresponding equilibrium structures.^348,349
However, even a six-dimensional dimer PES requires investigating thousands or even tens of thousands of pair configurations with high-level ab initio methods. As discussed in Sec. 3, the most
commonly applied level of theory is CCSD(T)^350 for molecular monomers; this method is usually applied with the frozen-core (FC) approximation. Such a level of theory was only the starting point in
the schemes used to develop the most accurate pair potentials for the noble gases beyond helium. For the CCSD(T) method, the computational cost scales with the seventh power of the size of the
molecules, and the scaling becomes even steeper for post-CCSD(T) methods.
In recent years, several intermolecular PESs have been developed that go beyond the CCSD(T)/FC level of electronic structure theory. The first step is to include all electrons in the calculations.
Examples of all-electron (AE) surfaces are the flexible-monomer water dimer PES of Ref. 351 and the rigid-monomer ammonia dimer PES of Ref. 352. Also, post-CCSD(T)/AE terms were used in the H[2]–CO
flexible-monomer PESs starting in 2012.^353,354 The T(Q) contributions were shown to have surprisingly large effects on the H[2]–CO spectra.^355
Intermolecular pair potentials can be accurately represented analytically by a number of different base functional forms. Mimicking the anisotropy of the PES is most commonly achieved either by using
spherical harmonics expansions or by placing interaction sites at different positions in the molecules, with each site in one molecule interacting with each site in the other molecule through an
isotropic function. The site-site form is also often used for the empirical effective pair potentials commonly employed in MD and MC simulations of large molecular systems. The analytic functions
used to represent high-dimensional ab initio PESs for pairs of small rigid molecules typically have a few tens up to a few hundred fit parameters.
Determination of these parameters, i.e., fitting a PES to a set of grid points in a dimer configurational space and the corresponding interaction energies, was until recently a major task taking
often several months of human effort. This bottleneck has recently been removed by computer codes that perform such fitting automatically. In particular, the autoPES program^351,356 can develop both
rigid- and flexible-monomer fits at an arbitrary level of electronic structure theory. The automation is complete: a user simply inputs the specifications of the monomers and the program returns an analytic PES. This means that the program determines the set of grid points, runs electronic structure calculations for each point, and performs the fit. In addition to the automation itself, the autoPES project introduced several improvements in the strategy for generating PESs. In particular, the large-R region of a PES is computed ab initio from the asymptotic expansion. Such an expansion predicts interaction energies well down to separations R about twice the van der Waals minimum distance. This means that no electronic structure calculations are needed in this region, and autoPES can develop accurate PESs for dimers of few-atom monomers using only about 1000 grid points, whereas most published work used tens of thousands of points.
Accurate analytic rigid-rotor PESs exist for a large number of both like-species and unlike-species molecule pairs. For metrology, the most noteworthy of these are N[2]–N[2],^357 CO[2]–CO[2],^358,359
H[2]O–CO[2],^360 H[2]O–N[2],^361 and H[2]O–O[2].^362 Other accurate PESs of this type are: N[2]–HF,^363 H[2]O–H[2]O,^351,364 (HF)[2],^365 and H[2]–CO.^353,355
Many of these PESs (e.g., those from Refs. 357, 358, and 360–362) are based on nonrelativistic interaction energies corresponding to the frozen-core CCSD(T) level of theory in the CBS limit and are
represented analytically by site-site potential functions, with each site-site interaction modeled by a modified Tang–Toennies type potential^366 with an added Coulomb interaction term. In the case
of the N[2]–N[2] PES,^357 corrections to the interaction energies for post-CCSD(T), relativistic, and core-core and core-valence correlation effects were considered. Motivated by the availability of
extremely accurate experimental data for the second virial coefficients of N[2] and CO[2], the N[2]–N[2]^357 and CO[2]–CO[2]^358 PESs were additionally fine-tuned such that these data are almost
perfectly matched by the values resulting from the PESs. The maximum well depths of the PESs were changed by the fine-tuning by less than 1%. Such fine-tuning does, however, mean that properties such
as virial coefficients calculated from these tuned potentials cannot be considered to be truly from first principles for the purpose of metrology.
The second group of PESs listed above was also developed using either CCSD(T), with FC or AE, or SAPT. Post-CCSD(T) terms were considered in some cases, as already mentioned above. A range of
different functional forms was used in the fitting; for larger monomers it was most often the site-site form.
While the error introduced by approximating molecules as rigid rotors is believed to be small for the molecules considered here, more rigorous calculations should include the intramolecular degrees
of freedom; this has been done for example for the H[2]–H[2], H[2]–CO, and H[2]O–H[2]O potentials.^351,353,354,367,368 There are several difficulties involved in the generation of fully flexible
potentials. The first is the larger number of degrees of freedom. A system of N molecules approximated as rigid rotors can be described by C[r] = 6N − 6 coordinates, while C[f] = 3nN − 6 coordinates
are necessary to fully describe a configuration of the same molecules if each of the monomers has n atoms. For sampling c configurations per degree of freedom, the number of calculations needed to
explore the PES grows exponentially as c^(C[r]) or c^(C[f]). In the case of, say, the water trimer (N = 3, n = 3), even assuming c = 3 one goes from 3^12 ≈ 5 × 10^5 configurations for rigid models to 3^21 ≈ 10^
10 configurations for a fully flexible approach. The exponential increase of the number of configurations as a function of the number of degrees of freedom to be considered is sometimes called the
dimensionality curse. Not all of these configurations are equally important and there is room for significant pruning and clever sampling strategies: one of the most useful starts from potentials
developed for rigid molecules and enables the development of fully flexible versions optimizing the number of additional molecular configurations to be evaluated.^369,370 More generally, even for a
few degrees of freedom, the product-of-dimensions strategy that leads to the c^C scaling is the worst one to follow. Instead, one uses various types of guided MC generation of grid points. In particular,
the statistically guided grid generation method of Ref. 371 reduces the number of points needed for a six-dimensional PES to about 300 (assuming the use of ab initio asymptotics). Another important
issue regards the choice of a suitable form for the analytic potential and the fitting procedure. As in the case of rigid potentials, site-site interaction models (based on exponential functions at
short range, inverse powers at long range, and Coulomb potentials) are commonly used for intermolecular flexible potentials. For the intramolecular interactions, Morse functions are often used but
polynomial expansions work sufficiently well for molecules in their low-energy rovibrational state.^351 Nevertheless, the dimensionality curse drastically limits the development of fully flexible
potentials and for the time being only pair and three-body potentials involving diatomic and triatomic molecules (notably water^364,372,373) have been developed.
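As a quick arithmetic illustration of this scaling (our own sketch, not tied to any particular PES code), the following lines count the c^C grid sizes for rigid-rotor and fully flexible descriptions of small water clusters, reproducing the trimer numbers quoted above:

def grid_sizes(n_molecules, atoms_per_molecule, samples_per_dof=3):
    """Return c**C_r and c**C_f with C_r = 6N - 6 (rigid rotors) and C_f = 3nN - 6 (fully flexible)."""
    c_rigid = 6 * n_molecules - 6
    c_flexible = 3 * atoms_per_molecule * n_molecules - 6
    return samples_per_dof**c_rigid, samples_per_dof**c_flexible

# Water dimer and trimer (n = 3 atoms per molecule), c = 3 samples per degree of freedom
for n_mol in (2, 3):
    rigid, flexible = grid_sizes(n_mol, 3)
    print(f"N = {n_mol}: rigid ~ {rigid:.1e}, flexible ~ {flexible:.1e}")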
5.2.2. Density virial coefficients
The calculation of density virial coefficients for molecular systems can be performed in a way very similar to that for noble gases. The main difference concerns the evaluation of the matrix elements
of the free-molecule kinetic energy operator, that is, the generalization of Eq. (74), which in turn depends on the specific degrees of freedom retained in the molecular model.
In the most general case, one considers the translational degrees of freedom of all the atoms in the molecule. Equation (74) remains the same (with the obvious modification of an atom-dependent mass
m), but one needs an intramolecular potential to keep the molecule bound and, in general, a large number of beads, especially if light atoms (such as hydrogen or one of its isotopes) are to be
considered. This approach allows flexibility effects to be fully accounted for and has been applied to investigate the second virial coefficient of hydrogen^367 and water^368 isotopologues. As one
might expect, flexibility is more important at higher temperatures. On the other hand, this approach requires intramolecular and intermolecular potentials that depend on all the degrees of freedom,
which in turn call for very demanding ab initio electronic structure calculations.
At sufficiently low temperatures, molecules occupy their vibrational ground state, and rigid-monomer models are expected to be quite (although not perfectly) accurate. In this case, a whole molecule
is described as a rigid rotor, that is by three translational and three rotational degrees of freedom (2 in the case of linear molecules). The matrix elements of the kinetic energy operator are, in
this case, more complicated than that in Eq. (74), but their expression has been worked out for both linear^374 and non-linear^375,376 rotors.
The rigid-rotor approximation of a molecular system is, in principle, an uncontrolled approximation and, consequently, cannot directly provide rigorous data for metrological applications. On the
other hand, the associated uncertainties can be partially offset by the fact that potential-energy surfaces can be generated with higher accuracy than in the case of fully flexible models.^261,377
Validation of the ab initio results with experimental data can be used to establish the temperature range in which a rigid model is valid, and provide useful estimates of virial coefficients where
experimental data are lacking. Additionally, rigid models can be a stepping stone toward the more accurate fully flexible approaches.
Also, semiclassical approximations of density^378 or dielectric virial coefficients^285 for molecular systems are available. They are generally much easier to evaluate than the corresponding path-integral expressions, and are quite accurate in many cases.^337,368,379
5.2.3. Dielectric and refractivity virial coefficients
The calculation of dielectric and refractivity virial coefficients for molecular species is much more difficult than for the monatomic systems discussed in Sec. 4.5.3. In addition to the increased
dimensionality, the charge asymmetry creates additional polarization effects in interacting molecules. A complete treatment must therefore include the effect of the molecular interactions not only on
the polarizability of the molecules, but also on their charge distribution. Because of this complexity, it seems unlikely that coefficients beyond the second virial will be calculated in the
foreseeable future, and quantitatively accurate calculations with realistic uncertainty estimates may be limited to diatomic molecules such as N[2] or H[2].
The only attempt at such calculations we are aware of for realistic (polarizable) molecular models is the work of Stone et al.,^380 who calculated the second dielectric virial coefficient for several
small molecules, including CO and H[2]O. A recent experimental determination of the second dielectric virial coefficient for CO^381 was in qualitative but not quantitative agreement with the
prediction of Stone et al.
For rigorous metrology, it would be necessary to characterize the uncertainty of the surfaces describing the mutual polarization and pair polarizability of the molecules. The dimensionality, and
therefore the complexity, of these calculations for a diatomic molecule like N[2] would be similar to that for the three-body polarizability and dipole surfaces for monatomic gases.
5.2.4. Molecular collisions
In some pressure-metrology applications near vacuum conditions, collision rates, which are related to collision integrals, are required. We already introduced collision integrals for atom–atom
collisions in Sec. 4.6, but the concept can be generalized to include atom–molecule and molecule–molecule collisions, enabling the calculation of transport properties for dilute molecular gases.
While the collision integrals for atom–atom collisions result, in a classical treatment, from the solution of the linearized Boltzmann equation and, in the quantum-mechanical case, from the solution of the linearized Uehling–Uhlenbeck equation,^382 the corresponding classical and quantum-mechanical equations for collisions involving molecules are the linearized Curtiss–Kagan–Maksimov equation^
383–386 and the linearized Waldmann–Snider equation.^387–390
The formalism for the calculation of collision integrals involving molecules is much more complex than in the case of atom–atom collisions. Relations for classical collision integrals were derived by
Curtiss for rigid linear molecules^391 and extended to rigid nonlinear molecules by Dickinson et al.^392 The quantum-mechanical calculation of collision integrals involving two molecules has rarely
been attempted because of the mathematical complexity and large computational requirements, whereas atom–molecule collisions have been studied quantum-mechanically more often. For collisions between
a helium atom and a nitrogen molecule, collision integrals were calculated both classically and quantum-mechanically.^393,394 The comparison showed that quantum effects are small except at low
temperatures. The degree to which the quantum nature of collisions can be neglected for pairs with larger expected quantum effects, such as H[2]O–H[2]O, remains an open question, but the agreement
with experiment of classically calculated dilute-gas viscosities for H[2]O^395 suggests that the classical approximation is adequate for most purposes.
5.3. Humidity metrology
Much humidity metrology requires knowledge of humid air’s departure from ideal-gas behavior. Because the densities are low, this can be described by the virial expansion. The second virial
coefficient of pure water has been calculated^368 based on flexible ab initio pair potentials computed at a high level of theory.^364,372,373 It is necessary to take the flexibility of the water
molecule into account to obtain quantitative accuracy.^368
The most important contribution to the nonideality of humid air comes from the interaction second virial coefficient of water with air. While fairly accurate measurements of this quantity exist near
ambient temperatures, it can now be computed with similar or better uncertainty by combining the cross second virial coefficients for water with the main components of dry air.^396 Good quality pair
potentials exist for water with argon,^397 nitrogen,^361 and oxygen,^362 and these have been combined by Hellmann^362 to produce accurate water–air second virial coefficients between 150 and 2000 K.
For humidity metrology at pressures significantly higher than atmospheric, corrections at the third virial coefficient level become significant. Only very limited data exist for the relevant third
virial coefficients (water–water–air and water–air–air),^398 so ab initio calculation of these quantities would be useful. This requires development of three-body potential-energy surfaces for
systems such as H[2]O–N[2]–N[2] and H[2]O–H[2]O–O[2]. To our knowledge, no high-accuracy surfaces exist for these three-molecule systems, but their development should be feasible with current methods.
The same framework can be used for humidity metrology in other gases. Hygrometers are typically calibrated with air or nitrogen as the carrier gas, but some error will be introduced if the
calibration is used in the measurement of moisture in a different gas. Calibrations can be adjusted if ab initio values of the cross second virial coefficient are known for water with the gas of
interest. Such values have been developed for several important gases, such as carbon dioxide,^360 methane,^399 helium,^400 and hydrogen.^401
Some emerging technologies for humidity metrology can be aided by ab initio property calculations. Instruments to measure humidity from the change in dielectric constant with water content of a gas^
402,403 require the first dielectric virial coefficient of water, which depends on its molecular polarizability and dipole moment. These quantities and their temperature dependence have been a
subject of recent theoretical study.^337
Spectroscopic measurement of humidity has also been proposed;^404 this requires the intensity of an absorption line for the water molecule. Thus far, work in this area has used measured line
intensities due to their smaller uncertainty compared to ab initio values. The recent work of Rubin et al.^405 demonstrated mutually consistent sub-percent accuracy for both experimental and
theoretical intensities based on a semiempirical PES for an H[2]O line, offering promise for the future use of calculated intensities to reduce the uncertainty of humidity metrology.
5.4. Pressure metrology
Molecular calculations are also promising for pressure metrology at low pressures.^105 Refractivity-based pressure measurements using noble gases are discussed in Sec. 2.3. Some proposed approaches
use ratios of the refractivity of a more refractive gas (such as nitrogen or argon) to that of helium. Use of nitrogen in these systems would be aided by good ab initio results for the polarizability
of the N[2] molecule and its second density and refractivity virial coefficients.
For low pressures, on the order of 1 Pa and below, absorption spectroscopy is a promising approach for pressure measurement. The absorption of a gas such as CO or CO[2] can be used to measure low gas
densities (from which the pressure is calculated by the ideal-gas law, perhaps with a second virial correction); this can be a primary pressure standard if the line intensity is calculated from
semiempirical potential-energy and dipole-moment ab initio surfaces tuned to spectral data. Even if measured intensities are used, theoretical results are valuable to check their accuracy. For CO[2],
measurement uncertainties for intensities, and agreement between theory and experiment, below 0.5% have been obtained.^346,406 The simpler CO molecule is more amenable to accurate theoretical calculations;
consistency between experimental and theoretical line intensities on the order of 0.1% has recently been achieved.^347 In these calculations, the potential-energy curve was purely empirical, but the
dipole-moment surface was obtained ab initio. An unresolved question in this work so far is the uncertainty of ab initio calculated line intensities, which must depend in a complex way on the
uncertainties in the intramolecular potential and in the dipole-moment surface. Without reasonable estimates for the uncertainty of calculated intensities, the utility of this spectroscopic method
for primary pressure standards is diminished.
For ultrahigh vacuum, gas densities can be measured based on the collision rate between the gas and a collection of trapped ultra-cold atoms. Both lithium and rubidium have been proposed as the
trapped species.^407–413 While in some implementations an apparatus constant is derived from measurements,^409,410 it has recently been recognized^414 that the proposed procedure introduces error
when light species (such as Li and H[2]) are involved in the collisions.
It is also possible to determine the relevant proportionality factor for the collision rate from first principles using collision cross sections calculated from ab initio pair potentials and quantum
collision theory. These calculations have been performed for lithium with H[2] (the most common gas in metallic vacuum systems) and He;^415,416 ab initio calculations with rubidium are more
challenging due to the large number of electrons. A recent paper has reported first-principles collision rate coefficients for both Rb and Li with noble gases, H[2], and N[2].^417 It is also possible
to measure the ratio of two collision pairs (for example, Rb–H[2] versus Li–H[2]) to obtain the coefficient for a system that is more difficult to calculate ab initio;^407,414 in this approach a low
uncertainty for the simpler-to-calculate system (that with fewer electrons) is essential.
5.5. Atmospheric physics
In atmospheric physics, the interaction of radiation with atmospheric gases, particularly H[2]O and CO[2], has received increasing attention for climate studies; it is also important for Earth-based
astronomy where the atmosphere is in the optical path. Scientists in these fields rely on line positions and intensities in the HITRAN database.^418 Increasingly, ab initio calculations are being
used to supplement experimental measurements for these quantities, as has recently been summarized for CO[2].^419
5.6. Transport properties
While transport properties of molecular gases are of little relevance in precision metrology, for the sake of completeness we mention briefly the current state of the art for pure molecular gases.
Most of the transport property calculations for such gases performed so far are based on classically calculated collision integrals for rigid molecules using the formalism of Curtiss^383 for linear
molecules and of Dickinson et al.^392 for nonlinear molecules (see Sec. 5.2.4).
Representative examples of such calculations for gases consisting of small molecules other than H[2] are the classical shear viscosity and thermal conductivity calculations of Hellmann and Vogel^395
and Hellmann and Bich,^420 respectively, for pure H[2]O. The agreement with the best experimental data is within a few tenths of a percent for the viscosity and a few percent for the thermal
conductivity. For both properties, these deviations correspond to the typical uncertainties of the best experimental data. The significant contribution to the thermal conductivity due to the
transport of energy “stored” in the vibrational degrees of freedom, which is not directly accounted for by the classical rigid-rotor calculations, was estimated using a scheme that only requires
knowledge of the ideal-gas heat capacity in addition to the rigid-rotor collision integrals.^420 The main assumption in this scheme is that collisions that change the vibrational energy levels of the
molecules are so rare that their effects on the collision integrals are negligible.
For pure H[2], classical calculations are not accurate enough even at ambient temperature. Fully quantum-mechanical calculations were performed by Mehl et al.^421 using a spherically-averaged
modification of a H[2]–H[2] PES,^261 thus reducing the complexity of the collision calculations to that for monatomic gases. Despite this approximation, the calculated shear viscosity and thermal
conductivity values for H[2] agree very well with the best experimental data, particularly in the case of the viscosity where the agreement is within 0.1%.
6. Concluding Remarks and Future Perspectives
The outstanding progress achieved during the last three decades by the ab initio calculation of the thermophysical properties of pure fluids and mixtures has drastically reduced the uncertainty of
the measurement of these properties and of the thermodynamic variables temperature, pressure, and composition.
For example, consider primary thermometry. Ab initio calculations directly contributed to the acoustic and dielectric determination of the value of the Boltzmann constant that is used in the new SI
definition of the kelvin. The remarkably accurate theoretical calculations of the polarizability and the non-ideality of thermometric gases have also facilitated simplified measurement strategies and
techniques.^29,49,53,54 Consequently, new paths directly disseminating the thermodynamic temperature are now available at temperatures below 25 K, where the realization of ITS-90 is particularly
complicated. Various methods of gas thermometry have determined T with uncertainties that are comparable to or even lower than the uncertainty of realizations of ITS-90.^35,62,63 Improved theory has
also suggested that primary CVGT could usefully be revisited, as discussed in Sec. 2.2.4.
In the near future, technical achievements will likely further reduce the uncertainty of measurements of the thermodynamic temperature and the thermophysical properties of gases. Efforts include: (1)
improving the purity of the thermometric gases at their point of use, (2) implementing two-gas methods to reduce the uncertainties from compressibility of the apparatus, and (3) developing robust
microphones (possibly based on optical interferometry) to facilitate cryogenic AGT. In the remainder of this section, we will summarize current limitations and describe some prospects for future progress.
6.1. Current limitations of ab initio property calculations
As described in Sec. 3, ab initio calculations of properties for individual helium atoms and pairs of atoms have achieved extraordinarily small uncertainties. Even for three-body interactions, the
potential energy is now known with small uncertainty, and good surfaces are available for the three-body polarizability and dipole moment. This enables accurate calculations, with no uncontrolled
approximations, of the second and third density, acoustic, and dielectric virial coefficients. This high accuracy is due to the small number of electrons involved; electron correlation at the FCI
level is still tractable for three helium atoms with a total of six electrons.
For DCGT and RIGT, it would be desirable to have similarly accurate properties for neon and argon, because their higher polarizability (and therefore stronger response) reduces the relative effect of
other sources of uncertainty such as imperfect knowledge of the compressibility of the apparatus or the presence of impurities in the gas. Unfortunately, this level of accuracy for neon and argon is
unlikely to be obtained in the foreseeable future. The neon atom has ten electrons, as many as five helium atoms, and argon has 18. While recent efforts have (at large computational expense)
significantly reduced the uncertainty of single-atom and dimer quantities for neon and argon,^64–66,117,118 they do not approach the levels of accuracy achieved for helium. For example, the relative
uncertainty of the best calculation of the static polarizability of a neon atom^65 is more than 100 times greater than that of a helium atom.^192 Similarly, the relative uncertainty of the pair
potential minimum energy is about 100 times larger for neon^117 than for helium.^11 Therefore, the relative uncertainties of calculated gas-phase thermophysical properties will be much higher for
other gases than for helium. In such cases, the most accurate values of properties may be obtained by measuring ratios of properties relative to that of helium. This has already been done for the
static polarizability of neon and argon^63 and for the low-density viscosity of several gases.^166–168
Refractivity-based thermal metrology^82,323 requires A[R], and preferably also B[R] and C[R]. At microwave frequencies, the static values (A[ɛ], B[ɛ], etc.) can be used. At optical frequencies, A[R]
has been computed at a state-of-the-art level for helium,^84 neon,^64 and argon.^66 B[R] has been computed at a state-of-the-art level for helium,^79 but corresponding calculations for neon and
argon rely on values for the Cauchy moment ΔS(−4) that could be significantly improved.
Even with state-of-the-art ab initio results, it seems likely that ratio measurements using helium, such as those of Egan et al. for A[R],^119 will produce lower uncertainties. To our knowledge, the
theory for calculating C[R] at optical frequencies is not available. Therefore, at the moment, it is necessary to take rather uncertain values from experiment or assume (based on the small difference
between B[R] and B[ɛ]) that it is equal to C[ɛ].
As mentioned in Sec. 4.5.3 and also noted by Rourke,^323 another issue for refractivity methods is the unclear situation surrounding the A[μ] contribution. The best calculations of the magnetic
susceptibility for helium,^80 neon,^64 and argon^66 disagree with the old, sparse measurements of these quantities^328 by amounts much larger than their stated uncertainties. Independent calculations
of the magnetic susceptibility for one or more of these species would be helpful in assessing this discrepancy, but what is most needed is a modern measurement of the magnetic susceptibility of a
noble gas (probably argon), either as an absolute measurement or as a ratio to a substance with a better-known magnetic susceptibility, such as liquid water.
To reach higher pressures with helium-based apparatus, it would be desirable to have reliable values, with uncertainties, for the fourth virial coefficient D(T). The most complete first-principles
estimate so far^126 used high-accuracy two-body and three-body potentials, but had a significant uncertainty component due to the unknown four-body potential. Accurate calculations of the nonadditive
four-body potential for helium are feasible with modern methods. A four-body PES for helium, even if its relative uncertainty was as large as 10%, would allow reference-quality calculation of D(T)
and enable improved metrology. The fitting of ab initio calculations to functional forms with many variables could, in this case, benefit from recent progress in machine-learning-based methods.^422
6.2. Molecular gases
Nitrogen is an attractive option for gas-based metrology due to its availability in high purity and its longstanding use in traditional apparatus such as piston gauges, but its lack of spherical
symmetry and its internal degree of freedom add complication to ab initio calculation of its properties. The development of potential-energy surfaces for pair and three-body interactions for rigid
molecular models is certainly feasible. This is also possible for flexible models, although the difficulties discussed in Sec. 5.2.1 remain. Once these surfaces are available, the methods for calculating density virial coefficients from them have already been demonstrated^261,368,377 (see Sec. 5.2.2). To the best of our knowledge, no fully ab initio calculation of dielectric virial coefficients for molecular
systems has been performed. This task will require the development of the molecular interaction-induced polarizability function and dipole-moment function. The path-integral approach described in
Sec. 4 can certainly be extended to compute these quantities as well as rigorously propagate their uncertainties.
6.3. Improved uncertainty estimations
As mentioned in Sec. 4.3, much progress has been made in estimating realistic uncertainties for density and dielectric virial coefficients. The old method of simply displacing the potentials in a
“plus” and “minus” direction, while correct for one-dimensional integrations such as B and B[ɛ], is inefficient and can produce inaccurate results for higher coefficients. The functional
differentiation approach discussed in Sec. 4.3 provides more rigorous results.
However, it is not entirely clear how to obtain uncertainties for acoustic virial coefficients, because they involve temperature derivatives of B(T) and C(T). The rigorous assignment of uncertainty
to a derivative of a function computed from uncertain input is an unsolved problem as far as we are aware. Binosi et al.^303 recently applied a statistical method (the Schlessinger Point Method) to
the estimation of uncertainties for acoustic virial coefficients; this may provide a way forward.
A similar issue exists for the low-density transport properties. The very low uncertainty of the viscosity of helium shown in Fig. 9 near 40 K, obtained with the traditional method of “plus” and
“minus” perturbations to the pair potential, is an artifact of competing effects on the collision integral of perturbations from different parts of the potential. While B(T), for example, exhibits
monotonic behavior with respect to perturbations in the potential, that is not the case for the collision integrals used to compute transport properties, which can cause uncertainties to be
artificially underestimated. This was recognized by Hellmann and co-workers, who created potentials perturbed in additional ways to provide a non-rigorous but reasonable estimation method for the
uncertainty of low-density transport properties for krypton,^149 xenon,^423 and neon.^117 Further analysis would be welcome to improve the rigor of uncertainty estimates for transport collision integrals.
6.4. Transport properties
In addition to the uncertainty issue just mentioned, we see two areas for improvement in the field of transport properties. The first concerns the density dependence beyond the low-density limiting
values discussed in this work. As mentioned in Sec. 2.5, for flow metrology it would be desirable to know the viscosity with small and rigorous uncertainties not only at zero density, but at the real
densities at which instruments are calibrated. The first correction should be a virial-like term linear in density, but the most successful theory so far^171–173 relies on some simplifying
assumptions. A more rigorous theory would be a significant advance. Even if the initial density dependence were only known for helium, that would enable better metrology for other gases because of
the established methods for measuring viscosity ratios.
The second area is the transport properties of molecular species, such as N[2] or H[2]O. As mentioned in Sec. 4.6, classical collision integrals can be calculated for these species when they are
modeled as rigid rotors. While it is believed that the errors introduced by the assumptions of classical dynamics and rigid molecules are small, it would be desirable to have verification from a more
rigorous calculation. One might expect quantum effects to be significant for the dynamics of H[2]O collisions, since they make a large contribution to B(T) for H[2]O.^367 Since fully quantum
calculation of collision integrals is currently intractable for all but the simplest systems, the development of a viable “semiclassical” method for transport properties would be desirable. No such
formulation exists to our knowledge.
6.5. Simulations of liquid helium
While we have focused on the gaseous systems where ab initio properties are already making major contributions to metrology, the thermophysical properties of condensed phases (particularly for
helium) are also important in temperature metrology. For example, the vapor pressures of liquid ^3He and ^4He are part of the definition of ITS-90.^6 With highly accurate two- and three-body
potentials for helium (perhaps eventually supplemented by a four-body potential), high-accuracy simulation of thermodynamic properties of liquid helium may become feasible.
In fact, path-integral simulations of liquid ^4He can be performed without uncontrolled approximations,^281 although, to the best of our knowledge, the most recent ab initio potentials have not yet
been employed to compute any liquid helium property (e.g., the specific heat – and hence the vapor pressure, via the Clapeyron equation – or the temperature of superfluid transition). Consequently,
the accuracy of first-principles many-body potentials for liquid ^4He is largely unknown. The use of three-body (or higher, when available) potentials would require considerable computational
resources, as has been recently observed in simulations of liquid para-H[2],^424 but theoretical developments in efficient simulation methods for degenerate systems^425 might pave the way for a fully
ab initio calculation of the thermophysical properties of condensed ^4He.
In the case of fermionic systems such as ^3He, the path-integral approach suffers in principle from a “sign problem,”^426 which generally requires some approximations and results in a large
statistical uncertainty. However, two research groups have recently claimed to have overcome these limitations,^427,428 which might result in accurate calculations of thermophysical properties in the
liquid phase also for this isotope.
6.6. Reproducibility and validation
It is desirable for metrological standards to be based on multiple independent studies, so that they will not be distorted by a single unrecognized error. For example, for the recent redefinition of
the SI in which several fundamental physical constants were assigned exact values, it was required that the value assigned to the Boltzmann constant be based on consistent results from at least two
independent experiments using different techniques and meeting a low uncertainty threshold.^28 Similarly, metrological application of the calculated results discussed in this Review would be on a
firmer basis if there was independent confirmation of the results.
The danger of an unrecognized error in calculated quantities is not merely hypothetical. For several years, the “best” calculated values of C for ^3He were in error below about 4.5 K because the
effects of nuclear spin on the quantum exchange contribution had been incorporated incorrectly; this was eventually recognized and corrected in Errata.^126,127 An early quantum calculation of B[ɛ] of
argon^429 disagreed with a later study,^79 apparently because of inexact handling of resonance states in the earlier work. Ideally, there would be independent confirmation of all the results cited in
Table 4 so that any errors could be detected.
One helpful step in this direction would be more complete documentation of calculations, including computer code, so that others can reproduce or check the work. It is common to provide computer code
for potential-energy surfaces, but the calculation of virial coefficients has typically been performed with specialized software that is not public.
More important for metrology, however, would be independent verification of the calculated results. Conceptually, this has two parts: validation of the calculated quantities and surfaces described in
Sec. 3 (potential-energy, polarizability, and dipole surfaces; atomic and magnetic polarizabilities) and validation of the calculation of virial coefficients from these quantities (described in Sec. 4).
Validation of calculated virial coefficients is probably the easier of the two parts, because it is typically less computationally demanding. This has been done for a few quantities; for example, two
groups have performed fully quantum calculations (in one case neglecting exchange effects that become important below 7 K) of C^127,319 and D^126,319 for ^4He. Consistency checks can also be made by
comparing different calculation methods, including classical and semiclassical approaches that should agree with the quantum calculations at high temperatures. The error in B[ɛ] for argon mentioned
above was detected by comparing phase-shift calculations to PIMC and semiclassical calculations, showing the value of multi-method comparisons.
The independent validation of calculated atomic quantities and intermolecular surfaces is more difficult, because these require large amounts of dedicated computer time. There have been a few cases
where parallel efforts have produced independent, high-quality results; these include A[ɛ] for neon^64,65 and the three-body potential of argon.^123,276 Some validation is also provided when the
state of the art advances and new potentials are produced that agree with previous potentials (but have smaller uncertainties); this has been the case with the sequential development of pair
potentials for helium (Sec. 3.3). In some cases, however, these are not truly independent verifications because they are developed by the same group and use many of the same methods. While it may be
difficult to justify the extensive work required to independently confirm a state-of-the-art calculated surface, there would be value in performing spot checks of a few points. This would require
developers of surfaces to make their calculated points available (or at least a subset of them), and also the multiple calculated quantities that typically contribute to each point.
We believe that more attention should be paid to the reproducibility and validation of the calculated results that are increasingly important in precision metrology. Work of this nature may not be
very attractive to funding agencies (or graduate students), but it is needed for more confident use of gas-based metrology.
Acknowledgments
We thank Mark McLinden, Patrick Egan, and Ian Bell of NIST for helpful comments, and Richard Rusby of NPL for valuable discussion regarding the CVGT technique. K.S. acknowledges support from the NSF
Grant No. CHE-2154908.
We acknowledge support from Real-K Project No. 18SIB02, which has received funding from the EMPIR program co-financed by the Participating States and from the European Union’s Horizon 2020 research
and innovation program.
7. Author Declarations
7.1 Conflict of Interest
The authors have no conflicts to disclose.
7.2 Data Availability
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
9. Appendix: Formulae for the Third Acoustic Virial Coefficient, γ[a]
As is apparent from Eqs. (…), the explicit expression of the third acoustic virial coefficient as a function of the pair and three-body potentials is quite involved. We found that it is most conveniently expressed by defining auxiliary quantities that are functions of the temperature as well as of the pair and three-body potentials. Performing the substitution γ[0] = 5/3, appropriate for a monatomic ideal gas, one obtains the working expression for γ[a].
The path-integral expression for γ[a] is more complicated, due to the fact that the ring-polymer distribution function of Eq. (…) depends on temperature. In particular, by defining the temperature derivatives of this distribution, one can show how γ[a] decomposes into separately computable terms and derive path-integral expressions for each of them. However, this straightforward approach is characterized by a large variance in the MC simulations, since the resulting estimator has a form analogous to the thermodynamic estimator of the kinetic energy. It is possible to derive equivalent expressions with smaller variance, using the same ideas that lead to the virial estimator of the kinetic energy. The resulting formulas are very cumbersome and can be found in Ref. (…).
References
P. J., D. B., and B. N., "CODATA recommended values of the fundamental physical constants: 2018," J. Phys. Chem. Ref. Data.
de Mirandés and M. J. T., "The revision of the SI—The result of three decades of progress in metrology."
P. J., D. B., B. N., "Data and analysis for the CODATA 2017 special fundamental constants adjustment."
H. C. et al., "Methodologies and uncertainty estimates for T − T[90] measurements over the temperature range from 430 K to 1358 K under the auspices of the EMPIR InK2 project," Meas. Sci. Technol.
J. S., "Legacy of van der Waals."
"The International Temperature Scale of 1990 (ITS-90)."
R. A., A. R., and M. R., "Ab initio calculations for helium: A standard for transport property measurements," Phys. Rev. Lett.
J. J. and M. R., "Ab initio values of the thermophysical properties of helium as standards," J. Res. Natl. Inst. Stand. Technol.
J. J. and J. B., "^4He thermophysical properties: New ab initio calculations," J. Res. Natl. Inst. Stand. Technol.
J. B., "Effects of adiabatic, relativistic, and quantum electrodynamics interactions on the pair potential and thermophysical properties of helium," J. Chem. Phys.
"Second virial coefficients for ^4He and ^3He from an accurate relativistic interaction potential," Phys. Rev. A.
H. L., "The virial coefficients of helium from 20 to 300°K," J. Phys. Chem.
K. H., "NPL-75: A low temperature gas thermometry scale from 2.6 K to 27.1 K."
R. C., W. R. G., and L. M., "A determination of thermodynamic temperatures and measurements of the second virial coefficient of ^4He between 13.81 K and 287 K using a constant-volume gas thermometer."
"Helium virial coefficients—A comparison between new highly accurate theoretical and experimental data."
"Highly-accurate second-virial-coefficient values for helium from 3.7 K to 273 K determined by dielectric-constant gas thermometry."
Madonna Ripa and R. M., "Refractive index gas thermometry between 13.8 K and 161.4 K."
P. F., J. A., J. E., and J. H., "Comparison measurements of low-pressure between a laser refractometer and ultrasonic manometer," Rev. Sci. Instrum.
"Primary gas-pressure standard from electrical measurements and thermophysical ab initio calculations," Nat. Phys.
et al., "2022 update for the differences between thermodynamic temperature and ITS-90 below 335 K," J. Phys. Chem. Ref. Data.
M. R. and M. O., "Using ab initio ‘data’ to accurately determine the fourth density virial coefficient of helium," J. Chem. Thermodyn.
D. R., "The Boltzmann constant and the new kelvin."
et al., "Present estimates of the differences between thermodynamic temperatures and the ITS-90," Int. J. Thermophys.
E. R. and B. N., "The 1973 least-squares adjustment of the fundamental constants," J. Phys. Chem. Ref. Data.
D. B. et al., "The CODATA 2017 values of h, e, k, and N[A] for the revision of the SI."
et al., "New measurement of the Boltzmann constant k by acoustic thermometry of helium-4 gas."
de Podesta et al., "Re-estimation of argon isotope ratios leading to a revised estimate of the Boltzmann constant."
et al., "The Boltzmann project."
M. R., R. M., J. B., de Podesta, and J. T., "Acoustic gas thermometry."
Note that in the literature one finds multiple and inconsistent definitions of the acoustic virial coefficients, depending on the variable chosen for the expansion of w^2 (the pressure p or the molar density ρ) and the powers of RT included in the definition of the acoustic virials. We used the convention put forward in Ref. 282; in this case β[a] has the same dimensions as the second virial coefficient B and RTγ[a] has the same dimensions as the third virial coefficient C.
M. R., S. J., C. W., and A. R. H., "Thermodynamic temperatures of the triple points of mercury and gallium and in the interval 217 K to 303 K," J. Res. Natl. Inst. Stand. Technol.
Ph.D. thesis, Cranfield University, Cranfield, UK.
J. B. and M. R., "Measurement of the ratio of the speed of sound to the speed of light," Phys. Rev. A.
How to Apply Custom Function to Grouped Pandas Data?
To apply a custom function to grouped pandas data, you can use the apply() method in combination with the groupby() method. First, group your data using the groupby() method according to the desired
criteria. Then, use the apply() method along with your custom function to apply your function to each group of data. This will allow you to perform operations and calculations on each group
individually, making it a powerful tool for data analysis and manipulation in pandas.
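For example, a short sketch (the DataFrame, column names, and the score_range function below are invented for illustration):

import pandas as pd

# Sample data with a grouping column
df = pd.DataFrame({'team': ['A', 'A', 'B', 'B', 'B'],
                   'score': [10, 20, 5, 15, 25]})

# A custom function that receives one group's rows at a time
def score_range(group):
    return group['score'].max() - group['score'].min()

# groupby() forms the groups; apply() runs the custom function on each group
result = df.groupby('team').apply(score_range)
print(result)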
How to use the agg() function with custom functions in pandas?
To use the agg() function with custom functions in pandas, you can pass a dictionary to the agg() function where the keys are the column names and the values are the custom functions you want to
apply to those columns. Here's an example:
import pandas as pd

# Create a sample DataFrame
data = {'A': [1, 2, 3, 4, 5],
        'B': [10, 20, 30, 40, 50],
        'C': [100, 200, 300, 400, 500]}
df = pd.DataFrame(data)

# Define a custom function to calculate the sum of squares
def sum_of_squares(x):
    return sum([i**2 for i in x])

# Use the agg() function with the custom function
result = df.agg({'A': sum_of_squares, 'B': sum_of_squares, 'C': sum_of_squares})

print(result)
In this example, we defined a custom function sum_of_squares that calculates the sum of squares for a given list of numbers. We then used the agg() function to apply this custom function to each
column in the DataFrame df. The result will be a Series with the sum of squares for each column.
What is the resample() function in pandas?
The resample() function in pandas is used to generate aggregated time series data based on a specified frequency. It allows you to change the frequency of your time series data, by either upsampling
(increasing the frequency of the timeseries data) or downsampling (decreasing the frequency of the timeseries data). This function is commonly used for tasks such as data aggregation, downsampling,
and summarizing time series data.
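For example, a small sketch with an invented daily series:

import pandas as pd

idx = pd.date_range('2024-01-01', periods=10, freq='D')
s = pd.Series(range(10), index=idx)

# Downsample the daily data to weekly frequency, aggregating each week with a sum
weekly = s.resample('W').sum()
print(weekly)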
How to use the cut() function with custom functions in pandas?
The cut() function bins numeric values into categories, but it does not take an arbitrary function as an argument: instead, you express your custom criteria as bin edges (and optional labels) and pass those to cut(). If your criteria cannot be written as numeric intervals, you can fall back on apply() with a custom function, as in the other examples in this article.
Here's an example of how you can express custom binning criteria with the cut() function:

import pandas as pd

# Create a sample DataFrame
data = {'value': [25, 75, 125]}
df = pd.DataFrame(data)

# Express the custom criteria as bin edges and labels:
# below 50 -> 'Low', 50 up to (but not including) 100 -> 'Medium', 100 and above -> 'High'
bins = [float('-inf'), 50, 100, float('inf')]
labels = ['Low', 'Medium', 'High']

# Use the cut() function with these criteria
df['category'] = pd.cut(df['value'], bins=bins, labels=labels, right=False)
print(df)

In this example, the bin edges and labels encode the criteria that categorize the values in the 'value' column of the DataFrame into three categories: 'Low', 'Medium', and 'High'. The cut() function assigns each value to its bin, and the resulting categories are stored in a new column called 'category'.
You can adjust the bin edges and labels to suit your specific binning criteria, or use apply() with a custom function when the criteria cannot be expressed as simple intervals.
How to use the max() function with custom functions in pandas?
You can combine the max() function with custom functions in pandas by wrapping max() inside a custom function and then applying that function to your data with apply(). Here's an example:
import pandas as pd

# Create a sample dataframe
data = {'A': [1, 2, 3, 4, 5],
        'B': [10, 20, 30, 40, 50]}
df = pd.DataFrame(data)

# Define a custom function to find the maximum value in a column
def custom_max(column):
    return column.max() * 2  # Multiply the max value by 2

# Use the custom function with the max() function
result = df.apply(custom_max)
print(result)
In this example, we defined a custom_max() function that multiplies the maximum value of a column by 2. We then applied this custom function to every column in the dataframe using the apply() function. The resulting output will be the maximum value of each column multiplied by 2. | {"url":"https://stock-market.uk.to/blog/how-to-apply-custom-function-to-grouped-pandas-data","timestamp":"2024-11-05T01:00:03Z","content_type":"text/html","content_length":"161248","record_id":"<urn:uuid:bebf481c-991d-476f-97b2-c7e1513fdcc8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00680.warc.gz"} |
Making Statistics Make Sense
Learning statistics is unavoidable for researchers. Aside from a few last refuges like literary theory, philosophy, or ethnography, understanding the world by analysing quantitative data reigns king
in nearly all fields. If a psychologist wants to know whether power posing makes people behave more confidently, he conducts experiments and uses statistics to judge whether his hypotheses were
supported. If a social scientist wants to understand motivations of Trump vs Clinton voters, statistics tell her how many people to survey, and afterwards whether the voters are different.
For most people, however, statistics is hard, and many find the mere idea of learning it terrifying. I remember taking a statistics course in which the lecturer after each class would project a huge
image of cute puppies onto the screen, to calm down the frustrated humanities students in the room…
Even among established academic researchers, the level of statistical literacy is often not what it ought to be. One researcher used a computer program to recalculate critical statistical tests in
250,000 articles published in major psychology journals, and found that one in eight had at least one grossly inconsistent p-value that may have invalidated the article’s statistical conclusions. A
recent review of papers published in top-journals (e.g. Science) found that 15% used incorrect statistical procedures, and another review drily remarked that it’s “easier to get a paper published if
one uses erroneous statistical analysis than if one uses no statistical analysis at all”.
What to do?
So what do we do about the fact that statistical literacy is exceedingly important, yet people find stats difficult and scary to learn and even senior researchers make blatant mistakes? One approach
is to make statistics more fun to learn by using sexy and eye-grabbing data sets in statistics classes. For example, Andy Field’s popular series of textbooks Discovering Statistics carry the subtitle
“… and Sex and Drugs and Rock ’n’Roll” and has their reader learn statistics by analysing things like (lack of) personal hygiene at festivals, the effect of using Coke as a contraceptive, and the
relationship between being off-one’s-face drunk and getting into physical fights (!).
Such a ‘sugar coating’ approach, however, only takes us so far. A more radical approach is to reconsider the statistical vocabulary we use in the first place.
The purpose of a scientific vocabulary, I would argue, is to (a) give concrete form to useful concepts (b) in a way that’s concise and puts minimal strain on our working memory, and which (c) is
quick and easy to learn and remember. How are we doing in statistics, relative to this target? Do we use terms that make our concepts clear, transparent, and easy to memorise? Or (you can probably
tell where I’m going with this…) have we created a wasteland of arbitrary symbols and names of statistical inventors that both makes terms difficult to remember and obscures their internal
relationships? I think the latter is true to a disturbing degree.
The problem with statistics - Example 1: the chi-squared test
Imagine I want to know if there’s a relationship between how often people go fishing and how often they eat fish. I might go and ask 1,701 random people two questions: How often do you go fishing?
and How often do you eat fish? Let’s say I get these results:
These are the distributions of people answering ’Never’, ‘Sometimes’, and ‘Always’ to my two questions (‘Always’ means that you just can’t ever stop yourselves fishing, or, that you just can’t stop
putting fish in your mouth because it’s too delicious). I ask each person both questions, so each person has one score in one distribution and one in the other, and it’s the relationship between the
two that we are interested in. We can show the data in another way to get the combinations of answers:
There seems to be a relationship - among people that “Always” go fishing, 48% never eats fish, whereas among people that “Never” go fishing, only 29% never eats fish. We seem to have a negative
relationship, where people who go fishing more often are less likely to eat fish. But maybe this relation is just coincidence. How might I test that? One way is to think about what proportion of
responses of ‘Never’, ‘Sometimes’ and ‘Always’ I’d expect on ‘Eats fish’ if it actually doesn’t matter whether people go fishing. In this case, no matter what people answer on ‘Goes fishing’ I should
see the same distribution of answers to ‘Eats fish’, as I do overall. Like this:
In other words, I can find out what proportions of eating fish I expect to see if it doesn’t change with how often one goes fishing. Then I check how different the proportions I actually observe are
from what I expected. Finally, I can find out how likely it is that I get this difference just by chance, if there isn’t actually a real difference - and this is my “p-value”.
Now, what might be a sensible name for such a test? It’s kind of a Proportions_Observed-vs-Proportions_Expected test. Maybe it’s called a POPE-test? And maybe there’s a subscript that disambiguate it
from similar tests that use the same idea, while making it obvious that the tests share a relationship? Well, the convention statisticians settled on is to call it a “chi-squared”-test. You know,
just to make things obvious.
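For readers who want to see the computation itself, here's a small sketch with SciPy (the table of counts below is invented, not my survey data):

import numpy as np
from scipy.stats import chi2_contingency

# Rows: goes fishing (Never / Sometimes / Always); columns: eats fish (Never / Sometimes / Always)
observed = np.array([[120, 190,  90],
                     [110, 200, 100],
                     [150, 180,  70]])

chi2, p, dof, expected = chi2_contingency(observed)
print(expected)  # the counts we would expect if the two answers were unrelated
print(chi2, p)   # the test statistic and its p-value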
The problem with statistics - Example 2: chi-square’s measures of association
Okay, let’s return to our example. Let’s say that my POPE, apologies, my chi-squared test tells me it is super unlikely to see a difference between expected and observed proportions this large by
chance. In the jargon, my result is “statistically significant”.
“Statistically significant”, however, doesn’t mean “this effect is big and important”. It just means “I am unlikely to observe this data by chance if there’s no actual relationship between my
variables”. It doesn’t tell me anything about the size of the relationship. (We should all pledge to stop calling effects “significant” and instead say e.g. “detectable”, so we don’t in the minds of
our readers - or ourselves - conflate the existence of a relationship with its magnitude or importance. A ‘statistically significant effect’ only means that it’s ‘statistically detectable’. Moving
on.) So what do I do if I want to know how strong the association is?
Here’s one approach: Say I find that how often people eat fish varies depending on how often they go fishing. When we look at the data it seems that there is a negative relationship, where people
less frequently eat fish the more often they go fishing. (Maybe people who fish more also eat less fish because they acquire greater empathy with fish, after seeing them repeatedly trying to escape
from the hook…) How do I put an objective number on how strong the relationship is?
I might start by counting the number of times where a participant who scores higher than other participants on ‘goes fishing’ also scores higher than others on ‘eats fish’. Statisticians call this
“concordant pairs”:
In my drawing, some people say they ‘always’ goes fishing, and also say that they ‘always’ eat fish. The participants who score lower than ‘always’ on goes fishing and also lower than ‘always’ on
eats fish, form ‘concordant’ pairs with the always-always people.
Next I count the number of times where the opposite happens. That is, the times where a participant who scores higher than other participants on ‘goes fishing’ scores lower on ‘eats fish’ than the
others. Statisticians call this “discordant pairs”:
In my drawing, some people say they ‘never’ goes fishing, and also that they ‘always’ eat fish. People who score higher than ‘never’ on goes fishing but lower than ‘always’ on eats fish, form
‘discordant’ pairs with the never-always people.
Now, if I’ve got a strong negative relationship where people who more often goes fishing almost always eat less fish than those who less often goes fishing, then there will be many more discordant
pairs than concordant pairs. So one simple measure of effect size is the difference between concordant pairs and discordant pairs, compared to the total number of those pairs.
What might such a test be called? It’s a kind of Concordant-vs-Discordant-Pairs Test. Maybe it’s called a Concordance-Difference-Test, or ConDif for short? Again, clever people might come up with
lots of variations on this kind of test, so maybe there’s a good name for this general approach, like ConDif, and then various subscripts that disambiguate related tests from one another, while
making it obvious that they take the same kind of approach?
Well, statisticians’ name for this test is ‘Goodman-Kruskal’s gamma test’, or just ‘gamma test’. Okay, okay. But have they at least been consistent and named the test variations that use the
difference between concordant and discordant pairs something with ‘gamma’? Not quite. Some of the popular variations on the approach are called “Yule’s Q”, “Somers’ d”, and “Kendall’s tau” (Kendall’s
tau in turn has versions a, b, and c).
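For the curious, the concordant/discordant counting itself fits in a few lines of Python (the ordinal scores below are invented: 0 = Never, 1 = Sometimes, 2 = Always):

goes_fishing = [0, 0, 1, 1, 2, 2, 2]
eats_fish    = [2, 1, 1, 0, 0, 1, 0]

concordant = discordant = 0
for i in range(len(goes_fishing)):
    for j in range(i + 1, len(goes_fishing)):
        dx = goes_fishing[i] - goes_fishing[j]
        dy = eats_fish[i] - eats_fish[j]
        if dx * dy > 0:
            concordant += 1
        elif dx * dy < 0:
            discordant += 1
        # pairs tied on either variable are ignored by gamma

gamma = (concordant - discordant) / (concordant + discordant)
print(concordant, discordant, gamma)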
How to do statistics better
I could go on, but this should be enough to illustrate the point. For students who struggle with learning statistics, or for researchers who aren’t themselves statisticians, our obscure vocabulary
makes understanding and remembering the concepts unnecessarily difficult. It increases the number of terms to learn (with the consequence that many give up in frustration or just can’t be bothered)
and puts a smokescreen over the relationship between tests.
If we accept that statistics is both a) super important and b) super difficult for many people, then we need to think hard about the best ways to teach and communicate it. We might try to sugar coat
our current terms to students. However, sugar coating only takes us so far when our vocabulary is arcane and obscure in the first place. In some cases we do have okay terminology or have made an
effort to improve the ones we had. For example, the term ‘ANOVA’ is alright - it’s short for ‘ANalysis Of Variance’ and gives a hint that it works on a principle of comparing variance between groups.
Similarly, the simple term ‘bell-curve’, used in place of the obscure ‘Gaussian distribution’, substitutes veneration of a statistics guru with a term that gives an immediate reminder of what it
points to.
Reforming our statistics vocabulary is a low-hanging fruit that should make statistics quicker to learn, more transparent to understand, and easier to use. As we move into the era of big data, where
a basic level of statistical literacy is essential, it’s about time to get started.
Current term Better term
chi-squared test POPE (w/ chi-squared as subscript)
statistically significant statistically detectable?
Goodman-Kruskal’s gamma ConDif
Yule’s Q ConDif (it’s just gamma for a 2x2 table)
Somers’ d ConDif (w/ Somers’ d as subscript)
Kendall’s tau ConDif (w/ Kendall’s tau as subscript)
t-test StdMeanDif (w/ Student’s t as subscript)
F-ratio …
… …
I use the open source, zero-tracking utteranc.es widget for comments - it's built on GitHub issues, so you need a GitHub account to comment. | {"url":"https://ulriklyngs.com/post/2017/02/26/making-statistics-make-sense/","timestamp":"2024-11-04T06:58:16Z","content_type":"text/html","content_length":"21947","record_id":"<urn:uuid:67423534-c936-4a54-b10f-efc97c98911a>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00234.warc.gz"} |
Prime Number Formula with Problem Solution & Solved Example
There exist plenty of formulas to calculate prime numbers in a series. However, each formula requires a deeper knowledge of prime numbers to implement effectively. There are also simple prime-generating polynomials, which produce primes only for the first few integer values.
There are also formulas for the sums and products of prime numbers, generally calculated in closed form. Prime numbers in mathematics are the special numbers that are divisible only by one and by themselves. If a number has any other divisor, it is a composite number rather than a prime.
To check whether a number is prime or not, we first need to find its factors. A prime number has exactly two factors: one and itself. For example, 5 is a prime number whose factors are 1 and 5. With this discussion, it is clear what a prime number is in mathematics. There are plenty of methods for finding prime numbers, and students can even use calculators to make things easier and quicker.
One of the popular approaches is factorization, which helps you determine quickly whether a number is prime or not. Other related concepts are natural numbers, composite numbers, real numbers, imaginary numbers, whole numbers, etc. We will discuss all these concepts in later blog posts.
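For instance, a simple trial-division check in Python (one possible sketch of the factorization idea):

def is_prime(n):
    if n < 2:          # primes must be greater than 1
        return False
    i = 2
    while i * i <= n:  # only divisors up to the square root need checking
        if n % i == 0:
            return False  # found a factor other than 1 and n
        i += 1
    return True

print(is_prime(47))  # True
print(is_prime(16))  # False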
Question 1: Find if 47 is a prime number or not ?
Solution:The factors of 47 are 1 and 47.
So 47 is only divisible by 1 and 47.
So, 47 is a prime number.
Question 2: Is 16 a prime number or not ?
The factors of 16 are 1, 2, 4, 8 and 16.
So 16 is not only divisible by 1 and 16 but also by 2, 4 and 8.
So 16 is not a prime number.
16 is a composite number. | {"url":"https://www.andlearning.org/prime-number-formula/","timestamp":"2024-11-05T00:18:17Z","content_type":"text/html","content_length":"72760","record_id":"<urn:uuid:b6132468-bd04-4046-8cf5-4f67878175aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00556.warc.gz"} |
Portability non-portable
Stability experimental
Maintainer sjoerd@w3future.com
data :**: where
:**: :: c1 a1 b1 -> c2 a2 b2 -> :**: c1 c2 (a1, a2) (b1, b2)
(Category c1, Category c2) => Category (:**: c1 c2) The product category of category c1 and c2.
(HasTerminalObject c1, HasTerminalObject c2) => HasTerminalObject (:**: c1 c2) The terminal object of the product of 2 categories is the product of their terminal objects.
(HasInitialObject c1, HasInitialObject c2) => HasInitialObject (:**: c1 c2) The initial object of the product of 2 categories is the product of their initial objects.
(HasBinaryProducts c1, HasBinaryProducts c2) => HasBinaryProducts (:**: c1 c2) The binary product of the product of 2 categories is the product of their binary products.
(HasBinaryCoproducts c1, HasBinaryCoproducts c2) => HasBinaryCoproducts (:**: c1 c2) The binary coproduct of the product of 2 categories is the product of their binary coproducts. | {"url":"http://hackage.haskell.org/package/data-category-0.4.1/docs/Data-Category-Product.html","timestamp":"2024-11-05T13:20:51Z","content_type":"application/xhtml+xml","content_length":"5176","record_id":"<urn:uuid:e1c606c6-6d44-44b3-b0e1-4515c92de40a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00029.warc.gz"} |
Statistics and machine learning / Deep Learning (Part 1) - Feedforward neural networks (FNN) / Slides: Feedforward neural networks (FNN)
Deep Learning - Part 1
View markdown source on GitHub
Feedforward neural networks (FNN) Deep Learning - Part 1
• What is a feedforward neural network (FNN)?
• What are some applications of FNN?
• Understand the inspiration for neural networks
• Learn activation functions & various problems solved by neural networks
• Discuss various loss/cost functions and backpropagation algorithm
• Learn how to create a neural network using Galaxy’s deep learning tools
• Solve a sample regression problem via FNN in Galaxy
last_modification Published: Jun 2, 2021
last_modification Last Updated: Jul 27, 2021
What is an artificial neural network?
Speaker Notes
What is an artificial neural network?
Artificial Neural Networks
• ML discipline roughly inspired by how neurons in a human brain work
• Huge resurgence due to availability of data and computing capacity
• Various types of neural networks (Feedforward, Recurrent, Convolutional)
• FNN applied to classification, clustering, regression, and association
Inspiration for neural networks
• Neuron a special biological cell with information processing ability
□ Receives signals from other neurons through its dendrites
□ If the signals received exceeds a threshold, the neuron fires
□ Transmits signals to other neurons via its axon
• Synapse: contact between axon of a neuron and denderite of another
□ Synapse either enhances/inhibits the signal that passes through it
□ Learning occurs by changing the effectiveness of synapse
Cerebral cortex
• Outermost layer of the brain, 2 to 3 mm thick, surface area of 2,200 sq. cm
• Has about 10^11 neurons
□ Each neuron connected to 10^3 to 10^4 neurons
□ Human brain has 10^14 to 10^15 connections
Cerebral cortex
• Neurons communicate by signals ms in duration
□ Signal transmission frequency up to several hundred Hertz
□ Millions of times slower than an electronic circuit
□ Complex tasks like face recognition done within a few hundred ms
□ Computation involved cannot take more than 100 serial steps
• The information sent from one neuron to another is very small
□ Critical information not transmitted
□ But captured by the interconnections
• Distributed computation/representation of the brain
□ Allows slow computing elements to perform complex tasks quickly
Learning in Perceptron
• Given a set of input-output pairs (called training set)
• Learning algorithm iteratively adjusts model parameters
• So the model can accurately map inputs to outputs
• Perceptron learning algorithm
Limitations of Perceptron
• Single layer FNN cannot solve problems in which data is not linearly separable
• Adding one (or more) hidden layers enables FNN to represent any function
□ Universal Approximation Theorem
• Perceptron learning algorithm could not extend to multi-layer FNN
• Backpropagation algorithm in 80’s enabled learning in multi-layer FNN
Multi-layer FNN
• More hidden layers (and more neurons in each hidden layer)
□ Can estimate more complex functions
□ More parameters increases training time
□ More likelihood of overfitting
Activation functions
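The most common activation functions can be written compactly; a NumPy sketch of their usual textbook definitions:

import numpy as np

def sigmoid(z):                # squashes any input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):                   # squashes any input into (-1, 1)
    return np.tanh(z)

def relu(z):                   # passes positive inputs, zeroes out negative ones
    return np.maximum(0.0, z)

def softmax(z):                # turns a vector into a probability distribution
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()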
Supervised learning
• Training set of size m: { (x^1,y^1),(x^2,y^2),…,(x^m,y^m) }
□ Each pair (x^i,y^i) is called a training example
□ x^i is called feature vector
□ Each element of feature vector is called a feature
□ Each x^i corresponds to a label y^i
• We assume an unknown function y=f(x) maps feature vectors to labels
• The goal is to use the training set to learn or estimate f
□ We want the estimate to be close to f(x) not only for training set
□ But for training examples not in training set
Classification problems
Output layer
• Binary classification
□ Single neuron in output layer
□ Sigmoid activation function
□ Activation > 0.5, output 1
□ Activation <= 0.5, output 0
• Multilabel classification
□ As many neurons in output layer as number of classes
□ Sigmoid activation function
□ Activation > 0.5, output 1
□ Activation <= 0.5, output 0
Output layer (Continued)
• Multiclass classification
□ As many neurons in output layer as number of classes
□ Softmax activation function
□ Takes input to neurons in output layer
□ Creates a probability distribution, sum of outputs adds up to 1
□ The neuron with the highest proability is the predicted label
• Regression problem
□ Single neuron in output layer
□ Linear activation function
Loss/Cost functions
• During training, for each training example (x^i,y^i), we present x^i to neural network
□ Compare predicted output with label y^1
□ Need loss function to measure difference between predicted & expected output
• Use Cross entropy loss function for classification problems
• And Quadratic loss function for regression problems
□ Quadratic cost function is also called Mean Squared Error (MSE)
Cross Entropy Loss/Cost functions
Quadratic Loss/Cost functions
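In one common convention, for a training set of size m with network outputs a^i and labels y^i, these two cost functions take the standard forms (other texts omit the 1/m or the factor of 1/2; only the scale of the gradients changes):

C_{\text{cross-entropy}} = -\frac{1}{m}\sum_{i=1}^{m}\left[\,y^i \ln a^i + (1 - y^i)\ln(1 - a^i)\,\right]

C_{\text{quadratic}} = \frac{1}{2m}\sum_{i=1}^{m}\left\lVert y^i - a^i \right\rVert^2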
Backpropagation (BP) learning algorithm
• A gradient descent technique
□ Find local minimum of a function by iteratively moving in opposite direction of gradient of function at current point
• Goal of learning is to minimize cost function given training set
□ Cost function is a function of network weights & biases of all neurons in all layers
□ Backpropagation iteratively computes gradient of cost function relative to each weight and bias
□ Updates weights and biases in the opposite direction of gradient
□ Gradients (partial derivatives) are used to update weights and biases
□ To find a local minimum
Backpropagation error
Backpropagation formulas
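With the usual notation (W^l and b^l for the weights and biases of layer l, z^l = W^l a^{l-1} + b^l for its weighted input, a^l = σ(z^l) for its activation, C for the cost, and ⊙ for the elementwise product), the standard backpropagation equations are:

\delta^L = \nabla_a C \odot \sigma'(z^L)

\delta^l = \left((W^{l+1})^{\top}\delta^{l+1}\right) \odot \sigma'(z^l)

\frac{\partial C}{\partial b^l_j} = \delta^l_j

\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \, \delta^l_j

The second equation is the recursive one referred to below: a layer's error is obtained from the error of the layer after it, multiplied elementwise by the derivative of the activation function.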
Types of Gradient Descent
• Batch gradient descent
□ Calculate gradient for each weight/bias for all samples
□ Average gradients and update weights/biases
□ Slow, if we have too many samples
• Stochastic gradient descent
□ Update weights/biases based on gradient of each sample
□ Fast. Not accurate if sample gradient not representiative
• Mini-batch gradient descent
□ Middle ground solution
□ Calculate gradient for each weight/bias for all samples in batch
☆ batch size is much smaller than training set size
□ Average batch gradients and update weights/biases
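A minimal NumPy sketch of mini-batch gradient descent for a linear model with a quadratic cost (the data, batch size, and learning rate below are invented purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                     # invented inputs
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=1000)  # invented targets

w = np.zeros(5)           # parameters to learn
lr, batch_size = 0.1, 32  # learning rate and mini-batch size

for epoch in range(20):
    order = rng.permutation(len(X))                # shuffle samples each epoch
    for start in range(0, len(X), batch_size):
        batch = order[start:start + batch_size]
        pred = X[batch] @ w
        grad = X[batch].T @ (pred - y[batch]) / len(batch)  # gradient averaged over the batch
        w -= lr * grad                                       # step opposite to the gradient

print(w)  # should end up close to true_w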
Vanishing gradient problem
• Second BP equation is recursive
□ We have derivative of activation function
□ Calc. error in layer prior to output: 1 mult. by derivative value
□ Calc. error in two layers prior output: 2 mult. by derivative values
• If derivative values are small (e.g. for Sigmoid), product of multiple small values will be a very small value
□ Since error values decide updates for biases/weights
□ Update to biases/weights in first layers will be very small
☆ Slowing the learning algorithm to a halt
□ The reason Sigmoid not used in deep networks
☆ Why ReLU is popular in deep networks
Car purchase price prediction
• Given 5 features of an individual (age, gender, miles driven per day, personal debt, and monthly income)
□ And, money they spent buying a car
□ Learn a FNN to predict how much someone will spend buying a car
• We evaluate FNN on test dataset and plot graphs to assess the model’s performance
□ Training dataset has 723 training examples
□ Test dataset has 242 test examples
□ Input features scaled to be in 0 to 1 range
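A rough Keras sketch of such a regression FNN (the layer sizes, optimizer, and training settings here are illustrative guesses, not the exact configuration used in the Galaxy tutorial, and the arrays are placeholders for the real training data):

import numpy as np
from tensorflow import keras

X_train = np.random.rand(723, 5)  # placeholder for the 5 scaled input features
y_train = np.random.rand(723)     # placeholder for the money spent buying a car

model = keras.Sequential([
    keras.Input(shape=(5,)),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(8, activation='relu'),
    keras.layers.Dense(1),                      # single neuron, linear activation, for regression
])
model.compile(optimizer='adam', loss='mse')     # quadratic (MSE) loss
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)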
For references, please see tutorial’s References section
Speaker Notes
• If you would like to learn more about Galaxy, there are a large number of tutorials available.
• These tutorials cover a wide range of scientific domains.
Getting Help
Speaker Notes
• If you get stuck, there are ways to get help.
• You can ask your questions on the help forum.
• Or you can chat with the community on Gitter.
Join an event
Speaker Notes
• There are frequent Galaxy events all around the world.
• You can find upcoming events on the Galaxy Event Horizon.
Thank you!
This material is the result of a collaborative work. Thanks to the Galaxy Training Network and all the contributors! Creative Commons Attribution 4.0 International License. | {"url":"https://galaxyproject.github.io/training-material/topics/statistics/tutorials/FNN/slides-plain.html","timestamp":"2024-11-08T18:05:17Z","content_type":"text/html","content_length":"37304","record_id":"<urn:uuid:47c4062b-ecb0-4c94-923c-91b1e4c041c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00462.warc.gz"} |
Department course offerings
This page lists the courses which the department offers regularly every year.
Courses with an available typical syllabus have an active link that you can click on.
Course Title Offered
Math 100 Basic Mathematics Skills for the Modern World Fall/Spring
Math 101 Precalculus Algebra with Functions and Graphs Fall/Spring
Math 102 Analytic Geometry and Trigonometry Fall/Spring
Math 103 Precalculus Trigonometry Fall
Math 104 Algebra, Analytic Geometry, and Trigonometry Fall/Spring
Math 113 Mathematics for Elementary Teachers I Fall/Spring
Math 114 Mathematics for Elementary Teachers II Fall/Spring
Math 121 Linear Methods and Probability for Business Fall/Spring
Math 127 Calculus for the Life and Social Sciences I Fall/Spring
Math 127H Honors Calculus for the Life and Social Sciences I Fall
Math 128 Calculus for the Life and Social Sciences II Fall/Spring
Math 131 Calculus I Fall/Spring/Summer
Math 131H Calculus I (honors) Fall
Math 132 Calculus II Fall/Spring/Summer
Math 132H Calculus II (honors) Fall
Math 196 Independent Study By arrangement
Math 233 Multivariable Calculus Fall/Spring/Summer
Math 233H Multivariable Calculus (Honors) Fall
Math 235 Introduction to Linear Algebra Fall/Spring/Summer
Math 235H Introduction to Linear Algebra (Honors) Fall
Math 296 Independent Study By arrangement
Math 300 Fundamental Concepts of Mathematics Fall/Spring
Math 370 Writing in Mathematics Fall/Spring
Stat 108 Foundations of Data Science Spring
Stat 111 Elementary Statistics Fall/Spring
Stat 240 Introduction to Statistics Fall/Spring/Summer
Course Title Offered
Math 331 Ordinary Differential Equations Fall/Spring/Summer
Math 396 Independent Study By arrangement
Math 405 Mathematical Computing Irregular
Math 411 Introduction to Abstract Algebra I Fall/Spring
Math 412 Introduction to Abstract Algebra II Spring
Math 421 Complex Variables Fall/Spring
Math 437 Actuarial Financial Math Fall
Math 455 Introduction to Discrete Structures Fall/Spring
Math 456 Mathematical Modeling Fall/Spring
Math 461 Affine and Projective Geometry I Fall
Math 462 Affine and Projective Geometry II Irregular
Math 471 Theory of Numbers Fall/Spring
Math 475 History of Mathematics Spring
Math 481 Knot Theory Irregular
Math 491A Problem Seminar: Putnam Exam Preparation Fall
Math 491P Problem Seminar: Preparation for the GRE subject exam Fall
Math 496 Independent Study By arrangement
Math 499C Capstone Course I Fall
Math 499D Capstone Course II Spring
Math 499T Honors Thesis By arrangement
Math 499Y Honors Research By arrangement
Math 513 Combinatorics Irregular
Math 522 Fourier Methods Irregular
Math 523H Introduction to Modern Analysis Fall/Spring
Math 524 Introduction to Modern Analysis II Spring
Math 532H Nonlinear Dynamics & Chaos with Applications Fall
Math 534H Introduction to Partial Differential Equations Spring
Math 536 Mathematical Foundations of Actuarial Science Spring
Math 537 Introduction to Mathematics of Finance Fall/Spring
Math 545 Linear Algebra for Applied Mathematics Fall/Spring
Math 548 Stochastic Processes and Simulation Irregular
Math 551 Introduction to Scientific Computing Fall/Spring
Math 552 Applications of Scientific Computing Spring
Math 557 Linear Optimization & Polytopes Irregular
Math 563H Introduction to Differential Geometry Spring
Math 571 Introduction to Math Cryptography Irregular
Math 596 Independent Study By arrangement
Stat 310 Fundamental Concepts of Statistics Fall/Spring
Stat 315 Statistics I Fall/Spring
Stat 496 Independent Study By arrangement
Stat 499T Honors Thesis By arrangement
Stat 499Y Honors Research By arrangement
Stat 501 Methods of Applied Statistics Fall/Spring
Stat 516 Statistics II Fall/Spring
Stat 525 Regressions and Analysis of Variance Fall/Spring
Stat 526 Design of Experiments Spring
Stat 535 Statistical Computing Fall/Spring
Stat 596 Independent Study By arrangement
Courses marked with a * are offered every other year, alternating with the course numbered one higher or one lower (except Statistics 705/Statistics 725, which form an alternating pair) .
Course Title Offered
Math 605 Probability Theory I Fall
Math 606 Stochastic Processes and Applications Spring
Math 611 Algebra I Fall
Math 612 Algebra II Spring
Math 621 Complex Analysis Spring
Math 623 Analysis I Fall
Math 624 Analysis II Spring
Math 645 Differential Equations and Dynamical Systems Fall
Math 646 Applied Mathematics and Math Modeling Spring
Math 651 Numerical Analysis I Fall
Math 652 Numerical Analysis II Spring
Math 671 Intro to General Topology I Fall
Math 672 Algebraic Topology Spring
Stat 607 Mathematical Statistics I Fall
Stat 608 Mathematical Statistics II Spring
Stat 610 Bayesian Statistics Spring
Stat 625 Regression Modeling Fall
Math 703 Topics in Geometry I Fall
Math 704 Topics in Geometry II Spring*
Math 705 Symplectic Topology Spring*
Math 706 Stochastic Calculus Spring
Math 707 Algebraic Geometry Spring*
Math 708 Complex Algebraic Geometry Spring*
Math 713 Intro to Algebraic Number Theory Fall*
Math 714 Arithmetic of Elliptic Curves Fall*
Math 717 Representation Theory Fall*
Math 718 Lie Algebras Fall*
Math 725 Intro to Functional Analysis I Spring
Math 731 Intro to Partial Differential Equations I Fall
Stat 705 Linear Models I Fall*
Stat 725 Estimation Theory and Hypothesis Testing I Fall* | {"url":"https://www.umass.edu/mathematics-statistics/regularly-offered-courses","timestamp":"2024-11-02T15:37:57Z","content_type":"text/html","content_length":"61904","record_id":"<urn:uuid:89c5ac2b-a4e4-413b-b05b-8e9b24ade400>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00254.warc.gz"} |
How to Join Two Tables using Query function
Joining two tables using the Query function in Google Sheets involves stacking the data from the two tables into a single range with an array literal of the form {range1; range2}. The Query function will then be used to filter, sort, or manipulate the combined data as needed.
Here's a step-by-step guide on how to join two tables using the Query function in Google Sheets:
1. Prepare your data: Ensure that you have two separate tables with related information. Make sure the tables have the same number of columns and the columns are in the same order.
2. Combine the tables: Use an array literal with a semicolon to stack the two ranges vertically. You can do this by creating a new cell and typing in a formula of the form:
={Table1!A1:C; Table2!A1:C}
Replace "Table1" and "Table2" with the appropriate sheet names or ranges for your data. This formula will combine the data from the two tables into a single table.
3. Apply the Query function: In a new cell, type the following formula to apply the Query function to the combined data:
=QUERY({Table1!A1:C; Table2!A1:C}, "SELECT *")
Replace "Table1" and "Table2" with the appropriate sheet names or ranges for your data. This formula will display the combined data in a new table. Note that when the data range is built from an array literal like this, the Query language refers to columns as Col1, Col2, Col3, rather than by sheet column letters.
4. Customize your query: You can customize the query by adding additional clauses such as WHERE, ORDER BY, and LIMIT. For example, to filter the combined data based on a specific condition, you can
use the following formula:
=QUERY({Table1!A1:C; Table2!A1:C}, "SELECT * WHERE Col3 > 1000")
This formula will display data from the combined table where the value in the third column is greater than 1000.
Let's say we have two tables in separate sheets: Sheet1 with columns A, B, and C, and Sheet2 with columns A, B, and C.
1. Combine the tables:
In a new sheet, type the following formula in cell A1:
={Sheet1!A1:C; Sheet2!A1:C}
2. Apply the Query function:
In cell A1, change the formula to:
=QUERY({Sheet1!A1:C; Sheet2!A1:C}, "SELECT *")
3. Customize your query:
For example, if you want to display the combined data where the value in the third column is greater than 1000, use the following formula:
=QUERY({Sheet1!A1:C; Sheet2!A1:C}, "SELECT * WHERE Col3 > 1000")
This will display the filtered combined data from Sheet1 and Sheet2 in the new sheet.
Did you find this useful? | {"url":"https://sheetscheat.com/google-sheets/how-to-join-two-tables-using-query-function","timestamp":"2024-11-11T13:57:26Z","content_type":"text/html","content_length":"14232","record_id":"<urn:uuid:bc02268c-36f7-44e8-8daa-4404088f8068>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00115.warc.gz"} |
30% sharding attack
Let’s assume that an attacker controls some proportion a of validator deposits in the VMC, and some proportion b of mining power in the main chain. Because the current getEligibleProposer method is
subject to blockhash grinding the attacker can make himself the eligible proposer on a shard (actually, several shards, depending on how fast the attacker can grind) with proportion a + (1 - a)*b.
If we set a = b (i.e. the attacker controls the same proportion of validator deposits and mining power) and solve for a + (1 - a)*a = 0.5 (i.e. solve for the attacker having controlling power) we get
a = 0.292. That is, an attacker controlling just 30% of the network can do 51% attacks on shards.
One defense strategy is to use a “perfectly fair” validator sampling mechanism with no repetitions, e.g. see here. Another strategy is to improve the random number generator to something like RANDAO
or Dfinity-style BLS random beacons.
4 Likes
I think I was being stupid 🤦. The blockhash wraps over the nonce, so blockhash grinding is limited by PoW. Maybe there’s a 30% sharding attack with full PoS, but the situation is not nearly as bad
with PoW.
1 Like
And it seems harder to simultaneously control so many stakes and hashing powers. Don’t know how to measure this kind of hybrid condition.
I am inclined to say don’t bother initially for this exact reason. In the longer term, there are better random beacons that we can introduce, and will have to introduce anyway for full PoS.
1 Like | {"url":"https://ethresear.ch/t/30-sharding-attack/1340","timestamp":"2024-11-10T14:20:22Z","content_type":"text/html","content_length":"19310","record_id":"<urn:uuid:767e9462-ec4a-48d7-a076-babe7459d5dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00230.warc.gz"} |
3rd Grade Common Core Math Workbook
Effortless Math Common Core Workbook provides students with the confidence and math skills they need to succeed on the Common Core State Standards Math test, providing a solid foundation of basic
Math topics with abundant exercises for each topic. It is designed to address the needs of students who must have a working knowledge of basic Math.
This comprehensive workbook with over 1,500 sample questions and 2 complete 3rd Grade Common Core Math tests is all a student needs to fully prepare for the Math tests. It will help students learn
everything they need to ace the math exams.
Effortless Math unique study program provides a student with an in-depth focus on the math test, helping them master the math skills that students find the most troublesome. This workbook contains
most common sample questions that are most likely to appear in the Common Core Math exams. Inside the pages of this comprehensive workbook, students can learn basic math operations in a structured
manner with a complete study program to help them understand essential math skills. It also has many exciting features, including:
• Dynamic design and easy-to-follow activities
• A fun, interactive and concrete learning process
• Targeted, skill-building practices
• Fun exercises that build confidence
• Math topics are grouped by category, so students can focus on the topics they struggle on
• All solutions for the exercises are included, so you will always find the answers
• 2 Complete Common Core Math Practice Tests
Effortless Common Core Math Workbook is an incredibly useful tool for those who want to review all topics being covered on the Common Core Math tests. It efficiently and effectively reinforces
learning outcomes through engaging questions and repeated practice, helping students to quickly master basic Math skills. | {"url":"https://testinar.com/product.aspx?p_id=W%2BOp7SAya9DMOk46aMO5xw%3D%3D","timestamp":"2024-11-06T05:21:25Z","content_type":"text/html","content_length":"54371","record_id":"<urn:uuid:10c9af3b-aa7f-4365-a66d-c89553cb1ea8>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00240.warc.gz"} |
EViews Help: did
Estimate an equation in a panel structured workfile using the difference-in-difference estimator.
did(options) y [x1] [@ treatment]
List the dependent variable, followed by an optional list of exogenous regressors, followed by an “@” and then the binary treatment variable. You should not include a constant in the specification.
coef=arg Specify the name of the coefficient vector. The default behavior is to use the “C” coefficient vector.
prompt Force the dialog to appear from within a program.
p Print results.
did asmrs @ post
estimates an equation by difference-in-difference with ASMRS as the outcome variable, and POST as the treatment variable.
did lemp lpop @ treated
estimates an equation by difference-in-difference with LEMP as the outcome variable, TREATED as the treatment variable, and LPOP as an exogenous regressor.
“Difference-in-Difference Estimation”
for a discussion of difference-in-difference models. | {"url":"https://help.eviews.com/content/commandcmd-did.html","timestamp":"2024-11-12T06:11:45Z","content_type":"application/xhtml+xml","content_length":"12382","record_id":"<urn:uuid:59ef1245-759d-4cd1-af35-4cd5c1a56abc>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00548.warc.gz"} |
Applied Mathematics Seminar | Yihui Quek, The (Quantum) Signal and the Noise: towards the intermediate term of quantum computation | Applied Mathematics
Tuesday, March 26, 2024 2:00 pm - 2:00 pm EDT (GMT -04:00)
QNC 101
The (Quantum) Signal and the Noise: towards the intermediate term of quantum computation
Can we compute on small quantum processors? In this talk, I explore the extent to which noise presents a barrier to this goal by quickly drowning out the information in a quantum computation. Noise
is a tough adversary: we show that a large class of error mitigation algorithms -- proposals to "undo" the effects of quantum noise through mostly classical post-processing – can never scale up.
Switching gears, we next explore the effects of non-unital noise, a physically natural (yet analytically difficult) class of noise that includes amplitude-damping and photon loss. We show that it
creates effectively shallow circuits, in the process displaying the strongest known bound on average convergence of quantum states under such noise. Concluding with the computational complexity of
learning the outputs of small quantum processors, I will set out a program for wrapping these lower bounds into new directions to look for near-term quantum computational advantage. | {"url":"https://uwaterloo.ca/applied-mathematics/events/applied-mathematics-seminar-yihui-quek-quantum-signal-and","timestamp":"2024-11-12T19:29:48Z","content_type":"text/html","content_length":"111291","record_id":"<urn:uuid:be2c87d2-cd26-4b89-b72a-23c50ffbe276>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00195.warc.gz"} |
Data Linearizing | Dagster Glossary
Dagster Data Engineering Glossary:
Data Linearizing
Transforming the relationship between variables to make datasets approximately linear.
Data linearizing - a definition
Linearizing data is a process where the relationship between variables in a dataset is transformed or adjusted to make it linear or approximately linear. This transformation is often done in data
engineering to simplify analysis, improve model performance, or meet assumptions of certain statistical methods.
Methods for linearizing data can vary depending on the nature of the dataset and the specific goals of the analysis. Common techniques include logarithmic transformations, exponential
transformations, polynomial transformations, and Box-Cox transformations. Additionally, feature engineering techniques can be employed to create new linear combinations of variables that better
capture the underlying relationships in the data.
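As one concrete illustration of the transformation techniques listed above, a Box-Cox transformation can be applied with SciPy; a small sketch on synthetic, strictly positive data (the data here is invented for illustration):

import numpy as np
from scipy.stats import boxcox

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # right-skewed, strictly positive values

# boxcox returns the transformed values and the lambda chosen by maximum likelihood
transformed, lam = boxcox(data)
print("selected lambda:", lam)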
When to use data linearizing in data engineering
Linearizing data can be useful in data engineering for several reasons:
Data Modeling: Many statistical and ML models assume a linear relationship between variables. By linearizing the data, we can make these models more applicable and effective. An example here is
ANOVA (Analysis of Variance): ANOVA tests for significant differences in the means of two or more groups. In its simplest form (one-way ANOVA), it assumes a linear relationship between a categorical
independent variable (factor) and a continuous dependent variable.
Normalization: Linearizing data can also involve normalizing it, which makes the data comparable and easier to work with across different scales. Normalization can involve techniques such as scaling
variables to have a mean of zero and a standard deviation of one.
Error Reduction: In some cases, linearizing data can help reduce errors or biases in the analysis. For example, when dealing with non-linear relationships, errors might be larger in certain parts of
the dataset. Linearizing the data can help mitigate this issue.
Interpretability: Linear relationships are often easier to interpret than non-linear ones. By linearizing the data, we can make the relationships between variables more transparent and
Assumptions of Analysis Techniques: Certain statistical techniques, such as linear regression, assume that the relationship between variables is linear. Linearizing the data ensures that these
assumptions are met, thereby improving the validity of the analysis.
An example of data linearizing in Python
To demonstrate the concept of linearizing data, we will use a common non-linear dataset that follows a power law relationship and then apply a linear transformation to linearize it. This process is
often used in data analysis to apply linear regression techniques on datasets that do not initially exhibit a linear relationship.
Consider a dataset that follows the equation y = ax^b, where a and b are constants. This is a common form of a power law relationship. To linearize this data, we can apply a logarithmic
transformation to both sides of the equation, resulting in a linear relationship that can be analyzed with linear regression techniques.
The transformed equation becomes log(y) = log(a) + b·log(x).
Here is a step-by-step Python example:
1. Generate a synthetic dataset that follows a power law relationship.
2. Plot the original dataset to show its non-linear nature.
3. Apply a logarithmic transformation to both x and y to linearize the data.
4. Plot the transformed (linearized) dataset.
5. Perform a linear regression on the transformed data as a demonstration of how linearization allows for linear modeling techniques to be applied to non-linear data.
The dataset includes random noise, simulating data variability. The plots and linear regression analysis demonstrate how the logarithmic transformation still effectively linearizes the data, allowing
for linear regression to be applied. The added variability makes the data analysis scenario more representative of real-world data challenges.
Noise is added as a small fraction of the y_power_law value to minimize the risk of y becoming non-positive. This approach should prevent the occurrence of NaN values during the logarithmic
transformation and allow the linear regression analysis to proceed without errors.
Please note that you need to have the necessary Python libraries installed in your Python environment to run this code.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# Step 1: Generate a synthetic non-linear dataset
np.random.seed(0) # For reproducibility
x = np.random.uniform(1, 10, 100) # Generate 100 random samples from 1 to 10
a = 2
b = 3
# Ensure y is always positive by adjusting the magnitude of noise
# Calculate the power law component
y_power_law = a * x**b
# Add noise that is a small fraction of the y_power_law value to avoid negative or zero values
noise = np.random.normal(0, y_power_law * 0.1, x.shape) # Adjust noise level here
y = y_power_law + noise # y follows a power law relationship with x, with adjusted noise
# Step 2: Plot the original dataset
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.scatter(x, y, color='blue')
plt.title('Original Non-linear Dataset with Adjusted Variability')
# Step 3: Linearize the dataset
x_log = np.log(x)
y_log = np.log(y) # This should now work without resulting in NaN values
# Step 4: Plot the transformed (linearized) dataset
plt.subplot(1, 2, 2)
plt.scatter(x_log, y_log, color='red')
plt.title('Transformed Linearized Dataset')
# Perform linear regression on the transformed data
model = LinearRegression()
model.fit(x_log.reshape(-1, 1), y_log) # Reshape x_log for sklearn
slope = model.coef_[0]
intercept = model.intercept_
print(f"Linear regression on transformed data: y = {slope:.2f}x + {intercept:.2f}")
# Plotting the regression line
y_pred = model.predict(x_log.reshape(-1, 1))
plt.plot(x_log, y_pred, color='green', label=f'y = {slope:.2f}x + {intercept:.2f}')
plt.legend()
plt.show()
Running this code will output a plot along these lines:
This example demonstrates how to linearize non-linear data to analyze it using linear methods. The first plot (blue) shows the original non-linear relationship, while the second plot (red) shows the
data after applying a logarithmic transformation, making it linear. The linear regression model is then fitted to the transformed data, demonstrating the effectiveness of linearization in analyzing
non-linear relationships.
Other data engineering terms related to
‘Data Transformation’: | {"url":"https://dagster.io/glossary/data-linearizing","timestamp":"2024-11-10T00:13:13Z","content_type":"text/html","content_length":"72833","record_id":"<urn:uuid:8ff63657-f4f6-49fe-b37e-dceb3e57d55b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00303.warc.gz"} |
Repêchage – CMO Qualifying Repêchage
The competition has been marked.
We have emailed all participants to inform them of their results. If you are a competitor and can’t find your results in your email, first check your spam folder.
Those whose Repêchage results earn them an invitation for the CMO or the CJMO will be emailed a separate formal invitation with instructions within two more business days. Anyone can download the problem set to challenge themselves.
The official solutions for this year’s competition are also available for download.
The Repêchage is intended for students whose COMC scores were just a bit below the cutoff score, for direct invitations to the CMO. That is, sometimes you make a small but costly mistake that
obscures your overall problem solving talent. We want to provide you a “second chance” to demonstrate you deserve to write the CMO.
This is similar to a “take-home” exam.
Participants in the Repêchage get a week to solve a set of (usually 8) problems. The responses must show all your work. Students may submit handwritten work (scanned and uploaded) or PDF, such as
from using LaTeX. The use of LaTeX does not factor into the scoring, but it does eliminate penmanship problems.
Results and the Olympiad
The results are normally announced before the end of February on this page and by email. At the discretion of the marker, the result may be “pass/fail” for an invitation to the CMO or may have
detailed scoring released to the participants.
The competitors who demonstrate the most innovative, clear, and complete solutions are immediately invited to register for the Canadian Mathematical Olympiad (CMO), Canada’s most prestigious math
competition for secondary students and the gateway representing Canada at the summer’s International Mathematical Olympiad (IMO).
Students in grade ten or under who do well on the Repêchage, but not well enough for the CMO earn invitations to the Junior CMO (CJMO).
There are no other prizes or certificates for participation.
How Do I Earn an Invitation?
Students who participated in the COMC and did well, but did not qualify for the CMO are invited to write the Repêchage. Approximately 75 students are invited to write the Repêchage. Based on the
results, roughly 20 of these students are selected and invited to participate in the top national math competition: the Canadian Mathematical Olympiad (CMO).
Preparing for the Repêchage
In addition to the archived Repêchage problem sets and solutions below, students are encouraged to use past COMC and CMO problems to help prepare for the Repêchage. The CMS also has other
problem-solving resources available for competitors.
Feel free to download the problems or solutions from previous years to sharpen your problem solving skills!
The results are in!
• 76 students were invited to write based on their results in COMC 2022,
• 65 competitors registered and submitted their work,
• 22 of these earned invitations to the 2023 CMO, and
• 18 other competitors earned invitations to the 2023 CJMO.
Score Distribution: This year, unlike in previous years, we are releasing scores to the participants. The Repechage was scored out of 100 possible points. The following histogram indicates the | {"url":"https://cms.math.ca/competitions/repechage/","timestamp":"2024-11-02T04:46:19Z","content_type":"text/html","content_length":"198048","record_id":"<urn:uuid:6d6080fe-b467-4eb7-b25a-c33e14280584>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00236.warc.gz"} |
Evaluating the Definite Integral of a Quotient by Using Substitution
Question Video: Evaluating the Definite Integral of a Quotient by Using Substitution Mathematics • Higher Education
Determine ∫_(1)^(e) (ln x/x) dx.
Video Transcript
Determine the integral from one to e of the natural logarithm of x divided by x with respect to x.

We're given a definite integral which we're asked to evaluate. We can see our integrand is the natural logarithm of x over x. And this is not easy to integrate. We don't know an antiderivative for this expression. And this is the quotient of two functions. And we don't know how to integrate this directly. So, we're going to need to find some other way of integrating this.

We might be tempted to use integration by parts. However, in this case, we'll see that integration by substitution is easier. And the reason integration by substitution will be easier is to take a look at our numerator, the natural logarithm of x. If we were to try integrating this by using the substitution u is equal to the natural logarithm of x, then by differentiating both sides of this equation with respect to x, we would see that du by dx is one over x.

And this is a useful result. Since one over x appears in our integrand, it will help simplify our integral. So, we're ready to start using our substitution. First, although we know du by dx is not a fraction, when we're using integration by substitution, it can help to think of it a little bit like a fraction. This gives us the equivalent statement in terms of differentials: du is equal to one over x dx.

Next, remember, we're evaluating a definite integral by using integration by substitution. So, we need to rewrite our limits of integration. We do this by substituting our limits of integration into our expression for u. To find our new upper limit of integration, we substitute x is equal to e into this expression. This gives us u is equal to the natural logarithm of e, which we know is equal to one. And we do the same to find our new lower limit of integration. We substitute in x is equal to one, giving us u is equal to the natural logarithm of one, which we know is equal to zero.

So, we're now ready to rewrite our integral by using our substitution. First, we showed the new limits of integration would be lower limit zero, upper limit one. Next, we'll replace the natural logarithm of x with u. And then finally, we know that one over x dx is equal to du. So, we've rewritten our integral as the integral from zero to one of u with respect to u.

And we can just evaluate this integral by using the power rule for integration. The integral of u with respect to u is u squared over two. Then, all we do is evaluate this at the limits of integration, giving us one squared over two minus zero squared over two, which we can calculate is just equal to one-half. Therefore, by using the substitution u is equal to the natural logarithm of x, we were able to show the integral from one to e of the natural logarithm of x over x with respect to x is equal to one-half.
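The result can also be checked symbolically. A minimal sketch (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Substituting u = ln(x) turns the integrand into u du; SymPy confirms the value.
result = sp.integrate(sp.ln(x) / x, (x, 1, sp.E))
print(result)   # 1/2
```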
Water storage dam survey. Conduct volume calculations for water management.
Tailings dam pit expansion and capping project
Tailings dam survey. Calculate current volume. Plot expected life span of dam.
Volume and area calculations for current fresh water dams and unused tailings dams.
Volume and area calculations using a combination of LIDAR and land survey.
Dam expansion surveys.
Superstructure -- from Wolfram MathWorld
In nonstandard analysis, the limitation to first-order analysis can be avoided by using a construction known as a superstructure. Superstructures are constructed in the following manner. Let X be a set (whose elements are not themselves sets), and let S_0(X) = X. Define inductively S_{n+1}(X) = S_n(X) ∪ P(S_n(X)), where P denotes the power set, and let the superstructure over X be S(X) = ⋃_{n≥0} S_n(X). An entity of S(X) is simply one of its elements.
Using the definition of ordered pair provided by Kuratowski, ordered pairs, relations, functions and sequences over (for example) the set of real numbers can all be represented as entities of the superstructure over the reals.
To do nonstandard analysis on the superstructure, one forms an ultrapower of the relational structure; Łoś' theorem then yields the transfer principle of nonstandard analysis.
Rules of Thumb
Used correctly, rules of thumb (sometimes known as "heuristics") can assist significantly in pilot decision making and understanding. A rule of thumb is a principle with broad application that is not
intended to be strictly accurate or reliable for every situation. It is an easily learned and easily applied procedure for approximately calculating some value. It is particularly useful as a means
of cross-checking or confirming the validity of information being displayed by aircraft navigation systems and flight management systems.
1 in 60 Rule
A 1 degree offset angle at 60 nm equates to 1 nm of displacement.
Flying speeds that simplify mental arithmetic can help you in many ways, such as retaining situational awareness during radar vectoring.
120Kts = 2nms/min
180Kts = 3nms/min
240Kts = 4nms/min
300Kts = 5nms/min
The 1 in 60 rule combined with Speed/Distance/Time assumptions is the basis of many other 'rules of thumb' that can be useful in pilot navigation (or to check that an FMS-calculated track makes sense). For example:
At 120 kt groundspeed, the aircraft travels 60 nm in 30 minutes. A 10 kt wind blows the aircraft 5 nm in 30 minutes. So at 120 kt groundspeed, a 10 kt crosswind will cause 5 degrees of drift.
Maximum drift angle (Max Drift) = Windspeed divided by Groundspeed in miles per minute
1 m/s = 2 kts = 4 km/hr approx
Crosswind Component
Useful for evaluating runway crosswind from reported wind, the crosswind is a function of the SINE of the angle between the runway and the wind direction. Therefore, crosswind can be estimated as
Angle between wind and runway (degrees) | Crosswind component (% of wind strength) | Sine of angle between wind direction and runway
0                                       | 0                                        | 0
15                                      | 25                                       | 0.26
30                                      | 50                                       | 0.5
60 or more                              | 100                                      | 0.87
The analogue clock face provides an easy way to remember this:
15 min = ¼ of an hour; 15 degrees off = ¼ of the total wind across
30 min = ½ of an hour; 30 degrees off = ½ of the total wind across
60 min = a full hour; 60 degrees off = all of the wind across
A similar process can be used to estimate wind effect on groundspeed
Combining Max drift and Crosswind component:
Flying at 420 kt groundspeed (7 nm/min) in the vicinity of a 60 kt wind (approx. 8½ degrees max drift) headwind from 30 degrees off track, the expected drift angle is just over 4 degrees.
Flying at 420 kt airspeed in the vicinity of a 60 kt wind from 30 degrees off track, groundspeed will be approximately 360 kt (~6 nm/min) so max drift is 10 degrees and the expected drift angle will
be 5 degrees.
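For readers who prefer to check these rules numerically, here is a rough Python sketch of the crosswind and drift estimates above (the function names are my own, not from any navigation library):

```python
import math

def crosswind_component(wind_speed_kt, wind_angle_deg):
    # Crosswind = wind speed x sine of the angle between the wind and the runway/track.
    return wind_speed_kt * math.sin(math.radians(wind_angle_deg))

def max_drift_deg(wind_speed_kt, groundspeed_kt):
    # Rule of thumb: max drift = wind speed / groundspeed in nm per minute.
    return wind_speed_kt / (groundspeed_kt / 60.0)

def drift_angle_deg(wind_speed_kt, wind_angle_deg, groundspeed_kt):
    # 1-in-60 rule: drift = crosswind component / groundspeed in nm per minute.
    return crosswind_component(wind_speed_kt, wind_angle_deg) / (groundspeed_kt / 60.0)

# The worked example above: 420 kt groundspeed, 60 kt wind 30 degrees off track.
print(round(max_drift_deg(60, 420), 1))         # ~8.6 deg ("approx. 8 1/2 degrees")
print(round(drift_angle_deg(60, 30, 420), 1))   # ~4.3 deg ("just over 4 degrees")
```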
Slant Range
Overhead a Distance Measuring Equipment (DME) the indicated range will be equal to the altitude of the aircraft. One NM is approximately 6,000’ (actually 6,076’)
Horizon Range
The horizon (in nautical miles) will be approximately the square root of the height in feet:
• At 10,000ft, the horizon is at approximately 100nm
• At 20,000ft, the horizon is at approximately 140nm
• At 30,000ft, the horizon is at approximately 170nm
Descent Range
Different types will have different performance so pilots must establish and check any ‘rule’ for their own aircraft.
30 per 10 plus 10…
For many older jet transports, a normal descent from cruise altitude descent required about 30 nm for each 10,000ft of height loss and a further 10 nm to slow down. Therefore:
30,000’ cruise = (3 x 30) + 10 = 100 nm descent
35,000’ cruise = (3.5 x 30) + 10 = 115 nm descent
Although not strictly accurate, it provided a good first guesstimate.
Modern, more efficient aircraft, will need greater distances but similar rules of thumb can often be defined from a review of performance figures and line experience. You may find that (e.g.) “40 per
10 plus 15” works better for your type. The important point here is that well practiced rules of thumb may need to be revised dramatically when changing from one type to another.
Similarly, to confirm that a descent profile is going well:
30 out at 10 and 250….
Thirty miles from the airport at 10,000' and 250 knots.
If, at 30 nm from destination, the aircraft is still above either 10,000’ or 250 kt (or both!), getting down and reducing speed to achieve a stabilized approach will be a real challenge in many jet aircraft.
3 degree glideslope = 300’/nm to touchdown
Again from the 1 in 60 rule: 3 degrees at 60 nm ~ 3 nm ~ 18,000’, so 3 degrees at 6 nm ~ 1,800’ and 3 degrees at 1 nm ~ 300’.
This is not exact – and approach plates will show precise figures for any approach - but it provides a simple way to spot any gross errors.
Rate of Descent on Final Approach
For a 3 degree glideslope, required rate of descent in feet per minute is approximately equal to ground speed in knots multiplied by 5.
From the above, at 120 knots GS, the rate of descent to maintain a 3 degree glideslope is approximately 600 fpm
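The descent rules lend themselves to the same kind of cross-check. The sketch below simply encodes the "30 per 10 plus 10" and "groundspeed x 5" rules, with the constants left as parameters because they are type-specific assumptions:

```python
def descent_distance_nm(cruise_altitude_ft, nm_per_10k_ft=30, slowdown_nm=10):
    # "30 per 10 plus 10": ~30 nm per 10,000 ft of height loss, plus ~10 nm to slow down.
    return cruise_altitude_ft / 10_000 * nm_per_10k_ft + slowdown_nm

def rod_fpm_3deg(groundspeed_kt):
    # Rate of descent on a 3-degree glideslope: roughly groundspeed (kt) x 5.
    return groundspeed_kt * 5

print(descent_distance_nm(35_000))   # 115 nm, as in the example above
print(rod_fpm_3deg(120))             # ~600 fpm
```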
Payload and Fuel
It is always useful to check mentally that loading figures make sense. While it may not be true for every situation (so pilots must review the circumstance of their own operation before using this),
many pilots find the following rule of thumb effective:
10 pax equates to 1 ton.
For a very rough estimate: Trip fuel = flight time x cruise fuel flow
At the threshold, 1/2 LOC dot = 1/2 runway width
On a very foggy take-off, if you think you are lined up on the C/L lights and you see half a dot deviation on the ILS, you must be looking at the edge lights!
Weather Radar
You can use the 1 in 60 rule to determine height of weather returns. However, remember the beam width is typically +/- 2 deg.
You can measure beam width in flight by looking for the range of the (first) ground return from altitude for a given search angle, e.g.: At 30,000ft, tilt -1 deg, if first ground return is 100 miles,
beam width is +/-2 deg (lower edge of the beam is tilted -3 degrees).
If you have any other rules of thumb that you find useful then please send the information to the Editor
Navigation Computer - Position Update
The Navigation Computer system and the Engineering computer system provide the information to update the position of the StarShip in space. The Position Update task in the simulation software has no
direct controls, like switches or lights. It is a task that executes in the background, but does a few important things which are quite noticable at the outside. The responsibilities for the Position
Update task are:
• calculation of the position in space of the StarShip every (wall-clock) second
• update the position of the StarShip
• check if the StarShip enters a territory (Klingon, Romulan, Neutral Zone) and activate the correct Alert Status
• determine if the StarShip is on a collision course and issue a warning if it is
• calculate the new course heading and the course change angle for the DOM meter.
Software implementation
Whenever the Navigation Computer accepts a course change command, some (software) modules are called that process a specific function, such as:
• CALCUTF - calculates the so-called Unit-of-Travel Factors.
These are numbers for the X-, Y- and Z-direction, UTX, UTY and UTZ. These numbers are the amount the StarShip will move in space within one second. Of course, the velocity of the StarShip has
also to be taken into account.
• CALCVCT - calculate the vector (course heading).
This is a calculation with some 3D geometry to determine the new course heading. The course heading is an X/Y angle (from the X-axis to the Y-axis) and an XY/Z angle, the angle from the X/Y plane to the Z-axis.
short DOM.
DOM - "Degrees Of Movement" meter
Below the DOM meter there are 3 buttons. With these buttons you can allocate the meter to a specific function. The 3 functions are mutually exclusive since there is just one meter. Each of the
functions is explained below.
1. Navigation course change angle
2. Tactical torpedo tracking monitor
3. Sciences area scan indicator
Navigation course change angle
When you press the left button the light in that button is turned on and indicates that the DOM meter is allocated to the Navigation Computer.
In the normal situation the DOM meter reads zero and the light is on continuously.
When the course of the StarShip changes, the Position Update task calculates the "error" angle. This error angle is the angle between the current course heading and the entered new course heading.
This error angle is displayed on the DOM meter. The software moves the needle of the meter fast to the error angle value. Since there is a course deviation, the light in the first button starts to
blink on and off. To simulate the course correction to the new course heading the needle of the DOM meter slowly returns to zero. When the needle is at zero, so no course deviation exists anymore,
the light in the first button is lit continuously again. Whenever the button is pressed a second time the light goes off and the DOM meter is no longer allocated to the Navigation Computer. This also
happens when one of the other 2 buttons is pressed.
Tactical torpedo tracking monitor
When you press the button in the middle the light in that button is turned on and indicates that the DOM meter is allocated to the Tactical Computer.
In the normal situation the DOM meter reads zero and the light is on continuously. When a photon topedo is launched the torpedo has a course heading that will intercept the target. This course
heading is displayed as an angle on the DOM meter. As long as the torpedo exists the light in the button in the middle blinks. When the target changes its position the tracking capabilities of the
photon torpedo are reflected in the adjustment of the needle of the DOM meter.
When the torpedo no longer exists (for example detonated!) the needle returns to zero.
Sciences area scan indicator
When you press the right button the light in that button is turned on and indicates that the DOM meter is allocated to the Sciences Computer.
In the normal situation the DOM meter reads zero and the light is on continuously.
When the Sciences computer runs the "Scan Area" or "Scan Random" command, the needle of the meter indicates in what direction the scan is currently active. For the "Scan Area" command the needle
sweeps through a 60 degree range back and forth. For example, the needle moves from 120 degrees to 180 degrees, and then back from 180 degrees to 120 degrees. As long as the "Scan Area" command is
running the needle of the meter shows the 60 degree range. (Note: "Scan Area" scans a specified area of space). The "Scan Random" command is like the "Scan Area" command but now the Sciences Computer gathers information from everywhere. One 60 degree range is scanned and then, at random, another 60 degree area is scanned. So, the needle of the DOM meter now sweeps through a 60 degree range and then jumps to another 60 degree range, and so on.
Future developments
At this moment all calculations are done in 16-bit integer. However, I want to add more precision, which in fact boils down to more resolution of the StarShip's position in space. Especially at low
travel speeds (impulse engines) I want a smaller update step. Also, even at high warp velocities, I want a more realistic travel time (within limits...).
So, what I am going to implement quite soon is fixed-point arithmetic. Every number is still an integer, but 16 bits form the "whole" number and another 16 bits form a fixed-point fraction. The nice thing about fixed-point arithmetic is that it is a lot faster than floating point arithmetic, because the processor still executes integer instructions. Not on 16 bits, but now on 32 bits, which is only slightly slower.
Also, I have learned a few things since I started this project early in the 1980's. The unit of travel, and distances in general, have NO dimension. That makes things a bit vague when talking about them, so I want to convert everything to the same unit, for example "light years". However, this sounds a bit "common", so I prefer to change all dimensions into parsecs, which is a bit more exotic. One parsec is equal to 3.26 light years.
If you are interested in writing your own fixed-point software, here is how it works.
Fixed point is an easy way to represent non-integer numbers with only integers in the processor. You do that by taking a certain amount of bits in a register and dedicate them to decimal positions
instead of integer values. So, if you have a 16-bit word, you can do 8.8 fixed point by using the upper 8 bits as the "whole" portion of the number, and the lower 8 bits as the "fractional" portion.
See the table for some examples.
│ 8.8 fixed point │
│"whole"."fraction" │16 bit hexadecimal │
│ 1.0 │ 0100 │
│ 2.5 │ 0280 │
│ 3.75 │ 03C0 │
│ 4.125 │ 0420 │
All that you do is using the high byte for whole numbers, and the low byte for the fraction. Since the fraction is 8 bits the precision is 1/256. The more bits you use for the fractional part the
better the precision gets.
For Motorola 68000 processors a good choice is 16.16 fixed point numbers. A register in these CPU's are 32 bits, the CPU has many 32 bit instructions (especially the 68020 which I use in the main
computer of my simulation), and a neat "SWAP" instruction that swaps the upper and lower 16 bits in a register.
Doing math in fixed point is easy.
To add and subtract fixed point numbers is just like the integer add and subtract operation.
1.50 + 1.50 = 3.0 0180h + 0180h = 0300h
2.75 - 2.25 = 0.5 02C0h - 0240h = 0080h
However, with multiplication and division you must make an adjustment for the number of bits that you choose for the fractional part. Otherwise the resulting number would be too high for
multiplication, and too low for division.
1.5 * 3.0 = 4.5    0180h * 0300h = 048000h; divide afterward by 100h (8 bit) --> 0480h
7.5 / 2.5 = 3.0    multiply 0780h beforehand by 100h (8 bit) --> 078000h; 078000h / 0280h = 0300h
So, for multiplication you remove the number of bits that you choose for the fractional part after the multiplication. For a division you multiply by the bits of the fractional parts before you do
the division.
Remember that you need more than 32 bits for 16.16 fixed point numbers when using MUL or DIV.
Actually, you can choose any representation. So if you have big numbers, but do not need a large precision, why not use 24.8 fixed point numbers?
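As an illustration only (Python rather than 68000 assembly, and with the fraction width as a parameter rather than hard-wired), the fixed-point rules described above look like this:

```python
FRAC_BITS = 16              # 16.16 fixed point, as suggested for the 68000/68020
ONE = 1 << FRAC_BITS        # 1.0 in fixed-point form

def to_fixed(x):  return int(round(x * ONE))
def to_float(f):  return f / ONE

def fx_add(a, b): return a + b                    # add/subtract need no adjustment
def fx_mul(a, b): return (a * b) >> FRAC_BITS     # drop the extra fraction bits afterwards
def fx_div(a, b): return (a << FRAC_BITS) // b    # pre-scale the dividend before dividing

# Python integers never overflow, but on a 68000 the intermediate product/dividend
# needs more than 32 bits, exactly as noted above.
print(to_float(fx_mul(to_fixed(1.5), to_fixed(3.0))))    # 4.5
print(to_float(fx_div(to_fixed(7.5), to_fixed(2.5))))    # 3.0
```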
What is Nodal Voltage Analysis Method? - Circuit Globe
Nodal Voltage Analysis Method
The Nodal Voltage Analysis is a method to solve an electrical network. It is used where it is essential to compute all branch currents. The nodal voltage analysis method determines the voltages and currents by using the nodes of the circuit.
A node is a terminal or connection of more than two elements. The nodal voltage analysis is commonly used for networks having many parallel circuits with a common terminal ground.
This method requires less number of the equation for solving the circuit.
In Nodal Voltage Analysis, Kirchhoff’s Current Law (KCL) is used, which states that the algebraic sum of all incoming currents at a node must be equal to the algebraic sum of all outgoing currents at
that node.
It is the method of finding the potential difference between the elements or branches in an electric circuit. This method defines the voltage at each node of the circuit. This method has two types of
nodes. These are the non-reference node and the reference node.
The non-reference nodes have a fixed voltage, and the reference node is the reference points for all other nodes.
In the nodal method, the number of independent node pair equations needed is one less than the number of junctions in the network. That is if n denotes the number of independent node equations and j
is the number of junctions.
n = j – 1
In writing the current expression, the assumptions are made that the node potentials are always higher than the other voltages appearing in the equations.
Let us understand the Nodal Voltage Analysis Method with the help of an example shown below:
Steps for Solving Network by Nodal Voltage Analysis Method
Considering the above circuit diagram, the following steps are explained below
Step 1 – Identify the various nodes in the given circuit and mark them.
In the given circuit, we have marked the nodes as A and B.
Step 2 – Select one of the nodes as the reference or zero-potential node; the node at which the maximum number of elements is connected is usually taken as the reference.
In the above figure, node D is taken as the reference node. Let the voltages at nodes A and B be V[A] and V[B] respectively.
Step 3 – Now apply KCL at the different nodes.
Applying KCL at node A, we obtain equation (1).
Applying KCL at node B, we obtain equation (2).
Solving equations (1) and (2), we will get the values of V[A] and V[B].
The nodal voltage analysis has the advantage that a minimum number of equations needs to be written to determine the unknown quantities.
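As a rough illustration of the method (the circuit below is a hypothetical two-node example, not the one in the figure above), the two KCL equations can be written in conductance form G·V = I and solved directly:

```python
import numpy as np

# Hypothetical values: a 10 V source feeds node A through R1; R2 joins A to B;
# R3 and R4 tie A and B to the reference node D (ground).
Vs, R1, R2, R3, R4 = 10.0, 2.0, 4.0, 8.0, 8.0

G = np.array([[1/R1 + 1/R2 + 1/R3, -1/R2],           # KCL at node A
              [-1/R2,              1/R2 + 1/R4]])     # KCL at node B
I = np.array([Vs / R1, 0.0])                          # source current injected at A

V_A, V_B = np.linalg.solve(G, I)
print(V_A, V_B)   # node voltages relative to the reference node
```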
A to Z of Excel Functions: The ISPMT Function
Welcome back to our regular A to Z of Excel Functions blog. Today we look at the ISPMT function.
This function calculates the interest paid (or received) for the specified period of a loan (or an investment) with a constant interest rate and equal principal repayments. In reality, this is quite an easy financial instrument to calculate using basic formulae, but the ISPMT function makes it slightly simpler than computing from first principles.
The ISPMT function employs the following syntax to operate:
ISPMT(rate, per, nper, pv)
The ISPMT function has the following arguments:
• rate: this is required and represents the constant interest rate for the loan or investment
• per: this is required, and specifies the period to be considered, between periods 1 and nper
• nper: this is also required and denotes the total number of payments for the loan or investment
• pv: also necessary, this is the present value, or the total amount that a series of future payments is worth now, also known as the principal (i.e. what you are borrowing).
It should be further noted that:
• the amount returned by ISPMT is the interest for the specified period; it takes no account of taxes, reserve payments or other fees sometimes associated with loans
• make sure that you are consistent about the units you use for specifying rate and nper. If you make monthly payments on a four-year loan at an annual interest rate of 12%, use 12%/12 for rate and
4*12 for nper. If you make annual payments on the same loan, use 12% for the rate and 4 for nper
• ISPMT counts each period beginning with zero (0), not with one (1)
• most loans use a repayment schedule with even periodic payments. The IPMT function returns the interest payment for a given period for this type of loan
• some loans use a repayment schedule with even principal payments. The ISPMT function returns the interest payment for a given period for this type of loan
• this is one of Excel’s financial functions which distinguishes between cash inflows (positive) and outflows (negative).
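To make the calculation concrete, here is a hedged Python sketch of the even-principal interest formula that ISPMT implements; the loan figures are assumptions for illustration, not taken from the article's spreadsheet example:

```python
def ispmt(rate, per, nper, pv):
    # With equal principal repayments, the outstanding principal at the start of
    # period `per` (counted from zero) is pv * (1 - per / nper); the interest for
    # that period is rate times that balance, negated to follow Excel's
    # cash-outflow sign convention.
    return -pv * rate * (1 - per / nper)

# Assumed example: 4,000,000 borrowed over 2 years at 10% p.a., monthly periods.
print(ispmt(0.10 / 12, 1, 2 * 12, 4_000_000))   # roughly -31,944
```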
Please see my example below:
We’ll continue our A to Z of Excel Functions soon. Keep checking back – there’s a new blog post every other business day.
A full page of the function articles can be found here.
Some Natural Phenomena Class 8 Notes
Welcome to Class 8 Science chapter 12 notes. On this page, you will find notes, questions, and answers to class 8 science chapter 12. These Some Natural Phenomena Class 8 Notes, explanations,
examples, and questions and answers are according to CBSE and the NCERT textbook. If you like the study material, feel free to share the link as much as possible.
A natural phenomenon can be defined as a naturally occurring event, calamity or physical process.
Types of natural phenomena include
• Thunder
• Weather
• Germination
• Erosion
Some natural phenomena can be destructive such as
• Earthquakes
• Lightning
• Volcanic eruption
• Cyclones
Charged Bodies
What is Charge?
Charge is most commonly used to refer to electric charge. It is a fundamental property of matter, like mass. It is a physical property because of which matter experiences a force in an electromagnetic field.
Electric charges may be positive or negative in nature. If there is no net electric charge, the matter is considered neutral or uncharged.
Types of charges and their interactions
Charges are of two types
1. Positive charge – When the matter has more protons than the number of electrons.
2. Negative charge – When the matter has more electrons than protons. A negatively charged body has an excess of electrons.
This concept arose from the fact that when we rubbed a glass rod with silk, the glass rod gained a positive charge while the silk fabric gained a negative charge.
What are electrons, protons, and neutrons, and where it is present?
Electrons, protons and neutrons are fundamental particles. Electrons are negatively charged particles that move around the nucleus of an atom.
Protons are positively charged particles. Neutrons are electrically neutral which means that they carry no charge.
Both protons and neutrons are concentrated at the centre of an atom known as the nucleus of the atom. Electrons surround the nucleus.
How the object becomes positively charged and negatively charged?
When an object loses electrons, it becomes positively charged because it has more protons than electrons. After gaining electrons, the objects become negatively charged.
When a glass rod is rubbed with silk cloth, for example, it becomes positively charged, whereas the silk cloth becomes negatively charged.
Figure (a) shows a glass rod being rubbed with a silk cloth. Figure (b) shows that on rubbing with silk cloth glass rod becomes positive in charge because electrons are rubbed off glass rod. Due to
this silk cloth becomes negative.
Charging by rubbing or friction
It means that when two objects rub against each other, electrons are transferred between them and they become charged.
For example, rubbing a plastic comb through dry hair charges it. This charged object (plastic comb) can now attract other charged and uncharged objects. The charged plastic comb is capable of
attracting scraps of paper.
Properties of charge
Like charges repel each other. This means that two positive charges repel each other. Similarly, two negative charges would also repel each other.
Unlike charges attract each other. This means that a positive charge and a negative charge would attract each other.
For example, a charged rubber balloon is repelled by another charged balloon, whereas an uncharged balloon is attracted by another charged balloon.
like charges repel
unlike charges attract
The reason for repulsion is that both the balloons contain the same type of charge, whereas attraction happens because both the balloons contain different types of charge.
Static charge vs Current Electricity
The electric charge produced by rubbing is known as a static charge, whereas charges that move together form an electric current. A static charge is one that does not move.
Current electricity, on the other hand, is the study of moving charges.
Transfer of Charge
There are two ways to transfer charge from one object to another.
1. Conduction – When a charged object makes contact with a conductor, charges are transferred through the conductor. The object gains the same charge as the charged body. This method requires
physical contact between the objects.
2. Induction – When a charged object is brought near a neutral object, the object gets induced and becomes charged. The object acquires the opposite charge to that of the charged body. This process
doesn't require physical contact.
The transfer of electric charges does not create or destroy charges and in this process charge remains conserved.
Electroscope
An electroscope is a device or instrument that can be used to test whether an object is carrying a charge or not. An electrical charge can be transferred from one charged object to another through a metal conductor. It consists of a metal rod with a thin metal strip or leaf attached to it at the bottom.
Because gold and silver are good conductors of electricity, they are commonly used to make electroscopes.
How does the electroscope detect the charge?
• A charged object is brought in contact with the open end of the wire.
• The charges are transferred through the wire, which is a good conductor of electricity.
• The gold plates also get charged and repel each other as they are similarly charged.
Discharged bodies
The body is said to be discharged if it loses its charge to the earth or any other body.
Explain why a charged body loses its charge if we touch it with our hand?
When a charged body is touched by our hand, our body conducts its charge to the earth as the human body is a good conductor of electricity.
The process by which charges are transferred from a charged object to the earth is called earthing. In general, every tall building has earthing to protect it from electric shocks caused by leakage of electric current.
Lightning – A natural destructive phenomenon
Lightning is an electric discharge seen in the sky between oppositely charged clouds or between charged clouds and the earth. Lightning is defined as the transfer of charge from cloud to another
cloud or from one cloud to the earth.
Cause of lightning
Lightning is caused by the accumulation of charges. When negative and positive charges meet, they produce a streak of bright light accompanied by sound. It causes a lot of damage.
Mostly, lightning occurs within the clouds. Lightning is also caused due to static electricity.
• During rain, the air current moves upwards and water droplets move downwards. This movement leads to the separation of charge in a cloud.
• The positive charges accumulate near the upper side and negative charges collect near the bottom of the cloud, which causes a rearrangement of charges on the ground surface.
• Thus, positive charges accumulate near the ground surface. When the accumulated charge becomes high, the flow of charges takes place.
• The flow of charges takes place through the air. Air is normally a poor conductor of electricity, but when the magnitude of the accumulated charge becomes high enough, the air gets ionized and allows the charge to flow.
• When the negative charge meets the ground's positive charge, it results in streaks of bright light.
• Lightning strikes the highest objects, like tall buildings, bridges, and monuments, and causes damage. To prevent this damage, lightning protection systems are installed.
Lightning Conductor
A metal rod (generally made of copper) placed on top of tall buildings with its lower end connected to the ground. It is used to protect buildings from the effects of lightning. When lightning
strikes, the metal rod, being a good conductor, provides an easy passage for the transfer of charge to the ground. This way, the electric discharge flows from the clouds into the ground without
damaging the building.
Things to do during lightning
• Switch off the electrical appliances like computer, TV, refrigerator etc.
• If travelling in a car or bus, remains inside the vehicle and shut all its doors and windows.
• Get inside as quickly as possible.
• Check the forecast before going outside in the monsoon.
Things to avoid during lightning.
• Do not roam outside when there is lightning.
• Avoid contact with running water.
• Do not lie on the ground.
• Do not sit in open vehicles.
• Do not carry an umbrella.
Earthquake – Another natural destructive phenomenon
A sudden trembling or shaking of the earth for a short interval of time is caused by disturbances deep inside the earth's crust. It can cause large-scale destruction. It is not possible to predict
the occurrence of an earthquake. An earthquake produces waves on the surface of the earth. Earthquakes may be caused by the sliding of tectonic plates. Sometimes, an earthquake may be followed by
aftershocks that occur as the rocks settle down in their new position.
Why do earthquakes occur?
• The earth’s crust is made up of fragments called plates, also called tectonic plates.
• These plates are continuously moving.
• Due to continuous motion, these plates slide past or collide with each other.
• The rocks at the boundaries of these plates get interlocked and prevent the plates from moving, which results in pressure being formed on these rocks.
• The increase in pressure leads to the slipping of rocks and causes the rocks to vibrate.
• These vibrations travel up to the surface and cause earthquakes.
Focus, Epicentre and Fault zones
1. Focus:-The point where the earthquake originates or starts is called the focus.
2. Epicentre:-The point on the surface of the Earth immediately above the focus is known as the epicentre.
3. Fault zones or seismic zones:-Weak zones (The boundaries of the tectonic plates) where earthquakes are most likely to occur are called fault zones or seismic zones.
Seismology, Seismic waves, and Seismograph
1. Seismology:- The study of earthquakes is called seismology.
2. Seismic waves:- The waves produced on the surface of the earth in an earthquake are called seismic waves.
3. Seismograph:- The instrument used to measure seismic waves is called a seismograph.
How can the intensity of an earthquake be measured?
• The destructive energy of an earthquake is measured on the Richter scale designed by an American Seismologist, Charles F. Richter, using a seismograph.
• On the Richter scale, an earthquake measuring
□ 2 to 4 – It is a mild earthquake and does not cause any damage.
□ 4 to 8 – It is moderate to severe.
□ 8 to 9 – It is very severe and destructive earthquakes. It causes a lot of damage to life and property.
• A major earthquake occurred in India on 8 October 2005 in Uri and Tangdhar towns in North Kashmir.
• On 26 January 2001, in the Bhuj district of Gujarat, a major earthquake occurred which caused a lot of damage to life and property.
• Both the Bhuj and Kashmir earthquakes had magnitudes greater than 7.5.
Protection against Earthquakes
People living in seismic zones have to be specially prepared. Firstly, the buildings in these zones should be so designed that they can withstand major tremors.
• Steps to protect ourselves in an earthquake
• Stay away from tall and heavy objects.
• Take shelter under a table.
• If you are outdoors, stay away from buildings, trees, and overhead power lines. Try to move to the open ground.
Frequently Asked Questions
What is a Lightning Conductor?
It's a device that protects buildings from the effect of lightning.
What is the effect of lightning conductor?
When lightning strikes a building, the conductor intercepts the discharge and conducts it safely into the ground.
How do you determine the intensity of earthquake?
The intensity of an earthquake is determined on the Richter scale, using a seismograph.
What is static electricity?
Static electricity is an electric phenomenon that involves the transfer of charged particles from one body to another.
Here is Some Natural Phenomena Class 8 Notes summary
• Positive and negative charges are the two types of charges.
• Unlike charges attract one another while like charges repel one another.
• Static charges are the electrical charges created by rubbing.
• Electric current is made up of moving charges.
• To determine whether a body is charged or not, use an electroscope.
• Lightning is a result of an electric discharge that occurs between clouds and the earth or between various clouds.
• An abrupt shaking or trembling of the earth is referred to as an earthquake.
Analysis of longitudinal vibration acceleration based on continuous time-varying model of high-speed elevator lifting system with random parameters
Issue Mechanics & Industry
Volume 22, 2021
Article Number 28
Number of page(s) 11
DOI https://doi.org/10.1051/meca/2021027
Published online 12 April 2021
Analysis of longitudinal vibration acceleration based on continuous time-varying model of high-speed elevator lifting system with random parameters
School of Mechanical and Electrical Engineering, Shandong Jianzhu University, Jinan 250101, Shandong Province, PR China
^* e-mail: zhangqing@sdjzu.edu.cn
Received: 4 June 2019
Accepted: 1 March 2021
In this paper, to study the influence of the randomness of the structural parameters of a high-speed elevator lifting system (HELS) caused by manufacturing and installation errors, a continuous time-varying model of the HELS is constructed, considering the compensation rope mass and the tension of the tensioning system. The Galerkin weighted residual method is employed to transform the partial differential equation, which has infinite degrees of freedom (DOF), into ordinary differential equations. A fifth-order polynomial is used to fit the actual running state curve of the elevator, and this is taken as the operating input. A precise integration method for the time-varying model of the HELS is proposed. The deterministic part and the random part of the longitudinal dynamic response of the HELS are derived by the random perturbation method. Using the precise integration method, the sensitivities with respect to the random parameters are determined by solving the random-part response expression of the time-varying model of the HELS, and the statistical characteristics of the acceleration response are analyzed. It is found that the responses are most sensitive to the line density of the hoisting wire rope for the longitudinal vibration velocity, displacement and acceleration, while the sensitivity to the elastic modulus of the wire rope is the smallest.
Key words: High-speed elevator lifting system / time-varying / random parameters / longitudinal vibration / acceleration response
© Q. Zhang et al., Published by EDP Sciences 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
1 Introduction
As a "vertically moving car", the elevator has been widely used in high-rise and super high-rise buildings. In developed countries, the number of people taking elevators per day is greater than that using other means of transportation, and elevators have become one of the symbols used to measure the degree of modernization of a country. With the development of elevators toward higher speeds and larger strokes, various vibration phenomena inevitably appear, and a large part of them is related to the elevator's lifting system.
Manufacturing error and installation error in the high-speed elevator lifting system are objective. The random parameters such as wire rope density, and elastic modulus existing in the lifting system
cause the vibration of the HELS to be random vibration. The random vibration system not only affects the eigenvalues and eigenvectors of the various modes of the system, but also affects the
statistical characteristics of the response [1]. In addition, studies have shown that when the initial conditions are consistent, the longitudinal vibration of the lifting system has a much greater
impact on the system than the lateral vibration. Therefore, it is of great significance to study the dynamic response of the longitudinal vibration random parameters of the HELS on the elevator car
vibration reduction, random parameter sensitivity analysis, and safety assessment.
At present, the research on HELS mainly focuses on the dynamic characteristics of deterministic parameters [2–7]. It is rare to consider the random parameters of HELS [1,8–10]. The research on the
random parameters of longitudinal vibration of HELS is rarer. Lin et al. [11] established elevator virtual prototype model through Solid Works, and analyzed the dynamics of the high-speed elevator
car with the ADAMS, then the dynamic model of i-DOF of the elevator system in the vertical direction was established, and the sensitivity analysis is used to optimize the elevator dynamic parameters.
Feng et al. [12] considered the time-varying characteristics of the elevator traction rope stiffness, established an elevator dynamics model with 8 DOF coupled vibration and performed modal analysis
on the system, according to the relationship between the natural frequency of the dynamic structure system and the excitation frequency difference, the failure mode of the system resonance is
defined, and the reliability sensitivity analysis was performed on the random variables of the system. Wu et al. [4] used virtual prototyping technology to analyze and simulate the elevator operation
dynamically, the 11 DOF vertical vibration model of the elevator system was established, through the sensitivity analysis of the high-speed elevator vibration signal, the influence of the main
dynamic parameters on the high-speed elevator vibration was obtained. Although the above studies consider random parameters in the longitudinal vibration of elevators, their research is based on lumped-parameter (discrete) models of the elevator. The ordinary differential control equations established by this type of model are simple, easy to understand, and easy to solve. However, because such models ignore the continuous characteristics of the wire rope, they cannot fully reflect the dynamic characteristics of the elevator lifting system.
The establishment of the distributed parameter model of the HELS draws on the research theory of the axially moving string, which is simplified into a section of axial motion string with concentrated
mass, which can better describe the flexible time-varying characteristics of the traction wire rope, so it is gradually being applied. Zhang et al. [13] simplified the elevator hoisting rope to a
variable length axial motion string with a certain mass attached to one end, the differential equations and energy equations for the vertical vibration of the HELS were established by the energy
method and the Hamilton principle. Bao et al. [2,14] used the Hamilton principle to construct lateral vibration control equations for a flexible wire rope with and without external excitation, and evaluated the theoretical model through experiments; the experimental results agree well with the theoretical predictions. In addition, in [15], considering the interaction between the
rigid motion and the deformation motion of the steel wire rope, the differential equation of the wire rope motion of the lifting system is constructed, and the model is analyzed. However, the above
literature does not consider the effect of the compensation rope mass and the tension of the tensioning system on the vibration of the HELS.
Therefore, comprehensively considering the influence of the compensation rope mass and the tension of the tensioning system, a time-varying continuous model of the HELS is constructed by combining the energy method and the Hamilton principle. The random perturbation method is used to derive the dynamic equations of the system response under random parameters. The precise integration method for the HELS is applied to analyze the sensitivities and standard deviations associated with the structural random parameters during the elevator operation process. The influence of each structural parameter on the dynamic characteristics of the lifting system is then analyzed.
2 Establishment of longitudinal time-varying model for high-speed traction elevator lifting system
To study the time-varying characteristics of the longitudinal vibration of traction rope conveniently, the modeling and solution of this paper are based on the following three assumptions:
• Hoisting ropes are continuous and uniform, with constant cross-sectional area A and elastic modulus E during movement;
• The influence of lateral vibration from the hoisting ropes is ignored, and elastic deformation caused by the vertical vibration of the hoisting ropes is smaller than the length of the ropes;
• The influences of bending rigidity on hoisting ropes, friction force, and airflow are ignored.
Figure 1 shows the time-varying model of longitudinal vibration in a high-speed traction elevator lifting system. The hoisting rope of the high-speed traction elevator is simplified as a
variable-length axially moving string under axial force. The specific structure of the car is ignored and it is simplified into a rigid weight block of mass m connected to the lower end
of the cord. ρ[1] is the density of the hoisting rope, A is the cross-sectional area, E is the elastic modulus, and ρ[2] is the density of compensation rope. The origin of the coordinates is the
tangent point of the traction sheave and the hoisting rope, and the direction vertically downward is the positive direction of the X axis. The length of hoisting rope at the top of the car from the
origin of the coordinate is l(t). The vibration displacement at the string x(t) is y(x(t), t), and v(t) is the operating speed of the high-speed traction elevator. l[0] is the maximum lift
height (high-speed traction elevators are generally used in super high-rise buildings; the height of the car is very small compared to the lift height, so the height of the car is neglected).
By using the finite deformation theory of a continuum, the displacement vector and velocity vector of x(t) in the X-axis direction are as follows:
$$ r = [x(t) + y(x(t),t)]\,j, \tag{1} $$
$$ V = [v(t) + y_t(x(t),t)]\,j, \tag{2} $$
where j is the unit vector in the X-axis direction, $y_t(x(t),t)$ is the partial derivative of $y(x(t),t)$ with respect to $t$, and $y$, $y_t$ represent $y(x(t),t)$ and $y_t(x(t),t)$, respectively.
Similarly, the displacement vector and velocity vector of the car in the X-axis direction are as follows:
$$ r_c = [l(t) + y]\,j, \tag{3} $$
$$ V_c = [v(t) + y_t]\,j. \tag{4} $$
The kinetic energy of the system can be expressed as follows:
$$ E_k = \frac{1}{2} m V^2 \big|_{x=l(t)} + \frac{1}{2}\rho_1 \int_0^{l(t)} V^2 \, ds. \tag{5} $$
The elastic potential energy of the system is
$$ E_s = \int_0^{l(t)} \left( P\,y_x + \frac{1}{2} E A\, y_x^2 \right) ds, \tag{6} $$
where $y_x(x(t),t)$ is the partial derivative of $y(x(t),t)$ with respect to $x$, and $y_x$ represents $y_x(x(t),t)$.
P is the tension in the rope at static equilibrium. While the hoisting rope is subjected to its own gravity and the gravity of the car, it is also subjected to the gravity of the tensioning rope and the pre-tensioning force f of the tensioning device. Thus, the tension P at static equilibrium can be expressed as
$$ P = \left[ m + \rho_1 (l(t) - x) + \rho_2 (l_0 - l(t)) \right] g + f. \tag{7} $$
The gravitational potential energy of the system is expressed as
$$ E_g = -\int_0^{l(t)} \rho_1 g\, y \, ds - m g\, y \big|_{x=l(t)}. \tag{8} $$
According to the Hamilton principle,
$$ I = \int_{t_1}^{t_2} \left[ \delta E_k - \delta E_s - \delta E_g \right] dt = 0. \tag{9} $$
The longitudinal vibration dynamic equations of the high-speed traction elevator lifting system are derived as follows:
$$ \rho_1 (y_{tt} + a) - P_x - \rho_1 g - E A\, y_{xx} = 0, \qquad 0 < x < l(t), \tag{10} $$
$$ m (a + y_{tt}) + \rho_1 v (v + y_t) + E A\, y_x + P - m g = 0, \qquad x = l(t). \tag{11} $$
Equation (11) is the boundary condition of the string at x = l(t).
Fig. 1
Time-varying model of the hoisting rope in an elevator lifting system.
3 Galerkin discretization of time-varying partial differential equations for the HELS
The algebraic equation coefficient matrix obtained by the Galerkin discretization method is symmetric, and its approximation accuracy is higher than that of the other methods. Therefore, the Galerkin method is used to discretize the partial differential control equation.
For facilitating the discretization, a dimensionless parameter $\xi = x / l(t)$ is introduced to normalize the original variables; the domain of $x$ then becomes the fixed domain $[0,1]$ of $\xi$. Assume that the solution of equation (10) can be represented by the infinite-DOF expansion
$$ y(x,t) = \sum_{i=1}^{n} \phi_i(\xi)\, q_i(t) = \sum_{i=1}^{n} \phi_i\!\left(\frac{x}{l(t)}\right) q_i(t), \tag{12} $$
where $\phi_i(\xi)$ is the trial function and $q_i(t)$ is the time-dependent generalized coordinate,
$$ \phi_i(\xi) = \sqrt{2}\,\sin\!\left(\frac{2i-1}{2}\pi\xi\right), \qquad i = 1,2,\cdots,n. \tag{13} $$
Then,
$$ y_x = \frac{1}{l(t)} \sum_{i=1}^{n} \phi_i'(\xi)\, q_i(t), \qquad y_{xx} = \frac{1}{l^2(t)} \sum_{i=1}^{n} \phi_i''(\xi)\, q_i(t), $$
$$ y_t = \sum_{i=1}^{n} \phi_i(\xi)\, \dot q_i(t) - \frac{\xi v}{l(t)} \sum_{i=1}^{n} \phi_i'(\xi)\, q_i(t), $$
$$ y_{tt} = \sum_{i=1}^{n} \phi_i(\xi)\, \ddot q_i(t) - \frac{2\xi v}{l(t)} \sum_{i=1}^{n} \phi_i'(\xi)\, \dot q_i(t) + \frac{2\xi v^2}{l^2(t)} \sum_{i=1}^{n} \phi_i'(\xi)\, q_i(t) - \frac{a\xi}{l(t)} \sum_{i=1}^{n} \phi_i'(\xi)\, q_i(t) + \frac{\xi^2 v^2}{l^2(t)} \sum_{i=1}^{n} \phi_i''(\xi)\, q_i(t). \tag{14} $$
Substitute equation (14) into the dynamic equation (10), multiply both sides by $\phi_j(\xi)$, and integrate $\xi$ over the range $[0,1]$. Substitute equation (14) into the boundary condition (11) and, after transformation, multiply both sides by $\phi_j(1)$. The original partial differential equations are discretized into the following equation by the weighted residual method:
$$ M\ddot q + C\dot q + Kq = F, \tag{15} $$
where $q = [q_1(t), q_2(t), \cdots, q_n(t)]^T$ is the generalized coordinate vector, and M, C, K and F are the mass, damping, stiffness and generalized force matrices, respectively, with
$$ M_{ij} = \rho_1 \delta_{ij} + \frac{m}{l}\,\phi_i(1)\phi_j(1), \qquad C_{ij} = -\frac{2\rho_1 v}{l}\int_0^1 \xi\,\phi_i'\phi_j\, d\xi + \frac{\rho_1 v}{l}\,\phi_i(1)\phi_j(1), $$
$$ K_{ij} = \frac{m v^2}{l^3}\,\phi_i''(1)\phi_j(1) - \frac{\rho_1 a}{l}\int_0^1 \xi\,\phi_i'\phi_j\, d\xi - \frac{\rho_1 v^2}{l^2}\int_0^1 \xi^2\,\phi_i'\phi_j'\, d\xi - \frac{EA}{l^2}\int_0^1 \phi_i''\phi_j\, d\xi, $$
$$ F_j = -\rho_1 a \int_0^1 \phi_j\, d\xi - \frac{m a}{l}\,\phi_j(1) - \frac{\rho_1 v^2}{l}\,\phi_j(1) - \frac{\rho_2 g (l_0 - l) + f}{l}\,\phi_j(1). $$
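As a purely illustrative sketch (the parameter values are placeholders, not the paper's data, and the term-by-term scaling follows the matrices as reconstructed above), the system matrices of equation (15) could be assembled numerically as follows:

```python
import numpy as np

def phi(i, xi):    # trial function of eq. (13)
    k = (2 * i - 1) * np.pi / 2
    return np.sqrt(2) * np.sin(k * xi)

def dphi(i, xi):
    k = (2 * i - 1) * np.pi / 2
    return np.sqrt(2) * k * np.cos(k * xi)

def ddphi(i, xi):
    k = (2 * i - 1) * np.pi / 2
    return -np.sqrt(2) * k ** 2 * np.sin(k * xi)

def assemble(l, v, a, n=4, rho1=1.0, rho2=0.343, EA=1.0e7,
             m=1500.0, l0=200.0, f=300.0, g=9.81):
    """Assemble M, C, K, F of eq. (15) at one instant; all numerical values are assumed."""
    xi, w = np.polynomial.legendre.leggauss(30)        # Gauss points on [-1, 1]
    xi, w = 0.5 * (xi + 1.0), 0.5 * w                  # mapped to [0, 1]
    M, C, K = (np.zeros((n, n)) for _ in range(3))
    F = np.zeros(n)
    for j in range(1, n + 1):
        F[j - 1] = (-rho1 * a * np.sum(w * phi(j, xi))
                    - (m * a / l + rho1 * v**2 / l
                       + (rho2 * g * (l0 - l) + f) / l) * phi(j, 1.0))
        for i in range(1, n + 1):
            I1 = np.sum(w * xi * dphi(i, xi) * phi(j, xi))
            I2 = np.sum(w * xi**2 * dphi(i, xi) * dphi(j, xi))
            I3 = np.sum(w * ddphi(i, xi) * phi(j, xi))
            M[i-1, j-1] = rho1 * (1.0 if i == j else 0.0) + m / l * phi(i, 1.0) * phi(j, 1.0)
            C[i-1, j-1] = -2 * rho1 * v / l * I1 + rho1 * v / l * phi(i, 1.0) * phi(j, 1.0)
            K[i-1, j-1] = (m * v**2 / l**3 * ddphi(i, 1.0) * phi(j, 1.0)
                           - rho1 * a / l * I1
                           - rho1 * v**2 / l**2 * I2
                           - EA / l**2 * I3)
    return M, C, K, F
```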
4 Precise integration method for vertical vibration model of the time-varying system in the high-speed traction elevator
For the time-varying dynamic model of the high-speed elevator traction hoisting system established above, because of its strong time-varying characteristics, the mass, damping and stiffness of the
system are changing every moment. It is difficult for the general numerical method to achieve high accuracy for this kind of problem. Precise integration method, due to its explicit stability and
high accuracy [16], has been widely used in solving dynamics of nonlinear time-varying systems, and achieved good results [17,18]. Therefore, for the high-speed elevator time-varying model
established in this paper, the precise integration method of the longitudinal time-varying model of HELS is proposed to analyze the model, so as to make the result more accurate.
First, following the introduction of the dual variable of the Hamiltonian system [19],
$$ p = M\dot x + \tfrac{1}{2}C(t)\,x, \qquad \text{or} \qquad \dot x = M^{-1}p - \tfrac{1}{2}M^{-1}C(t)\,x. \tag{16} $$
By substituting equation (16) into the dynamic equation, the following equation can be obtained:
$$ \dot p = \left( \tfrac{1}{4}C(t)M^{-1}C(t) - K(t) \right) x - \tfrac{1}{2}C(t)M^{-1}p + f(t). \tag{17} $$
The above equations are written in the general form of a linear system,
$$ \dot x = A x + D p + r_x, \qquad \dot p = B x + C p + r_p, \tag{18} $$
where $A = -M^{-1}C(t)/2$, $B = C(t)M^{-1}C(t)/4 - K(t)$, $C = -C(t)M^{-1}/2$, $D = M^{-1}$, $r_x = f(t)$, $r_p = 0$.
Therefore,
$$ \dot z = H z + \phi(t), \tag{19} $$
where $z = \begin{bmatrix} x \\ p \end{bmatrix}$, $H = \begin{bmatrix} A & D \\ B & C \end{bmatrix}$, $\phi(t) = \begin{bmatrix} r_x \\ r_p \end{bmatrix}$.
Assume that the nonhomogeneous term is linear within the time step $(t_k, t_{k+1})$; the equation is then
$$ \dot z = H z + \phi_k + \dot\phi_k (t - t_k). \tag{20} $$
Then, the solution at the instant $t_{k+1}$ can be written as
$$ z_{k+1} = T_k \left[ z_k + H_k^{-1}\left( \phi_k + H_k^{-1}\dot\phi_k \right) \right] - H_k^{-1}\left[ \phi_k + H_k^{-1}\dot\phi_k + \dot\phi_k (t_{k+1} - t_k) \right], \tag{21} $$
where $T_k = e^{H_k (t_{k+1} - t_k)}$.
The solution of the equation is thus reduced to the evaluation of the matrix $T_k$, and the accuracy of $T_k$ becomes the key to solving the equation. Zhong [19] proposed a $2^N$ algorithm based on the addition theorem. For the above formula,
$$ T_k = e^{H_k \Delta t} = \left( e^{H_k \Delta t / m} \right)^m = \left( e^{H_k \tau} \right)^m. \tag{22} $$
Choosing $m = 2^N$, and since $\Delta t$ is usually already a small time interval, $\tau = \Delta t / m$ is an extremely small interval, for which
$$ e^{H_k \tau} \approx I + H_k \tau + \frac{(H_k \tau)^2}{2} = I + T_a, \tag{23} $$
where $I$ is the identity matrix and $T_a = (H_k \tau)\,(I + H_k \tau / 2)$.
Therefore, the matrix $T_k$ can be decomposed as follows:
$$ T_k = (I + T_a)^{2^N} = \left[ I + T_a \right]^{2^{N-1}} \times \left[ I + T_a \right]^{2^{N-1}}. \tag{24} $$
Following the flow chart shown in Figure 2, after the repeated squaring the matrix is finally recovered as $T_k = I + T_a$. Then, according to equation (21), given the initial condition $z_0$, the steps are performed successively to obtain $z_1, z_2, \cdots, z_k, \cdots$, which is a typical "self-starting" algorithm.
Fig. 2
Operation flow chart.
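A minimal numerical sketch of the 2^N scheme of equations (22)–(24) is given below; it stores only the increment T_a while squaring, which is the point of the algorithm, and the test matrix is an assumed example rather than the paper's system:

```python
import numpy as np
from scipy.linalg import expm   # used only to check the result

def transfer_matrix(H, dt, N=20):
    """T = exp(H*dt) via the 2^N precise-integration (scaling-and-squaring) scheme."""
    tau = dt / (2 ** N)
    Ta = H @ (np.eye(len(H)) + H * (tau / 2)) * tau   # Ta = (H*tau)(I + H*tau/2), eq. (23)
    for _ in range(N):
        Ta = 2 * Ta + Ta @ Ta                         # (I + Ta)^2 = I + (2*Ta + Ta@Ta)
    return np.eye(len(H)) + Ta                        # finally T = I + Ta

H = np.array([[0.0, 1.0],
              [-4.0, -0.2]])                          # assumed 2x2 test matrix
T = transfer_matrix(H, 0.01)
print(np.allclose(T, expm(H * 0.01)))                 # True
```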
5 The quintic polynomial fitting for the running state curve of the elevator
According to the actual operating state of the elevator, the stages of the running state curve for an upward trip are described in Table 1 below. The fifth-order polynomial (25) is used to fit the actual running state of the elevator, and the running curves of the elevator in each stage are obtained as shown in Figure 3 below.
$$ l_i(t) = C_{0i} + C_{1i} t + C_{2i} t^2 + C_{3i} t^3 + C_{4i} t^4 + C_{5i} t^5. \tag{25} $$
Table 1
Stage division of elevator operation state curve.
Fig. 3
Elevator running state curve.
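For illustration, the fifth-order fit of equation (25) amounts to an ordinary polynomial least-squares fit for each stage; the sampled data below are assumed, not the measured running curve:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 50)                 # time samples for one running stage (s)
l_samples = 1.25 * t**2 - 0.15 * t**3         # assumed smooth position data l(t) (m)

coeffs = np.polyfit(t, l_samples, 5)          # returns C5..C0 (highest power first)
l_fit = np.poly1d(coeffs)
v_fit = l_fit.deriv(1)                        # v(t) = dl/dt
a_fit = l_fit.deriv(2)                        # a(t) = d^2 l/dt^2
```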
6 Sensitivity analysis of random parameters of lifting system based on random perturbation method
The structural parameters of the HELS (such as the line density of the traction rope ρ[1], the cross-sectional area of the traction rope A, and the elastic modulus of the traction rope E) have a certain randomness. Therefore, M, C, K and F in the differential equation of the system dynamics are stochastic, and the following decomposition is used:
$$ M = M_d + \epsilon M_r, \quad C = C_d + \epsilon C_r, \quad K = K_d + \epsilon K_r, \quad q = q_d + \epsilon q_r, \quad F(t) = F_d(t) + \epsilon F_r(t), \tag{26} $$
where ϵ is a small parameter [20,21], and the subscripts d and r respectively denote the deterministic part and the random part of each random variable.
Substituting equation (26) into equation (15), expanding, and comparing the coefficients of equal powers of ϵ while omitting higher-order terms above O(ϵ²), the following equations are obtained:
$$ \epsilon^0:\quad M_d \ddot q_d + C_d \dot q_d + K_d q_d = F_d(t), \tag{27} $$
$$ \epsilon^1:\quad M_d \ddot q_r + C_d \dot q_r + K_d q_r = F_r(t) - \left( M_r \ddot q_d + C_r \dot q_d + K_r q_d \right). \tag{28} $$
Equations (27) and (28) represent the deterministic part and the random part of the response, respectively. For convenience, the random response q_r is divided into two parts:
$$ q_r = q_{r1} + q_{r2}, \tag{29} $$
where q_{r1} and q_{r2} respectively satisfy
$$ M_d \ddot q_{r1} + C_d \dot q_{r1} + K_d q_{r1} = F_r(t), \tag{30} $$
$$ M_d \ddot q_{r2} + C_d \dot q_{r2} + K_d q_{r2} = -\left( M_r \ddot q_d + C_r \dot q_d + K_r q_d \right). \tag{31} $$
Equations (30) and (31) represent the random responses due to the randomness of the excitation and the randomness of the parameters, respectively. For equation (31), the random quantities can be Taylor-expanded near the deterministic part b_{dj} (j = 1, 2, …, m) of the random parameters [22,23]:
$$ q_{r2} = \sum_{j=1}^{m} \frac{\partial q_d}{\partial b_j}\, b_{rj}, \qquad \dot q_{r2} = \sum_{j=1}^{m} \frac{\partial \dot q_d}{\partial b_j}\, b_{rj}, \qquad \ddot q_{r2} = \sum_{j=1}^{m} \frac{\partial \ddot q_d}{\partial b_j}\, b_{rj}, \tag{32–34} $$
$$ M_r = \sum_{j=1}^{m} \frac{\partial M_d}{\partial b_j}\, b_{rj}, \qquad C_r = \sum_{j=1}^{m} \frac{\partial C_d}{\partial b_j}\, b_{rj}, \qquad K_r = \sum_{j=1}^{m} \frac{\partial K_d}{\partial b_j}\, b_{rj}. \tag{35–37} $$
Substituting equations (32)–(37) into equation (31) and comparing the coefficients of b_{rj} gives
$$ M_d \frac{\partial \ddot q_d}{\partial b_j} + C_d \frac{\partial \dot q_d}{\partial b_j} + K_d \frac{\partial q_d}{\partial b_j} = -\left( \frac{\partial M_d}{\partial b_j}\ddot q_d + \frac{\partial C_d}{\partial b_j}\dot q_d + \frac{\partial K_d}{\partial b_j} q_d \right), \qquad j = 1,2,\ldots,m. \tag{38} $$
Solving equation (38) with the precise integration method for the vertical vibration model of the time-varying high-speed traction elevator system yields the sensitivities of the system response, $\partial q_d / \partial b_j$, $\partial \dot q_d / \partial b_j$ and $\partial \ddot q_d / \partial b_j$.
7 Analysis of mean and standard deviation of longitudinal vibration acceleration of high-speed elevators with random parameters
Define the covariance matrix of the displacement response of the continuous time-varying model as N[q], the random parameter covariance matrix as N[b], and the displacement response sensitivity
matrix as $[ ∂ q d ∂ b ]$.$N q = [ V a r ( q ( 1 ) ) C o v ( q ( 2 ) , q ( 1 ) ) ⋯ C o v ( q ( k ) , q ( 1 ) ) C o v ( q ( 2 ) , q ( 1 ) ) V a r ( q ( 1 ) ) ⋯ C o v ( q ( k ) , q ( 2 ) ) ⋮ ⋮ ⋱ ⋮ C o
v ( q ( k ) , q ( 1 ) ) C o v ( q ( k ) , q ( 2 ) ) ⋯ V a r ( q ( k ) ) ]$(39) $N b = [ V a r ( b 1 ) C o v ( b 2 , b 1 ) ⋯ C o v ( b m , b 1 ) C o v ( b 2 , b 1 ) V a r ( b 2 ) ⋯ C o v ( b m , b 2 )
⋮ ⋮ ⋱ ⋮ C o v ( b m , b 1 ) C o v ( b m , b 2 ) ⋯ V a r ( b m ) ]$(40) $[ ∂ q d ∂ b ] = [ ∂ q d ∂ b 1 ∂ q d ∂ b 2 ⋯ ∂ q d ∂ b m ]$(41)
where Var(q ^(k)) represents the variance of the k[th] element in the vector q, and Cov represents the covariance.$N q = [ ∂ q d ∂ b ] N b [ ∂ q d ∂ b ] T$(42)
The standard deviation of displacement response can be obtained by solving equation (36).$σ q i = ( ∑ j = 1 m ∑ k = 1 m ∂ q d i ∂ b j ∂ q d i ∂ b k σ b j σ b k ρ j k ) 1 / 2$(43)
where $σ q i$ is the standard deviation $[ V a r ( q ( i ) ) ] 1 / 2$ of the i-th element in vector X, ρ[jk] is the correlation coefficient between b[j] and b[k].
Similarly, the standard deviations of the velocity and acceleration responses can be obtained:
$\sigma_{\dot{q}_i} = \left( \sum_{j=1}^{m} \sum_{k=1}^{m} \frac{\partial \dot{q}_{di}}{\partial b_j} \frac{\partial \dot{q}_{di}}{\partial b_k} \sigma_{b_j} \sigma_{b_k} \rho_{jk} \right)^{1/2}$ (44)
$\sigma_{\ddot{q}_i} = \left( \sum_{j=1}^{m} \sum_{k=1}^{m} \frac{\partial \ddot{q}_{di}}{\partial b_j} \frac{\partial \ddot{q}_{di}}{\partial b_k} \sigma_{b_j} \sigma_{b_k} \rho_{jk} \right)^{1/2}$ (45)
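The covariance propagation of equations (42)–(43) can be sketched numerically in Python as follows. This is a minimal illustration with made-up sensitivity and parameter values, not the paper's data.

import numpy as np

J = np.array([[0.8, 0.1, 0.05]])          # placeholder d(q_d)/d(b_j) for one response component, 3 parameters
sigma_b = np.array([0.02, 0.03, 0.01])    # placeholder parameter standard deviations
rho = np.eye(3)                           # correlation matrix (independent parameters assumed)

N_b = np.outer(sigma_b, sigma_b) * rho    # parameter covariance matrix, eq. (40)
N_q = J @ N_b @ J.T                       # response covariance, eq. (42)
sigma_q = np.sqrt(np.diag(N_q))           # response standard deviation, eq. (43)
print(sigma_q)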
8 Case analysis
A HELS is taken as an example. Its maximum operating speed is v = 5 m/s and the elevator operating parameters are as shown in Section 4. The elastic modulus of the wire rope is E = 8×10^10 N/m^2, the lifting wire rope cross-sectional area is A = 89.344 cm^2, the compensation rope line density is ρ[2] = 0.343 kg/m, and the tension is F = 300 N. The longitudinal time-varying model of the hoisting system has independent random parameters (the lift mass, the elastic modulus of the wire rope, and the wire rope density), each obeying a normal distribution. The coefficient of variation is taken as CV = 0.02, and the random parameters are listed in Table 2 below.
Table 2
HELS random parameter value.
8.1 Random parameter sensitivity analysis
The precise integration method is employed to solve the response sensitivity equation of the random parameter system, equation (38). The vibration displacement, velocity and acceleration response sensitivities with respect to each random parameter are obtained. After taking absolute values and calculating the mean values $E\left|\partial q_d/\partial b_j\right|$, $E\left|\partial \dot{q}_d/\partial b_j\right|$ and $E\left|\partial \ddot{q}_d/\partial b_j\right|$, the results are shown in Table 3.
It can be seen from Table 3 that, among the three random parameters, the vibration displacement, velocity and acceleration responses are all most sensitive to the linear density of the hoisting wire rope. For each of the three random parameters (the lift mass, the elastic modulus of the lifting steel wire rope and the wire rope density), the acceleration response is the most sensitive, followed by the velocity response, with the displacement response being the least sensitive.
Table 3
Random parameter sensitivity mean.
8.2 Random parameter acceleration and jerk response analysis
Using the response expression constructed by perturbation theory, the precise integration method is used to solve equation (38) and obtain the response acceleration $\ddot{q}_d$; the deterministic part of the longitudinal vibration acceleration response of the high-speed elevator is shown in Figure 4a, and the deterministic part of the longitudinal vibration jerk response is shown in Figure 4b.
Taking the random part of the random parameters as b[rj] = ±σ[bj] and substituting it, together with the obtained $\ddot{q}_d$, into equation (38), the random part of the acceleration response is solved and then superimposed on $\ddot{q}_d$ to obtain the total acceleration response, shown in Figure 5a. The total jerk response is shown in Figure 5b.
Analyzing the deterministic part of the acceleration and jerk responses of the HELS, it can be seen that the absolute values of the maximum longitudinal acceleration and jerk increase by about 50% once the influence of the random parameters is considered. The corresponding values of the total longitudinal acceleration and jerk responses show different degrees of dispersion at each moment, indicating that the dispersion of the longitudinal acceleration and jerk responses increases when the randomness of the parameters is taken into account.
Fig. 4
Lifting system longitudinal acceleration and jerk response determination section. (a) Acceleration response (b) Jerk response.
Fig. 5
Overall response of the longitudinal acceleration and jerk of the system. (a) Acceleration response (b) Jerk response.
8.3 Longitudinal acceleration and passenger comfort analysis of HELS
The interval 33–33.5 s, which contains the largest vibration acceleration, is selected as the study window for the acceleration response, and the deterministic response $\ddot{q}_d$ is taken as the acceleration response mean $\bar{\ddot{q}}$. The standard deviation $\sigma_{\ddot{x}}$ due to the randomness of the parameters is calculated by combining equation (30) with the precise integration method for the vertical vibration model of the time-varying high-speed traction elevator system, and the coefficient of variation CV is then computed; the results are shown in Table 4.
As can be seen from Table 4, when the coefficient of variation of the random parameters is 0.02, the coefficient of variation of the longitudinal vibration acceleration response of the high-speed elevator varies greatly. Comparing the deterministic part of the longitudinal vibration acceleration with the total acceleration obtained in the previous section, the actual response is more dispersed, and the effect of the randomness of the system parameters on the longitudinal vibration acceleration of the high-speed elevator is more evident.
The whole process of car operation is selected as the research object, and the vibration dose value (VDV) is used to assess passenger comfort [24]. The VDV is defined as:
$VDV = \left[ \int_0^T a_w^4(t)\,dt \right]^{1/4}$ (46)
where T is the duration of the vibration signal and $a_w(t)$ is the frequency-weighted acceleration of the vibration signal. The VDV values of the deterministic part and of the total acceleration of the high-speed elevator lifting system are calculated respectively, as shown in Table 5.
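A rough numerical Python sketch of the VDV integral in equation (46) is shown below; the sampling interval and the weighted acceleration signal are placeholders, not the measured elevator response.

import numpy as np

dt = 0.001                                   # s, assumed sampling interval
t = np.arange(0.0, 40.0, dt)                 # s, assumed run duration
a_w = 0.3 * np.sin(2 * np.pi * 1.5 * t)      # m/s^2, placeholder frequency-weighted acceleration

vdv = (np.sum(a_w ** 4) * dt) ** 0.25        # discrete form of eq. (46), units m/s^1.75
print(vdv)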
As can be seen from Table 5, both VDV values exceed 0.5. Compared with the deterministic part of the acceleration of the high-speed elevator lifting system, the VDV value increases by 4.71% once the influence of the random parameters is considered, which indicates that the random parameters reduce passenger comfort.
Table 4
Acceleration response mean, standard deviation and coefficient of variation.
Table 5
The VDV value of the determination part and total acceleration.
9 Conclusion
• In this paper, considering the mass of the compensation rope and the tension provided by the tensioning system, and based on axial string theory combined with the energy method and the Hamilton principle, the longitudinal vibration time-varying continuous model of the HELS is constructed. The Galerkin method is used to transform the infinite-dimensional partial differential equation into a set of ordinary differential equations with a finite number of DOF. A fifth-order polynomial is used to fit the actual operating state parameters of the high-speed elevator, which are input as parameters of the dynamic equation. The precise integration method for the longitudinal vibration model of the time-varying high-speed traction elevator system is proposed, and the random dynamics of the whole elevator running process are calculated.
• The deterministic and random parts of the response expression for the longitudinal dynamic response of the HELS are established by random perturbation theory. The displacement, velocity and acceleration sensitivity expressions with respect to the random parameters are determined by solving the random response expression, and the response sensitivity expressions are used to compute the sensitivity values for each random parameter. It is found that the longitudinal vibration velocity, displacement and acceleration responses are most sensitive to the hoisting wire rope density, followed by the lift mass, with the elastic modulus of the hoisting wire rope being the least sensitive. In the elevator manufacturing and installation process, parameters with high sensitivity should be strictly controlled to improve the longitudinal dynamic performance of the high-speed elevator.
• Through the analysis of the statistical characteristics of the acceleration response, the acceleration response and VDV values generated by the random parameters are calculated, which accurately reflects the degree of dispersion of the longitudinal acceleration response and the passenger comfort of the high-speed elevator under the influence of random parameters.
This research was supported by the Natural Science Foundation of Shandong Province (Grant No. ZR2017MEE049), the Introduce urgently needed talents project for the western economic uplift belt and the
key areas of poverty alleviation and development in Shandong Province.
Cite this article as: Q. Zhang, T. Hou, H. Jing, R. Zhang, Analysis of longitudinal vibration acceleration based on continuous time-varying model of high-speed elevator lifting system with random
parameters, Mechanics & Industry 22, 28 (2021)
Display *big.Rat Losslessly and Smartly in Golang
Floating-point numbers, as we know, are notorious for losing precision when their values become too large or too small. They are also bad at representing decimals accurately, yielding confusions like
0.1 + 0.2 != 0.3 for every beginner in their programming 101.
Albeit being imprecise, floats are good enough for most daily scenarios. For those that are not, however, Golang provides *big.Rat to the rescue. Rats are designed to represent rational numbers with arbitrary precision, addressing most flaws of floats, at the cost of much slower computation and a bigger memory footprint. For example, we can confidently compare 0.1 + 0.2 to 0.3 using Rats without caring about tolerance:
package main

import (
    "fmt"
    "math/big"
)

func main() {
    x, y := 0.1, 0.2
    f := x + y
    fmt.Printf("%.20f %v\n", f, f == 0.3)
    // 0.30000000000000004441 false

    a, b := new(big.Rat).SetFrac64(1, 10), new(big.Rat).SetFrac64(2, 10)
    c := new(big.Rat).Add(a, b)
    c2 := new(big.Rat).SetFrac64(3, 10)
    fmt.Printf("%s %v\n", c.FloatString(20), c.Cmp(c2) == 0)
    // 0.30000000000000000000 true
}
You may have noticed that the Rats are initialized by the z.SetFrac64(a, b) method, which sets z to the fractional number a/b. In fact there’s even a z.SetString() to parse a Rat from either its
fractional or decimal representation, which is a convenient utility:
r1, ok1 := new(big.Rat).SetString("3/5")
r2, ok2 := new(big.Rat).SetString("0.6")
In the above listing, both r1 and r2 equal the same number, 0.6. SetString() smartly infers the input format and performs the parsing.
Now let’s think about the reverse problem – how to display a *big.Rat as a string smartly and losslessly?
To clarify, we would like to obtain a string s from a given Rat z. s is formatted as a decimal when z can be written as a finite decimal, and otherwise formatted as a fraction. If we give the name SmartRatString() to such a function, some samples may be:
SmartRatString(new(big.Rat).SetFrac64(3, 5)) == "0.6"
SmartRatString(new(big.Rat).SetFrac64(1, 3)) == "1/3" // instead of 0.33333...
This is a legitimate use case. You may want to print out some numbers to the user, and expect they would be parsed exactly as they were if being typed back, for the motive of reproducibility or
whatever. Simultaneously, the numbers should be in decimal form whenever they could, to conform the preference of human.
Unfortunately, *big.Rat does not come with such conversion method. The most relevant ones we could find are RatString() and FloatString(prec). RatString() always converts the Rat into fractional form
a/b, while FloatString(prec) displays it as decimal form with exactly prec digits after the decimal point.
A straightforward thought is to combine the two utility methods in an adaptive way. If the Rat couldn’t be written as finite decimal, we call the RatString(). Otherwise, we compute the appropriate
prec for FloatString() such that the Rat is converted into decimal form without any truncation. In such a way, we reduce the problem into two simpler ones:
1. How to determine a Rat has a finite decimal representation?
2. How to compute the number of digits after the decimal point?
The answers to both problems are concealed in the factorization of the denominator. Say we have a rational number $z=a/b$ where $b \in \mathbb{Z}^+$ and $\gcd(a, b)=1$. $z$ has a finite decimal form if and only if $b=2^n5^m$ for some natural numbers $n$ and $m$, with $\max(n, m)$ being the number of digits after the decimal point. These conclusions can be derived from some easy math so we won't discuss them in this post.
The following section will focus on the implementation. We can sketch out the framework of SmartRatString():
func SmartRatString(r *big.Rat) string {
    denom := new(big.Int).Set(r.Denom())
    n := ...            // compute 2's power in the factorization of denom
    // denom = denom / 2^n
    m, isFiveExp := ... // check power of 5
    if !isFiveExp {
        return r.RatString()
    }
    return r.FloatString(int(max(n, m)))
}
Estimating n is the easiest part. We can compute n by counting zero bits at the rear of denom‘s binary form, with the help of TrailingZeroBits() method. Dividing denom by 2^n can also be achieved
efficiently with bitwise right shifting. We complete the first blank as follows:
func SmartRatString(r *big.Rat) string {
    denom := new(big.Int).Set(r.Denom())
    n := denom.TrailingZeroBits()
    denom.Rsh(denom, n)
    m, isFiveExp := ... // check power of 5
    if !isFiveExp {
        return r.RatString()
    }
    return r.FloatString(int(max(n, m)))
}
For the second part, however, there's no shortcut, at least none that I know of. We have to iteratively divide denom by 5 until the process cannot proceed. For readability I write a small function:
var intOne = new(big.Int).SetUint64(1)
var intFive = new(big.Int).SetUint64(5)

// log5 checks x in the form of $5^m$ or not. If so, isExp is true
// and cnt stores the power $m$.
func log5(x *big.Int) (cnt uint, isExp bool) {
    tmp2 := new(big.Int)
    m := new(big.Int) // m stores the modulo
    for x.CmpAbs(intOne) > 0 {
        tmp2.DivMod(x, intFive, m)
        if m.Sign() != 0 { // m != 0
            return cnt, false
        }
        cnt++
        x, tmp2 = tmp2, x
    }
    return cnt, true
}
This function is not efficient but it serves the purpose. We modify the second part of SmartRatString() accordingly:
var intOne = new(big.Int).SetUint64(1)
var intFive = new(big.Int).SetUint64(5)

// log5 checks x in the form of $5^m$ or not. If so, isExp is true
// and cnt stores the power $m$.
func log5(x *big.Int) (cnt uint, isExp bool) {
    tmp2 := new(big.Int)
    m := new(big.Int) // m stores the modulo
    for x.CmpAbs(intOne) > 0 {
        tmp2.DivMod(x, intFive, m)
        if m.Sign() != 0 { // m != 0
            return cnt, false
        }
        cnt++
        x, tmp2 = tmp2, x
    }
    return cnt, true
}

func SmartRatString(r *big.Rat) string {
    denom := new(big.Int).Set(r.Denom())
    n := denom.TrailingZeroBits()
    denom.Rsh(denom, n)
    m, isFiveExp := log5(denom)
    if !isFiveExp {
        return r.RatString()
    }
    return r.FloatString(int(max(n, m)))
}
With all these in hand, we have finished the complete function of SmartRatString().
Author: hsfzxjy.
License: CC BY-NC-ND 4.0.
All rights reserved by the author.
Commercial use of this post in any form is NOT permitted.
Non-commercial use of this post should be attributed with this block of text.
IGCSE: 1. General Physics : 1.2 Motion
What I should know:
• Define speed and calculate average speed from total time/total distance
• Plot and interpret a speed-time graph or a distance/time graph
• Recognise from the shape of a speed-time graph when a body is
  – at rest
  – moving with constant speed
  – moving with changing speed
• Calculate the area under a speed-time graph to work out the distance travelled for motion with constant acceleration
• Demonstrate understanding that acceleration and deceleration are related to changing speed, including qualitative analysis of the gradient of a speed-time graph
• State that the acceleration of free fall for a body near to the Earth is constant
When you click the next button the quiz will begin.
Note: You can enlarge any image by clicking on the image.
How to Notate the Moves in Rubik’s 3×3 Magic Cube
The Rubik’s 3×3 Magic Cube is the best-selling and most popular puzzle on the market. Its appeal is undeniable, and the fact that you can play it virtually anywhere makes it a must-have item for puzzle fans.
In the world of puzzles, a Rubik’s Cube is a three-layer cube that can be solved with some basic rotations. The cube can be rotated around the x, y, and z space axes, and is solved by swapping pieces into their correct positions.
The first step to solving the cube is to rotate the top two layers. This is done with a series of moves, called an algorithm, that is written in a sequence of letters. These letters are the smallest,
but can be followed by smaller letters or numbers.
A good algorithm will move the corner pieces in the correct direction, and in the correct order. While solving the cube, you may need to perform the algorithm multiple times on a single corner.
One of the most important aspects of solving the Rubik’s Cube is to identify the right corner. You may need to rotate the cube three times to do this.
First, you need to look at the front face of the cube. It should be yellow. If you have a half cross, you can use the F algorithm.
If you’re thinking of trying your hand at positioning Rubik’s 3×3 magic cube, you’re probably wondering how to go about it. You’ve probably seen advertisements for the puzzle, which claims to have a
whopping 3,000,000,000 combinations. Although that’s not true, it’s certainly possible to solve it.
The first algorithm you might try is to align the edge pieces to the correct color of the side you’re on. For example, if you’re on the red side, the edge piece should be on the top layer.
This step is a lot simpler than you might think. All you need to do is move an edge piece one layer to the left. Once the edge is in the right position, you can then move it to the bottom row.
For the x, y, and z axes, you can make an anticlockwise turn and clockwise turn. A 90-degree turn can be accomplished in four turns.
There are also algorithms to turn your cube to form a yellow cross, a white cross, and a red triangle. Unfortunately, they all require a little practice.
Singmaster notation
Singmaster notation is a technique for denoting the moves in Rubik’s 3×3 magic cube. It is one of the most widely used notations for the puzzle. There are several variants of this notation.
The first solution was created by Erno Rubik. He made the first prototype in 1974. Later in 1976, Japanese toy manufacturer Stonefur completed the mechanics of the cube.
Another method is known as the Ideal Solution. It uses different conventions and a different set of numbers. However, it still requires the user to know the number of turns. This method can be
computed quickly in a modern computer.
Other general solutions include the “corners first” methods. These methods search for a heuristic in the right coset space. Each time the user makes a move, the algorithm finds the heuristic that
corresponds to the move.
For example, in the move sequence ‘l2 f’, the ‘l2’ turns the two leftmost layers together through 180 degrees; the number two indicates that this is a double (two quarter-turn) move.
Wolstenholme notation
Wolstenholme notation is a form of relative notation. It uses consonants and vowels for faces and turns. This allows the cube to be solved without having to memorize the letters and numbers.
For example, “3Lw2” is the notation for moving all three left layers by 180 degrees. A letter followed by a 2 indicates a double (180-degree) turn. The “w” tells the user that the turn is on more than one layer at once. In addition, the prime symbol (') means the face is turned anticlockwise.
Wolstenholme notation makes it easier to memorize the sequences. In the end, the cube is reassembled. As the cube is rotated, the remaining colours will appear in their appropriate positions.
The solution was developed by Patrick Bossert in 1981. It was later published as You Can Do the Cube. During the development of the solution, graphical notation was also introduced. However, this was
not widely known at the time.
Other algorithms used to solve the cube involve switching edges. Most of these methods involve layer by layer methods. These methods do not interfere with the solved parts.
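To make the prime and “2” suffixes concrete, here is a small, hypothetical Python helper (not from the article; the function name and example sequence are illustrative) that inverts a move sequence written in basic Singmaster-style notation: reverse the order and swap clockwise with anticlockwise turns, while half turns stay as they are.

def invert_moves(sequence: str) -> str:
    inverted = []
    for move in reversed(sequence.split()):
        if move.endswith("2"):      # half turns are their own inverse
            inverted.append(move)
        elif move.endswith("'"):    # undo an anticlockwise turn with a clockwise one
            inverted.append(move[:-1])
        else:                       # undo a clockwise turn with an anticlockwise one
            inverted.append(move + "'")
    return " ".join(inverted)

print(invert_moves("R U R' U'"))    # U R U' R'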
Stair Game
• Time limit: 1.00 s
• Memory limit: 512 MB
There is a staircase consisting of n stairs, numbered 1,2,\ldots,n. Initially, each stair has some number of balls.
There are two players who move alternately. On each move, a player chooses a stair k where k \neq 1 and it has at least one ball. Then, the player moves any number of balls from stair k to stair k-1.
The player who moves last wins the game.
Your task is to find out who wins the game when both players play optimally.
Note that if there are no possible moves at all, the second player wins.
The first input line has an integer t: the number of tests. After this, t test cases are described:
The first line contains an integer n: the number of stairs.
The next line has n integers p_1,p_2,\ldots,p_n: the initial number of balls on each stair.
For each test, print "first" if the first player wins the game and "second" if the second player wins the game.
• 1 \le t \le 2 \cdot 10^5
• 1 \le n \le 2 \cdot 10^5
• 0 \le p_i \le 10^9
• the sum of all n is at most 2 \cdot 10^5 | {"url":"https://cses.fi/alon/task/1099","timestamp":"2024-11-12T08:18:05Z","content_type":"text/html","content_length":"5598","record_id":"<urn:uuid:14246eaa-ddb6-4bcc-8787-1b155d35f0e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00760.warc.gz"} |
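A common analysis of games like this (often called staircase Nim) observes that balls on stair 1 can never move again, so only the even-numbered stairs matter: the first player wins exactly when the XOR of the ball counts on stairs 2, 4, 6, … is non-zero. Assuming that analysis applies here, a Python solution sketch could look like this (the I/O handling is just one possible layout):

import sys
from functools import reduce

def solve() -> None:
    data = sys.stdin.read().split()
    t = int(data[0]); pos = 1
    out = []
    for _ in range(t):
        n = int(data[pos]); pos += 1
        p = data[pos:pos + n]; pos += n
        # XOR over stairs 2, 4, 6, ... (0-indexed positions 1, 3, 5, ...)
        x = reduce(lambda acc, v: acc ^ int(v), p[1::2], 0)
        out.append("first" if x else "second")
    print("\n".join(out))

solve()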
Boxes and Polygons in ADQL/STC. Questions and recommendation.
Tom McGlynn thomas.a.mcglynn at nasa.gov
Fri Oct 23 19:20:57 PDT 2009
Alberto Micol wrote:
> On 23 Oct 2009, at 21:19, Arnold Rots wrote:
>> 4.5.1.5 Box
>> A Box is a special case of a Polygon, defined purely for
>> convenience. It is
>> specified by a center position and size (in both coordinates)
>> defining a cross
>> centered on the center position and with arms extending, parallel to
>> the
>> coordinate axes at the center position, for half the respective
>> sizes on either side.
>> The box’s sides are line segments or great circles intersecting the
>> arms of the
>> cross in its end points at right angles with the arms.
> My trouble is with the sentence that the arms extend "parallel to the
> coordinate axes".
> "Parallel" to the equator cannot be a great circle unless it is the
> equator itself. Hence:
> Does that mean that the I should measure the size of the "horizontal"
> arm along
> the small circle parallel to the equator?
> If this is correct, then a size of 180 deg is an hemisphere if and
> only if the centre is placed
> on the equator.
> I appreciate some help, thanks!
Hi Alberto,
I understood this to mean that the horizontal arm goes along a great
circle which has an apex (highest latitude, or lowest
if the point is in the southern hemisphere) at the point. So the great
circle is 'parallel' to the equator but only
instantaneously at that point. However, I wouldn't mind one of the
experts chiming in here.
> Then, regarding the usefulness of a BOX made of great circle arcs:
> that is useful because to find if a point is inside or outside such BOX
> it is just matter to compute the scalar product of the vector
> representing the point
> and the 4 vectors representing the half-spaces of the 4 box sides.
> Of course this means that it will no longer be possible to use (ra,
> dec) as we are used to,
> as in: ra BETWEEN this AND that AND dec BETWEEN d0 AND d1
> and instead we have to go to a vectorial representation of the sky
> coordinates.
My problem is that as far as I can see, the problem that the astronomers
want to answer will be phrased
in terms of limits on RA and Dec so that even though it might be
mathematically handy it's not necessarily relevant
to the problems we want to solve. In any case there's nothing special
about the box here. It's true for any
(convex?) polygon isn't it?
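For what it's worth, the dot-product test Alberto describes can be sketched in a few lines of Python; the vectors below are placeholders and this is only an illustration of the half-space idea, not an ADQL/STC implementation.

import numpy as np

def radec_to_vec(ra_deg, dec_deg):
    ra, dec = np.radians([ra_deg, dec_deg])
    return np.array([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])

def inside_convex_region(point_vec, side_normals):
    # side_normals: one unit vector per bounding great circle, pointing toward the interior
    return all(np.dot(n, point_vec) >= 0 for n in side_normals)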
P.S. I think I've gotten the equations for the vertices of a box
(assuming my interpretation above is correct). The derivation
was pretty easy once I abandoned trying to do things using pure geometry
and attacked it using the centers of the great circles and
analytic geometry. I'll try to post it somewhere tomorrow.
Top 3 Math Books Everybody Should Buy
In this review I will go over a short list of math books that I consider to be useful for most people interested in the subject of math. The books included in this list should provide a general math
education. I tried to include books that cover the following aspects of general math education: introduction to basic concepts, introduction to major mathematical fields, relationship between math
and other mathematical sciences, historical context of various developments in math, recent math developments, simple geometric constructions etc. I also think that math books are more engaging and
useful if diagrams, tables, graphs, or geometrical constructions illustrate the mathematical concepts presented. Thus, I strived to include books that have many useful illustrations.
The books included in this review are meant to be useful to gain a general and holistic knowledge of the subject of mathematics. Nonetheless, these books should be useful as a springboard for a more
in-depth study of various mathematical concepts. After people get a bird’s eye view of the subject, they can attack a more narrow area or field that appeals to them.
Euclid’s Elements (Thomas L. Heath translation, Green Lion Press)
Thomas Heath’s translation of Euclid’s Elements (Green Lion Press)
For many hundreds of years, the primary math textbook was Euclid's Elements. Euclid's Elements is divided into 13 books (equivalent to modern chapters) that cover 2D geometry, number theory and Platonic solids. Some of the things covered in the book are: geometric constructions, proof of the Pythagorean theorem (Book 1 Proposition 47), the Euclidean algorithm to find the greatest common divisor
(Book 7 Propositions 1-2) and Euclid’s Theorem regarding prime numbers (Book 9 Proposition 20). The most important lesson from Euclid’s Elements is the emphasis on systematic and rigorous proofs.
I think it is also important to mention why I like the Thomas L. Heath translation published by Green Lion Press. The translation by Thomas L. Heath is probably the most respected translation for the
English language. The main benefit of buying the Green Lion Press edition is the fact that it is just one volume. I believe that Heath’s original edition was divided in 3 volumes because it had
extensive commentary and the Greek text. The Green Lion edition has minimal commentary at the start of the book and it doesn’t have the Greek text (it has Greek words in the commentary and in the
glossary). The Green Lion edition also has good diagrams, and if a proposition continues on the back of a page, the diagrams are also printed on the back of the page (so you don’t have to turn the
page to see the diagrams). I want to end by saying that even though the Heath translation is relatively modern, most people may still find the language a little bit hard to follow since it doesn’t
use the modern mathematical notation.
What I like the most about Euclid’s Elements, are the propositions that show how to do various constructions. The very first proposition in book 1 shows how to construct an equilateral triangle.
Other construction propositions show how to bisect an angle, how to find the center of a circle and how to construct various regular polygons. You can attempt to do these constructions using a
software like GeoGebra. In my opinion, it is a shame that geometry classes (in high school) don’t go over these constructions. Yes, the language in Euclid is difficult and the proofs are probably a
bit too abstract even for high school students. But at least a geometry curriculum can cover the part that deals with constructions (even if they omit the proof part). These geometric constructions
may help students develop their visual reasoning. I recommend this article for people interested in the topic of visual or diagrammatic thinking (Euclid’s Elements is mentioned many times).
Mathematics 1001 by Dr. Richard Elwes
Mathematics 1001 by Dr. Richard Elwes
Mathematics 1001 by Elwes is like a small (around 400 pages) math encyclopedia. The book can be very useful to get an overview of mathematics. Areas covered by the book are: Numbers, Geometry,
Algebra, Discrete Mathematics, Analysis, Logic, Metamathematics, probability and Statistics, Mathematical Physics, Games and Recreation. The book mentions current mathematical developments and
problems like The Clay Institute millennium problems and the Hilbert’s problems. The book also has many useful equations, tables, diagrams, geometrical constructions, and illustrations. The book can
be very useful to find an interesting area of research (individual research, research for a paper etc).
An alternative to this book is The Princeton Companion to Mathematics. The Princeton Companion to Mathematics has about 1000 pages and it is written for people with a more serious interest in
mathematics. Mathematics 1001 is much cheaper alternative.
Quadrivium (Wooden Books)
Quadrivium (Wooden Books). Wooden Books also published a trivium book that covers grammar, logic, rhetoric and a few other related topics.
In antiquity the mathematical sciences were: Arithmetic or the study of number, Geometry or the study of number in space, Harmony/Music or the study of number in time and Astronomy or the study of
number in space and time. The 4 disciplines formed the quadrivium (4 ways), and the quadrivium was usually paired with trivium (study of Grammar, Logic/Dialectic and Rhetoric). The book Quadrivium
(Wooden Books) has a more modern approach to the quadrivium. The book is actually a compilation of 6 independent books that were put together. The six books are: Sacred Number by Miranda Lundy,
Sacred Geometry by Miranda Lundy, Platonic & Archimedean Solids by Daud Sutton, Harmonograph by Anthony Ashton, The Elements of Music by Jason Martineau and A Little Book of Coincidence by John
Martineau. The book also has a few appendices that discuss a few things such as: early number systems, Pythagorean numbers (triangular, square and pentagonal numbers), Gematria, magic squares,
properties of various numbers, ruler and compass constructions, polyhedral data table, musical scales, planetary tables, measurements units etc. This book has many beautiful illustrations, in fact
each page that has written information is complemented by a right page that has only illustrations. Actually, this book can be used as a educational coffee table book.
Some people may complain that the style of the writing is a bit mystical. For example, in the chapter that corresponds to “A Little Book of Coincidence” the author shows how the orbit of the planets
from the solar system seem to fit various geometric patterns. The chapter presents similar ideas to the ones found in the book Harmonices Mundi by Kepler. Of course, the author acknowledges that
modern scientists consider these connections as mere coincidences. The modern astronomers or astrophysicists that consider these connections as mere coincidences are a bit arrogant in my opinion.
Maybe these geometric patterns are just coincidences, but we can still allow the possibility that there is a bigger picture that we still don’t see or understand. Overall the book has a holistic
approach, since it considers all these mathematical sciences or fields as connected and the connection is found in various interesting patterns. The book makes us wonder if there is a bigger picture
that connects all these mathematical patterns. | {"url":"http://www.raulprisacariu.com/math/math-books-everybody-should-buy/","timestamp":"2024-11-03T16:19:12Z","content_type":"text/html","content_length":"42796","record_id":"<urn:uuid:45829bcd-26de-4fa4-81e9-e4eccd377946>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00123.warc.gz"} |
Initial Conditions
In all civil or mining engineering projects, there is an in-situ state of stress in the ground, before any excavation or construction is started. By setting initial conditions in the 3DEC model, an
attempt is made to reproduce this in-situ state, because it can influence the subsequent behavior of the model. Ideally, information about the initial state comes from field measurements. But, when
these are not available, the model can be run for a range of possible conditions. Although the range is potentially infinite, there are a number of constraining factors (e.g., the system must be in
equilibrium, and the chosen yield and slip criteria must not be violated anywhere).
In a uniform layer of soil or rock with a free surface, the vertical stresses are usually equal to \(g \rho z\), where \(g\) is the gravitational acceleration, \(\rho\) is the mass density of the
material and \(z\) is the depth below surface. However, the in-situ horizontal stresses are more difficult to estimate. There is a common (but erroneous) belief that there is some “natural” ratio
between horizontal and vertical stress, given by \(\nu\)/(1 − \(\nu\)), where \(\nu\) is the Poisson’s ratio. This formula is derived from the assumption that gravity is suddenly applied to an
elastic mass of material in which lateral movement is prevented.
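As a quick numerical illustration of the two formulas above, the following Python sketch evaluates the vertical stress and the elastic ratio; the density, depth and Poisson's ratio are assumed values, not recommendations.

g = 9.81          # m/s^2
rho = 2500.0      # kg/m^3, assumed rock density
z = 100.0         # m below surface, assumed depth
nu = 0.25         # assumed Poisson's ratio

sigma_v = g * rho * z            # vertical stress, Pa (about 2.45 MPa here)
k0_elastic = nu / (1.0 - nu)     # horizontal/vertical ratio under the (unrealistic) elastic assumption
print(sigma_v, k0_elastic)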
This condition hardly ever applies in practice, due to repeated tectonic movements, material failure, overburden removal and locked-in stresses due to faulting and localization. Of course, if we had
enough knowledge of the history of a particular volume of material, we might simulate the whole process numerically, in order to arrive at the initial conditions for our planned engineering works.
This approach is not usually feasible. Typically, we compromise: a set of stresses is installed in the model, and then 3DEC is run until an equilibrium state is obtained. It is important to realize
that there are an infinite number of equilibrium states for any given system. In the following sections, we examine progressively more complicated situations, and the ways in which the initial
conditions may be specified. The user is encouraged to experiment with the various data files that are presented.
How To Calculate The Cut Off Marks For CBSE
Check your eligibility for admission to college or university by calculating your CBSE cut-off mark. The Central Board of Secondary Education (CBSE) is a body that monitors and controls courses and
examinations for High School students across India. Each year, students are allocated a cut-off mark, calculated from their examination results, that is then used by colleges and universities as an
indicator of potential success in their courses. The cut-off marks are used as part of the admission criteria and, as such, are important for students who wish to make applications for further study.
Step 1
Add together the marks for physics and chemistry. Divide the result by four.
Step 2
Divide the mathematics marks by two.
Step 3
Add the totals from step 1 and step 2 together to calculate the cut-off score. This can be used as an indicator for engineering-based courses. If a medical course is preferred, substitute biology
marks for mathematics.
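For illustration, the three steps can be combined into a single Python calculation; the marks below are made-up examples.

physics, chemistry, maths = 92, 88, 95

cutoff = (physics + chemistry) / 4 + maths / 2
print(cutoff)   # (92 + 88) / 4 + 95 / 2 = 45.0 + 47.5 = 92.5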
Personal Reflections on Dick Lipton in honor of his 1000th blog/75th bday.
Is it an irony that Lipton's 1000th post and 75th bday are close together? No. It's a coincidence. People use irony/paradox/coincidence interchangeably. Hearing people make that mistake makes me literally want to strangle them.
The community celebrated this milestone by having talks on zoom in Lipton's honor. The blog post by Ken Regan that announced the event and has a list of speakers is here. The talks were recorded so
they should be available soon. YEAH KEN for organizing the event! We may one day be celebrating his 2000th blog post/Xth bday.
I will celebrate this milestone by writing on how Lipton and his work have inspired and enlightened me.
1) My talk at the Lipton zoom-day-of-talks was on the Chandra-Furst-Lipton (1983) paper (see here) that sparked my interest in Ramsey Theory, led to a paper I wrote that improved their upper and lower bounds, and led to an educational open problem that I posted on this blog, which was answered. There is still more to do. An expanded version of the slide talk I gave on the zoom-day is here.
(Their paper also got me interested in Communication complexity.)
2) I read the De Millo-Lipton-Perlis (1979) paper (see here) my first year in graduate school and found it very enlightening. NOT about program verification, which I did not know much about, but
about how mathematics really works. As an ugrad I was very much into THEOREM-PROOF-THEOREM-PROOF as the basis for truth. This is wrongheaded for two reasons: (1) I did not see the value of intuition, and (2) I did not realize that the PROOF is not the END of the story, but the BEGINNING of a process of checking it; many people over time have to check a result. DLP woke me up to point (2) and (to a lesser extent) point (1). A scary thought: most results in math, once published, are never looked at again. So there could be errors in the math literature. However, the important results DO get
looked at quite carefully. Even so, I worry that an important result will depend on one that has not been looked at much...Anyway, a link to a blog post about a symposium about DLP is here.
3) The Karp-Lipton theorem is: if SAT has poly sized circuits then PH collapses (see here). It connects uniform and non-uniform complexity. This impressed me but also made me think about IF-THEN statements. In this case something we don't think is true implies something else we don't think is true. So--- do we know something? Yes! The result has been used to get results like
If GI is NPC then PH collapses.
This is evidence that GI is not NPC.
4) Lipton originally blogged by himself and a blog book came out of that. I reviewed it in this column. Later it became the Lipton-Regan blog, which also gave rise to a book, which I reviewed here.
Both of these books inspired my blog book. This is a shout-out to BOTH Lipton AND Regan.
5) Lipton either thinks P=NP or pretends to since he wants people to NOT all think the same thing. Perhaps someone will prove P NE NP while trying to prove P=NP. Like in The Hitchhiker's Guide to the
Galaxy where they say that to fly, you throw yourself on the ground and miss. I took Lipton's advice in another context: While trying to prove that there IS a protocol for 11 muffins, 5 students
where everyone gets 11/5 and the smallest piece is 11/25, I wrote down what such a protocol would have to satisfy (I was sincerely trying to find such a protocol) and ended up proving that you could
not do better than 13/30 (for which I already had a protocol). Reminds me of a quote attributed to Erdos: when trying to prove X, spend half your time trying to prove X and half trying to prove NOT X.
6) Lipton had a blog post (probably also a paper someplace) about using Ramsey Theory as the basis for a proof system (see here). That inspired me to propose a potential randomized n^{log n} algorithm for the CLIQUE-GAP problem (see here). The comments showed why the idea could not work -- no surprise, as my idea would have led to NP contained in RTIME(n^{log n}). Still, it was fun to
think about and I learned things in the effort.
7 comments:
1. > A scary thought: most results in math, once published,
> are never looked at again. So there could be errors in
> the math literature.
If you don't use the result, then not much harm. If you use the result, maybe you should check the proof!
1. In an ideal world one would check the proofs of all theorems one uses in a proof. In the real world this could be difficult. Hence I do wonder if some big result will end up not having a
correct proof. Or even incorrect!
2. I confess to sometimes using a theorem without checking the proof.
3. @DM: Very relevant topic! If my recollections serve me well, Dudley made you disprove one of his published theorems via counterexample!
So, you witnessed the phenomenon of flawed published proofs first hand.
4. Dudley checked all his proofs, and (when he was my advisor) all my proofs. But, even Dudley sometimes made a mistake. Dudley already knew that his proof of that theorem was wrong, but didn't
have a proof that the theorem (as stated) was false.
2. There are definitely lots of errors in the math literature. There are many MathOverflow questions on this topic; e.g.,
Voevodsky got interested in formal proofs in part because of wrong proofs that he himself had published. https://www.ias.edu/ideas/2014/voevodsky-origins
In other cases, a theorem is announced as proved, but the proof has yet to appear, even after many years. In some cases, later researchers have built on those theorems, despite the lack of
proofs. https://mathoverflow.net/q/357317
3. @TC: Good points! Voevodsky is indeed a very interesting story,
and figure in this space.
The DLP paper that Bill refers to in (2) highlights some other
stories that have equally surprised me. Well-written gem!
The Easter Egg in the DLP paper is the spelling of Norbert Wiener as "Norbert Weiner"; I wonder what feedback this oversight generated.
Then again CACM has not surprised me, my eyes have come across
quite a few hiccups that the editors/reviewers should have spotted. (Who was the editor/reviewer for this piece back then ...). | {"url":"https://blog.computationalcomplexity.org/2022/01/personal-reflections-on-dick-lipton-in.html","timestamp":"2024-11-11T22:57:27Z","content_type":"application/xhtml+xml","content_length":"188293","record_id":"<urn:uuid:56f7fe23-f16f-4013-826d-6c31539e95db>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00117.warc.gz"} |
The knot is a unit of speed equal to one nautical mile per hour, approximately 1.151 mph. The ISO standard symbol for the knot is kn. The same symbol is preferred by the IEEE; kt and kts are also seen. The knot is a non-SI unit that is "accepted for use with the SI". Worldwide, the knot is used in meteorology, and in maritime and air navigation—for example, a vessel travelling at 1 knot along a meridian travels approximately one minute of geographic latitude in one hour.
The above text is a snippet from Wikipedia: Knot (unit)
and as such is available under the Creative Commons Attribution/Share-Alike License.
and FM translator
are commercial radio stations in Prescott, Arizona, simulcasting to the Flagstaff-Prescott, Arizona, area. In August 2011, the stations dropped classic country and switched to a 1960s oldies format.
Bo Woods, who worked at Los Angeles oldies station KRTH, 2002-06, is the program director and morning disc jockey.
The above text is a snippet from Wikipedia: KNOT
and as such is available under the Creative Commons Attribution/Share-Alike License.
1. A looping of a piece of string or of any other long, flexible material that cannot be untangled without passing one or both ends of the material through its loops.
Climbers must make sure that all knots are both secure and of types that will not weaken the rope.
2. A tangled clump.
The nurse was brushing knots from the protesting child's hair.
3. A maze-like pattern.
4. A non-self-intersecting closed curve in (e.g., three-dimensional) space that is an abstraction of a knot (in sense 1 above).
A knot can be defined as a non-self-intersecting broken line whose endpoints coincide: when such a knot is constrained to lie in a plane, then it is simply a polygon.
A knot in its original sense can be modeled as a mathematical knot (or link) as follows: if the knot is made with a single piece of rope, then abstract the shape of that rope and then
extend the working end to merge it with the standing end, yielding a mathematical knot. If the knot is attached to a metal ring, then that metal ring can be modeled as a trivial knot and the
pair of knots become a link. If more than one mathematical knot (or link) can be thus obtained, then the simplest one (avoiding detours) is probably the one which one would want.
5. A difficult situation.
I got into a knot when I inadvertently insulted a policeman.
6. The whorl left in lumber by the base of a branch growing out of the tree's trunk.
When preparing to tell stories at a campfire, I like to set aside a pile of pine logs with lots of knots, since they burn brighter and make dramatic pops and cracks.
7. Local swelling in a tissue area, especially skin, often due to injury.
Jeremy had a knot on his head where he had bumped it on the bedframe.
8. A protuberant joint in a plant.
9. Any knob, lump, swelling, or protuberance.
10. The point on which the action of a story depends; the gist of a matter.
the knot of the tale
11. A node.
12. A kind of epaulet; a shoulder knot.
13. A group of people or things.
14. A bond of union; a connection; a tie.
Noun (etymology 2)
1. A unit of speed, equal to one nautical mile per hour.
Cedric claimed his old yacht could make 12 knots.
Noun (etymology 3)
1. One of a variety of shore birds; the red-breasted sandpiper (variously Calidris canutus or Tringa canutus).
Verb
1. To form into a knot; to tie with a knot or knots.
We knotted the ends of the rope to keep it from unravelling.
2. To form wrinkles in the forehead, as a sign of concentration, concern, surprise, etc.
She knotted her brow in concentration while attempting to unravel the tangled strands.
3. To unite closely; to knit together.
4. To entangle or perplex; to puzzle.
The above text is a snippet from Wiktionary: knot
and as such is available under the Creative Commons Attribution/Share-Alike License.
Physicists save Schrödinger's cat and bring us closer to quantum computers
Managing quantum data and correcting errors are the biggest challenges that scientists face in the development of fully practical quantum computers. A new study performed by researchers at Yale
University might offer the means to overcome this predicament — while also saving Schrödinger’s famous cat.
In 1935, in an attempt to mock the Copenhagen interpretation of quantum mechanics, Erwin Schrödinger proposed a thought experiment: a cat is placed in a sealed box along with a radioactive sample, a
Geiger counter and a bottle of poison.
If the Geiger counter detects that the radioactive material has decayed, it will trigger the smashing of the bottle of poison, killing the cat. Effectively, the cat’s life depends on the quantum
mechanics determined state of a radioactively decaying atom.
The ‘Copenhagen interpretation’ of quantum mechanics states that a particle exists in all states at once until observed — something which physicists call “superposition”. Conversely, the radioactive
material can have simultaneously decayed and not decayed in the sealed environment. It follows that Schrödinger’s cat is both alive and dead until one opens the box. Of course, everyone thought this
was absurd, but it’s precisely this absurdity that Schrödinger was trying to convey. However, we now know from experiments that superposition is actually real in quantum mechanics, no matter how
weird it may sound.
So, the radioactive atom and kitty are intimately “entangled” with each other. But once an observer opens the box, the “superposition” of the cat—the idea that it was in both states—would collapse
into either the knowledge that “the cat is alive” or “the cat is dead,” but not both. This abrupt change in the atom’s quantum state is supposedly random and called a “quantum jump.” The notion of a
quantum jump was first described by Danish physicist Niels Bohr but it wasn’t until the 1980s that it was observed in atoms for the first time.
“These jumps occur every time we measure a qubit,” said Michel Devoret, Professor of Applied Physics and Physics at Yale and member of the Yale Quantum Institute. “Quantum jumps are known to be
unpredictable in the long run.”
The nature of this superposition collapse is very annoying and troublesome for practical applications of quantum technology. Devoret and colleagues wanted to see whether it was possible to get an
advanced warning signal that a jump was about to occur.
For their experiment, the researchers indirectly monitored a superconducting atom or qubit (the basic unit of information in a quantum computer) which was blasted by three microwave sources inside a
3-D cavity made of aluminum. Some of the microwave radiation switched the qubit between energy states, while another beam of radiation measured the cavity. In the qubit’s ground state, the microwave
beam exposure releases photons. So a sudden absence of photons means that the qubit is about to make a quantum jump into an excited state.
“The beautiful effect displayed by this experiment is the increase of coherence during the jump, despite its observation,” said Devoret.
“You can leverage this to not only catch the jump, but also reverse it,” lead author Zlatko Minev added in a statement.
The experiment’s findings contradict Bohr showing that quantum jumps are neither abrupt nor as random as previously believed. Instead, a quantum jump always occurs in the same, predictable manner
from its random starting point. This deterministic nature means that it can also be reversed with another pulse of microwave radiation, sending the qubit back into a ground state. In other words,
saving Schrödinger’s cat.
“Quantum jumps of an atom are somewhat analogous to the eruption of a volcano,” Minev said. “They are completely unpredictable in the long term. Nonetheless, with the correct monitoring we can
with certainty detect an advance warning of an imminent disaster and act on it before it has occurred.”
The new study, published in the journal Nature, will prove useful in the development of quantum computers where qubits jump all the time, causing computing errors. Where traditional computers perform
their calculations in binary – using 1s and 0s – quantum computers exploit the odd characteristics of the quantum state of particles at the atomic scale. Like Schrödinger’s cat, the value of a qubit
isn’t definitely 1 or 0, but both at the same time. A quantum computer is theoretically thousands of times faster than a traditional computer. | {"url":"https://www.zmescience.com/science/news-science/physicist-schrodinger-cat-04323/","timestamp":"2024-11-06T01:53:44Z","content_type":"text/html","content_length":"148575","record_id":"<urn:uuid:3f2f498a-92d3-4bde-b961-0ac0ad0f4585>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00533.warc.gz"} |
Incorporation of fuzzy information into fuzzy model identification
Fuzzy logic inherently enables the incorporation of prior knowledge about the system into the identification algorithms of nonlinear dynamic systems using measured input-output data, a process that is called grey-box modelling. Prior knowledge is often available on physical grounds, e.g. exact knowledge of the steady-state input-output characteristics, their monotonicity, monotonicity of the step response, approximate knowledge of the partial derivatives of the outputs along particular inputs, or other qualitative properties that are either global or valid only in some regions or for particular inputs. Nevertheless, it is not easy to incorporate such rough, usually linguistically described knowledge into the analytical formulas of black-box identification.
State of the art:
In the past there have been a few attempts to incorporate the monotonicity condition for the multi-input mapping corresponding to Mamdani fuzzy logic. Unfortunately, the usability of all these algorithms is restricted to a special choice of membership functions, and the derived conditions are very conservative, which results in poor approximation capability of the fuzzy mapping. A simple and intuitive result for membership functions with finite support, stating that monotonicity with respect to all inputs is enforced if both the input and output membership functions are ordered in the same manner, was presented in [1]. In [2] monotonicity conditions were derived for Gaussian input membership functions with the same variance, and in [3] it was proven that such a system is a universal approximator of monotonic functions. The conditions were used to ensure monotonicity of the steady-state input-output characteristics in some applications in [4]. In [5] conditions for convexity of a single-input single-output fuzzy mapping with triangular input membership functions were derived, and their universal approximation ability for convex functions was proven at the same time.
• Incorporation of exact knowledge of steady-state input-output characteristics into the input-output data identification algorithms of Mamdani and Takagi-Sugeno fuzzy systems.
• Incorporation of the monotonicity condition of steady-state input-output characteristics into the input-output data identification algorithms of Mamdani and Takagi-Sugeno fuzzy systems (a small illustrative sketch follows below).
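As a rough illustration of how such a constraint can enter an identification algorithm (this is not the algorithm proposed above, and all names and data are made up for the example), the following Python sketch assumes a zero-order Takagi-Sugeno model with ordered triangular membership functions and fits its singleton consequents by constrained least squares, forcing them to be non-decreasing so that the identified static map is monotone, loosely in the spirit of the ordering condition of [1]:

# Rough sketch, not the proposed method: monotonicity-constrained identification of a
# zero-order Takagi-Sugeno model. Illustrative only.
import numpy as np
from scipy.optimize import lsq_linear

def triangular_memberships(x, centres):
    """Degrees of fulfilment of triangular MFs with ordered centres (rows sum to 1)."""
    c = np.asarray(centres)
    mu = np.zeros((len(x), len(c)))
    for k, xi in enumerate(x):
        j = np.searchsorted(c, xi)
        if j == 0:
            mu[k, 0] = 1.0
        elif j == len(c):
            mu[k, -1] = 1.0
        else:
            w = (xi - c[j - 1]) / (c[j] - c[j - 1])
            mu[k, j - 1], mu[k, j] = 1.0 - w, w
    return mu

# Synthetic monotone data with noise, standing in for measured input-output data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.tanh(4.0 * (x - 0.5)) + 0.05 * rng.standard_normal(x.size)

centres = np.linspace(0.0, 1.0, 7)
Phi = triangular_memberships(x, centres)   # regressor matrix of fuzzy basis functions

# Reparameterise the consequents theta as cumulative sums: theta_i = d_0 + ... + d_i with
# d_i >= 0 for i >= 1, which makes theta non-decreasing by construction.
L = np.tril(np.ones((len(centres), len(centres))))
lb = np.full(len(centres), 0.0)
lb[0] = -np.inf
res = lsq_linear(Phi @ L, y, bounds=(lb, np.inf))
theta = L @ res.x                          # monotone singleton consequents
y_hat = Phi @ theta                        # model output on the identification data
print(np.all(np.diff(theta) >= -1e-9), theta.round(3))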
Paper in IEEE Transactions on Fuzzy Systems and/or Fuzzy Sets and Systems
Hušek, P.: Modelling ellipsoidal uncertainty by multidimensional fuzzy sets, Expert Systems with Applications, vol. 39, no. 8, 2012, pp. 6967–6971
Řídicí technika a robotika | {"url":"https://control.fel.cvut.cz/incorporation-fuzzy-information-fuzzy-model-identification-0","timestamp":"2024-11-12T07:27:54Z","content_type":"text/html","content_length":"19969","record_id":"<urn:uuid:30cce6ff-6260-4275-b5ef-540613ae9c10>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00572.warc.gz"} |
scipy.signal.remez(numtaps, bands, desired, *, weight=None, type='bandpass', maxiter=25, grid_density=16, fs=None)[source]#
Calculate the minimax optimal filter using the Remez exchange algorithm.
Calculate the filter-coefficients for the finite impulse response (FIR) filter whose transfer function minimizes the maximum error between the desired gain and the realized gain in the specified
frequency bands using the Remez exchange algorithm.
Parameters
numtaps : int
The desired number of taps in the filter. The number of taps is the number of terms in the filter, or the filter order plus one.
bands : array_like
A monotonic sequence containing the band edges. All elements must be non-negative and less than half the sampling frequency as given by fs.
desired : array_like
A sequence half the size of bands containing the desired gain in each of the specified bands.
weight : array_like, optional
A relative weighting to give to each band region. The length of weight has to be half the length of bands.
type : {‘bandpass’, ‘differentiator’, ‘hilbert’}, optional
The type of filter:
■ ‘bandpass’ : flat response in bands. This is the default.
■ ‘differentiator’ : frequency proportional response in bands.
■ ‘hilbert’ : filter with odd symmetry, that is, type III (for even order) or type IV (for odd order) linear phase filters.
maxiter : int, optional
Maximum number of iterations of the algorithm. Default is 25.
grid_density : int, optional
Grid density. The dense grid used in remez is of size (numtaps + 1) * grid_density. Default is 16.
fs : float, optional
The sampling frequency of the signal. Default is 1.
Returns
out : ndarray
A rank-1 array containing the coefficients of the optimal (in a minimax sense) filter.
J. H. McClellan and T. W. Parks, “A unified approach to the design of optimum FIR linear phase digital filters”, IEEE Trans. Circuit Theory, vol. CT-20, pp. 697-701, 1973.
J. H. McClellan, T. W. Parks and L. R. Rabiner, “A Computer Program for Designing Optimum FIR Linear Phase Digital Filters”, IEEE Trans. Audio Electroacoust., vol. AU-21, pp. 506-525, 1973.
In these examples, remez is used to design low-pass, high-pass, band-pass and band-stop filters. The parameters that define each filter are the filter order, the band boundaries, the transition
widths of the boundaries, the desired gains in each band, and the sampling frequency.
We’ll use a sample frequency of 22050 Hz in all the examples. In each example, the desired gain in each band is either 0 (for a stop band) or 1 (for a pass band).
freqz is used to compute the frequency response of each filter, and the utility function plot_response defined below is used to plot the response.
>>> import numpy as np
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> fs = 22050 # Sample rate, Hz
>>> def plot_response(w, h, title):
... "Utility function to plot response functions"
... fig = plt.figure()
... ax = fig.add_subplot(111)
... ax.plot(w, 20*np.log10(np.abs(h)))
... ax.set_ylim(-40, 5)
... ax.grid(True)
... ax.set_xlabel('Frequency (Hz)')
... ax.set_ylabel('Gain (dB)')
... ax.set_title(title)
The first example is a low-pass filter, with cutoff frequency 8 kHz. The filter length is 325, and the transition width from pass to stop is 100 Hz.
>>> cutoff = 8000.0 # Desired cutoff frequency, Hz
>>> trans_width = 100 # Width of transition from pass to stop, Hz
>>> numtaps = 325 # Size of the FIR filter.
>>> taps = signal.remez(numtaps, [0, cutoff, cutoff + trans_width, 0.5*fs],
... [1, 0], fs=fs)
>>> w, h = signal.freqz(taps, [1], worN=2000, fs=fs)
>>> plot_response(w, h, "Low-pass Filter")
>>> plt.show()
This example shows a high-pass filter:
>>> cutoff = 2000.0 # Desired cutoff frequency, Hz
>>> trans_width = 250 # Width of transition from pass to stop, Hz
>>> numtaps = 125 # Size of the FIR filter.
>>> taps = signal.remez(numtaps, [0, cutoff - trans_width, cutoff, 0.5*fs],
... [0, 1], fs=fs)
>>> w, h = signal.freqz(taps, [1], worN=2000, fs=fs)
>>> plot_response(w, h, "High-pass Filter")
>>> plt.show()
This example shows a band-pass filter with a pass-band from 2 kHz to 5 kHz. The transition width is 260 Hz and the length of the filter is 63, which is smaller than in the other examples:
>>> band = [2000, 5000] # Desired pass band, Hz
>>> trans_width = 260 # Width of transition from pass to stop, Hz
>>> numtaps = 63 # Size of the FIR filter.
>>> edges = [0, band[0] - trans_width, band[0], band[1],
... band[1] + trans_width, 0.5*fs]
>>> taps = signal.remez(numtaps, edges, [0, 1, 0], fs=fs)
>>> w, h = signal.freqz(taps, [1], worN=2000, fs=fs)
>>> plot_response(w, h, "Band-pass Filter")
>>> plt.show()
The low order leads to higher ripple and less steep transitions.
The next example shows a band-stop filter.
>>> band = [6000, 8000] # Desired stop band, Hz
>>> trans_width = 200 # Width of transition from pass to stop, Hz
>>> numtaps = 175 # Size of the FIR filter.
>>> edges = [0, band[0] - trans_width, band[0], band[1],
... band[1] + trans_width, 0.5*fs]
>>> taps = signal.remez(numtaps, edges, [1, 0, 1], fs=fs)
>>> w, h = signal.freqz(taps, [1], worN=2000, fs=fs)
>>> plot_response(w, h, "Band-stop Filter")
>>> plt.show() | {"url":"https://scipy.github.io/devdocs/reference/generated/scipy.signal.remez.html","timestamp":"2024-11-06T16:53:20Z","content_type":"text/html","content_length":"46428","record_id":"<urn:uuid:1f2f8d06-4fde-4a6e-b2b0-f11210f188cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00087.warc.gz"} |
Til Wehrhan, University of BONN and MPIM - The rim hook rule in equivariant quantum Schubert calculus via Bethe and Clifford algebras - Department of Mathematics
Til Wehrhan, University of BONN and MPIM – The rim hook rule in equivariant quantum Schubert calculus via Bethe and Clifford algebras
September 16, 2022 @ 4:00 pm - 5:00 pm
Mode: In- Person
Title: The rim hook rule in equivariant quantum Schubert calculus via Bethe and Clifford algebras
Abstract: It was proven by Bertiger, Milicevic and Taipale that the structure coefficients in the equivariant quantum cohomology of a Grassmannian can be determined by computing the usual cup
product in a certain larger Grassmannian and then applying a combinatorial rim hook algorithm. In this talk, we discuss a generalization of this result using the realization of the equivariant
quantum cohomology of Grassmannians as Bethe algebras of a specific integrable model established by Gorbounov, Korff and Stroppel. If time permits, we also discuss further possible generalizations to
equivairant quantum K-theory. | {"url":"https://math.unc.edu/event/til-wehrhan-university-of-bonn-and-mpim-the-rim-hook-rule-in-equivariant-quantum-schubert-calculus-via-bethe-and-clifford-algebras/","timestamp":"2024-11-06T08:17:38Z","content_type":"text/html","content_length":"113721","record_id":"<urn:uuid:24d347aa-bf9c-4062-894e-4543baceb50f>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00855.warc.gz"} |
chrono::fea::ChElasticityCosseratAdvancedGeneric Class Reference
Advanced linear elasticity for a Cosserat section, not assuming homogeneous elasticity.
This is the case where one uses an FEA preprocessor to compute the rigidity of a complex beam made with multi-layered reinforcements with different elasticity; in such a case you could not use
ChElasticityCosseratAdvanced, because you do not have a single E or G, but rather collective values of bending/shear/axial rigidities. This class allows using these values directly, bypassing
any knowledge of area, Izz, Iyy, Young's modulus E, etc. This material can be shared between multiple beams. The linear elasticity is uncoupled between shear terms S and axial terms A, so as to have this
stiffness matrix pattern:
n_x   [ A  .  .  .  A  A ]   e_x
n_y   [ .  S  S  S  .  . ]   e_y
n_z = [ .  S  S  S  .  . ] * e_z
m_x   [ .  S  S  S  .  . ]   k_x
m_y   [ A  .  .  .  A  A ]   k_y
m_z   [ A  .  .  .  A  A ]   k_z
(A: coupled axial/bending terms, S: coupled shear/torsion terms, . : zero entries)
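(Editorial aside, not part of the Chrono reference documentation: the toy numpy sketch below only illustrates the uncoupled A/S block pattern above for the simple case of zero section rotations and zero center offsets; the actual C++ class also accounts for the rotations and offsets set through the methods documented further down, which is where the off-diagonal A and S entries come from.)

# Illustrative sketch only (not the Chrono implementation): a 6x6 stiffness matrix with an
# axial/bending block acting on (e_x, k_y, k_z) and a shear/torsion block acting on
# (e_y, e_z, k_x), applied to a generalized strain to obtain the generalized stress.
import numpy as np

def uncoupled_stiffness(Ax, Txx, Byy, Bzz, Hyy, Hzz):
    K = np.zeros((6, 6))
    K[0, 0] = Ax    # axial rigidity           -> n_x from e_x
    K[4, 4] = Byy   # bending rigidity about Y -> m_y from k_y
    K[5, 5] = Bzz   # bending rigidity about Z -> m_z from k_z
    K[1, 1] = Hyy   # shear rigidity along Y   -> n_y from e_y
    K[2, 2] = Hzz   # shear rigidity along Z   -> n_z from e_z
    K[3, 3] = Txx   # torsion rigidity         -> m_x from k_x
    # Off-diagonal terms inside each block (the extra A and S entries in the pattern above)
    # would appear once elastic/shear centers are offset or the reference axes are rotated.
    return K

strain = np.array([1e-3, 0.0, 0.0, 0.0, 2e-4, 0.0])   # e_x, e_y, e_z, k_x, k_y, k_z
K = uncoupled_stiffness(Ax=1e7, Txx=1e4, Byy=2e4, Bzz=3e4, Hyy=5e6, Hzz=5e6)
stress = K @ strain                                    # n_x, n_y, n_z, m_x, m_y, m_z
print(stress)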
#include <ChBeamSectionCosserat.h>
Inheritance diagram for chrono::fea::ChElasticityCosseratAdvancedGeneric:
Collaboration diagram for chrono::fea::ChElasticityCosseratAdvancedGeneric:
Public Member Functions
ChElasticityCosseratAdvancedGeneric (const double mAx, const double mTxx, const double mByy, const double mBzz, const double mHyy, const double mHzz, const double malpha, const double mCy,
const double mCz, const double mbeta, const double mSy, const double mSz)
void SetAxialRigidity (const double mv)
Sets the axial rigidity, usually A*E for uniform elasticity, but for nonuniform elasticity here you can put a value ad-hoc from a preprocessor.
void SetXtorsionRigidity (const double mv)
Sets the torsion rigidity, for torsion about X axis, at elastic center, usually J*G for uniform elasticity, but for nonuniform elasticity here you can put a value ad-hoc from a preprocessor
void SetYbendingRigidity (const double mv)
Sets the bending rigidity, for bending about Y axis, at elastic center, usually Iyy*E for uniform elasticity, but for nonuniform elasticity here you can put a value ad-hoc from a preprocessor.
void SetZbendingRigidity (const double mv)
Sets the bending rigidity, for bending about Z axis, at elastic center, usually Izz*E for uniform elasticity, but for nonuniform elasticity here you can put a value ad-hoc from a preprocessor.
void SetYshearRigidity (const double mv)
Sets the shear rigidity, for shear about Y axis, at shear center, usually A*G*(Timoshenko correction factor) for uniform elasticity, but for nonuniform elasticity here you can put a value
ad-hoc from a preprocessor.
void SetZshearRigidity (const double mv)
Sets the shear rigidity, for shear about Z axis, at shear center, usually A*G*(Timoshenko correction factor) for uniform elasticity, but for nonuniform elasticity here you can put a value
ad-hoc from a preprocessor.
void SetSectionRotation (double ma)
Set the rotation in [rad] of the Y Z axes for which the YbendingRigidity and ZbendingRigidity values are defined. More...
double GetSectionRotation ()
void SetCentroid (double my, double mz)
"Elastic reference": set the displacement of the elastic center (or tension center) respect to the reference section coordinate system placed at centerline.
double GetCentroidY ()
double GetCentroidZ ()
void SetShearRotation (double mb)
Set the rotation in [rad] of the Y Z axes for which the YshearRigidity and ZshearRigidity values are defined. More...
double GetShearRotation ()
void SetShearCenter (double my, double mz)
"Shear reference": set the displacement of the shear center S respect to the reference beam line placed at centerline. More...
double GetShearCenterY ()
double GetShearCenterZ ()
void ComputeStress (ChVector<> &stress_n, ChVector<> &stress_m, const ChVector<> &strain_e, const ChVector<> &strain_k) override
Compute the generalized cut force and cut torque. More...
void ComputeStiffnessMatrix (ChMatrixNM< double, 6, 6 > &K, const ChVector<> &strain_e, const ChVector<> &strain_k) override
Compute the 6x6 tangent material stiffness matrix [Km] = dσ/dε. More...
Constructor & Destructor Documentation
◆ ChElasticityCosseratAdvancedGeneric()
chrono::fea::ChElasticityCosseratAdvancedGeneric::ChElasticityCosseratAdvancedGeneric(
    const double mAx,
    const double mTxx,
    const double mByy,
    const double mBzz,
    const double mHyy,
    const double mHzz,
    const double malpha,
    const double mCy,
    const double mCz,
    const double mbeta,
    const double mSy,
    const double mSz
)    [inline]
mAx axial rigidity
mTxx torsion rigidity
mByy bending regidity on Y of reference at elastic center
mBzz bending rigidity on Z of reference at elastic center
mHyy shear rigidity on Y of reference at shear center
mHzz shear rigidity on Z of reference at shear center
malpha rotation of reference at elastic center, for bending effects [rad]
mCy elastic center y displacement respect to centerline
mCz elastic center z displacement respect to centerline
mbeta rotation of reference at shear center, for shear effects [rad]
mSy shear center y displacement respect to centerline
mSz shear center z displacement respect to centerline
Member Function Documentation
◆ ComputeStiffnessMatrix()
void chrono::fea::ChElasticityCosseratAdvancedGeneric::ComputeStiffnessMatrix(
    ChMatrixNM< double, 6, 6 > & K,
    const ChVector<> & strain_e,
    const ChVector<> & strain_k
)    [override, virtual]
Compute the 6x6 tangent material stiffness matrix [Km] = dσ/dε.
K 6x6 stiffness matrix
strain_e local strain (deformation part): x= elongation, y and z are shear
strain_k local strain (curvature part), x= torsion, y and z are line curvatures
Reimplemented from chrono::fea::ChElasticityCosserat.
◆ ComputeStress()
void chrono::fea::ChElasticityCosseratAdvancedGeneric::ComputeStress(
    ChVector<> & stress_n,
    ChVector<> & stress_m,
    const ChVector<> & strain_e,
    const ChVector<> & strain_k
)    [override, virtual]
Compute the generalized cut force and cut torque.
stress_n local stress (generalized force), x component = traction along beam
stress_m local stress (generalized torque), x component = torsion torque along beam
strain_e local strain (deformation part): x= elongation, y and z are shear
strain_k local strain (curvature part), x= torsion, y and z are line curvatures
Implements chrono::fea::ChElasticityCosserat.
◆ SetSectionRotation()
void chrono::fea::ChElasticityCosseratAdvancedGeneric::SetSectionRotation ( double ma ) inline
Set the rotation in [rad] of the Y Z axes for which the YbendingRigidity and ZbendingRigidity values are defined.
◆ SetShearCenter()
void chrono::fea::ChElasticityCosseratAdvancedGeneric::SetShearCenter(double my, double mz)    [inline]
"Shear reference": set the displacement of the shear center S respect to the reference beam line placed at centerline.
For shapes like rectangles, rotated rectangles, etc., it corresponds to the elastic center C, but for "L" shaped or "U" shaped beams this is not always true, and the shear center accounts for torsion
effects when a shear force is applied.
◆ SetShearRotation()
void chrono::fea::ChElasticityCosseratAdvancedGeneric::SetShearRotation ( double mb ) inline
Set the rotation in [rad] of the Y Z axes for which the YshearRigidity and ZshearRigidity values are defined.
The documentation for this class was generated from the following files:
• /builds/uwsbel/chrono/src/chrono/fea/ChBeamSectionCosserat.h
• /builds/uwsbel/chrono/src/chrono/fea/ChBeamSectionCosserat.cpp | {"url":"https://api.projectchrono.org/6.0.0/classchrono_1_1fea_1_1_ch_elasticity_cosserat_advanced_generic.html","timestamp":"2024-11-07T23:44:57Z","content_type":"application/xhtml+xml","content_length":"42797","record_id":"<urn:uuid:0b8be0aa-7a80-4f6e-89d4-e689de5aa239>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00210.warc.gz"} |
Problem B
Magnus is the youngest chess grandmaster ever. He loves chess so much that he decided to decorate his home with chess pieces. To decorate his long corridor, he decided to use the knight pieces. His
corridor is covered by beautiful square marble tiles of alternating colors, just like a chess board, with $n$ rows and $m$ columns. He will put images of knights on some (possibly none) of these
tiles. Each tile will contain at most one knight.
The special thing about his arrangement is that there won’t be any pair of knights that can attack each other. Two knights can attack each other if they are placed in two opposite corner cells of a 2 by
3 rectangle. In this diagram, the knight can attack any of the Xs.
Given the dimension of the long corridor, your task is to calculate how many ways Magnus can arrange his knights. Two arrangements are considered different if there exists a tile which contains a
knight in one arrangement but not in the other arrangement (in other words, rotations and reflections are considered different arrangements).
Each input will consist of a single test case. Note that your program may be run multiple times on different inputs. Each test case will consist of a single line with two integers $n$ and $m$ ($1 \le
n \le 4$, $1 \le m \le 10^{9}$) representing the dimensions of the carpet. There will be a single space between $n$ and $m$.
Output a single line with a single integer representing the number of possible arrangements, modulo $(10^{9}+9)$.
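A solution is not given on the problem page; purely as an illustration, the sketch below shows one standard way constraints like these ($n \le 4$, $m$ up to $10^9$) are often handled: a column-profile DP whose transition matrix is raised to the $(m-1)$-th power by fast exponentiation. It is written for clarity rather than speed (a competitive submission would prune the state space or use a faster language).

# Illustrative sketch, not an official solution: profile DP over columns with matrix
# exponentiation. A state is the pair (knight mask of the previous column, mask of the
# current column); two knights attack exactly when their row/column offsets are {1, 2}.
MOD = 10**9 + 9

def count_arrangements(n, m):
    size = 1 << n

    def compatible(a, b, forbidden_row_gap):
        # True if no knight in mask a and knight in mask b differ by exactly that row gap.
        for i in range(n):
            if a >> i & 1:
                for j in range(n):
                    if b >> j & 1 and abs(i - j) == forbidden_row_gap:
                        return False
        return True

    states = [(p, c) for p in range(size) for c in range(size)]
    index = {s: k for k, s in enumerate(states)}
    N = len(states)

    # T[(p, c)][(c, nxt)] = 1 when column nxt conflicts neither with the adjacent column c
    # (row gap 2 forbidden) nor with column p two steps back (row gap 1 forbidden).
    T = [[0] * N for _ in range(N)]
    for (p, c), k in index.items():
        for nxt in range(size):
            if compatible(c, nxt, 2) and compatible(p, nxt, 1):
                T[k][index[(c, nxt)]] = 1

    def mat_mul(A, B):
        C = [[0] * N for _ in range(N)]
        for i in range(N):
            for k2 in range(N):
                a = A[i][k2]
                if a:
                    Bk, Ci = B[k2], C[i]
                    for j in range(N):
                        Ci[j] = (Ci[j] + a * Bk[j]) % MOD
        return C

    def mat_pow(A, e):
        R = [[int(i == j) for j in range(N)] for i in range(N)]
        while e:
            if e & 1:
                R = mat_mul(R, A)
            A = mat_mul(A, A)
            e >>= 1
        return R

    P = mat_pow(T, m - 1)
    # Start: an empty virtual column 0 followed by any mask in column 1.
    start = [index[(0, c)] for c in range(size)]
    return sum(sum(P[i]) for i in start) % MOD

if __name__ == "__main__":
    n, m = map(int, input().split())
    print(count_arrangements(n, m))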
Sample Input 1 Sample Output 1
Sample Input 2 Sample Output 2
Sample Input 3 Sample Output 3 | {"url":"https://kth.kattis.com/courses/DD2458/popup17/assignments/kwxfem/problems/knights","timestamp":"2024-11-05T09:57:34Z","content_type":"text/html","content_length":"27268","record_id":"<urn:uuid:0b4f615e-2e17-4092-832d-0597a9827829>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00104.warc.gz"} |
Capacitor Specification Conversion
Run Capacitor Selection Guide
In unusual circumstances, a run capacitor could be used as a start capacitor, but the values available for them are much lower than the values usually available for dedicated start capacitors. The
capacitance and voltage ratings would have to match the original start capacitor specification.
Capacitance Conversion Calculator | Mouser India
Use our Capacitance Conversion calculator to convert between the popular capacitance units pF, µF, nF, and F.
Minimum resonant capacitor design of high‐power …
Design specification of the converter. Specification Value; rated power: 2 kW: input voltage: 690 VDC: output voltage: 75–150 VDC: output current: 0–20 A: IGBT: MM40GTU120B: ... The contrastive …
The Important Points of Multi-layer Ceramic Capacitor Used in Buck Converter …
A Programmable Capacitance-to-Voltage Converter for MEMS …
A Programmable Capacitance-to-voltage converter for MEMS capacitive sensors is proposed. The main circuit consists of balance capacitor arrays, capacitance transimpedance amplifier, sample/hold, low
pass filter, and output buffer. Capacitance-to-voltage converter detects the change in capacitance between two capacitors and …
Capacitance Converter
MFD Capacitor: How to Get an In-Depth Understanding of the
The Capacitance Conversion Table. As we mentioned earlier, capacitance units are in terms of microfarads. However, it is relatively common to find other manufacturers showing the MFD capacitor
ratings in nanofarads (nF) and picofarads (pF). ... In such a case, you might have your capacitor specifications in uF, but the available …
X7R, X5R, C0G…: A Concise Guide to Ceramic …
Capacitor Value Calculator (and Code Calculator)
The Capacitor Code Calculator will convert a value into a code. "Breaking" the Capacitor Code: The formula that the capacitor value calculator uses isn't really all that difficult, and is one that you
could memorize and do in your head. Really, it's not that hard! Let's ...
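As a quick illustration of that formula (a sketch of the common 3-digit EIA marking, not code taken from the calculator pages quoted above): the first two digits are significant figures and the third is a power-of-ten multiplier, giving a value in picofarads.

# Sketch of the common 3-digit capacitor code: two significant digits plus a power-of-ten
# multiplier, read in picofarads (e.g. "104" = 10 x 10^4 pF = 100 nF = 0.1 uF).

def code_to_farads(code: str) -> float:
    """Convert a 3-digit marking such as '104' to a value in farads."""
    significant = int(code[:2])
    multiplier = int(code[2])
    picofarads = significant * 10 ** multiplier
    return picofarads * 1e-12          # 1 pF = 1e-12 F

def pretty(farads: float) -> str:
    """Express a capacitance in the most convenient of uF, nF or pF."""
    for unit, scale in (("uF", 1e-6), ("nF", 1e-9), ("pF", 1e-12)):
        if farads >= scale:
            return f"{farads / scale:g} {unit}"
    return f"{farads:g} F"

print(pretty(code_to_farads("104")))   # 100 nF (i.e. 0.1 uF)
print(pretty(code_to_farads("223")))   # 22 nF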
Run Capacitor Selection Guide
Specifications. Most run capacitor applications use a rating of 2.5-100 uf (microfarads) capacitance and voltages of 370 or 440 VAC. They are also usually always 50 and 60 Hz rated. Case designs are
round or oval, most commonly using either a steel or aluminum shell and cap. Terminations are usually ¼" push on terminals with 2-4 terminals per ...
Switched capacitor
Capacitance Conversion Calculator | Mouser Canada
Use our Capacitance Conversion calculator to convert between popular capacitance units pF, µF, nF, and F.
General Capacitor Specification
Voltage Ratings: A capacitor's voltage rating is an indication of the maximum voltage that should be applied to the device. The context of the rating is significant; in some instances it may indicate
a maximum safe working voltage, in others it may be more akin to a semiconductor's "absolute maximum" rating, to which an …
What Size Capacitor Do I Need for Air Conditioner?
If you still can't find an exact match for your old capacitor, you can use one that has slightly different specs—as long as it meets certain criteria. First, make sure that the potential difference
between its voltages is no more than 10% off from what was specified originally.
DC-Link Capacitor, Specification and Application
In this webinar, Würth Elektronik discuss polypropylene metallized film DC-link capacitor applications, its characteristics and comparison to aluminum capacitor solution. DC-Link capacitors are …
1000 uF 25 VDC Capacitors – Mouser India
1000 uF 25 VDC Capacitors are available at Mouser Electronics. Mouser offers inventory, pricing, & datasheets for 1000 uF 25 VDC Capacitors.
Replacing Capacitors With Different Values Guide
When replacing a capacitor with a different capacitance value, it is important to take into account any other components connected to it that might be affected by the change in capacitance. For
example, an increase in capacitance could cause an increase in output voltage from a power supply circuit, which could damage the device …
How to Read a Capacitor: 13 Steps (with Pictures)
Capacitor MF
Capacitor MF - MMFD Conversion Chart
Capacitor Codes and Conversion
Capacitor Codes and Conversion. Use this capacitance converter to convert between common values like nF to uF. Use the chart to look up common capacitor codes. Or use …
Capacitor Characteristics/Specifications
These capacitors have insulation resistance of 10^6 MΩ. Film capacitors make for very good capacitors for AC coupling, when you want to only pass through AC signals and block DC. Capacitor Shelf Life: Capacitor shelf life is the amount of time a capacitor
Back to Capacitor Basics
Tolerance specification: Together with the capacitor's value, its tolerance indicates the likely variation from the stated nominal value—for example, 220pF ±10 %. Standard tolerances include ±5 %
and ±10 %. Electrolytic capacitors typically have a larger
A typical specification for an electrolytic capacitor states a lifetime of 2,000 hours at 85 °C, doubling for every 10 degrees lower temperature, achieving lifespan of approximately 15 years at room
temperature. ... Reference conditions for failure rates and stress models for conversion;
Guide to Integrated Charge Pump DC-DC Conversion
Capacitive voltage conversion is achieved by switching a capacitor periodically. Passive diodes can perform this switching function in the simplest cases, if an alternating voltage is available.
Otherwise, DC voltage levels require the use of active switches, which first charge the capacitor by connecting it across a voltage source and …
Ripple Current and its Effects on the Performance of Capacitors
Optimal selection of bulk capacitors in flyback converter
To optimize the selection of the bulk capacitor in a flyback converter, this paper proposes a method based on the lifetime and volume of aluminum electrolytic capacitors (Al e-caps). ... Power loss
analysis of bulk capacitors3.1. Specification of the converter. The main parameters of the flyback converter designed in this paper are … | {"url":"https://iniron.pl/22_05_24_19619.html","timestamp":"2024-11-13T00:58:13Z","content_type":"text/html","content_length":"21637","record_id":"<urn:uuid:a8b33500-6af4-4507-8525-c83e11324532>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00787.warc.gz"} |
1040 - Costume Party
It's Halloween! Farmer John is taking the cows to a costume party,
but unfortunately he only has one costume. The costume fits precisely
two cows with a length of S (1 <= S <= 1,000,000). FJ has N cows
(2 <= N <= 20,000) conveniently numbered 1..N; cow i has length L_i
(1 <= L_i <= 1,000,000). Two cows can fit into the costume if the
sum of their lengths is no greater than the length of the costume.
FJ wants to know how many pairs of two distinct cows will fit into
the costume.
* Line 1: Two space-separated integers: N and S
* Lines 2..N+1: Line i+1 contains a single integer: L_i
* Line 1: A single integer representing the number of pairs of cows FJ
can choose. Note that the order of the two cows does not matter.
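Purely as an illustration (not part of the original problem statement), a standard sort-and-two-pointer sketch for this counting task:

# Sketch of the usual two-pointer approach: sort the lengths, then for each feasible
# smallest cow count how many larger cows still fit within the costume length S.
import sys

def count_pairs(S, lengths):
    lengths.sort()
    lo, hi = 0, len(lengths) - 1
    pairs = 0
    while lo < hi:
        if lengths[lo] + lengths[hi] <= S:
            pairs += hi - lo      # cow lo pairs with every cow in (lo, hi]
            lo += 1
        else:
            hi -= 1
    return pairs

def main():
    data = sys.stdin.read().split()
    n, S = int(data[0]), int(data[1])
    lengths = [int(x) for x in data[2:2 + n]]
    print(count_pairs(S, lengths))

if __name__ == "__main__":
    main()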
sample input
sample output
USACO JAN08 | {"url":"http://hustoj.org/problem/1040","timestamp":"2024-11-13T15:20:45Z","content_type":"text/html","content_length":"8026","record_id":"<urn:uuid:25e6dbad-215d-45df-ac37-836d21f37b0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00446.warc.gz"} |
Operator Algebras and Non-commutative Geometry
The subject of operator algebras has its origins in the work of Murray and von Neumann concerning mathematical models for quantum mechanical systems. During the last thirty years, the scope of the
subject has broadened in a spectacular way and now has serious and deep interactions with many other branches of mathematics: geometry, topology, number theory, harmonic analysis and dynamical
Alain Connes' program of non-commutative geometry is based on the fact that any commutative C*-algebra is isomorphic to an algebra of continuous functions on some space. The aim of the program is to
develop the tools of geometry in the setting where a commutative algebra of functions is replaced by a non-commutative algebra of operators.
Operator algebras has a strong history in Canada, but the establishment of substantial groups of researchers in the west (particularly Victoria, Regina and Alberta) is more recent. The proposed
period of concentration will be part of the development of this group and its profile worldwide. The participation of people like Cuntz, Arveson, Higson, Rieffel and Connes is a result of the
collaborative efforts of the group and attests to its standing. The program of activities demonstrates that the group is involved in the latest developments in the field and will, in turn, establish
western Canada as a world leader.
CRG Leaders:
1. Douglas Farenick, University of Regina
2. Marcelo Laca, University of Victoria
3. Anthony Lau, University of Alberta
4. Ian Putnam, University of Victoria
Scientific Activities:
Year 2009
• 37th Canadian Operator Symposium, University of Regina, May 26-30, 2009
• KMS States and Non-Commutative Geometry, University of Victoria, June 29-July 10, 2009
• 2009 Northwest Functional Analysis Seminar, Banff, Alberta, October 16-18, 2009
Year 2010
Year 2011
Year 2009
• Adam Rennie, Australian National University, June 23 - July 10, 2009
• Alan Carey, Australian National University, June 23 - July 10, 2009
• Fyodor Sukochev, University of New South Wales, June 26 - July 9, 2009
• Christian Skau, Norwegian University of Science and Technology, Aug. 10 - 14, 2009
Year 2010
• Astrid an Huef, University of Otago, June 26 - July 9, 2010
• Iain Raeburn, University of Otago, June 26 - July 9, 2010
• Jerry Kaminker, Indiana University- Purdue University Indianapolis, June 26- July 9, 2010
• Joachim Cuntz, Universitat Munster, Oct. 1 - 30, 2010 (PIMS Distinguished Chair)
Year 2011
Postdoctoral Fellows
• Bodgan Nica, September 2009 - August 2011
• Antoine Julien, September 2010 - August 2012 | {"url":"https://staging.pims.math.ca/programs/scientific/collaborative-research-groups/past-crgs/operator-algebras-and-non-commutative","timestamp":"2024-11-10T18:54:50Z","content_type":"text/html","content_length":"472747","record_id":"<urn:uuid:9df2bd80-65bf-469b-be3d-19b344717774>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00560.warc.gz"} |
Mathematical Reasoning: Patterns, Problems, Conjectures, and Proofs
Mathematical Reasoning
Mathematical Reasoning Patterns, Problems, Conjectures, and Proofs
Raymond S. Nickerson
Psychology Press Taylor & Francis Group New York London
Psychology Press Taylor & Francis Group 270 Madison Avenue New York, NY 10016
Psychology Press Taylor & Francis Group 27 Church Road Hove, East Sussex BN3 2FA
© 2010 by Taylor and Francis Group, LLC This edition published in the Taylor & Francis e-Library, 2011. To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of
thousands of eBooks please go to www.eBookstore.tandf.co.uk. Psychology Press is an imprint of Taylor & Francis Group, an Informa business International Standard Book Number: 978-1-84872-827-1
(Hardback) For permission to photocopy or use material electronically from this work, please access www. copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc.
(CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been
granted a photocopy license by the CCC, a separate system of payment has been arranged. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for
identification and explanation without intent to infringe. Library of Congress Cataloging-in-Publication Data Nickerson, Raymond S. Mathematical reasoning : patterns, problems, conjectures, and
proofs/ Raymond Nickerson. p. cm. Includes bibliographical references and index. ISBN 978-1-84872-827-1 1. Mathematical analysis. 2. Reasoning. 3. Logic, Symbolic and mathematical. 4. Problem
solving. I. Title. QA300.N468 2010 510.1’9--dc22 Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the Psychology Press Web site at http://www.psypress.com ISBN 0-203-84802-0
Master e-book ISBN
Contents
The Author
Preface
What Is Mathematics?
Deduction and Abstraction
Informal Reasoning in Mathematics
Representation in Mathematics
Predilections, Presumptions, and Personalities
Esthetics and the Joys of Mathematics
The Usefulness of Mathematics
Foundations and the “Stuff” of Mathematics
Preschool Development of Numerical and Mathematical Skills
Mathematics in School
Mathematical Problem Solving
Final Thoughts
Appendix: Notable (Deceased) Mathematicians, Logicians, Philosophers, and Scientists Mentioned in the Text
Author Index
Subject Index
The Author
Raymond S. Nickerson is a research professor at Tufts University, from which he received a PhD in experimental psychology, and is retired from Bolt Beranek and Newman Inc. (BBN), where he was a
senior vice president. He is a fellow of the American Association for the Advancement of Science, the American Psychological Association, the Association for Psychological Science, the Human Factors
and Ergonomics Society, and the Society of Experimental Psychologists. Dr. Nickerson was the founding editor of the Journal of Experimental Psychology: Applied (American Psychological Association),
the founding and first series editor of Reviews of Human Factors and Ergonomics (Human Factors and Ergonomics Society), and is the author of several books. Published titles include:
The Teaching of Thinking (with David N. Perkins and Edward E. Smith)
Using Computers: Human Factors in Information Systems
Reflections on Reasoning
Looking Ahead: Human Factors Challenges in a Changing World
Psychology and Environmental Change
Cognition and Chance: The Psychology of Probabilistic Reasoning
Aspects of Rationality: Reflections on What It Means to Be Rational and Whether We Are
Preface
What does it mean to reason well? Do the characteristics of good reasoning differ from one context to another? Do engineers reason differently, when they reason well, than do lawyers when they
reason well? Do physicians use qualitatively different reasoning principles and skills when attempting to diagnose a medical problem than do auto mechanics when attempting to figure out an automotive
malfunction? Is there anything about mathematics that makes mathematical reasoning unique, or at least different in principle from the reasoning that, say, nonmathematical biologists do? As a
psychologist, I find such questions intriguing. I do not address them directly in this book, but mention them to note the context from which my interest in reasoning in mathematics stems. Inasmuch as
I am not a mathematician, attempting to write a book on this subject might appear presumptuous—and undoubtedly it is. My excuse is that I wished to learn something about mathematical reasoning and I
believe that one way to learn about anything—an especially good one in my view—is to attempt to explain to others, in writing, what one thinks one is learning. But can a nonmathematician hope to
understand mathematical reasoning in a more than superficial way? This is a good question, and the writing of this book represents an attempt to find out if one can. I beg the indulgence of my
mathematically sophisticated friends and colleagues if specific attempts at exposition reveal only mathematical naiveté. I count on their kindness to give me some credit for trying. Although much has
been written about the importance of the teaching and learning of mathematics at all levels of formal education, and much angst has been expressed about the relatively poor job that is being done in
American schools in this regard, especially at the primary and secondary levels, the fact is that mathematics is not of great interest to most people. Hammond (1978) refers to mathematics as an
“invisible culture” and raises the question as to what it is in the nature of “this
unique human activity that renders it so remote and its practitioners so isolated from popular culture” (p. 15). One conceivable answer is that the fundamental ideas of mathematics are inherently
difficult to grasp. Certainly the world of mathematics is populated with objects that are not part of everyday parlance: vectors, tensors, twisters, manifolds, geodesics, and so forth. Even concepts
that are relatively familiar can quickly become complex when one begins to explore them; geometry, for example, which we know from high school math deals with properties of such mundane objects as
points, lines, and angles, encompasses a host of more esoteric subdisciplines: differential geometry, projective geometry, complex geometry, Lobachevskian geometry, Riemannian geometry, and
Minkowskian geometry, to name a few. But, even allowing that there are areas of mathematics that are arcane, and will remain so, to most of us, mathematics also offers countless delights and uses for
people with only modest mathematical training. I hope I am able to convey in this book some sense of the fascination and pleasure that is to be found even in the exploration of only limited parts of
the mathematical domain. Although there are a few significant exceptions, mathematicians generally are notoriously bad about communicating their subject matter to the general public. Why is that the
case? Undoubtedly, many mathematicians are sufficiently busy doing mathematics that they would find an attempt to explain to nonmathematicians what they are doing to be an unwelcome, time-consuming
distraction. There is also the possibility that the abstract nature of much of mathematics is extraordinarily difficult to communicate in lay terms. Steen (1978) makes this point and contrasts the
“otherworldly vocabulary” of mathematics with the somewhat more concrete terms (“molecules, DNA, and even black holes”) that provide chemists, biologists, and physicists with links to material
reality with which they can communicate their interests. “In contrast, not even analogy and metaphor are capable of bringing the remote vocabulary of mathematics into the range of normal human
experience” (p. 2). Whatever the cause of the paucity of books about the doing of mathematics or about the nature of mathematical reasoning written by mathematicians, we should be especially grateful
to those mathematicians who have proved to be exceptions to the rule: G. H. Hardy, Mark Kac, Imre Lakatos, George Polya, and Stanislav Ulam, along with a few contemporary writers, come quickly to
mind. (Hardy wrote about the doing of mathematics only when he considered himself too old to be able to do mathematics effectively, and expressed his disdain for the former activity; nevertheless,
his Apology provides many insights into the latter.) I have found the writings of these expositors of mathematical reasoning to be not only especially illuminating, but easy and pleasurable to read.
I hope in this book to convey a sense of the enriching experience that reflection on mathematics can provide even to those whose mathematical knowledge is not great. I owe thanks to several people
who generously read drafts of sections of the book and gave me the benefit of much insightful and helpful feedback. These include Jeffrey Birk, Susan Chipman, Russell Church, Carol DeBold, Francis
Durso, Ruma Falk, Carl Feehrer, Samuel Glucksberg, Earl Hunt, Peter Killeen, Thomas Landauer, Duncan Luce, Joseph Psotka, Judah Schwartz, Thomas Sheridan, Richard Shiffrin, Robert Siegler, and
William Uttal. I am especially grateful to Neville Moray, who read and commented on the entire manuscript. Stimulating and enlightening conversations on matters of psychology and math with son,
Nathan Nickerson, and colleagues at Tufts, especially Susan Butler, Richard Chechile, and Robert Cook, have been most helpful and enjoyable. Special thanks go also to granddaughters Amara Nickerson
for critically reading the chapters on learning math and problem solving from the perspective of a first-year teacher with Teach America, and Laura Traverse for pulling the cited references out of a
cumbersome master reference file and catching various grammatical blunders in the manuscript in the process. Thanks also to Paul Dukes and Marsha Hecht of Psychology Press for their skillful and
amicable guidance of the manuscript to the point of publication. As always, I am profoundly grateful to my wife, Doris, whose constant love and support are gifts beyond measure. It is a great
pleasure to dedicate this book to our youngest grandson, Landon Traverse, whose progress in learning to count and reckon is fascinating and wonder-evoking to observe.
Chapter 1
What Is Mathematics?
Mathematics is not primarily a matter of plugging numbers into formulas and performing rote computations. It’s a way of thinking and questioning that may be unfamiliar to many of us, but is available
to almost all of us. (Paulos, 1995, p. 3)
Mathematics is permanent revolution. (Kaplan & Kaplan, 2003, p. 262)
Many have tried, but nobody has really succeeded in defining mathematics; it is always
something else. (Ulam, 1976, p. 273)
What is mathematics? Is it the “queen of the sciences,” as, thanks to Carl Friedrich Gauss,* it is often called? Or the “most original creation of the human spirit,” as Alfred North Whitehead†
suggests (1956, p. 402)? Or, more prosaically, is it, as George Polya claims it appears to be to many students, “a set of rigid rules, some of which you should learn by heart before the final
examinations, and all of which you should forget afterwards” (1954b, p. 157)? Is it the one area of knowledge in which absolute truth is possible? Or is it “fundamentally a human enterprise arising
from human activities” (Lakoff & Núñez, 2000, p. 351), and therefore “a necessarily imperfect and revisable endeavor” (Dehaene, 1997, p. 247)? Do the truths of mathematics exist independently of the
minds that discover them, or are they human inventions? Birth and death dates of (deceased) mathematicians, logicians, and philosophers mentioned in this book are given in the Appendix. † Whitehead
noted that music might also make this claim, and he did not attempt to settle the matter.
Are they timeless and culture free? Or do they rest on assumptions that can differ over time and place? Are mathematics and logic one and the same? Does mathematics spring from logic, or logic from
mathematics? Or is it the case, as Polkinghorne argues, that “mathematical truth is found to exceed the proving of theorems and to elude total capture in the confining meshes of any logical net”?
(1998, p. 127). Is mathematics, as Bertrand Russell famously said, “the subject in which we never know what we are talking about, nor whether what we are saying is true” (1901/1956a, p. 1576)? Not
surprisingly, a diversity of opinions can be found regarding the answers to these and many related questions. In particular, as Hammond (1978) reminds us, “Mathematicians do not agree among
themselves whether mathematics is invented or discovered, whether such a thing as mathematical reality exists or is illusory” (p. 16). In this book, I shall use the terms invention and discovery more
or less interchangeably in reference to mathematical advances, having not found an entirely convincing argument to prefer one over the other. There are numerous views as to what constitutes the
essence of mathematics, especially among mathematicians. A major purpose of this book is to explore some of those views and to catch a glimpse of what it means to reason mathematically. I suspect
that for many people, mathematics is synonymous with computation or calculation. Doing mathematics, according to this conception, amounts to executing certain operations on numbers—addition,
subtraction, multiplication, division. More complex mathematics might involve still other operations—raising a number to a specified power, finding the nth root of a number, finding a number’s prime
factors, integrating a function. Computation is certainly an important aspect of mathematics and, for most of us, perhaps the aspect that has the greatest practical significance. Knowledge of how to
perform the operations of basic arithmetic is what one needs in order to be able to make change, balance a checkbook, calculate the amount of a tip for service, make a budget, play cribbage, and so
on. Moreover, the history of the development of computational techniques is an essential component of the story of how mathematics got to be what it is today. But, important as computation is, it
plays a minor role, if any, in much of what serious mathematicians do when they are engaged in what they consider to be mathematical reasoning. Whatever else may be said about mathematics, even the
casual observer will be struck by the rich diversity of the subject matter it subsumes. Ogilvy (1956/1984) suggests that mathematics can be roughly divided into four main branches—number theory,
algebra, geometry, and analysis—but each of these major branches subsumes many subspecialties, each of which can be portioned into narrower subsubspecialties. What is it that the myriad forms of
mathematical activity have in common that justifies referring to them all with the same name?
Sternberg (1996) begins a commentary on the chapters of a book on the nature of mathematical thinking that he edited with Ben-Zeev (Sternberg & Ben-Zeev, 1996), with the observation that the chapters
make it clear that “there is no consensus on what mathematical thinking is, nor even on the abilities or predispositions that underlie it” (p. 303). He cautions the futility of the hope of
understanding mathematical thinking in terms of a set of features that are individually necessary and jointly sufficient to define it, and expresses doubt even of the possibility of characterizing it
in terms of a prototype, as has proved to be effective with other complex concepts. This seems right to me. There are many varieties of mathematical thinking. And mathematicians are a diverse lot of
people, reflecting an unbounded assortment of interests, abilities, attitudes, and working styles. Nevertheless, there are, I believe, certain ideas that are especially descriptive of the doing of
mathematics and that hold it together as a unified discipline. Among these are the ideas of pattern, problem solving, conjecture, and proof.
Mathematics as the Study of Pattern
At the heart of mathematics is the search for regularity, for structure, for pattern. As Steen (1990) puts it, “Mathematics is an exploratory science that seeks to understand every kind of
pattern—patterns that occur in nature, patterns invented by the human mind, and even patterns created by other patterns” (p. 8). And, as Whitehead (1911) and Hammond (1978) note, it is the most
powerful technique for analyzing relations among patterns. Sometimes mathematics is referred to simply as “the science of patterns” (Devlin, 2000a, p. 3; Steen, 1988, p. 611). The types of patterns
that mathematicians look for and study include patterns of shapes, patterns of numbers, patterns in time, and patterns of patterns. What is the pattern of constant-radius spheres when they are packed
in the most efficient way possible? What is the pattern that describes the distribution of prime numbers? How does one tell whether a pattern that describes a relationship between or among
mathematical entities in all known instances is descriptive of the relationship generally? (For all cases checked, every even number greater than 2 is the sum of two primes; how does one know whether
it is true of all numbers?) Is it possible to find a single pattern of relationships in the falling of a feather and a stone, and the motion of the moon around the Earth and that of the planets
around the sun?
The detection of patterns has been of great interest to scientists as well as to mathematicians. At least one account of human cognition makes pattern recognition the fundamental basis of all thought
(Margolis, 1987). That the same patterns are observed again and again in numerous contexts in nature—in crystals, in biological tissues, in structures built by organisms, in effects of physical
forces on inanimate matter (Stevens, 1974)—demands an explanation. The importance of pattern in art is obvious; it is expressed in numerous ways, and one need not be a mathematician to appreciate or
to produce it. Maor (1987) says of Johann Sebastian Bach and Maurits C. Escher: “Both had an acute sense for pattern, rhythm, and regularity—temporal regularity in Bach’s case, spatial in Escher’s.
Though neither would admit it (or even be aware of it), both were experimental mathematicians of the highest rank” (p. 176). Hofstadter (1979) gives numerous examples of the role of pattern in the
work of both men. The detection of a local pattern, some regularity among a limited set of mathematical entities (numbers, shapes, functions), can be the stimulus to a search for a general principle
from which the observed pattern would follow. Often such observations have prompted conjectures, and these conjectures sometimes have stood as challenges to generations of mathematicians who have
sought to prove them and elevate them to theorem status. Cases in point include Gauss’s conjecture, made when he was 15, that the number of prime numbers between 1 and n is approximately n/loge n;
Christian Goldbach’s conjecture, dating from 1742, that every even number greater than 2 is the sum of two primes; Pierre de Fermat’s “last theorem,” that xn + yn = zn is not solvable for n > 2; and
Georg Riemann’s zeta conjecture. These conjectures and others will be noted again in subsequent chapters. Unlike the conjectures of Gauss, Goldbach, and Fermat, that of Riemann—usually referred to as
the Riemann hypothesis, is likely to appear arcane to nonmathematicians, but the—so far unsuccessful—search for a proof of it has been a quest of many first-rate mathematicians. For present purposes
it suffices to note that the Riemann hypothesis has to do with the distribution of prime numbers and involves the zeta function
ζ(x) = Σ_{n=1}^{∞} 1/n^x,
first noted by Leonard Euler, and about which more is given in Chapter 3. To all appearances the distribution of primes is chaotic and completely unpredictable, but proof of the Riemann hypothesis
would make it possible to locate easily a prime of any specified number of digits, and would
have other implications for number theory as well. The interested reader is referred to a beautiful book-length treatment of the Riemann hypothesis by Marcus du Sautoy (2004), who, in the subtitle of
his book, refers to the question of the distribution of primes as “the greatest mystery of mathematics.” Local patterns do not always signal more general relationships, so one cannot simply
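(An aside not found in Nickerson's text: the chapter's point about checking local patterns can be made concrete with a few lines of Python, verifying Goldbach's conjecture for small even numbers and comparing the prime count π(n) with Gauss's estimate n/ln n. Such checks are suggestive, but, as the chapter stresses, they prove nothing in general.)

# Quick numerical illustration only: it checks cases, it does not prove anything.
from math import log

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(100_000)
prime_set = set(primes)

# Goldbach: every even number greater than 2 checked here is a sum of two primes.
assert all(any((e - p) in prime_set for p in primes if p <= e // 2)
           for e in range(4, 10_000, 2))

# Gauss's conjecture: pi(n) is roughly n / ln(n).
for n in (1_000, 10_000, 100_000):
    pi_n = sum(1 for p in primes if p <= n)
    print(n, pi_n, round(n / log(n), 1))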
extrapolate beyond what one has actually observed, unless one has the authority of a proof on which to base such an extrapolation. We will return to this point in Chapter 5, but will illustrate it
now with an example from solid geometry. A convex deltahedron is a solid, all the faces of which are equilateral triangles of the same size. (The term deltahedron comes from the solid’s triangular
face resembling the Greek letter delta.) The smallest possible deltahedron is a tetrahedron, and the largest is an icosahedron. These are the smallest (4-faced) and largest (20-faced) of the five
Platonic solids. That it is impossible to make a solid with less than four equilateral triangles should be obvious. That it is impossible to make one with more than 20 such triangles follows from the
necessity, in order to do so, to have six triangles meet at a corner (five meet at a corner in the icosahedron), and the only way to do this is to have all six be in the same plane. How many
deltahedra should it be possible to make counting the smallest and largest possibilities? Because every triangle has three sides and every edge of a deltahedron must be shared by two triangles, the
finished shape with n triangles will have 3n/2 edges, and because the shape must have an integral number of edges, the number of triangles used, n, must be a multiple of 2. If we tried making a
six-faced deltahedron, we would find it is possible to do so. We would also discover, if we tried, that we could make deltahedra with 8 faces, with 10, with 12, and so forth. At some point in this
experiment, we might begin to suspect, perhaps even to convince ourselves, that deltahedra can be made with any even number of triangles between 4 and 20 inclusive. If we continued making these
things, our confidence in this conjecture would probably be quite high by the time we succeeded in making 14- and 16-faced forms. But surprise! Try as we might, we would not succeed in making one
with 18 faces. In short, local regularities can be misleading. Inferring a general pattern from a local pattern is risky. If there is a general pattern, a local pattern will be observed, but the
converse is not necessarily true, which is to say that if a local pattern is observed, a general pattern of which it is an instance may or may not exist. Not surprisingly, mathematicians are
especially interested in finding patterns that are general; they take it as a challenge to distinguish between those that are and those that are not.
Mathematics as Problem Solving
As well as regarding mathematics as the study of patterns, mathematics can be viewed, pragmatically, as a vast collection of problems of certain types and of
approaches that have proved to be effective in solving them. Pure mathematicians may not appreciate this view, but it is a viable one nevertheless. Some mathematicians see the essence of mathematics
to be problem solving (Halmos, 1980; Polya, 1957, 1965). Casti (2001) puts it this way: “The real raison d’etre for the mathematician’s existence is simply to solve problems. So what mathematics
really consists of is problems and solutions” (p. 3). The really “good” problems, in Casti’s view— those that become recognized as “mathematical mountaintops”—are those that challenge the best
mathematical minds for centuries. Really good problems are really good, not only by virtue of being difficult, but because attempts to solve them commonly contribute to the development of whole new
fields of mathematics. Casti’s selection of the five most famous mathematical problems of all time is shown in Table 1.1. More will be said about some of these problems in subsequent chapters. A
distinction is sometimes made between knowing mathematics and doing mathematics. Romberg (1994b) illustrates the distinction by drawing an analogy with the difference between having knowledge about
other activities (flying an airplane, playing a musical instrument) and actively engaging in them. Schoenfeld (1994b) likens the doing of mathematics with the doing of science—“a ‘hands-on,’
data-based enterprise” with “a significant empirical component, one of data and discovery. What makes it mathematics rather than chemistry or physics or biology is the unique character of the objects
being studied and the tools of the trade” (p. 58). The nature of the problems that present themselves in pure and applied math may differ, but the need for problem solving spans the range from the
purest to the most highly applied. Among other benefits, mathematical training gives one the ability to solve many complex problems in a stepwise fashion, by formulating the problems in such a way
that the successive application of well-defined symbol transformation rules will take one from the problem statements to the expressions of the desired solutions. There is the view too that one of
the purposes that mathematics serves is to make problem solving—or reasoning more generally—easier or, in some cases, unnecessary. Austrian physicist-philosopher Ernst Mach (1906/1974) attributes to
Joseph Louis Lagrange the objective, in his Méchanique analytique, “to dispose once and for all of the reasoning necessary to resolve mechanical problems, by embodying as much as possible of it in a
single formula” (p. 561). Mach believed that Lagrange
Table 1.1. The Five Most Famous Mathematical Problems of All Time, According to Casti (2001)
1. Determination of the solvability of a Diophantine equation: "To devise a process according to which it can be determined by a finite number of operations whether a Diophantine equation is solvable in rational integers" (p. 12). A Diophantine equation is a polynomial equation, all the constants and variables of which, as well as the solutions of interest, are integers.
2. Proof of the four-color conjecture: The conjecture that four colors suffice to color a two-dimensional map so that no two contiguous regions have the same color.
3. Proof of Cantor's continuum hypothesis: There is no infinity between that of the natural numbers and that of the reals.
4. Proof of Kepler's conjecture: Face-centered cubic packing of spheres of the same radius is the optimal packing arrangement (yields the smallest ratio of unfilled-to-filled space).
5. Proof of Fermat's "last theorem": The equation x^n + y^n = z^n has no solutions for n > 2.
was successful in attaining this objective, that his method made it possible to deal with a class of problems mechanically and unthinkingly. “The mechanics of Lagrange,” Mach contends, “is a
stupendous contribution to the economy of thought” (p. 562). It seems clear that discoveries of mathematical equations that are descriptive of relationships in the physical world have made reasoning
about those relationships easier, or even perhaps unnecessary in some instances, but it seems equally clear that the same discoveries have had the effect of extending the range of reasoning,
unburdening it in some respects and thereby enabling it to function at higher levels.
Mathematics as Making Conjectures
Quibbles aside, every theorem is born as a conjecture. Like the rest of us, the mathematician’s seeing-that precedes reasoning-why. But it is an important point (as many mathematicians will confirm) that the mathematician
comes to see his conjecture as a theorem at some stage prior to, and indeed usually a good deal prior to, being able to prove it. (Margolis, 1987, p. 84)
The idea that theorems typically begin life as conjectures is easy to accept, but what evokes conjectures that eventually become theorems is not known very precisely. Inductive reasoning, searching
for patterns and regularities, and playing with ideas often appear to be involved. Conjectures in this context are not wild guesses; mathematicians usually attempt to prove only conjectures they
believe likely to be true (vis-à-vis the axioms of some mathematical system). They often have claimed to have been convinced of the truth of a conjecture long before being able to construct a
rigorous proof of it. And sometimes they have described “seeing” a proof in its entirety before laying out the sequential argument explicitly, which is not unlike the reported experience of some
composers of having had a conception of a complex composition as a whole before writing down any music. Penrose (1989) gives a hint of this ability to see arguments, in some sense, in their entirety
before being aware of their parts. “People might suppose that a mathematical proof is conceived as a logical progression, where each step follows upon the ones that have preceded it. Yet the
conception of a new argument is hardly likely actually to proceed in this way. There is a globality and seemingly vague conceptual content that is necessary in the construction of a mathematical
argument; and this can bear little relation to the time that it would seem to take in order fully to appreciate a serially presented proof” (p. 445). In her biography of John Nash, Sylvia Nasar
(1998) says, “Nash always worked backward in his head. He would mull over a problem and, at some point, have a flash of insight, an intuition, a vision of the solution he was seeking. These insights
typically came early on, as was the case, for example, with the bargaining problem, sometimes years before he was able, through prolonged effort, to work out a series of logical steps that would lead
one to his conclusion” (p. 129). Nasar notes that other great mathematicians, including Riemann, Poincaré, and Wiener, worked in a similar way. It is not necessary to assume that all such “flashes of
insight” have turned out to be correct in order to appreciate the importance of the kind of reasoning that has led to them and the difference between it and the “series of logical steps” that could
be used to convince others of the truth—in the mathematical sense—of those that have proved to be true. Any adequate theory of mathematical reasoning must be able to account both for the exploratory
ruminations that produce the insights—or productive conjectures—and for the process of constructing logically tight arguments that justify the conjectured conclusions.
Just as individual mathematicians make use of hunches and conjectures before they have been proved, the history of mathematics has many examples of concepts being used long before they have been
justified or defined in any formal way. Moreover, concepts that we are likely to consider to have intuitively obvious meanings may have become “intuitively obvious” only as a consequence of
familiarity through common usage over a long time; many of these concepts were used much more tentatively, if at all, by mathematicians of previous centuries. There are few concepts, for example,
that are more useful in mathematics today than those of variable and function. A variable is an entity whose value is not fixed. A function expresses the relationship between two or more variables;
in particular, it shows how the value of one variable (the dependent variable) depends on the value or values of one or more other variables (the independent variables). A more general definition of
a function could be “any rule that takes objects of one kind and produces new objects from them” (Devlin, 2002, p. 26). The idea of dependence among variables is basic not only to mathematics, but to
science as well, and it may come as something of a surprise that neither the idea of variable nor that of function was prominent before the time of René Descartes. Though Isaac Newton and Gottfried
Leibniz both succeeded in developing the differential calculus—the study of continuous change—to the point at which it could be applied effectively to problems of interest, neither was able to
provide an adequate logical foundation for the subject, nor was anyone else, in spite of concerted efforts to do so, for about 150 years. Both Newton and Leibniz were guided more by intuition than by
logic. American mathematician-historian of mathematics Morris Kline (1953a) refers to the recognition of the relationship between the general concept of rate of change and the determination of
lengths, areas, and volumes as “the greatest single discovery made by Newton and Leibniz in the Calculus” (p. 224), and to the calculus itself as the richest of all the veins of thought explored by
geniuses of the 17th century. He also points to the roles of Newton and Leibniz in the development of the calculus as compelling refutations to the popular conception of mathematicians reasoning
perfectly and directly to conclusions. Probably no one has done more to call attention to the importance of conjectural and inductive thinking in mathematics than Hungarian-American mathematician
George Polya, who distinguishes “finished mathematics” from “mathematics in the making.” The former consists of the demonstrative reasoning of deductive proofs; the latter resembles other human
knowledge in the making: “You have to guess the mathematical theorem before you prove it: you have to guess the idea of the proof before you carry through the details. You have to combine
and follow analogies: you have to try and try again. The result of the mathematician’s creative work is demonstrative reasoning, a proof; but the proof is discovered by plausible reasoning, by
guessing” (Polya, 1954a, p. vi). It is important, Polya held, for the student to learn not only to distinguish a proof from a guess, but also to tell the difference between more and less reasonable
guesses. “To be a good mathematician, or a good gambler, or good at anything, you must be a good guesser” (p. 111). At the beginning of this section, I quoted Margolis’s observation that “every
theorem is born as a conjecture.” It is not the case, however, that every conjecture ends up being proved; some do and others do not. There are many famous conjectures that have been conjectures for
a very long time and remain so despite the countless hours that first-rate mathematicians have spent trying to prove them. Others have been shown to be false. No one knows what percentage of the
conjectures that mathematicians make eventually prove to be true. It could be that most conjectures are wrong. If that is the case, it does not follow that the making of conjectures is a waste of
time. The exploration of conjectures that turned out to be wrong has often led to important discoveries and the development of new areas of mathematical inquiry. Arguably, this is because the
conjectures that mathematicians make are generally made in a context that is rich in mathematical knowledge. Seldom are inspired mathematical conjectures made by people who have little understanding
of mathematics. The reasoning of the mathematician and that of the scientist are similar to a point. Both make conjectures often prompted by particular observations. Both advance tentative
generalizations and look for supporting evidence of their validity. Both consider specific implications of their generalizations and put those implications to the test. Both attempt to understand
their generalizations in the sense of finding explanations for them in terms of concepts with which they are already familiar. Both notice fragmentary regularities and—through a process that may
include false starts and blind alleys—attempt to put the scattered details together into what appears to be a meaningful whole. At some point, however, the mathematician’s quest and that of the
scientist diverge. For scientists, observation is the highest authority, whereas what mathematicians seek ultimately for their conjectures is deductive proof.
☐☐ Mathematics as Proof Making

Mathematics seems to be a totally coherent unity with complete agreement on all important questions; especially with the notion of proof, a procedure by which a
proposition about the unseen reality can be established
with finality and accepted by all adherents. It can be observed that if a mathematical question has a definite answer, then different mathematicians, using different methods, working in different
centuries, will find the same answer. (Davis & Hersh, 1981, p. 112)
If conjecture is the engine that powers the mathematical train, proof is the intended destination. The centrality of the idea of proof in mathematics is widely acknowledged. Devlin (2000a) refers to
proof as the only game in town when it comes to establishing mathematical truth. Romberg (1994) expresses the same idea in noting that for a proposition to be considered a mathematical product it
must be rigorously proved by a logical argument. Aczel (2000) says that statements that lack proofs carry little weight in mathematics. Noted American historian of mathematics Eric Temple Bell (1945/
1992) sees proof as the sine qua non of mathematics: “Without the strictest deductive proof from admitted assumptions, explicitly stated as such, mathematics does not exist” (p. 4). Whether the
proofs that mathematicians achieve are ever fully deductive is debatable; more will be said on this issue in subsequent chapters, especially Chapter 5. Proof in mathematics is sometimes characterized
as the counterpoint to intuition. The need for proofs is seen in an observation by Ogilvy (1956/1984) that “in mathematics, so many of the things that are ‘obviously’ true aren’t true” (p. 19).
Kaplan (1999) speaks of “the ever present tension between intuition and proof,” which he describes lyrically as follows. These are the two poles of all mathematical thought. The first centers the
free play of mind, which browses on the pastures of phenomena and from its rumination invents objects so beautiful in themselves, relations that work so elegantly, both fitting in so well with our
other inventions and clarifying their surroundings, that world and mind stand revealed each as the other’s invention, conformably with the unique way that Things Are. After invention the second
activity begins, passing from admiring to justifying the works of mind. Its pole is centered in the careful, artful deliberations which legalize those insights by deriving them, through a few
deductive rules, from the Spartan core of axioms (a legal fiction or two may be invented along the way, but these will dwindle to zero once their facilitating is over). What emerges, safe from error
and ambiguity, others in remote places and times may follow and fully understand. (p. 159)
Without denying the usefulness of the distinction between intuition and proof, I believe it can be drawn too sharply; intuition plays an essential role in the making and evaluating of proofs and is
sometimes changed as a consequence of these processes. In this respect, the distinction is like that between creative and critical thinking; while this
distinction too is a useful one, it is not possible to have either in any very satisfactory sense without the other. Greeno (1994) sees an analogy between the role that proofs play in mathematics and
the one that observations and experiments play in science. Just as observations and experiments provide evidence of the tenability of theoretical assertions in science, so proofs provide evidence of
the truth of theorems in mathematics. The evidence provided by a proof in mathematics is qualitatively different from that provided by empirical observations in science, but the analogy holds in the
sense that learning mathematics without an appreciation of the role of proofs would be as disabling as learning science without appreciation of the role of empirical evidence.
Why Is Mathematics Important?

I would not wish to have you possessed by the notion that the pursuit of mathematics by human thought must be justified by its practical uses in life. (Forsyth, 1928/1963, p. 45)
The presumed practical importance of numeracy in modern society is reflected in the stress placed on its acquisition by formal education. Much of the curriculum during the first few years of school
is devoted to learning the natural numbers and methods of manipulating them. This emphasis is often justified by assertions of the importance of mathematical competence for the demands of modern
life. The editors of the report of the National Research Council’s Mathematics Learning Study Committee contend that “the growing technological sophistication of everyday life calls for universal
facility with mathematics” (Kilpatrick, Swafford, & Findell, 2001, p. 16). The more recent final report of the National Mathematics Advisory Panel (2008) states that while there is reason enough for
the often expressed concern about the implications of mathematics and science for “national economic competitiveness and the economic well-being of citizens and enterprises,” the more fundamental
recognition is “that the safety of the nation and the quality of life—not just the prosperity of the nation—are at issue” (p. xi). Somewhat ironically, despite the great emphasis that is put on the
practical importance of mathematics as a justification for requiring mathematics beyond arithmetic in secondary education, even teachers of math courses may be hard-pressed to make a convincing case.
In a small survey of middle school teachers, Zech et al. (1994) found that only 3 of 25 were able to identify uses of geometry other than the calculation of area
and volume, and 5 could give no ideas of geometry’s practical usefulness. Before condemning the teachers for their inability to answer this question, one might do well to try to imagine how many
people of one’s acquaintance actually use in their daily lives geometry, algebra, or any other area of mathematics—as formally taught in school—beyond arithmetic. Unquestionably, today mathematics is
essential to certain professions—engineering, architecture, accounting, scientific research. But what about people who are not working in these professions, which is to say most people? From the
considerable emphasis that is placed on the importance of having strong math courses in the public secondary schools, one would judge that the prevailing assumption is that for most of us the
acquisition of nontrivial competence in mathematics is a pressing need. Most students are encouraged, if not required, to have courses in several areas of math beyond elementary arithmetic in high
school (algebra, geometry, trigonometry, calculus). I support this emphasis and do not mean to suggest that it is misguided, but I think it important to consider the possibility that for the majority
of students it is not clear that this emphasis on mathematics beyond basic arithmetic serves any practical purpose. I venture the guess that the large majority of them will seldom, if ever, solve a
problem with algebra, or trigonometry, or calculus when they no longer have to do so to satisfy a course requirement. And in most cases, this will not be a major handicap. One need not know how to
solve problems in algebra or other nonelementary areas of mathematics in order to balance a checkbook, shop for groceries, drive a car, cook, maintain a home, and do the countless other things that
daily life demands. Nor is mathematical expertise essential to the performance of most jobs, even those that are intellectually demanding in other respects. (Whether most jobs of the future will be
more or less demanding of mathematical expertise than most jobs of today is an open question; one can find predictions both ways.) Unless one is among the minority of people whose jobs require the
use of higher math, one generally can get along without it quite well. Undoubtedly, poor mathematical skills can limit one’s comprehension of much of the news and commentary that appears in daily
newspapers and other media (Paulos, 1995), and can make one vulnerable to ill-advised financial decisions (Taleb, 2004) and predatory scams as well as to the reporting of true but misleading
statistics for political or other purposes (Huff, 1973), but my sense is that poor mathematical skills in these contexts generally mean the inability to do basic arithmetic or, in some cases, to
think carefully and logically. If I believe this to be so, why do I believe it to be good for high schools to put the emphasis on mathematics that they do? Without denying the practical importance of
math for those students who will
eventually use it in their work, I want to argue that some acquaintance with higher math is beneficial to everyone for at least four reasons. The first reason is that if mathematics is to continue to
advance as a discipline, there is a perennial need for the training of individuals with the potential and desire to become professional mathematicians. And how better to discover such individuals
than by means of their mathematical performance in their early and intermediate school years? From the point of view of the community of professional mathematicians, this is arguably the preeminent
consideration. Dubinsky (1994b) puts it this way: “The issue of concern for most professional mathematicians is the continuation and preservation of the species [of professional mathematicians]: the
education and production of first-rate research mathematicians, people who are from the beginning very talented in mathematics” (p. 47). Dubinsky distinguishes this concern from what would likely be
considered more fundamental by mathematical educators, namely, mathematical literacy—“the raising of the level of mathematical understanding among the general population.” A second reason that I
believe it to be good for elementary and secondary schools to put considerable emphasis on mathematics is the possibility, expressed clearly by Judah Schwartz (1994), that successful mathematical
education means “changing habits of mind so that students become critical, in a constructive way, of all that is served up on their intellectual plates” (p. 2). “Students who have been successfully
educated mathematically,” Schwartz contends, “are skeptical students, who look for evidence, example, counterexample, and proof, not simply because school exercises demand it, but because of an
internalized compulsion to know and to understand” (p. 2). I wish I could point to compelling empirical evidence that this claim is true. Unhappily—and ironically perhaps in view of the nature of the
claim—I cannot. But if it is true, it establishes the importance of a solid grounding in mathematics beyond doubt in my view. And determining whether it is true strikes me as a more important
question for research than many on which much greater effort is expended. In response to a query about the claim, Schwartz (personal communication) expressed the belief that it is more likely to be
true with people whose mathematical education is put to use in scientific and engineering contexts than in purely mathematical contexts. Third, nontrivial knowledge of mathematics is important in the
sense in which some acquaintance with great literature and art, influential philosophies, cultures other than one’s own, and the history of science is important. Some acquaintance with these topics
is part of what it means to be well educated. More importantly, familiarity with such areas of human knowledge and creative work enriches one’s intellectual life immeasurably. It is a disservice to
students, I think, to teach
mathematics as though the only, or even the primary, reason for learning it is the practical use one might make of it. One would not think of trying to justify the teaching of music, literature,
history, or philosophy solely on the basis of the practical utility of exposure to these subjects (not to deny there undoubtedly is some); it makes no more sense to me to do so in the case of
mathematics. Fourth, I believe that, as Friend (1954), Court (1961), Devi (1977), and Pappas (1989, 1993), among others, remind us, there is much pleasure to be had in the pursuit of mathematics,
even at modest levels of expertise. My fuzzy memory of high school math (geometry, algebra, trigonometry) is sadly devoid of any effort by any teacher to convince me, or to demonstrate with personal
enthusiasm, how exciting and wonderful (full of wonder) mathematics can be. I do not mean to demean my teachers; they were good and conscientious people, dedicated to their jobs and the progress of
their students. My guess is that they themselves thought of high school math as something one ought to learn, much as one ought to eat one’s spinach, and that the idea that it could, if presented
properly, be fun would have struck them as more than slightly odd. I have no way of knowing whether my experience is representative of that of others, but would be surprised to learn that it was
unique or even highly unusual. And that, I believe, is a sad reality. There are undoubtedly compelling reasons other than those I have mentioned to teach mathematics in school, beginning at an early
age, but even if that were not the case, these suffice in my view. My hope for this book is that it will help make not only the usefulness of mathematics clear, but also its charm.
☐☐ Mathematics and Psychology

To the psychologist, the development of mathematical competence, both by the species over millennia and by individuals over their lifetimes, is a fascinating aspect of
human cognition. Many questions of psychological interest arise. When and why did the rudiments of mathematical capability first appear among human beings? How and why has mathematics grown into the
richly branching complex of specialties that it is today? From where comes our fascination with and propensity for abstract reasoning? What prompts the emergence of concepts—like the infinite and the
infinitesimal—that appear to be descriptive of nothing in human experience? Why does abstract mathematics, developed with no thought of how it might be applied to the solution of practical problems,
often turn out to be useful in unanticipated ways?
What are the fundamental concepts of mathematics? What is a number? Do species other than human beings have a sense of number? Are they capable of counting, or of doing elementary mathematics in any
meaningful sense? Why do essentially all modern cultures use the same system for representing numbers and counting, even though they do not all speak the same language? How did the system that is
now used nearly universally come to be what it is? What is the basis of mathematicians’ compulsion to prove assertions? What makes a proof a proof, that is, cognitively compelling? How can one be
sure that proofs (e.g., about infinite sets, infinitesimals, and other unobservables) that cannot be verified empirically are correct? What is it about certain mathematical problems that motivates
people to work obsessively on them, sometimes with little to show for their efforts, for years? Are the truths of mathematics discoveries or inventions? What are mathematicians seeing when they
describe a mathematical entity (proof, theorem, equation) as beautiful? How is it that mathematical ideas that seem absurd to one generation can be accepted with equanimity by another? What types of
mathematical awareness do children acquire spontaneously? What do children need to know, what concepts and skills must they have, in order to be able to do well when first introduced to elementary
mathematics? How is the potential for mathematical reasoning best developed through instruction? To what extent are the considerable differences that people show in their interest in mathematics and
in the level of mathematical competence they attain attributable to genetics, or to experience? Is there such a thing as mathematical potential that is distinct from general intelligence? Or a
specifically mathematical disability that is distinguishable from a general cognitive deficit? Do mathematicians share a set of characteristics that distinguish them, as a group, from
nonmathematicians? Why have the vast majority of notable mathematicians been men? It is questions of these sorts that motivate this book. I do not pretend to answer them, but I do hope to provide
some food for thought that is relevant to them, and to many others of a similar ilk.
C H A P T E R

Counting
I regard the whole of arithmetic as a necessary, or at least natural consequence of the simplest arithmetic act, that of counting. (Dedekind, 1872/1901, p. 4)
Ernst Mach (1883/1956) once defined mathematics as “the economy of counting” (p. 1790). The dependence of mathematics and the quantitative sciences on this ability is obvious; perhaps less
immediately apparent but no less real is the importance of counting to the very existence of any technology or organized society beyond the most primitive. How and when counting was invented, or
discovered, no one knows; we can only speculate. Our debt to the fact that it was is immeasurable. Aristotle believed that the ability to count is among the things that make human beings unique. Is
counting a uniquely human capability? Asking that question presumes agreement on what constitutes counting. Is there such agreement? What does constitute counting? These and related questions
motivate this chapter. Inasmuch as the focus of this book is mathematical reasoning as it is done by human beings, discussion of whether animals count may seem out of place, but a brief digression on
the topic is useful in rounding out the discussion of what it means to count, and in providing a broader context in which to understand what may be unique about human capabilities in this regard.
☐☐ What Counts as Counting?

In English, the word count, when used as a verb, can be either transitive or intransitive. When used as a transitive verb, it refers to the enumeration of a set of
entities—the determination of the number of items in the set by a process that focuses successively on each item in the set. To count the words in this sentence is to determine, by enumeration, that
there are 18 of them. When used as an intransitive verb, it refers to the recitation, in order, of the natural numbers. When a proud parent notes that two-year-old Johnny can count, she is probably
claiming that, when asked to display his precocity in this regard, Johnny will recite “one, two, three, …” Von Glasersfeld (1993) refers to such recitation of number words, apart from coordination
with countable items, as “vacuous counting.” In the study of counting the focus has generally been on counting in the first sense, that of enumeration, the determination of quantity. The process of
counting in this sense probably appears to most of us to be straightforward and simple to grasp conceptually. But is it really? The simplicity becomes somewhat less obvious when we consider the
question of what should be taken as evidence that someone, or something, is counting. Suppose we ask a child, “How many fingers do you have?” and he holds up both hands and says, “That many.” Should
we take this as evidence that he can count? Or imagine that we ask, “How many sisters do you have?” and the child holds up three fingers. Assuming that he really has three sisters, should we take
that as evidence that he can count? The second gesture seems to be better evidence of counting ability than the first. In the first case, the child could be simply showing us the things we asked
about (his fingers) and saying in effect, “You count them.” In the second case, there is at least an implicit mapping operation involved. The child is not holding up his sisters for enumeration, but
rather a set of fingers that has in common with his sisters only their number. He is implying, knowingly or not, that if we count his fingers, we will get the same number that we would get if we
counted his sisters. He is using one set to represent another with respect to the property of number. In order to represent the number of his sisters by holding up three fingers, the child need not
be able to attach a name (three, trois, drei) to the number he is representing. Nor is it necessary that he realize that a quantity of three is greater than one of two and less than one of four. That
is, he need not understand numbers in either their cardinal (how many) or ordinal (position in an ordered set) sense. He needs only to know that the number of fingers that he is holding up is the
same as the number of sisters he has. (Of course, if he has been taught to hold up his hand in a certain way when asked how many sisters he has, he need not
know even that.) This is not to suggest that when children use fingers to represent quantities, that is all they know, but simply to say that they need not know more than that. Being able to
verbalize a quantity also is not compelling evidence of an ability to count. The ability to say, for example, that one has three sisters, or that one is three years old, does not require the
knowledge that three is greater than two and less than four. It does not imply an understanding of greater than and less than, or any concepts of quantity at all. One may learn that one has three
sisters, or that one is three years old, in much the same way that one learns that one's name is Sam, or that one's address is 17 Oak Street. Similarly, one may learn to repeat a sequence of
words—one, two, three, four, …—without understanding how to apply these words to the task of counting. Suppose that an individual is able to reproduce a sequence of, say, four taps accurately but is
unable to say how many taps there are in the sequence and is unable to produce four taps on request without having a model to copy. It is easy to imagine being able to reproduce a pattern of a
sequence of a small number of events (four taps) without having a concept of number. One need only store a rhythmic pattern (tap-tap-tap-tap) and reproduce it as a whole. One suspects that moderately
long sequences can be stored and reproduced without counting, by the use of rhythmic groupings (di-dah-di-dah—di-dah-di-dah—di-dah-di-dah). It is not clear what the upper limit of one’s ability to
retain and reproduce such sequences without resorting to counting is, but it is conceivable that one might be able to reproduce such sequences of modest length accurately—as, for example, in scat
singing of jazz—even if one had no concept of number and was unable to specify which of two sequences had the greater number of elements. The ability to distinguish many from few is not compelling
evidence that one can count, inasmuch as such distinctions can be made on the basis of differences in gross perceptual features of collections. The same may be said regarding the ability to
distinguish more from fewer, although the extent to which this distinction might be assumed to depend on counting seems likely to vary with the specifics of the sets with respect to which the
distinction is to be made—to determine which of two groups of stones contains more stones requires a much less sophisticated grasp of the concept of number if one group contains 1,000 stones and the
other 10 than if one group contains 1,000 and the other 999. A related point may be made with respect to the ability to determine whether two small sets have the same number of items. Piaget (1941/
1952) discovered that when very young children are asked to say whether two small sets are equal in number (and they in fact are), they are likely to give the correct answer if the items are
spatially aligned so
the one-to-one correspondence is salient, but to give the wrong answer if one of the sets is spread out relative to the other so the one-to-one correspondence is perceptually less apparent. Other
investigators have provided evidence that children often make judgments of relative number on the basis of overall perceptual features (length, area, density) of a set of items (Brainerd, 1979). The
attribution of such findings to numerical incompetence is called into question, however, by the finding that, when young children (two to four years old) are given the choice of either of two sets of
M&Ms, both relatively small in number, a majority consistently pick the more numerous set, regardless of their spatial arrangements (Mehler & Bever, 1967). There is good experimental evidence that
people may perceive the number of objects in a collection directly if the number is sufficiently small; some set the limit at three or four, while others say five or six (Chi & Klahr, 1975;
Gallistel, 1988; Jensen, Reese, & Reese, 1950; Kaufman, Lord, Reese, & Volkmann, 1949; Klahr & Wallace, 1973, 1976; Mandler & Shebo, 1982; Miller, 1993; Taves, 1941). Such direct perception of
numerosity has been distinguished from counting and called subitizing (Kaufman et al., 1949). McCulloch (1961/1965) refers to the numbers 1 through 6 as perceptibles and to all others as countables.
The ability to perceive perceptibles, he suggests, is one that nonhuman species probably also possess. According to this view, the ability to distinguish among collections of up to four objects, if
not to five or six, is not evidence of the ability to count, and it seems reasonable to suspect that the discrimination by animals of different small collections may be based on an ability of this
sort (Davis & Pérusse, 1988a). Whether subitizing is really fast counting whereby the items or memory representations of them are serially noted (Folk, Egeth, & Kwak, 1988; Gallistel & Gelman, 1991;
Gelman & Gallistel, 1978) or a case of all-at-once preattentive processing (Dehaene, 1997; Dehaene & Cohen, 1995) is a matter of debate. An understanding of subitizing is complicated by small numbers
of items sometimes being distinguishable on the basis of patterns they form (two forming a line, three a triangle, four a rectangle) and larger numbers of items being distinguished more readily if
arranged regularly (in aligned rows, say) than if distributed randomly (Maertens, Jones, & Waite, 1977; Mandler & Shebo, 1982). Also subsets of items can be identified on the basis of specific
characteristics, as illustrated in Figure 2.1, and their numbers combined in determining the number of items in the total set. The idea that the curve relating the time required to make numerosity
judgments has an elbow at the point that divides sets within the subitizing range from larger sets—which has been seen as evidence of the reality of subitizing—has been challenged (Balakrishnan &
Ashby, 1992).
Figure 2.1 Readers may find it easy to determine the numbers of items in Box 1 and in Box 2 at a glance, without explicit counting. The numbers in Box 3 and Box 4 are likely to be less easy to
determine this way, although explicit counting of every item in each set may be avoided by mentally dividing the items into subsets of three, four, or five items and summing. Boxes 5 through 8
illustrate that clustering may be facilitated by a variety of features, such as shape, spatial arrangement, color, and combinations of them.
Researchers disagree on whether the ability to subitize—assuming there is such a distinctive process—develops before the ability to count (von Glasersfeld, 1982; Klahr & Wallace, 1976; Klein &
Starkey, 1988) or after it (Beckwith & Restle, 1966; Mandler & Shebo, 1982; Silverman & Rose, 1980). Uncertainties of these sorts have led to questioning the continuing usefulness of the concept of
subitizing (Terrell & Thomas, 1990). Some argue that subitizing has been used only as a descriptive term and that the concept lacks explanatory value—that simply labeling the process of rapidly
discriminating small numbers of visually presented stimuli as subitizing does not shed light on the underlying mechanism (Miller, 1993; Thomas & Lorden, 1993). On the other hand, there is the view
that much of what has been interpreted as counting by very young children could be the result of subitizing (Sophian, 1998). Clements (1999a) argues that children should be taught to subitize as a
means of facilitating the development of other numerical ideas such as those of addition and subtraction. Whatever the status of subitizing, that the process of determining numerosity is sensitive to
the number of items in a display is seen in the time required to name the number, and the frequency of errors, both increasing with the number of items, especially for collections larger than three
or four (Bourbon, 1908; Jensen et al., 1950; Logan & Zbrodoff, 2003; Mandler & Shebo, 1982). Counting and subitizing are both to be distinguished from estimation, which is also a type of numerosity
assessment that has been much
studied (Siegler & Booth, 2005). Generally, estimation is required when the number of items in a set is too large to be subitized and the time for inspection is too short to permit counting. One
common finding is that adults generally underestimate the number of items in a brief visual display that contains more than a few items, the magnitude of the underestimation increasing with the
number of items in the display (Krueger, 1982). Both children and adults differ considerably in their ability to estimate numerical quantities, such as the results of a computation, like 63 × 112,
and that ability tends to be positively correlated with indicants of general cognitive ability, like IQ (Reys, Rybolt, Bestgen, & Wyatt, 1982). Children learn to say number words—one, two, three, …—in
the correct sequence at a fairly early age, but as already noted, this can be done independently of any concept of quantity. In English, counting is used to refer to verbalization of this sort as
well as to the act of enumeration, which makes for confusion. Fuson and Hall (1983) recommend use of the terms sequence words and counting words to differentiate the one case from the other. For
present purposes, I want to argue that the main evidence of the ability to count, at least in a mathematically relevant sense, is the ability to put integers into one-to-one correspondence with a set
of objects (or with events in time)—to assign to each object (or event) in the set a unique integer name in the conventional order of the integers (1, 2, 3, …) and to equate the number of items in
the set with the integer name assigned to the last object (or event) in the set. This is close to Gelman and Gallistel’s (1978) description of counting and essentially what Briars and Siegler (1984)
refer to as the word–object correspondence rule, though not perhaps precisely as they would express it. The eminent American mathematician Tobias Dantzig (1930/2005) argues that the principle of one-to-one correspondence is one of two that permeate all of mathematics (the other being the idea of ordered succession). The main point I want to make with this brief consideration of the question
of what it means to count, however, is that the answer is not as obvious and simple as one might assume at first thought. We will return to the topic of children learning to count in Chapter 14.
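The correspondence rule just described is, in effect, a small algorithm, and it may help to see it written out. The following Python sketch is purely illustrative and is not drawn from any of the sources cited here; the function name count_by_correspondence and the toy list of finger names are invented for the example.

def count_by_correspondence(items):
    """Pair each item, in turn, with the next natural number (1, 2, 3, ...);
    the number paired with the final item is taken as the size of the set.
    Illustrative sketch only; not taken from any of the works cited above."""
    last_assigned = 0                  # nothing has been counted yet
    for number, _item in enumerate(items, start=1):
        last_assigned = number         # one-to-one pairing: _item <-> number
    return last_assigned

# Toy example: three fingers held up to stand for three sisters
fingers = ["index", "middle", "ring"]
print(count_by_correspondence(fingers))  # prints 3

The only point of the sketch is that the count is read off from the last number assigned (the cardinal value), not from any property of the items themselves.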
☐☐ Can Animals Count?

The literature relating directly to the counting, or counting-like, abilities of animals is very large. Here we can only scratch the surface. For the reader who would like more
information, there are numerous readily available reviews of work on the topic, among them, those of Honigmann
(1942), Salman (1943), Wesley (1961), Davis and Memmott (1982), Davis and Pérusse (1988a, 1988b), and Rilling (1993). There have been many claims that specific animals have had the ability—either
naturally or as a consequence of training—not only to count but also to solve mathematical problems that are beyond most humans. The claimed abilities of these animals (usually, but not always,
either horses or dogs) have been exhibited before large audiences, and in a few cases the animals’ abilities have been the focus of scientific study. Perhaps the most famous of the calculating
animals was Clever Hans, an Arab stallion whose mathematical prowess—evidenced by his tapping out the answers to mathematical questions with his hoof—was exhibited around Germany by his owner,
Wilhelm von Osten, a Russian high school teacher, beginning late in the 19th century. Hans’s performance was good enough to convince the 13 members of a commission (including two zoologists and a
psychologist) established by Germany’s Board of Education that his apparent mathematical knowledge was genuine. Subsequent investigations by a psychologist, Oskar Pfungst (1911/1965), revealed that
Hans was reacting to subtle involuntary cues produced by his questioner (usually, but not always, his owner). Hans did poorly on questions to which the questioner did not know the answer. If the
accounts of the nature of the cues to which the horse was responding are accurate, they were subtle indeed—too subtle to be detected by numerous human observers, or to be deliberately suppressed by
Pfungst when he served as the questioner—and Clever Hans was unquestionably appropriately named, his inability to count or do mathematics notwithstanding. Clever Hans is well known among
psychologists because of the work of Pfungst. There were many other examples at about the same time as Clever Hans of animals that were reputed to be able to count and calculate (Rosenthal, 1911/
1965). Among the more spectacular instances were several other horses, owned by Karl Krall, a wealthy merchant of Elberfeld, Germany. (Krall purchased Clever Hans shortly before von Osten’s death.)
Claims of what some of these “Elberfeld horses” could do (e.g., immediately produce cubic or fourth roots of seven-digit numbers) were nothing short of amazing. The Belgian poet-essayist Maurice
Maeterlinck (1862–1949) was sufficiently impressed with their abilities and sufficiently convinced of their authenticity to feature them, and to attempt to explain them in paranormal terms, in his
The Unknown Guest (Maeterlinck, 1914/1975). Brief accounts of many of the calculating animals are provided by Tocquet (1961), who attributes their performance to the ability to respond to cues
unconsciously provided by questioners or observers, which is not to deny the possibility of deliberate deception and fraud in some instances.
In sum, reports, of which there are many, of abilities of animals to do higher forms of mathematics appear to have been quite thoroughly debunked. Clever Hans (or Clever Hans and his owner) was
indeed clever enough to convince many observers of his mathematical prowess, but the cleverness rested on abilities other than mathematics. The same conclusion appears to be warranted for all the
other instances of computing animals that have been carefully investigated. Few, if any, contemporary researchers believe that nonhuman species can do complicated mathematics. On the question of
whether nonhuman species can count, or perhaps do some rudimentary arithmetic, there is far less agreement. The answer to the question of whether they can count seems likely to depend on how one
defines counting. One might object that my definition in the preceding section is too narrow and that with only a slightly less restrictive definition, we would have to conclude that they can. They
are able, for example, to discriminate between patterns of dots on the basis of the number of dots the patterns contain, at least for patterns containing relatively small numbers of dots. And some
organisms naturally engage in repetitive behavior, repeating some act the same number of times, or approximately so, on different occasions. Davis and Memmott (1982, p. 549) claim, for example, that
when a cow chews a cud, it moves its jaw almost precisely 50 times between each swallow. I suspect that few of us would take that as evidence that cows can count to 50, but the observation points up
the importance of being clear about what we will take to be evidence of counting. In the context of an informal discussion of animal intelligence, Sir John Lubbock (1885) recounts an observational
report of a wild crow that gave evidence of being able to “count” to four or five. The story is that a man wished to shoot the crow, and to do so he planned to deceive it by having two people enter a
watch house and only one leave. The implication is that the crow would make itself scarce if it believed a person was in the vicinity. The man discovered that the crow was not fooled if two men
entered and only one left, or if three men entered and two left; only if five or six men entered and four or five left was the crow’s ability to keep track exceeded. The accuracy of the account is
unknown. The results of recent experiments with ravens, close cousins to crows, suggest that these birds have considerable ability to solve problems that appear to require some logical reasoning
(Heinrich & Bugnyar, 2007). There is at least suggestive evidence that honeybees use landmarks to estimate the distance to a goal. Chittka and Geiger (1995) trained bees to fly from a hive to a food
source on a course that passed a given number of landmarks; they found that by changing the number of landmarks after training, they disrupted the bees’ ability to find the food source. These
investigators interpreted their findings as evidence that the bees
were capable of “proto-counting”—something less than bona fide counting, but close to it. Although the terms numerosity and numerousness are used more or less interchangeably in the literature,
Stevens (1951) makes a distinction between them, using numerosity to connote the property of a collection that one determines by counting, and numerousness to indicate a property of a collection that
is perceived without actual enumeration. Dantzig (1930/2005) makes a somewhat similar distinction, in this case between having a number sense and being able to count; nonhuman species have the
former, he argues, but not the latter, which ability he sees as exclusively human. It is possible, Dantzig contends, “to arrive at a clear-cut number concept without bringing in the artifices of
counting” (p. 6). He illustrates what he means by a “number sense” with behavior exemplified when a mother wasp of a particular species—genus Eumenus—which lays eggs in individual cells and provides
each cell with several live caterpillars to serve as food for hatchlings, puts 5 caterpillars in cells with eggs destined to become male grubs and 10 in those with eggs destined to become females
(which grow to be larger than males). Some might take this behavior, and other similar number-based distinctions that nonhuman species can make, as evidence of the ability to count, but Dantzig
reserves the concept of counting for a more demanding type of process, which rests not only on the ability to perceive differences in quantity but also on a grasp of the concept of ordered
succession: To create a counting process it is not enough to have a motley array of models, comprehensive though this latter may be. We must devise a number system: our set of models must be arranged
in an ordered sequence, a sequence which progresses in the sense of growing magnitude, the natural sequence: one, two, three, …. Once this system is created, counting a collection means assigning to
every member a term in the natural sequence in ordered succession until the collection is exhausted. (p. 8)
In effect, Dantzig requires that behavior that is to count as counting give evidence of appreciation of the principles of both cardinality and ordinality: “matching by itself is incapable of creating
an art of reckoning. Without our ability to arrange things in ordered succession little progress could have been made. Correspondence and succession, the two principles which permeate all
mathematics—nay, all realms of exact thought— are woven into the very fabric of our number system” (p. 9). The question of whether animals count in their natural habitat in a sense that meets
Dantzig’s criteria seems doubtful. More generally, I believe that the majority of students of animal behavior would not
contend that animals naturally count, except in a relatively rudimentary sense of the word. Noting the existence of anecdotal reports of animal behavior in the wild that, if taken at face value, seem
to suggest the ability to count, Davis and Memmott (1982) contend that there is no solid evidence that animals count in their natural state. That animals use number (numerosity or numerousness)
discrimination in their natural habitat seems pretty well established, but exactly how to relate that to counting appears to be a topic of continuing debate. But do animals have the ability to learn
to count, if trained to do so? This question has motivated a considerable amount of research (Davis & Pérusse, 1988a, 1988b; Gallistel, 1990; Rilling, 1993). German zoologist– animal behaviorist Otto
Koehler (1937, 1943, 1950) made some of the earliest attempts to train animals, more particularly birds, to make discriminations on the basis of number. An example of the kind of task given to his
birds was to select from five boxes the one whose lid contained a pattern matching another pattern lying in front of the row of boxes. The patterns differed with respect to the number of spots they
contained, and this varied from two to six. Koehler taught his birds to respond to numerosity (or numerousness, if one prefers) in other ways as well. For example, he was able to teach them to eat
only a fixed number of (up to four or five) items (e.g., peas) from a larger number that was available. Unfortunately, details regarding Koehler’s experiments are sparse, and investigators of
counting behavior by animals have been cautious about interpreting his results as firm evidence of counting ability. Koehler himself was unwilling to attribute to his subjects the ability to count.
They could not count, he argued, because they lacked words. What they could do, he suggested, was “think unnamed numbers.” The possibility that Koehler’s birds could have been making their
discriminations on the basis of cues confounded with numerosity has not gone unnoted (Thomas & Lorden, 1993; Wesley, 1961). Other investigators have demonstrated that pigeons can be taught to
discriminate between sets containing different numbers of objects, within limits. Rilling and McDiarmid (1965) trained pigeons through operant conditioning to discriminate between 50 pecks and 35
pecks, but as noted above, discriminating more from fewer could conceivably be done on the basis of gross perceptual features and need not involve counting. Being able to discriminate, say, 15 from 14
would be much more compelling evidence of the ability to count. Watanabe (1998) trained pigeons to respond to a set of four objects while refraining from responding to a set of two objects, and to do this
when the objects comprising the sets of four and two were varied in size and shape. He interpreted his results as evidence that the birds were able to
abstract “twoness” and “fourness,” but not as evidence of their ability to count sequentially from one to four. Other experiments purporting to demonstrate the ability of pigeons to make distinctions
on the basis of numerosity include those of Rilling (1967), Honig (1993), Xia, Siemann, and Delius (2000), and Xia, Emmerton, Siemann, and Delius (2001). Pepperberg trained an African grey parrot,
Alex, to produce different vocalizations to collections of from two to six objects, the objects in the collections and their arrangements being varied to invalidate cues that might be used other than
number. The same investigator obtained evidence that the bird had acquired a concept of absence or zero, in the sense of being able to vocalize “none” appropriately when questioned about properties
or objects that were missing from a display. Pepperberg’s 30-year odyssey with this remarkable bird, which included the acquisition of many cognitive abilities other than those involving numbers, is
documented in several technical publications (Pepperberg, 1987, 1988, 1994, 1999) as well as in a popular book-length account (Pepperberg, 2008). Researchers have trained rats by operant conditioning
to press a bar a specified small number of times (e.g., eight) in order to obtain food (which is withheld if the number of bar presses is not correct, or nearly so). Mechner (1958; Mechner &
Guevrekian, 1962) trained them to press one lever, A, the desired number of times (up to 16) in order to get food by pressing a second lever, B. The number of presses of lever A was generally
close to the target number, the spread around that number increasing with the number's size. Davis and Bradford (1986) showed that rats can quickly learn to select which of six tunnels to enter
in order to obtain food, when cues other than ordinal position of the tunnel are controlled. On the basis of numerous operant conditioning experiments with rats over three decades, Capaldi (1964,
1966; Capaldi & Miller, 1988a, 1988b) concluded that the animals count reinforcing events. On the question of whether what animals can do really amounts to counting, Capaldi’s (1993) answer is
unequivocal: “Animals, at least animals as highly developed as the rat, count routinely, I suggest” (p. 193). Routinely is worth emphasis here; Capaldi explicitly dismisses the contention that if
animals can count, they can do so only under highly contrived circumstances designed to maximize the opportunity for them to learn: “Counting is not some esoteric activity engaged in by rats when no
other means of solution is open to them, as suggested by Davis and Memmott (1982) and Davis and Pérusse (1988). Rather, rats count routinely. By this I mean that it is reasonable to assume that rats
count in a wide variety of conventional learning situations” (p. 206). Fernandes and Church (1982) trained rats to press one lever in response to a sequence of two sounds of fixed duration and to
press a different lever in response to a sequence of four sounds of the same fixed
duration. Because the total sound time was redundant with number of sounds in this task, Fernandes and Church tested the rats on a transfer task in which the durations of the individual sounds were modified so the total sound time was the same for both the two-sound and the four-sound stimuli, and the rats made the discrimination on the basis of the different numbers of sounds. In a subsequent
experiment, Meck and Church (1983) trained rats to respond differentially to stimuli for which duration and number were confounded (e.g., one stimulus was two events in two seconds and another was
four events in four seconds). After training, the rats responded correctly either to number (number of events varied with duration held constant) or to duration (number of events held constant with
duration varied), showing that they had encoded both number and duration during training. Similar results were obtained by Roberts and Mitchell (1994) with pigeons. Church and Meck (1984) also
trained rats to press either a left or a right lever in response to two tones or two lights (left) or to four tones or four lights (right). When tested, the rats pressed the left lever in response to
one tone and one light in combination (two events) and the right lever in response to a combination of two tones and two lights (four events), indicating that they had learned to respond on the basis
of number, independently of the signals’ sensory mode. These and other experiments (Church & Gibbon, 1982; Gibbon & Church, 1990; Meck, Church, & Gibbon, 1985) provide strong evidence that rats can
learn to respond either to duration or to the number of sequential events when possibly confounding temporal variables are adequately controlled. Following a review of experiments of the sort just
described, Broadbent, Church, Meck, and Rakitin (1993) conclude that “a substantial body of evidence indicates that timing and counting have a shared mechanism” (p. 185). What that mechanism is
remains to be determined, but “it is clear,” they contend, “that any model that provides an explanation of counting should also explain the data that link counting and timing” (p. 185). The ability
to learn to make numerousness or numerosity discriminations has also been demonstrated with various species of monkeys (Brannon, 2005; Brannon & Terrace, 2000, 2002; Hicks, 1956; Rumbaugh & Washburn,
1993; Thomas & Chase, 1980; Thomas, Fowlkes, & Vickery, 1980; Washburn & Rumbaugh, 1991), chimpanzees (Boysen, 1992, 1993; Boysen & Berntson, 1989, 1996; Ferster, 1964; Matsuzawa, 1985; Matsuzawa,
Asano, Kubota, & Murofushi, 1986; Tomonaga & Matsuzawa, 2002), raccoons (Davis, 1984), and dolphins (Kilian, Yaman, von Fersen, & Güntürkün, 2003). Using sets controlled for shape, size, and surface
area, Brannon and Terrace (1998, 2000) obtained evidence that monkeys represent
numerosities one through nine at least ordinally. The animals learned to order sets of one through four in ascending or descending order, and what they learned transferred to sets of five through
nine in ascending order, but not in descending order. Similar transfer has been obtained with a squirrel monkey and a baboon (Smith, Piel, & Candland, 2003). More recently Brannon, Cantlon, and
Terrace (2006) were able to get some transfer to testing on 3→2→1 when the animals were trained first on 6→5→4. Some of the results obtained with monkeys suggest that the animals represent numerical
values as analog magnitudes. Cantlon and Brannon (2005), for example, trained monkeys to choose the pattern with the larger number of items when the patterns were superimposed on a blue background,
and to choose the pattern with the smaller number when the background was red. They found that choice time varied inversely with magnitude of the difference between the numbers of items on the two
displays being compared. Similarly, Beran (2007) found that when monkeys had to select the more numerous of two sets varying in size from 1 to 10 items, percent correct varied directly with the
magnitude of the difference between the set sizes. Cantlon and Brannon (2006) trained monkeys to respond to pairs of patterns on the basis of number of items contained in them—to press the pattern
with the smaller number of items first and then the one with the larger number. After training with patterns containing from 1 to 9 items, patterns with 10, 15, 20, and 30 items were added to the
mix. What the animals learned from training with the less numerous sets transferred spontaneously to the more numerous sets. Response time varied inversely and percent correct directly with the ratio
of the numbers of items in the larger and smaller of sets being compared. These results are reminiscent of the finding by Moyer and Landauer (1967, 1973) of evidence of analog representation of
numbers by humans. Cantlon and Brannon also suggest that most, if not all, of the quantities monkeys are able to discriminate are probably represented only approximately, although Thomas and Chase
(1980) were able to train one monkey to tell the difference between collections with eight elements and those with nine, which seems to require a more exact representation. A few efforts have been
made to give animals a symbolic representation of number. Washburn and Rumbaugh (1991) demonstrated that monkeys are capable of learning to select, after much training, the (numerically) larger of
two Arabic numerals, even when the two numerals had not been paired previously. Other successful efforts to train animals to associate symbols with numerosities have been made by Olthof, Iden, and
Roberts (1997) with monkeys, by Ferster (1964) and Boysen and Berntson (1989; Boysen, 1993) with chimpanzees, and by Xia et al. (2000, 2001) with pigeons.
Matsuzawa and colleagues trained a chimpanzee to select an Arabic numeral reflecting the number of objects—up to nine—in a display. This is perhaps the most sophisticated behavior relating to
counting that has been demonstrated with animals; it clearly reveals the ability to make discriminations on the basis of number (assuming all cues that could be correlated with number are ruled out
by counterbalancing), but it does not prove that animals can count in the fullest sense. One could learn to make these discriminations and to associate the different patterns with a set of symbols
(numerals or letters of the alphabet) without having an appreciation of the idea that the quantities involved constitute a progression. Tomonaga and Matsuzawa (2002) agree with Murofushi (1997) that
the chimpanzee’s performance also does not provide compelling evidence of the ability to count in the sense of using a one-to-one mapping process as distinct from estimating numerosity. Woodruff and
Premack (1981) trained a chimpanzee to match objects on the basis of fractional parts, for example, to select 1/2 an apple (rather than 3/4 of an apple) to correspond to 1/2 a glass of water, and to
select (more often than not) a combination of 1/2 of one thing and 1/4 of another to correspond to 3/4 of something else, suggesting a rudimentary form of fraction addition. That animals can make
discriminations based on numerosity seems now beyond doubt. But can they count? Obtaining an answer to this question is hindered by the fact that what constitutes counting has not yet been
established to everyone’s satisfaction, though a variety of distinctions relating to the question have been made. Honig (1993) distinguishes among numerosity discrimination, number discrimination,
and counting: “A numerosity discrimination involves a discrimination between nonadjacent numbers, or between different ranges of numbers of elements. A number discrimination involves differential
responding to adjacent numbers of items, such as 3 and 4, 7 and 8, and so forth. Counting is a number discrimination in which different responses are made to each of a series of adjacent numbers of
items” (p. 62). On the basis of his own work with pigeons, Honig concludes that the birds are capable of numerosity discrimination “but number discriminations are more difficult” (p. 62); he did not
use a paradigm to test for counting. Davis and Memmott (1982) distinguish a continuum of number-related abilities ranging from simple number discrimination through counting, to a concept of number,
and the ability to perform operations on numbers. With respect to what constitutes counting, they give two requirements: “(a) the availability of some form of cardinal chain and (b) the application
of that chain in one-to-one correspondence to the external world” (p. 565). They consider number discrimination to be within the normal ability of many animals. The concept of number, on the other
hand, they consider to be probably beyond the ability of most infrahuman species. On the basis of a review of several efforts to teach animals to count involving both operant and Pavlovian
conditioning paradigms, they conclude that although for many studies in which counting has been claimed an alternative explanation is possible, nevertheless the evidence is compelling that animals
can learn to count, by their definition, at least up to three. Davis and Pérusse (1988a) distinguish between counting and counting-like behavior that falls short of true counting. The latter, which
they call proto-counting, is what they believe animals are capable of doing. Gallistel (1993) contrasts numbers as categories and numbers as concepts: A mental category, in the usage I propose, is a
mental/neural state or variable that stands for things or sets of things that may be discriminated on one or more dimensions but are treated as equivalent for some purpose. The mental category
corresponding to 3 is a mental state or variable that can be activated or called up by any set of numerosity 3 and is activated or called up for purposes in which the behaviorally relevant property
of a set is its numerosity. On the other hand, a concept, in the usage I propose, is a mental/neural state or variable that plays a unique role in an interrelated set of mental/neural operations, a
role not played by any other symbol. The numerical concept 3 is defined by the role it plays in the mental operations isomorphic to the operations of arithmetic, not by what it refers to or what
activates it. (p. 212)
Do animals have numerical categories? Do they have numerical concepts? Gallistel’s answer is that they have the former, and perhaps the latter as well. Their possession of numerical categories is
seen in the fact that they respond to sets on the basis of their numerosity, even though the process by which they categorize sets on this basis is a noisy one. The noisiness is seen, for example, in
that when rats have to press a lever n times to get some reinforcement, they typically learn to press it approximately the right number of times, and the accuracy of the approximation varies
inversely with n. Gallistel’s conclusion that animals probably also have numerical concepts rests on the results of experiments suggesting that “common laboratory animals order, add, subtract,
multiply, and divide representatives of numerosity” (p. 222). Rumbaugh and Washburn (1993) argue that “to conclude that an animal counts, one must, among other things, demonstrate (a) how it
enumerates items of sets, things, or events; (b) that it partitions the counted from the uncounted; (c) that it stops counting appropriately; and (d) that it can count different quantities and kinds
of things” (p. 95). Again, “to count, an organism must know each number’s ordinal rank
and that each number’s cardinal value serves to declare the total quantity counted at each step in the counting process (e.g., the item assigned ‘three’ is the third one and, also, that three items
have been enumerated)” (p. 96). I think it safe to say that very few studies of animals’ dealings with numerosity have clearly demonstrated all of these criteria. So by this definition, while animals
unquestionably can make many impressive discriminations based on numerical properties, a large majority of those discriminations fall short of being conclusive demonstrations of counting. Rumbaugh
and Washburn (1993) point, however, to some of their own work (Rumbaugh, 1990; Rumbaugh, Hopkins, Washburn, & Savage-Rumbaugh, 1989; Rumbaugh, Savage-Rumbaugh, Hopkins, Washburn, & Runfeldt, 1989) as
evidence that a chimpanzee (an extensively language-trained chimpanzee) can be taught to count, which is to say, “to respond to each of three Arabic numbers in a differential and relatively accurate
manner—in a manner that we term entry-level counting” (p. 101). Davis (1993) makes a distinction between relative and absolute numerosity abilities and argues that, while animals have the former, the
latter are unique to humans. Animals are able, he contends, to demonstrate many forms of numerical competence under supportive conditions, such as those typically provided in research laboratories,
but the supportive conditions are essential. “In short, I do not believe that demonstrations of numerical competence come easily. They are no mean feat, and for each of the successes you have read
about, there are untold failures. Although the nondissemination of negative evidence is the way science normally progresses, it is particularly unfortunate in the case of numerical competence in
animals because it clouds the question of how general or easily established this ability is” (p. 110). Davis argues that much of what has been interpreted as counting behavior by animals can be
explained in other ways, and that the search for rudiments of human abilities among other species may have had the cost of reducing the probability of discovering abilities that other species have
that humans do not. Von Glasersfeld (1993) makes the same point and characterizes the “often unconscious supposition that the way we tend to solve problems of a certain kind is the only way of
solving them” as “a widespread manifestation of anthropocentrism” and one that “seems almost unavoidable in cases where we have not thought of an alternative solution” (p. 249). I have already noted
that Terrell and Thomas (1990) suggest that the concept of subitizing may have outlived its usefulness. Miller (1993) dismisses as untenable the belief that subitizing can account for much of the
empirical data on animal perception of numerosity. Thomas and Lorden (1993) also argue that the idea of proto-counting, as proposed by Davis and Pérusse (1988a), is unjustified. They hold that some
discriminations that
might appear to result from counting can be made on the basis of pattern detection. “Prototype matching is a well-established process to explain the acquisition and use of class concepts in general,
and we suggest that numerousness concepts are not an exception” (p. 141). Similar positions have been expressed by Mandler and Shebo (1982) and by von Glasersfeld (1982, 1993). Von Glasersfeld (1993)
argues that “it is one thing to recognize a spatial or temporal pattern as a pattern one knows and has associated with a certain name, and quite another to interpret the pattern as a collection of
unitary items that constitute a certain numerosity” (p. 233). Many studies have addressed the question of whether animals have the ability to do, or to be taught to do, simple arithmetic. The
literature on the subject is large and I will not attempt to review it here. For present purposes it suffices to note that the types of arithmetic capabilities that have been observed in animal
studies are roughly comparable to those that have been observed in studies of human infants and prelingual children (Boysen & Berntson, 1989; Brannon, Wusthoff, Gallistel, & Gibbon, 2001). A study by
Rumbaugh, Savage-Rumbaugh, and Hegel (1987) is illustrative of what has been done. These investigators gave chimpanzees a choice between two trays, each of which had a pair of food wells containing a
few (in combination not more than eight) chocolate chips. Over time, the chimps learned to choose the tray whose food wells, in combination, held the larger number of chips, suggesting that they were
adding the contents of the two food wells, or something equivalent to that. The finding was replicated and extended slightly by Pérusse and Rumbaugh (1990). In sum, studies have shown with reasonable
certitude that animals, including not only primates but rats and birds, can be taught to make limited distinctions based on number. Precisely what animals are doing when they are making such
distinctions is still a matter of debate, as are the questions of how numerical quantities are represented by animals (or infants) in the absence of a verbal code and whether the distinctions that
are made require a specifically numeric sensitivity or can be done with more generic capabilities (Dehaene & Changeux, 1993; Gallistel & Gelman, 1992, 2000; Gelman & Gallistel, 1978; Meck & Church,
1983; Mix, Huttenlocher, & Levine, 2002a,b; Simon, 1997, 1999). In the aggregate, the evidence suggests that, whatever the representation, it produces a relationship between quantities that is
described, to a first approximation and at least for quantities greater than 3, by Weber’s law, according to which the just discriminable difference between two quantities is a constant proportion of
their size; so it is easier, for example, to discriminate between 4 and 5 than between 8 and 9, and easier to discriminate between 8 and 9 than between 16 and 17. Some theorists hold that small
quantities (one to three) are represented discretely, whereas larger
quantities are represented in a more continuous form (Cordes & Gelman, 2005). Few if any experiments have yielded incontestable evidence that animals can learn to count in the sense of doing
something equivalent to enumerating the items in a set and consistently telling the difference between sets of modest size that differ only by a single item. A common criticism of the methods used to
study counting, or counting-like, behavior by animals is that number often is confounded with other variables (density, area covered by ensembles, brightness, patterns, interelement distances,
durations or timing of events, rhythm, and as in the case of Clever Hans, cues too subtle to be detected by most human observers). Moreover, in controlling for one confounding variable, one can
easily increase the salience of other confounds, or introduce new ones. Concern about the possibility of responses to correlates of numerosity being misinterpreted as responses to numerosity, per se,
is unquestionably well founded. On the other hand, as Miller (1993) points out, absent empirical data, one should not assume that nonnumerical cues will always overshadow numerical cues. Two things
are clear. First, the question of whether animals can count, either naturally or as a consequence of careful training, remains a matter of definition and of lively debate. Second, the most impressive
examples of the numerical abilities of animals provide only the vaguest hint of the type of numerical competence that human beings somehow developed.
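To put the Weber's-law relationship described above in symbols, here is a small worked restatement; the particular Weber fraction is an assumed value chosen only for illustration and is not a figure reported in the studies cited.

```latex
% Weber's law: the just-discriminable difference is a fixed proportion w of magnitude
% (w assumed here purely for illustration).
\[ \Delta n = w \, n \]
% Discriminability therefore tracks the ratio of the two quantities being compared:
\[ \frac{5}{4} = 1.25 \;>\; \frac{9}{8} = 1.125 \;>\; \frac{17}{16} \approx 1.06 \]
```

Because discriminability tracks the ratio of the two quantities rather than their absolute difference, 4 versus 5 is the easiest of the three comparisons and 16 versus 17 the hardest, which is the ordering stated in the text.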
☐☐ Homo sapiens Learns to Count
Mathematics is an astoundingly broad subject. The concepts involved range from those that are sufficiently intuitively simple that they can be comprehended by very
young children to those that are sufficiently complex that only a handful of people in the world are likely to be completely in command of them. As is true of other conceptually rich subjects, many
of the concepts are related in a hierarchical fashion, in the sense that understanding those at a given level of the hierarchy is unlikely, if not impossible, unless one understands those that are
closely related at lower levels in the hierarchy. Understanding the operation of raising to a power, for example, requires an understanding of multiplication. Performance of simple arithmetic
operations presumably depends on the ability to count. The ability to count presupposes the ability to distinguish discrete objects.
This view of mathematics as hierarchically structured motivates efforts to determine the level at which children of a specific age or mental development are capable of functioning. It also is
conducive to the idea that the acquisition of mathematical competence by humankind as a species necessarily progressed up a similar, if not the same, ladder—counting before calculating, concrete
concepts before abstract ones, arithmetic before algebra, and so on. From this perspective it is not surprising to find an interest in the development of mathematical competence in children being
combined with an interest in the history of mathematics, as is evident, for example, in the work of Piaget (1928, 1941/1952; Piaget, Inhelder, & Szeminska, 1948/1960), who, as Resnick and Ford (1981)
put it, “thought it possible … to understand the development of the species’ intellectual capacities by studying the intellectual development of individuals as they grew into adults” (p. 156). Sadly,
the early history of the development of mathematics is not known in any detail. All we have is guesswork aided by a few clues to the progression of the ability to count, represent quantities, and
calculate in prehistoric times. One thing is clear, however; the ability to count, whenever and however it was obtained, not only constituted a major step in the development of mathematics but was an
extraordinarily useful ability in its own right. Imagine a shepherd who could not count trying to keep track of his sheep. If his flock were sufficiently small, he might satisfy himself that they
were all accounted for by checking them one-by-one against a mental list (assuming he could recognize the individual sheep), but if the flock were large, this would be an impossible task. Knowing the
number of sheep in his flock and being able to count at least up to that number greatly simplifies his task. There is considerable dispute among mathematicians as to whether ordinality or cardinality
is the more fundamental concept, and whether either is more fundamental than the natural numbers. There is some evidence that the individual child develops a concept of, or is taught, ordinality
before cardinality, and that the former but not the latter precedes and is fundamental to the development of number competence. Dantzig (1930/2005) notes that inasmuch as cardinality is based on
matching only, while ordinality requires both matching and ordering, there is a temptation to assume that cardinality preceded ordinality in the history of the development of the number concept, but
he argues that investigations of primitive cultures have not revealed such precedence: “Wherever any number technique exists at all, both aspects of number are found” (p. 9). Psychologists have been
on both sides of the debate regarding the priority of ordinality or cardinality. Some hold that the priority of cardinality is seen in the ability of children to make discriminations on
the basis of number (manyness) among collections of objects that are few in number before they can count in any meaningful sense (Nelson & Bartley, 1961). Whether what is being discriminated in these
cases is really number, as distinct from some property—such as density, the spatial closeness of the objects—has not always been clear. The priority of ordinality has been proposed by Brainerd
(1973a, 1973b, 1979), who contends that most children have a good grasp of ordinality, but not of cardinality, before they begin school. Cardination, he argues, is generally not a relatively stable
concept until children are roughly 9 or 10 years old, which is considerably after the time they are expected to begin learning arithmetic. Brainerd (1979) describes his ordinal theory of the
acquisition of number concepts as postulating a process with three overlapping phases: “What the theory actually says is that most children will have made considerable progress with the notion of
ordination before they make much progress with arithmetic, and most children will have made considerable progress with arithmetic before they make much progress with cardination” (p. 168). Following
a brief review of some work of Piaget (1952), Dodwell (1960, 1961, 1962), Hood (1962), Beard (1963), Siegel (1971a, 1971b, 1974), Wang, Resnick, and Boozer (1971), and Beilin and Gillman (1967),
Brainerd concludes that “the available evidence on ordination, cardination, and arithmetic is sketchy and not very conclusive. Insofar as the developmental relationship between ordination and
cardination is concerned, there is very little solid evidence” (p. 126). There is much more to be said regarding the question of when and how children acquire an appreciation of the ordinal and
cardinal properties of number, but the acquisition of these properties is complicated in that many of the numbers to which children are exposed in their early years—TV channel numbers, numbers on
athletes’ jerseys, numbers on buses or trains, license plate numbers, street address numbers, telephone numbers— generally are neither ordinals nor cardinals but serve only a nominal function. A
distinction that is important to a full understanding of numbers, in addition to that between ordinality and cardinality, is the distinction between count numbers and measure numbers (Munn, 1998).
Count numbers apply to the numeration of discrete entities; measure numbers are used to represent continuous variables, such as length, weight, and temperature. For present purposes, let us note that
the question, still not completely settled, has obvious implications for the teaching of elementary mathematics, a topic to which we will return in Chapter 15. Are there people (cultures) who do not
count, or is counting an activity that is common to all cultures? Apparently there are
cultures—probably very few—in which the ability to count has not been developed much beyond distinguishing among one, two, and many. Flegg (1983) cites the example of the Damara of Namibia, who would
“exchange more than once a sheep for two rolls of tobacco but would not simultaneously exchange two sheep for four rolls” (p. 19). Another instance of a group that lacks words for specific numbers
beyond 2 is a small hunter–gatherer tribe, the Pirahã, that lives in relative isolation along the Maici River in the Amazon jungle (Gordon, 2004). It appears that children in all cultures that count
use their fingers to facilitate the process (Butterworth, 1999), but beyond this, various methods for counting and for representing the results thereof have been developed and used in different
cultures over the ages. Conant (c. 1906/1956) speculates that every nation or tribe has developed some method of numeration before having words for numbers. Detailed accounts of several systems, used
for both counting and calculating, based on finger positions and other body parts, are readily available (Dantzig, 1930/2005; Ellis, 1978; Flegg, 1983; Ifrah, 2000; Menninger, 1992; Wassmann & Dasen,
1994). A system described by the Venerable Bede, who lived in the eighth century, could represent quantities as great as 1 million (Figure 2.2). Apparently people—merchants, traders, and ordinary
folk—in most parts of the world have used finger counting at one time or another, and there are places where its use is still common (Flegg, 1983). Saxe (1981, 1982, 1985) describes a system of
tallying and measuring used by the Oksapmin of Papua New Guinea that associates numbers with 27 different locations on the hands, arms, shoulders, and head (see also Lancy, 1978; Saxe, 1991; Saxe &
Posner, 1983). Saxe, Dawson, Fall, and Howard (1996) note that, unlike the notational system (Hindu-Arabic) used today throughout Western society, which is specialized for computation, the Oksapmin
system is specialized for counting and certain forms of measurement and is not particularly conducive to computation. Flegg (1983) notes that an abstract understanding of numbers is not needed in
order to be able to count. He contends that the activity of counting, including the development of finger systems to facilitate the process, predated the concept of numbers in the abstract and that
the intellectual step from the former to the latter is a large one that came relatively late in human history. Exactly when this step was taken is not known; Flegg speculates that an awareness of
numbers in the abstract developed sometime around 3,000 BC or a bit later, but that it did not become an influential part of mathematical thinking until about the time of Pythagoras. But the species
did learn to count—one way or another.
Figure 2.2 The Venerable Bede’s system for the manual representation of numbers as rendered on a woodcut originally published in Summa de Arithmetica by Italian mathematician Luca Pacioli near the
end of the 15th century.
☐☐ And to Calculate
Calculating—performing operations on numbers such as adding, subtracting, multiplying, dividing, and so on—is a considerably more abstract process than counting. For some of the
operations that can be performed on numbers there are fairly obvious analogs that can be performed on concrete objects. What one can do with numbers, however, is much less constrained than what one
can do with concrete objects. One can add 6 apples and 6 apples and get 12 apples, just as one can perform the abstract operation 6 + 6 = 12. But one does not multiply 6 apples by 6 apples to
get 36 apples, and while 6² apples makes sense, (6 apples)² does not. The physical analog of the multiplication operation for the natural numbers is successive addition. To stick with the apples example, one gets 36 apples by adding 6 sets of apples, each of which contains 6 apples, and one gets 6² apples by computing 6² = 36 and counting out that number of apples. The origin of calculating,
like that of counting, is hidden in the mists of prehistory. No one knows who first noticed that when one item is combined with one item the result is invariably two items. Or that when two items and
three items are put together, their combination always is five items. Or how long it took for the concept of an addition operation to emerge. Much of the history of the development of counting and
calculating competence is recorded in the techniques and artifacts that were invented to aid either process or both. Ancient physical aids to counting and calculating include counting boards, abaci,
and related devices. Traders, merchants, and tax collectors used such devices for counting and computing before they made use of, or perhaps even had, written systems of numerals. A system of
“reckoning on the lines” was used widely, especially for commerce, from the 12th century in Europe and was sufficiently popular to motivate strong opposition to the adoption of the Hindu-Arabic
numerals and their use in calculation (see Flegg, 1983, p. 157 for an example of this representation). A number was represented by the placement of dots with respect to a column of horizontal lines,
each line representing a different power of the base. Each dot on a line represented one instance of the associated power of the base; a dot between lines was equivalent in value to five dots on the
line immediately below it. Arithmetic calculations with this representation were relatively straightforward. The Incas had developed a system of knotted cords (quipus or khipus) that was in use for
purposes of accounting and numerical record keeping at the time of the Spanish conquest of Peru (Asher & Asher, 1981; Salmon, 2004). Similar systems were used to represent quantities elsewhere,
including among North American natives and in parts of Africa, as well. A system for doing “finger multiplication,” described by Robert Recorde in Grounde of Artes in 1542, was effective for
multiplication and, by using 10s-complement arithmetic, required one to memorize a multiplication table only up to 5 × 5 (which involves only 15 products, compared with 78 in a table up to 12 × 12).
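Recorde's own procedure is not reproduced here, but the classic finger-reckoning rule for multiplying two numbers between 6 and 10 shows how 10s-complement arithmetic keeps the memorized table small. The sketch below (the function name finger_multiply is simply illustrative) is a modern reconstruction of that general idea, not a transcription of Recorde's text.

```python
def finger_multiply(a: int, b: int) -> int:
    """Multiply two numbers in the range 6..10 by the classic finger-reckoning
    rule based on 10s complements.

    On each hand, raise (n - 5) fingers for the factor n.
    - The raised fingers, counted together, give the tens.
    - The folded fingers (the 10s complements) are multiplied to give the units,
      a product that always stays within the 5 x 5 table.
    """
    if not (6 <= a <= 10 and 6 <= b <= 10):
        raise ValueError("the rule applies to factors between 6 and 10")
    raised = (a - 5) + (b - 5)             # raised fingers -> tens
    folded_product = (10 - a) * (10 - b)   # folded fingers -> units
    return 10 * raised + folded_product

# Example: 7 x 8 -> (2 + 3) raised fingers = 5 tens, folded 3 x 2 = 6 units -> 56
assert finger_multiply(7, 8) == 56
assert all(finger_multiply(a, b) == a * b
           for a in range(6, 11) for b in range(6, 11))
```

The rule works because 10((a − 5) + (b − 5)) + (10 − a)(10 − b) expands algebraically to ab, and the only product that ever has to be known by heart, (10 − a)(10 − b), stays within the 5 × 5 table.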
The history of the development of procedures of computation and of the invention of devices that compute or facilitate computing is a
fascinating one, but one that I will not attempt to relate here. I have touched on the subject briefly elsewhere (Nickerson, 1997, 2005), and there are many sources of extensive information on the
topic. It suffices for present purposes to note that Homo sapiens not only learned to count, but to compute, and that, arguably, this ability has been second in importance only to that of language in
the development of civilization as we know it.
C H A P T E R
It is widely held that no other idea in the history of human thought even approaches number’s combination of intellectual impact and practical ramifications. (Brainerd, 1979, p. 1)
The real numbers are taken as “real.” Yet no one has ever seen a real number. (Lakoff & Núñez, 2000, p. 181)
“What,” Warren McCulloch (1961/1965) asks, “is a number, that a man may know it, and a man, that he may know a number?” Wynn (1998) puts essentially the same question somewhat more prosaically, but
no less emphatically: “The abstract body of mathematical knowledge that has been developed over the last several thousand years is one of the most impressive of human achievements. What makes the
human mind capable of grasping number?” (p. 3). The concept of a number line extending from 0 to infinity in both directions is among the more powerful representations in mathematics, or any other
domain of thought. But what exactly is a number? The question is easy to ask, but the answer is neither obvious nor simple: The concepts of counting and number have taxed the powers of the subtlest
thinkers and problems of number theory which can be stated so that a child can grasp their meaning have for centuries withstood attempts at solution. It is strange that we know so little about the
properties of numbers. They are our handiwork, yet they baffle us; we can fathom only a few of their intricacies. Having defined their attributes and prescribed their behavior, we are hard pressed to
perceive the implications of our formulas. (Newman, 1956c, p. 497)
The reader who is inclined to doubt such an assertion is referred to Russell’s (1903, 1919) Herculean effort to answer the question and to the many objections it evoked (Cassirer, 1910/1923). The
puzzle is that we all understand instinctively what a number is—until we try to define it, as mathematicians have been trying to do, with dubious success, for a very long time. Perhaps, as some have
claimed, number is a primitive concept, and not definable. In any case, numbers have a history, and the final chapter has yet to be written. We do not know who first used tokens or symbols to
represent quantities. Nor do we know whether this use of tokens or symbols was invented once or many times. In either case, it was a momentous invention. Whether or not counting is unique to
humankind, the use of symbols to represent numerical concepts certainly is.
☐☐ Beginnings
The origins of systems for representing numbers are obscure. Theories rest on scattered archeological evidence from a few sites and a considerable amount of speculation. Some of this
evidence suggests that the earliest systems predated the development of written language and were very concrete. Notched bones may have been used to record phases of the moon, the number of animals
killed by hunters, and other matters of interest possibly as many as 20,000 to 30,000 years ago. Token systems, of small objects, possibly coded by shape, size, and marking, appear to have been used
in Western Asia and the Middle East to record and communicate the number of entities in a collection—number of animals in a herd—at least as many as 11 millennia ago, before a system of written
numerals began to emerge. According to one theory, these token systems were immediate precursors to the development of writing, which evolved from them in an interesting way. Merchants adopted the
practice of enclosing in sealed clay containers tokens representing merchandise that was being transported from one place to another and using them as bills of lading. By breaking open a container, a
recipient of a shipment of goods could determine from the number and types of tokens inside what the shipment was supposed to contain. At some point, users of this system began making inscriptions on
the outside of containers to represent the tokens they contained. In time, it is hypothesized, the redundancy of this system was noticed, the tokens were done away with, and users began to rely
solely on the inscriptions (Schmandt-Besserat, 1978, 1984). Archeological records show that by the latter part of the fourth millennium BC the use of inscriptions on clay tablets was common
throughout Sumerian and Elamite sites. The inscriptions, which are believed to be primarily accounts and receipts, contain many number representations (Friberg, 1984).
☐☐ Number Abstraction
Numerosity initially was probably thought of as a property of objects, so the symbol for a given quantity would differ depending on what was being counted. Thus, the concepts
“three sheep” and “three fish” were distinct, represented in different ways, and not thought of as sharing the abstract property of threeness. Perhaps we see some remnants of the one-time
concreteness of numbers in the special names that we have for representing the same quantity in different contexts: couple, duo, duet, dual, pair, brace, twice, twin, twain, and so forth. Rucker
(1982) points out that Renaissance mathematicians hesitated to add x² and x³ because the first represented an area and the second a volume. We have no idea how or when abstraction of the concept of
number from the thing counted occurred, how or when the realization dawned that three stones and three fingers have something very interesting— their numerosity—in common. The recognition of number
as something distinguishable from objects counted and independently manipulatable—that three could be a noun as well as an adjective—was a major breakthrough in the intellectual odyssey of humankind.
Today we use numbers as both adjectives and nouns, and failure to distinguish between the two uses can make for confusion in beginning arithmetic. Two cats plus three cats makes five cats, but we do
not say that two cats times three cats makes six cats; we would say that two cats times three makes six cats. And, of course, without reference to what is being counted, we say without fear of
contradiction that 2 + 3 = 5 and 2 × 3 = 6. Recognition that number is a property of a set and not of the items that comprise the set—that three fingers, three apples, and three ducks have in common
their threeness—requires an abstraction. Realization that two fingers, three apples, and four ducks have in common their numerability—that each is a set that can be counted and characterized by a
number, though not the same one—represents a higher-level abstraction. The history of the development of the number concept is a study of a progression of abstractions. The number universe, populated
initially with the positive integers (“natural” numbers), effective for counting discrete entities but severely limited as a basis of reckoning and partitioning wholes into parts, expanded over time
to include fractions (rationals), negative numbers, irrationals (radicals), imaginaries, and more. The rationals
and irrationals in combination comprise the reals. Rational numbers can be expressed as fractions in which both numerator and denominator are whole numbers; irrational numbers cannot be represented
this way. When expressed in decimal form, rational and irrational numbers differ in that the former either get quickly to a string of zeros or to a sequence of digits that repeats endlessly, whereas
the latter do neither but continue indefinitely with no repeating pattern. The irrationals include both algebraic numbers and transcendentals. Algebraic numbers are solutions of algebraic equations
with integer coefficients; the square root of 2, for example, though irrational, is algebraic, because it is the solution of x² – 2 = 0. Transcendental numbers are a subset of irrationals; they are numbers that are not solutions—roots—of polynomial equations with integer coefficients. In addition to √2, they include π, e, φ, the golden ratio (about which more in Chapter 11), and many trigonometric and hyperbolic functions. (For one writer’s list of the 15 most famous transcendental numbers, see Pickover, 2000.) Although Euler believed that transcendentals exist, their existence was first proved in the 19th century by French mathematician Joseph Liouville. Johann Heinrich Lambert, a German mathematician and physicist, surmised in 1761 that both e and π are transcendental. French mathematician Charles Hermite proved e to be transcendental in 1873, and German mathematician Carl Ferdinand von Lindemann proved π to be so in 1882. German mathematician Georg Ferdinand
Ludwig Philipp Cantor proved the transcendentals to be abundant, despite the difficulty of identifying individual cases, and in doing so, evoked much unwelcome and unsettling criticism from other
mathematicians. The early (pre-Pythagorean) Greeks recognized only positive whole numbers as numbers; 2 and 3 were numbers in their view, but not –2 or –3, or 2/3. The Pythagoreans recognized
rationals—quantities that could be expressed as the ratio of two whole numbers—as real, but were scandalized by the discovery that the length of the diagonal of a unit square cannot be expressed as
such a number. This is an early example—there are many more in the history of mathematics—of mathematical thinking hitting a barrier, the crossing of which required the invention or discovery of new
concepts. The admission to the family of numbers of quantities that cannot be represented as the ratio of whole numbers, such as √2 and π, was a slow and painful process; acceptance of √−1, or i, and
complex numbers was even more so. Today all these types of representations are familiar members of the number lexicon, but there are more recent additions— transfinites (Cantor), hyperreals,
hyperradicals and ultraradicals (Kasner & Newman, 1940; Robinson, 1969), surreals (Conway, 1976; Tøndering, 2005), inaccessible, hyperinaccessible, indescribable, and ineffable cardinals (among other
esoteric kinds) (Rucker, 1982)—that are considerably
less so. A compelling argument can be made that to the question “What really is a number?” there is no universally agreed-upon answer. If Lakoff and Núñez (2000) are right, “not only has our idea of
number changed over time, but we now have equally mathematically valid but mutually inconsistent versions of what numbers are” (p. 359). From the vantage point of the 21st century, it is easy to fail
to appreciate the sometimes slow and difficult path to acceptance of many of the abstractions that we take for granted, but two facts pertain to nearly every extension of the number system from
natural numbers to rationals, to real numbers, to negative numbers, to complex numbers, and beyond: Each has been motivated to meet a need (to make some mathematical operation possible that was not
possible before), and each has occurred against considerable opposition. Negative numbers, for example, were found to be essential to the performance of even the most fundamental of arithmetic
operations, such as the subtraction of a larger from a smaller number. One might expect too that they would naturally arise in the treatment of debt in accounting. Nevertheless, most European
mathematicians of the 16th and 17th centuries did not accept negative numbers, which they learned of from the Hindus, as bona fide numbers—as opposed to “fictitious” ones. McLeish (1994) credits
Indian mathematician Brahmagupta, who lived during the seventh century, with the first systematic treatment of negative numbers, including rules for multiplying them, and of their legitimacy as roots
of quadratic equations. Sixteenth- and 17th-century European mathematicians, French polymath René Descartes among them, were unhappy with the idea that negative numbers could be roots of equations.
It was not until late in the 17th century that English mathematician John Wallis represented them as quantities to the left of zero on the number line, negative n being the same number of units to
the left as positive n is to the right, a representation that is commonly used today. The use of negative numbers was eventually accepted for practical reasons before a logical foundation had been
provided for them, although some textbook authors rejected the possibility of multiplying two negative numbers even as late as the 18th century (Boyer & Merzbach, 1991). Nahin (1998) recounts the
following argument, made by Wallis in 1665: Since a/0, with a > 0, is positive infinity, and since a/b with b < 0, is a negative number, then this negative number must be greater than positive
infinity because the denominator in the second case is less than the denominator in the first case (i.e., b < 0). This left Wallis with the astounding conclusion that a negative number is
simultaneously both less than zero and greater than positive infinity, and so who can blame him for being wary of negative numbers? (p. 14)
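Restated in modern notation, the reasoning Nahin attributes to Wallis runs roughly as follows; this is only a paraphrase of the quoted argument, and the final step is of course the fallacious one.

```latex
% For a > 0: a/b is positive when b > 0, and grows without bound as b shrinks toward 0.
\[ \frac{a}{b} \to +\infty \quad \text{as } b \to 0^{+} \]
% Wallis's (fallacious) continuation: letting b pass below 0 should make a/b larger still,
\[ b < 0 \;\;\Rightarrow\;\; \frac{a}{b} \;\overset{?}{>}\; \frac{a}{0} = +\infty \]
```

The slip lies in assuming that a/b keeps increasing as the denominator decreases through zero; treated that way, a negative quantity appears to be at once below zero and beyond infinity, which is just the conclusion that made Wallis wary.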
Among other impediments to the acceptance of negative numbers was that in his introduction of analytic geometry, Descartes originally considered equations only in the first quadrant of the x,y plane;
as the development of analytic geometry progressed, however, use of the other quadrants, each of which involved negative numbers, was such an obvious extension that this may have facilitated
acceptance over time.
☐☐ Tallies to Ciphers
Realization that symbols like |, ||, and ||| could be used to represent one, two, or three of anything—sheep, fish, children—was an enormously important insight. It was
essential to the development of mathematics as we know it. It would be wonderful to know the story of how this happened, but we are unlikely ever to do so. These symbols are used here only for the
sake of illustration, but one might defend the notion that the stroke or line (tally mark) is a powerful and profound symbol. It is trivially easy to make with almost any medium (bone, stone, clay,
papyrus). The one-to-one correspondence between a sequence of strokes and the set of objects whose numerosity is being represented is direct and salient. Evidence of the notching of bone, presumably
to represent tallies, goes back perhaps 30,000 years (Flegg, 1983). The single vertical stroke was used to represent 1 by several ancient number systems, and some of them use two or three strokes to
represent 2 and 3 as well. Apparently around 2000 BC the residents of Harappa, in the Indus Valley (parts of which are in modern India, Pakistan, and Bangladesh), used vertical strokes to represent
the numbers from 1 to 7 and abandoned the tally representation only beginning with the number 8, their symbol for which bears some resemblance to our own (Fairservis, 1983). How long it took from the
first uses of tallies in which only a single symbol was used repeatedly to represent a quantity and the beginnings of the use of different symbols to represent different quantities, “cipherization,”
is not known. Among the oldest known cipherized systems are those of the ancient Sumerians, Babylonians, Egyptians, Greeks, Romans, Aztecs, and Mayans.
☐☐ The Hindu-Arabic System
Today the Hindu-Arabic system (alternatively referred to as the Indo-Arabic, or simply Arabic, system) for representing numerical concepts is used throughout the civilized
world. It is so familiar to us that we
may have difficulty in perceiving it as the convention that it is. We tend to think of a string of Hindu-Arabic symbols, such as 267, not as a representation of a number—a numeral—but as the number
itself. In fact, this system is a relatively recent chapter in the history of the development of number systems. The Hindu-Arabic system is generally referred to as such because it was widely
believed to have been developed by the Hindus in India, and its familiarity to the West owes much to the work of Arab scholars. It appears that this system—in principle but not in detail the one used
in India by the seventh century—was adopted by the Arabs probably during the eighth century and was known here and there in Europe for several hundred years before it was appropriated there for
general use. Decimal systems with different symbols were in use in other parts of the world, including China, centuries earlier (Ifrah, 2000). (For a brief account of the origin of the characters
that constitute our decimal integers, see Friend, 1954, or Pappas, 1989.) Precisely how and by whom the Hindu-Arabic symbols were introduced to medieval Europe is uncertain. According to Ifrah
(1987), the earliest known European manuscript that contains the first nine Hindu-Arabic numerals is the Codex Vigilanus, which dates from 976. Ifrah credits the French monk Gerbert of Aurillac, who
became Pope Sylvester II, with being the first great scholar to spread the use of these numerals in Europe, although the system was not widely used in Europe until several centuries later. Schimmel
(1993) identifies a Latin translation by Robert of Chester in about 1143 of a book, Concerning the Hindu Art of Reckoning, written in the ninth century by Mohammed ibn-Musa al-Khwarizmi (whose last
name is also the origin of our word algorithm) as the vehicle that introduced this number system, as well as the concept of algebra, to the West. It was perhaps because of an Arab’s (al-Khwarizmi’s) role
in publicizing the Hindu numerals in Europe that the system became known as Hindu-Arabic. A Syrian reference to the Hindu numerals is known from 662, and an Indian plate survives from 595 in which
the date 346 appears in decimal place notation form (Boyer & Merzbach, 1991). Leonardo of Pisa, a scholarly and widely traveled merchant also known to posterity as Fibonacci (a condensation of Filius
Bonaccio, or son of Bonaccio), promoted the use of the Hindu-Arabic system in Europe in his book Liber abaci, which was circulated in manuscript form early in the 13th century, and was made more
widely available when it was published in Latin some six centuries later. At the beginning of the book, Fibonacci identifies the nine figures of the Indians as 1 2 3 4 5 6 7 8 9 and notes that with
these nine figures, and the sign 0, any number can be written.
Europe was slow to adopt the Hindu-Arabic system. Some have speculated that this was due in part to the widespread use of the abacus for computation at the time and the advantages of the Hindu-Arabic
notation over Roman numerals being less apparent with this method of computation than when calculations are done with pen and paper. Merchants in Florence were forbidden to use the Hindu-Arabic
system for fear that their customers could easily be deceived by it (Ellis, 1978). A struggle between Abacists, those committed to the use of the abacus and old traditions, and Algorists, who
advocated reform, went on for three or four centuries. In retrospect, scholars have viewed the eventual adoption of the Hindu-Arabic system of number representation as one of the defining events in
the history of Europe and, by extension, of the world. We are so familiar with the system we use today to represent numbers that it is difficult for us to see it as the thing of beauty and power that
it is. The history of the evolution of systems for representing numbers has been told, at least in part, by numerous scholars, among them Smith and Ginsburg (1937), Menninger (1958/1992), Flegg
(1983), Ifrah (1987), Barrow (1992), Kaplan (1999), and Seife (2000). It is instructive to compare the Hindu-Arabic system with its predecessors. When one does so, one sees a variety of similarities
and differences that have implications for its use. It is more abstract than several of its predecessors, and therefore probably more difficult to learn, but also more compact, more readily
extendable, and more conducive to computational manipulation. The Hindu-Arabic system was not the first that was able to represent any number with a fixed small set of symbols—the Babylonian system,
which predated the Hindu-Arabic system by many centuries, could do so with only two symbols, approximated by | and <.
For X > 1 and n > 0, Xⁿ > 1, and for n < 0, Xⁿ < 1. As n approaches 0, from either the positive or negative side, Xⁿ approaches 1 (see Figure 3.1), so it seems right that at n = 0, Xⁿ should equal 1. We see then compelling reasons for deciding that X⁰ = 1. Obviously, 0 multiplied by itself as many times as one wishes equals 0, so 0ˣ = 0. (Raising 0 to a negative power is not allowed by virtue of the prohibition of division by 0.) But now, what about the oddest expression of all, 0⁰? If X⁰ = 1 is to hold generally, then 0⁰ must equal 1, but if 0ˣ = 0 is to hold generally, then 0⁰ must equal 0. We seem to be at an impasse; whether we make the value of 0⁰ be 1 or 0, we must give up the generality of one or the other of the relationships considered; they cannot both be general. Because of this dilemma, the value of 0⁰ is sometimes said to be indeterminate or undefined, like 0/0.
Figure 3.1 Illustrating the approach of Xⁿ to 1 as n approaches 0 from either below or above. In this illustration, X = 2 (black) and 3 (dark gray).
Several rationales have been given,
however, for considering the value to be 1. One such is that defining it as 1 permits the binomial theorem to be general—to not have to treat X = 0 as a special case. To my knowledge, there is no
widely accepted answer to the question of what the “real” value of 0⁰ is, if that is a meaningful question; how it should be treated appears to be a matter of what works in the context in which it is encountered, and perhaps on the predilections of the user. Defined as the empty set, zero has been used as the basis for set-theoretic definitions of all the natural numbers. And this is considered
preferable to using 1 as the basis for such definitions, as was done by Frege and Peano, because 1 is a member of the set that is being defined, which makes the definitions circular. If the
definitions based on 0 are not to be seen as circular, 0, in this context, cannot be considered a natural number. But there it is on the number line, and what would the number line be without it?
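One standard way of carrying out the set-theoretic construction just mentioned is von Neumann's, in which each natural number is the set of the numbers before it; the sketch below shows the first few steps (the attribution to von Neumann is standard, though the text does not name the particular construction it has in mind).

```latex
% The von Neumann construction: each natural number is the set of its predecessors.
\[ 0 = \varnothing, \quad 1 = \{0\} = \{\varnothing\}, \quad 2 = \{0, 1\} = \{\varnothing, \{\varnothing\}\}, \quad n + 1 = n \cup \{n\} \]
```

Because the construction starts from the empty set and never presupposes the number being defined, it avoids the circularity the text attributes to definitions that take 1 as primitive.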
What is zero, really? Lakoff and Núñez (2000) argue that there is no one answer to this question. Each of the candidate answers—empty set, number, point on the number line— “constitutes a choice of
metaphor, and each choice of metaphor provides different inferences and determines a different subject matter” (p. 7). Whatever zero is, its importance to mathematical reasoning is beyond dispute;
without it, much of mathematics as we know it today could not have been developed.
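The dilemma over 0⁰ discussed earlier in this section can be compressed into a pair of limits that point in opposite directions; the lines below simply restate the argument in the text, together with the binomial-theorem rationale for the convention 0⁰ = 1.

```latex
% The two competing rules, expressed as limits:
\[ \lim_{x \to 0^{+}} x^{0} = 1 \qquad \text{whereas} \qquad \lim_{y \to 0^{+}} 0^{y} = 0 \]
% The convention 0^0 = 1 is what keeps the binomial theorem valid at x = 0:
\[ (x + y)^{n} = \sum_{k=0}^{n} \binom{n}{k} \, x^{k} \, y^{\,n-k} \]
```

Contexts that privilege the second limit treat 0⁰ as undefined instead, which is the impasse the text describes.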
☐☐ Fractions
The ancient Egyptians expressed all fractional quantities as sums of different fractions, each of which had a numerator of 1. The fraction 2/3 would be written as 1/2 + 1/6; 3/7 would be
1/3 + 1/11 + 1/231, in Egyptian notation of course. In general, ancient number systems, perhaps with the exception of the Babylonian system, were not ideally suited to representing fractions. One of
the strategies used for limiting the need for fractions was that of defining many subdivisions of units of length, weight, and other measures so that calculations could be done in terms of integral
multiples of the subdivisions. The practice of representing fractions by placing the numerator over the denominator and separating them with a horizontal bar was used in Arabia and by Fibonacci in
Europe in the 13th century, but it did not come into general use in Europe until the 16th century. The slanted line was suggested by British mathematician–logician Augustus De Morgan in 1845. Hindu
mathematicians represented fractions by placing the
numerator over the denominator, but with no bar between, as early as the seventh century. Boyer and Merzbach (1991) note that “it is one of the ironies of history that the chief advantage of
positional notation—its applicability to fractions—almost entirely escaped the users of the Hindu-Arabic numerals for the first thousand years of their existence” (p. 255). Although decimal fractions
are known to have been used in more than one pre-Renaissance culture on occasion—including China as early as the third century AD—the decimal point was first used to separate the integral from the
fractional part of a number sometime close to the end of the 16th century. John Napier, Scottish mathematician-physicist and inventor (among others) of logarithms, was an early advocate of this
scheme. The French mathematicians François Viète (sometimes Vieta, an amateur to whom Kasner and Newman [1940] refer as “the most eminent mathematician of the 16th century”) and Simon Stevin are also
credited with playing significant roles in ensuring wide acceptance of decimal fraction notation, which did not occur until about 200 years after Napier’s advocacy (Boyer & Merzbach, 1991; Ellis,
1978). Contemporary conventions for representing numbers differ in some respects in different parts of the world. In America, a period is used to set off the fractional part of a number, while a
comma serves the same purpose in Europe. Thus, what Americans would write as 231.05, Europeans would write as 231,05. Americans separate multiples of 1,000 by commas (1,000,000), while Europeans do
so with spaces (1 000 000). Especially confusing is that English-speaking Americans and Europeans use the same words to denote different quantities; thus to Americans, one billion is 10⁹, whereas to the British, one billion is 10¹², that is, 1 million squared.
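Returning briefly to the Egyptian unit-fraction decompositions mentioned at the start of this section: a simple modern way to produce such a decomposition is the greedy method usually credited to Fibonacci, which repeatedly peels off the largest unit fraction that still fits. The sketch below (the function name greedy_unit_fractions is simply illustrative) is offered only as an illustration; the Egyptians worked from tables rather than from any such algorithm, and the greedy expansion is not always the shortest one, but for 3/7 it happens to reproduce exactly the expansion quoted above.

```python
from fractions import Fraction
from math import ceil

def greedy_unit_fractions(q: Fraction) -> list[Fraction]:
    """Decompose a positive fraction less than 1 into a sum of distinct unit
    fractions by repeatedly removing the largest unit fraction that does not
    exceed what remains (the greedy, or Fibonacci-Sylvester, method)."""
    if not 0 < q < 1:
        raise ValueError("expected a fraction strictly between 0 and 1")
    parts = []
    while q > 0:
        d = ceil(1 / q)              # smallest denominator with 1/d <= q
        parts.append(Fraction(1, d))
        q -= Fraction(1, d)
    return parts

# The two examples from the text:
print(greedy_unit_fractions(Fraction(2, 3)))  # 1/2 + 1/6
print(greedy_unit_fractions(Fraction(3, 7)))  # 1/3 + 1/11 + 1/231
```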
☐☐ Mental Number Representation
The many ways in which numbers have been represented in various places and at sundry times is an interesting story, and it raises many questions regarding the relative
advantages and disadvantages of specific representational systems from a psychological point of view. Other interesting psychological questions have to do with the way numbers are represented in
people’s minds. One might assume that numbers are represented as numerals—that 3, for example, is represented in one’s mind as the symbol 3. But there is reason to believe that it is not quite that
simple. Moyer and Landauer (1967, 1973) measured the time it takes for people to decide which of two numbers is the smaller and found that it
varies inversely with the distance (the difference in value) between the numbers on the number line—the greater the distance (the larger the difference), the faster the decision time. A comparison
between 2 and 9, for example, takes considerably less time than one between 9 and 7. This classic result, which was surprising when it was first obtained, has been interpreted as what would be
expected if the mental representation of numbers were analog in character, something like an actual number line (Dehaene, Bossini, & Giraux, 1993; Fias, Lammertyn, Reynvoet, Dupont, & Orban, 2003).
The “symbolic distance effect,” or simply the “distance effect,” as it is generally called, has been replicated many times in various forms, including with languages other than English (Banks, Fujii,
& Kayra-Stuart, 1976; Dehaene, 1996; Parkman, 1971; Tzeng & Wang, 1983), and with children as well as with adults (Donlan, 1998; Sekuler & Mierkiewicz, 1977; Temple & Posner, 1998), although the
effect appears to decrease in magnitude with increasing age (Duncan & McFarland, 1980; Sekuler & Mierkiewicz, 1977). Dehaene, Dupoux, and Mehler (1990) reported a distance effect with two-digit
numbers. Dehaene (1997) gave university students the task of pressing a left- or right-hand key to indicate whether a digit was smaller or larger than 5, and found that the distance effect persisted
even after 1,600 trials over several days. He takes the distance effect as evidence of the inadequacy of the popular metaphor of the brain as a digital computer, arguing that the way we compare
numbers is more suggestive of an analog machine than of a digital one. But comparing numbers is a relatively restricted type of cognitive activity, and whether the many other types of cognitive
activity involved in the doing of complex mathematics are as readily attributed to analog processes remains to be seen. There are several findings involving the perception of, or decisions about,
numbers that are closely related to the distance effect. A case in point is known variously as the “magnitude effect,” “size effect,” or “problem-difficulty” effect, and it manifests itself in
several ways. The time required to read numbers increases with their magnitude, especially for numbers in the range of 1 to about 50 (Brysbaert, 1995; Dehaene, 1992). Given two numbers that differ by
a fixed amount, say, 2 and 4 or 8 and 10, the time required to decide which of two is the smaller (or the larger) increases with their size (Antell & Keating, 1983; Strauss & Curtis, 1981). The time
required to report the sum, difference, product, or quotient of two numbers, or to judge whether a proposed answer to a computation is correct, increases with the sizes of the numbers involved
(Campbell, 1987; Groen & Parkman, 1972; LeFevre et al., 1996; Miller, Perlmutter, & Keating, 1984; Rickard & Bourne, 1996; Stazyk, Ashcraft, & Hamann, 1982). Interestingly, this effect is greater
when the numbers involved are represented in printed verbal form
(one, two, …) than as numerals (1, 2, …) (Campbell, 1999; Campbell & Clark, 1992). An analog of the finding that the time required to decide that two numbers are different varies inversely with the
numerical difference between the numbers has been obtained with nonnumeric stimuli. When people have been asked to indicate as quickly as possible whether two stimuli are the same or different and
those that differ can differ to varying degrees, the typical finding has been that same stimuli are identified as “same” faster than different stimuli are identified as “different,” on average, but
that the time required to identify different stimuli as “different” decreases as the magnitude of the difference between them increases (Bamber, 1969; Egeth, 1966; Hawkins, 1969; Nickerson, 1967,
1972). A numerical distance effect has also been observed when the task is to report whether a proposed answer to a simple mathematical calculation (e.g., addition of two single-digit numbers) is
correct or incorrect and the “distance” involved is the magnitude of the difference between the correct answer and an incorrect proposal: The greater the difference between the correct and the
incorrect answer, the faster and more accurately people report that the incorrect answer is incorrect (Ashcraft & Battaglia, 1978; Ashcraft & Stazyk, 1981). Still another related finding suggests a
representation that has numbers proceeding from small to large from left to right. Using left and right keys to respond “smaller” and “larger,” respectively, when comparing a visually presented
number to a reference number produces faster responses than does assignment of “smaller” and “larger” to the right and left keys, respectively (Dehaene et al., 1990; Dehaene et al., 1993). The
advantage of the smaller-left, larger-right mapping has been found both with Hindu-Arabic numerals and with numbers represented as words (Fias, 2001; Nuerk, Iversen, & Willmes, 2004). The effect has
been obtained whether the left and right keys are pressed with the left and right hands (most experiments), with two fingers of the same hand (Kim & Zaidel, 2003), or by crossing the arms and
pressing the left and right keys with the right and left hands (Dehaene et al., 1993). Fias and Fischer (2005), who review these and related findings in detail, point out that, in the aggregate, they
can be seen as illustrative of a more general class of spatial compatibility effects of the sort noted half a century ago by Fitts and Seeger (1953) and reviewed by Kornblum, Hasbroucq, and Osman
(1990). The extent to which this small-to-large left-to-right mapping is peculiar to cultures that read from left to right is, so far as I know, an open question. In sum, the findings from number
comparison studies have been widely interpreted as generally supportive of the idea that numerical quantities are represented, not necessarily exclusively, as locations on an
analog number line. Regarding how numbers are spaced on this line—in particular whether the spacing is linear (Gallistel & Gelman, 1992, 2000) or something closer to logarithmic (Dehaene, 1992;
Dehaene et al., 1990), so the differences between successive numbers decrease with number size—it appears that the jury is still out. And alternatives to the number-line representation have been
proposed that are able to account for many of the findings as well (Zorzi & Butterworth, 1999; Zorzi, Stoianov, & Umilta, 2005). Several investigators of number processing have used Stroop-like
tasks, which require the suppression of some feature of a stimulus in order to respond accurately to some other feature, as for example when one is asked to respond to the color of a word, say red,
which can be printed in red or blue. A version of the task used in number processing might require a person to say which of two numerals, 2 or 8, was in the larger print. As might be expected from
the results of findings with Stroop tests with nonnumeric stimuli, people respond faster and more accurately in determining that 5 is the (physically) larger of 5 and 3 than in determining that 3 is
the larger of 5 and 3 (Henik & Tzelgov, 1982; Pansky & Algom, 1999) (Figure 3.2, left), or in reporting that a display contains four digits if the display is of four 4s than if it is of four 3s
(Pavese & Umilta, 1998) (Figure 3.2, center). The finding of no Stroop interference when the task is to specify which of two verbal numerals (three or five) is written in the larger font (Figure 3.2,
right) has been taken as evidence of separate types of processing of verbal and Arabic numeral representations of numbers (Ito & Hatta, 2003).
Figure 3.2 Illustrating Stroop-like effects with numbers. Left: People are faster to report which numeral is physically larger when that numeral is also numerically larger. Center: People are faster to report the number of numerals when that number corresponds to the numerals shown. Right: The time required to say which number name is written in the larger font appears not to be affected by the numbers named.
There is some evidence that high math anxiety increases the interference one experiences from irrelevant stimulus features on Stroop-like tasks (Hopko, McNeil, Gleason, & Rabalais, 2002). The finding of a variety
of priming effects—in which presentation of a number has an effect on the response to a subsequent occurrence of the same or a different number in the context of some numerical task— has been taken
as evidence of automatic activation of number codes and to have implications for models of the nature of those codes. Studies have found a distance-prompting effect, according to which the size of
the effect of a prompt* is inversely proportional to the quantitative difference between the prompt and target (the number on which the prompt has its effect). Thus, the effect on one’s response to 5
in a numerical task would be greater if prompted with 4 than if prompted with 2 (den Heyer & Briand, 1986). The finding that the magnitude of the effect is strictly a function of the difference
between prompt and target and relatively independent of whether the prompt is smaller or larger than the target has been taken as evidence against the idea that numbers are represented mentally on a
logarithmically spaced number line (Reynvoet, Brysbaert, & Fias, 2002; Zorzi et al., 2005). The question of how numbers are represented mentally continues to be a focus of research. One issue is the
extent to which the representation of numerical concepts is integrated with, or independent of, the representation of natural language. According to models proposed by McCloskey (1992; McCloskey &
Macaruso, 1995) and by Gallistel and Gelman (1992), numerical concepts are represented independently of natural language. A model proposed by Dehaene (1992) and elaborated by Dehaene and Cohen (1995)
hypothesizes three separate codes: one that represents magnitudes analogically, one that represents numbers visually as Hindu-Arabic numerals, and one that represents number facts verbally. Details
can be found in the cited references and a comparison of the models in Campbell and Epp (2005). In a study of acquisition of the concept of infinity, Falk (in press) obtained evidence that
recognition that numbers are endless is facilitated by the ability to separate numbers from their names—children who could not make this distinction had difficulty in seeing that numbers could be
increased beyond what could be named. One aspect of the question of how numeric concepts are represented in the brain that is currently motivating much research is that of identifying specific
regions of the cortex that are actively involved in the processing of such concepts.
* What I here refer to as a prompt usually is referred to in the literature as a prime. I have opted for prompt so as to avoid confusion with the use of prime in mathematics to refer to numbers that have no divisors other than 1 and themselves.
The application of a variety of neuroimaging techniques has led to a focus on three areas, all in the parietal lobes, that appear to be heavily involved: a horizontal
segment of the intraparietal sulcus (HIPS), the angular gyrus, and the posterior superior parietal lobule (PSPL) (Dehaene, Piazza, Pinel, & Cohen, 2005). Evidence that numeric and nonnumeric concepts
are represented, at least partially, in different cortical areas comes from studies showing that lesions can impair the processing of numeric concepts while leaving the processing of nonnumeric
concepts intact (Anderson, Damasio, & Damasio, 1990; Butterworth, Cappelletti, & Kopelman, 2001; Cappelletti, Butterworth, & Kopelman, 2001), and can have the opposite effect as well (Cohen &
Dehaene, 1995; Dehaene & Cohen, 1997). Studies have also shown dissociation between knowledge of numerical facts and the ability to perform mathematical procedures (Temple, 1991; Temple & Sherwood,
2002), as well as differential impairment of the abilities to do subtraction and multiplication (Dagenbach & McCloskey, 1992; Delazer & Benke, 1997; Lampl, Eshel, Gilad, & Sarova-Pinhas, 1994; Lee,
2000). One interpretation of the latter finding is that multiplication requires access to information stored in verbal form (the typically overlearned multiplication table), whereas subtraction does
not (Dehaene & Cohen, 1997; Dehaene et al., 2005). Dehaene et al. (2005) contend that such dissociations suggest the existence of “a quantity circuit (supporting subtraction and other
quantity-manipulation operations) and a verbal circuit (supporting multiplication and other rote memory-based operations)” (p. 445). The neuroanatomy of number competence is a lively area of research
at the present and promises to continue to be so for the foreseeable future.
☐☐ Irrational Numbers “To the Pythagorean mind, ratios controlled the universe” (Seife, 2000, p. 34). More specifically, the controlling ratios were assumed to be ratios of integers. The discovery
that there are quantities, such as the square root of 2, that cannot be expressed as the ratio of two integers was a devastating shock to people with this view; the existence of such irrational
numbers was seen to be a serious threat to the rationality or orderliness of the world. (One wonders whether Pythagoras was aware that (√2)² = 2.) One of the implications of the existence of irrational numbers is the failure of the rationals to “fill up” the number line. Given
that there are infinitely many rational numbers and that between any two rationals,
no matter how close, there are infinitely many other rationals, one might easily assume that the rationals fill up the line. But the existence of irrationals means that there are gaps in the line
that are not filled by rationals. Maor (1987) calls the discovery that the rationals leave “holes” in the number line—points that are not rational numbers—“one of the most momentous events in the
history of mathematics” (p. 43). This is an interesting claim in view of the distinction between rational and irrational numbers having little, if any, practical importance inasmuch as both can be
expressed to whatever level of accuracy (number of significant digits) that is required for computational purposes. Besides generating reflection on the question of what mystical significance the
existence of irrationals might have, their discovery threatened the integrity of geometry by invalidating many of the early geometrical proofs that had been considered beyond question. The
restoration of confidence in geometrical methods became a major objective of much mathematical thinking, and according to British mathematician and historian of mathematics Herbert Turnbull (1929),
this was eventually accomplished by Eudoxus in the fourth century BC by his establishment of a sound basis for the doctrine of irrationals. Even so, irrationals were not fully accepted as bona fide
numbers; a distinction was made between numbers and magnitudes—with irrationals considered to be in the latter category—and prevailed for about a millennium (Flegg, 1983). Notable among irrational
numbers are two of the most important constants of mathematics. I refer, of course, to π, the ratio of a circle’s circumference to its diameter, and e, the base of natural logarithms. Both are
indispensable, appearing in the most mundane as well as the most arcane of computations, and each is a colorful character with an intriguing history. Mathematicians have had techniques for
approximating the value of π at least since the time of Archimedes. Non-Western cultures also long ago had methods for approximating this all-important number with a ratio of two integers; Dunham
(1991) gives as examples of values produced: 355/113 = 3.14159292 by Chinese mathematician-astronomer Tsu Ch’ung-chih (sometimes Zu Chongzhi) in the fifth century, and 3,927/1,250 = 3.1416 by Hindu
mathematician-astronomer Bhāskara around 1150. That π is irrational was demonstrated by Swiss mathematician Johann Lambert in the 1760s. A little over a century later, German mathematician Carl Louis
Ferdinand von Lindemann showed it to be transcendental. Why does π appear in so many formulas that have no obvious connection with circles? Why, for example, does it occur in the formula for the
probability that a needle tossed on a plane of equally spaced parallel lines will fall across one of the lines: p = 2n/(πd), where n is the length of the needle and d is the distance between the lines?
Why does it appear
in the formula for the Gaussian, or “normal,” probability density, which describes the distribution of so many natural variables? What is it doing in Maxwell’s equations describing electromagnetic
effects? Or in Einstein’s general relativistic equation? Why does it appear in Stirling’s formula for bracketing n! for large n? And what business does it have, to ask a qualitatively different
question, showing up in such elegant attire as
\[
\frac{2}{\pi} = \frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2+\sqrt{2}}}{2}\cdot\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdots\,?
\]
This equation, discovered by Viète in 1593, is credited by Maor (1987) with being the first explicit expression of an infinite process in a mathematical formula. It is, Maor notes, still admired for its beauty. Viète's formula was subsequently joined by many others that express π as a function of an infinite arithmetic process and that produce approximations to π that increase in accuracy regularly with the number of terms in the equation. One such is
\[
\frac{\pi}{2} = \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdot\frac{8}{7}\cdot\frac{8}{9}\cdots
\]
discovered by 17th-century English mathematician John Wallis. Another is
\[
\pi = \cfrac{4}{1 + \cfrac{1^2}{2 + \cfrac{3^2}{2 + \cfrac{5^2}{2 + \cfrac{7^2}{2 + \cdots}}}}}
\]
from 17th-century Irish mathematician Lord William Brouncker, founder and first president of the Royal Society of London. Leibniz and 17th-century Scottish mathematician James Gregory independently discovered
\[
\frac{\pi}{4} = \frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \frac{1}{11} + \cdots
\]
In Chapter 1, I mentioned the zeta function of Riemann's famous conjecture,
\[
\zeta(x) = \sum_{n=1}^{\infty} \frac{1}{n^{x}}.
\]
Euler discovered that, for x = 2, this function equals π²/6, that is,
\[
\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}.
\]
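As a quick numerical illustration of the two series just displayed (a sketch of my own, not taken from the cited sources), the following Python fragment sums the Leibniz–Gregory series and the 1/n² series and compares the results with π; both converge, but only slowly.

```python
import math

terms = 10_000

# Leibniz-Gregory: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
leibniz = 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

# Euler: the sum of 1/n^2 equals pi^2/6, so pi = sqrt(6 * partial sum).
euler = math.sqrt(6 * sum(1 / n ** 2 for n in range(1, terms + 1)))

print(leibniz)   # ~3.14149..., still off by roughly 1e-4 after 10,000 terms
print(euler)     # ~3.14150..., also off by roughly 1e-4
print(math.pi)   # 3.141592653589793
```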
Maor (1987) refers to this series as one of the most beautiful results in mathematical analysis, and du Sautoy (2004) calls Euler’s discovery “one of the most intriguing calculations in all of
mathematics” (p. 80). Intriguing because surprising: The finding “took the scientific community of Euler’s time by storm. No one had predicted a link between the innocent sum 1 + 1/4 + 1/9 + 1/16 + …
and the chaotic number π” (p. 80). Although all the series mentioned, as well as others, converge so as to yield good approximations to π, they do not all converge at the same rate. The series that
was discovered independently by Leibniz and Gregory and that converges to π/4 requires 628 terms just to produce a value for π that is accurate to two decimal places. In contrast,
\[
\sum_{n=1}^{\infty} \frac{1}{n^{4}} = \frac{\pi^{4}}{90},
\]
also from Euler, converges rapidly. Other people, including some for whom mathematics was only an avocation, developed other infinite series approximations to π, some of which are highly efficient in
that relatively few terms yield approximations that are accurate to dozens of decimal places. Dunham (1991) mentions Abraham Sharp, a British schoolmaster, who found π accurate to 71 places in 1699,
and John Machin, a British astronomer, who found it accurate to 100 places in 1706. William Shanks, an amateur mathematician, also British, used Machin’s series to approximate π to 707 places in
1873, but more than seven decades later in 1944, another British mathematician, D. F. Ferguson, discovered, with the help of a desktop mechanical calculator, that Shanks’s approximation had an error
at the 528th place, making the rest of the approximation wrong. In 1947, American mathematician John Wrench Jr. produced an approximation of 808 digits, and again Ferguson found an error, this time
in the 723rd place. He and Wrench jointly published a correction to 898 places in 1948. The story of this progression of longer and longer approximations to π, predating the arrival on the scene of
the digital computer, is told in detail by Beckman (1971) and Dunham (1991). The fascination that mathematicians—professional and amateur—have shown for π over the centuries and their willingness to devote countless hours attempting to approximate it to accuracies far beyond any practical usefulness are interesting psychological phenomena. It is one among numerous examples of mathematical entities that
have captured the attention of people with a mathematical bent and held it seemingly permanently. With the arrival of the electronic computer on the scene, the rate at which ever longer
approximations to π were found increased exponentially. Among the first to use a computer for this purpose were Daniel Shanks, an American physicist and mathematician, and Wrench, who collaborated on
programming an IBM 7090 computer to compute π to a little over 100,000 places in approximately nine hours in 1961. In 2005, Yasumasa Kanada of the Information Technology Center, Computer Centre
Division, University of Tokyo, announced having approximated π to over 1 trillion (10¹²) decimal places; I have no idea how the accuracy of such a computation is verified. The remarkable constant e,
the base of natural logarithms, is a particularly interesting case of the infinitely large and the infinitely small combining to confound our intuitions and to yield an immensely useful construct in
the process. It is usually defined as the limit of the expression (1 + 1/n)^n, which is sometimes written as
\[
\left(1+\tfrac{1}{1}\right)^{1},\ \left(1+\tfrac{1}{2}\right)^{2},\ \left(1+\tfrac{1}{3}\right)^{3},\ \left(1+\tfrac{1}{4}\right)^{4},\ \left(1+\tfrac{1}{5}\right)^{5},\ \ldots
\]
or as
\[
\left(\tfrac{2}{1}\right)^{1},\ \left(\tfrac{3}{2}\right)^{2},\ \left(\tfrac{4}{3}\right)^{3},\ \left(\tfrac{5}{4}\right)^{4},\ \left(\tfrac{6}{5}\right)^{5},\ \ldots
\]
As n increases indefinitely, 1/n goes to 0, 1 + 1/n goes to 1, so, inasmuch as 1^n is 1, we might expect (1 + 1/n)^n to go to 1. On the other hand, inasmuch as x^n increases without limit provided that x is greater than 1, and 1 + 1/n is always greater than 1, even if only infinitesimally, we might expect (1 + 1/n)^n to increase without limit. In fact, as n increases indefinitely, (1 + 1/n)^n converges to 2.7182818284 …, which we have come to know and love as e. As seen in Table 3.1, the sequence has to be extended for about 1,000,000 terms for the converging approximation to be accurate
to six decimal places.
Table 3.1. Values of (1 + 1/n)^n for n up to 1,000,000
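A standard expansion (not given in the source, but perhaps worth sketching here) shows why the convergence is so slow:
\[
\left(1+\frac{1}{n}\right)^{n} = e^{\,n\ln(1+1/n)} = e^{\,1-\frac{1}{2n}+\frac{1}{3n^{2}}-\cdots} = e\left(1-\frac{1}{2n}+O\!\left(\frac{1}{n^{2}}\right)\right),
\]
so the error is roughly e/(2n), and six-decimal accuracy requires n on the order of 1,000,000, consistent with Table 3.1.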
Another sequence that converges to e, and considerably more rapidly, is
\[
\sum_{k=0}^{n} \frac{1}{k!}.
\]
This approximation is accurate to seven decimal places when extended to only 10 terms, as shown in Table 3.2. The following expansion of e is attributed to Euler:
\[
e = 2 + \cfrac{1}{1 + \cfrac{1}{2 + \cfrac{2}{3 + \cfrac{3}{4 + \cfrac{4}{5 + \cdots}}}}}
\]
Table 3.2. The Values of 1/0! + 1/1! + ⋯ + 1/n! for n From 1 to 10
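The convergence contrast summarized in Tables 3.1 and 3.2 is easy to reproduce; the short Python sketch below (my own illustration, not part of the original tables) evaluates both expressions and compares them with e.

```python
import math

# The limit form converges slowly: even n = 1,000,000 agrees with e
# to only about six decimal places.
n = 1_000_000
limit_form = (1 + 1 / n) ** n

# The factorial series converges quickly: n = 10 (the eleven terms k = 0..10)
# already agrees with e to about seven decimal places.
series_form = sum(1 / math.factorial(k) for k in range(11))

print(limit_form)    # ~2.7182804
print(series_form)   # ~2.7182818
print(math.e)        # 2.718281828459045
```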
The story of how e came to be selected in the 17th century by Napier as the base for natural logarithms has been wonderfully told by Maor (1994) and can be found in part in Coolidge (1950), Péter
(1961/1976), and in any general reference on logarithms, infinite series, or number theory. As every student who has finished first-year calculus knows, e^x is unique among mathematical functions in that it is equal to its own derivative, which is to say d(e^x)/dx = e^x. This is easily seen when e^x is expressed in power-series form as
\[
e^{x} = \sum_{n=0}^{\infty} \frac{x^{n}}{n!} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \cdots
\]
which, when differentiated term by term, yields the same expression. As we have noted, the values of π and e, as well as those of all other irrational constants, cannot be expressed exactly
numerically; they can only be approximated. No matter how many digits one uses to
approximate them, the representation will fall short of being exact. The concept of an approximation seems to carry the notion that there is some value to be approximated. But what is it in the case
of π, say, that is being approximated? What is the exact value of this constant, which we can only approximate? The answer appears to be that it has no exact value. One might say that the exact value
of π is the ratio of a circle’s circumference to its diameter, but one is still left with a value that cannot be expressed exactly. This is a curious situation from a psychological point of view, but
one that we seem to accept with no difficulty.
☐☐ Imaginary Numbers Nahin (1998) and Mazur (2003) tell the fascinating story of the emergence and slow acceptance of the idea that negative numbers could have square roots and the eventual adoption
of i (first used by Euler) as the symbol to represent √−1, the square root of minus one. They note that the construct √−1 (and multiples of it, such as √−15) was used for centuries, because it was
found to be useful, before a satisfactory “imagining” of what it might mean was attained. As recently as the 16th and 17th centuries, negative roots of equations were referred to by mathematicians as
false, fictitious, or absurd. Writing late in the 18th century, Euler called square roots of negative numbers impossible on the grounds that “of such numbers we may truly assert that they are neither
nothing, nor greater than nothing, nor less than nothing” (quoted in Nahin, 1998, p. 31). Often the use of square roots of negative numbers was covert, which was possible because mathematicians have
long been reluctant to expose the sometimes tortuous thought processes that have gone into the solving of problems, preferring to present only polished justifications of results. Dantzig (1930/2005)
credits Italian physician–mathematician Girolamo Cardano as the first to use the √−1 representation, which he did in 1545. Its appearance in equations was often accompanied by an apology or
rationalization for the use of such strange—“meaningless, sophisticated, impossible, fictitious, mystic, imaginary” (Dantzig, 1930/2005, p. 190)— quantities. Cardano himself believed that negative
numbers could not have square roots, but considered use of the form in a calculation to be justified if the final answer was a real number. Italian mathematician Raphael Bombelli, a contemporary of
Cardano, also used imaginary numbers and defended the practice as essential to the solving of algebraic equations of the form x2 + 1 = 0. Descartes is generally credited with being the first to refer
to square roots of negative numbers as “imaginary,” presumably to connote something the existence of which was doubted or denied.
Acceptance of imaginary numbers as bona fide numbers was slow, but in time it came. Arguably a major factor in their acceptance was the invention in the latter part of the 18th century of complex
numbers— numbers of the form a + ib, in which a is considered the real part and b the imaginary part—and the convention of representing such numbers geometrically. In this representation, a number is
a point in a “complex plane,” its real and imaginary parts being indicated by its x and y coordinates, respectively. Accounts of geometrical representations of complex numbers were given by Caspar
Wessel in 1799, Jean-Robert Argand in 1806, and William Rowan Hamilton in 1837. This representation gave numbers the capability to represent—now as vectors—both magnitude and direction, and opened up
many unanticipated applications in science and engineering. Nevertheless, according to Kline (1980), well into the 19th century “Cambridge University professors preserved an invincible repulsion to
the objectionable √−1, and cumbrous devices were adopted to avoid its occurrence or use, wherever possible” (p. 158). Stewart (1990) refers to the agreement to allow –1 to have a square root as “an
act of pure mathematical imagination,” and to the resulting imaginary numbers as “among the most important and beautiful ideas in the whole of mathematics” (p. 234). Its importance is seen in the
impressive range of its uses. As Péter (1961/1976) puts it, “There is no branch of mathematics which does not turn to this i for help, especially when something of a deep significance needs to be
expressed” (p. 163). Today the rules for performing mathematical operations with complex numbers are well developed, and the use of such numbers is ubiquitous in many areas, especially in science and
engineering. High school students use such quantities with little realization of the consternation they once caused, although some may have difficulty in accepting them as bona fide numbers (Tirosh &
Almog, 1989). The modern view recognizes a hierarchy of number concepts in which natural (counting) integers are a subset of the rationals, which are a subset of the reals, which are a subset of the
complex numbers (a real number is a complex number with imaginary component 0i), and so on. How are we to explain the contrast between the matter-of-fact way in which √−1 and other imaginary numbers
are accepted today and the great difficulty they posed for learned mathematicians when they first appeared on the scene? One possibility is that mathematical intuitions have evolved over the
centuries and people are generally more willing to see mathematics as a matter of manipulating symbols according to rules and are less insistent on interpreting all symbols as representative of one
or another aspect of physical reality. Another, less self-congratulatory possibility is that most of us are content to follow the computational rules we are taught and do not give a lot of thought to what the symbols we manipulate actually mean.
The latter possibility gains some credence from the suspicion that many people who use square roots of negative numbers in computations with apparent ease might be hard-pressed to say what is wrong
with the following argument. Clearly 1/−1 = −1/1, from which it follows that √(1/−1) = √(−1/1), i.e., √1/√−1 = √−1/√1, which can be written √1 · √1 = √−1 · √−1, and inasmuch as √−1 · √−1 = i² = −1, we have 1 = −1. Bunch (1982), from whom I took this example, notes that in order to avoid such unpleasantries, mathematicians prohibit the use of √−1 in equations and insist on using i instead. Note this disallowance must extend not only to the stand-alone use of √−1, but also to its implicit use in expressions like √(1/−1) and √(−1/1). If these forms are permitted and we allow that √(1/−1) = √(−1/1), we would still have the problem inasmuch as from √(−1/1) = i/1 and √(1/−1) = 1/i, we get 1/i = i/1, and 1 = i² = −1. Mazur (2003) notes that the consternation caused by the early use of square roots of
negative numbers did not differ greatly, in kind, from that felt by the use of the rule that the multiplication of two negative numbers should produce a positive number. I venture the conjecture that
most of us who learned this rule in elementary school have long since become comfortable with it—which is not to suggest that we necessarily ever lost sleep over it—but many of us would perhaps find
it difficult to explain to the satisfaction of an inquisitive child the rationale for the rule. The reader who has no trouble with this assignment may get more of a challenge from explaining what (√−1)^(√−1), or i^i, means. Mazur claims that “this concoction can be given a natural enough interpretation, which has a real-number value, as already seen by Euler” (p. 209). I am not the one to explain what
it is. So there are many kinds of numbers, and they all serve us well. But still the question of what exactly a number is—or what numbers are— lingers. Do they exist independently of mankind’s
knowledge and use of them, or are they products of human inventiveness brought into existence to serve useful purposes? One can take one’s pick, and whichever answer appeals, one can find
authoritative spokespersons for that view.
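Two of the claims made above are easy to check directly with Python's complex arithmetic (a sketch of my own, not drawn from Bunch or Mazur): the rule √(a/b) = √a/√b, on which the fallacious argument relies, fails for the principal complex square root, and i^i does indeed have a real value.

```python
import cmath
import math

# sqrt(a/b) == sqrt(a)/sqrt(b) holds for nonnegative reals but not, in general,
# for the principal complex square root -- the fallacious argument trades on this.
lhs = cmath.sqrt(1 / -1)              # sqrt(-1) = 1j
rhs = cmath.sqrt(1) / cmath.sqrt(-1)  # 1 / 1j  = -1j
print(lhs, rhs, lhs == rhs)           # 1j and -1j (modulo a signed zero); False

# i**i has a real value; its principal value is e**(-pi/2).
print((1j) ** (1j))                   # roughly (0.2079 + 0j)
print(math.exp(-math.pi / 2))         # 0.20787957635076193
```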
☐☐ A Number Paradox There are many paradoxes involving numbers. I will mention one because of its relevance to other paradoxes that will be discussed in subsequent chapters. The Berry paradox was
made famous by British philosopher-mathematician-logician Bertrand Russell, who attributed the initial version
of it to G. G. Berry, an Oxford University librarian. It is a self-referential paradox, similar in important respects to such paradoxes as “This sentence is false” (true or false?) and “All knaves
are liars,” spoken by a knave. Presumably there exist, among the infinity of integers, many that are not definable in less than 11 words. (You may prefer to substitute for definable some other word,
such as describable, identifiable, or referable.) Consider “the smallest integer not definable in less than 11 words.” Inasmuch as “the smallest integer not definable in less than 11 words” is a
10-word definition of the integer of interest, there can be no integer not definable in less than 11 words. The Berry paradox is illustrative of paradoxes that involve talking about things that
cannot be talked about, describing things that cannot be described. There are perhaps numbers that are so large that we cannot name them, and if there are, there must be a smallest one, but, in
referring to such a number as the smallest number that we cannot name, we have just named it, have we not? Rucker (1982), who discusses the Berry paradox in various guises at length, notes with some
bemusement: “It is curious how interesting it can be to talk about things that we supposedly can’t talk about” (p. 107). We shall encounter closely related ideas in the chapter on proofs and, in
particular, with reference to proofs of nonprovability, the most famous of which we owe to Kurt Gödel.
☐☐ Prime Numbers Prime numbers were mentioned briefly in Chapter 1, but the concept deserves more emphasis and a chapter on numbers seems the appropriate place to provide it. A prime number is a
number that cannot be divided evenly (without a remainder) by any numbers other than 1 and itself. All numbers that are not prime numbers are composite numbers, which means that they are integral
products of other integers. This distinction is a very old and important one in mathematics, and more will be said about prime numbers in subsequent chapters of this book. The assertion that every
natural number is either prime or the product of a unique set of primes is referred to sometimes as the fundamental theorem of arithmetic and sometimes as the unique factorization theorem. Seventeen
is a prime number, being divisible only by 1 and 17; 30, in contrast, is a composite number, the unique set of primes of which it is composed being 2, 3, and 5. We shall see in subsequent chapters
that determining whether a particular (large) number is prime or composite is very difficult, and that this has acquired great practical significance today in its use in making communications systems
secure. Many
modern encryption systems are based on it being a trivially simple matter to create an extremely large number (say more than 100 digits long) that is the product of two large primes, but exceedingly
difficult to determine the factors of such a large composite number, or whether in fact the number is composite. This permits one to encode messages using the composite number, which can be generally
known, as the encryption key and be sure that only those who know the prime factors of that number will be able to decode it. Prime numbers have been of considerable interest to mathematicians over
the centuries independently of any practical applications that could be made of the knowledge of them, which were not great until recently. Many conjectures have been advanced about them, some of
which have been proved as theorems and some of which still have the status of unproved conjectures. Conjectures have often motivated extraordinary effort from mathematicians to produce the needed
proofs (either to show the conjectures to be correct or to show them to be false). One example involving primes is a conjecture by French lawyer and renowned amateur mathematician Pierre de Fermat.
It has been known for a long time that every prime number, x, greater than 2 can be represented either by 4n + 1 or 4n – 1. (This is not to say that for any n, 4n + 1 or 4n – 1 is a prime; for n =
16, 4n + 1 = 65 and 4n – 1 = 63, both composite.) Fermat claimed that all primes, and only those primes, that can be represented by 4n + 1 are the sum of two squares. Table 3.3 shows the first few
primes that can be represented by 4n + 1 and the squares that sum to them. More than a century after Fermat’s claim, Euler proved it, but it took him seven years to do so (Singh, 1997). Why it is
that some of the primes that can be expressed as 4n + 1 are equal to the sum of two squares while none of those that can be expressed as 4n – 1 are remains unexplained, so far as I have been able to
determine. The history of work on prime numbers is another illustration of how a particular aspect of mathematics can motivate years of effort by extraordinarily bright people who, in many cases,
have little to gain but the satisfaction of solving a problem or at least hopefully getting a bit further on it than others have yet done. Prime numbers continue to provide endless fascination for
mathematicians; the search for patterns among them is the persisting challenge. How is it, one wants to know, that the pattern of primes can be so irregular in one sense (seen in the absence of any
structure that would allow the prediction of which numbers in a sequence will be prime) and so remarkably regular in another sense (the increasingly close correspondence between the density of primes
and 1/log_e n with increasing n)? The mix of unpredictability on one level with predictability at another is analogous to that seen with many random processes (coin tossing, die rolling, radioactive
decay) in which the behavior is random (unpredictable) at the level of individual events but highly predictable in the aggregate.

Table 3.3. The First Few Values of n for Which 4n + 1 Is a Prime and Also the Sum of Two Squares

n     Prime 4n + 1     Sum of two squares
1     5                2² + 1²
3     13               3² + 2²
4     17               4² + 1²
7     29               5² + 2²
9     37               6² + 1²
10    41               5² + 4²
13    53               7² + 2²
15    61               6² + 5²

We will return briefly to the topic of prime numbers and the continuing search for ever larger ones in
Chapter 11. Numbers are fascinating—at least to some people—and for many reasons. Moreover, all numbers are fascinating—there are none that are not. If there are any numbers that are dull—not
fascinating—there must be a smallest dull number. But there can be no smallest dull number, because a smallest dull number would be fascinating by virtue of being unique in this respect. Therefore,
there can be no dull numbers. (This is not original with me, but I cannot recall to whom I am indebted for it.) **** This chapter has focused on numbers, and many different kinds of numbers have been
noted. I thought it would be good, before leaving the topic, to provide a diagram that shows how the different kinds that have been mentioned relate to each other. Things went well at first, but when
my attempt got to about the point shown in Figure 3.3, I began to find it difficult to proceed. The problem is that the set–subset relationships represented in the figure tell only part of the story.
To be sure, there are many such relationships: The primes are a subset of the natural (counting) numbers, which are a subset of the integers, which are a subset of
the reals, which are a subset of the complex numbers, and so on.
Figure 3.3 The universe of numbers; an incomplete representation. (Regions labeled in the diagram: Numbers, Reals, Rationals, Irrationals, Transcendentals, Naturals, Fractions.)
But I was not sure how best to represent the distinction between positive and negative numbers in this scheme. Representation of the
distinction between algebraic numbers—numbers that are roots of polynomial equations with rational coefficients—and nonalgebraic numbers was complicated by the fact that some irrationals (√2, for
example) are algebraic, while others (π, for example) are not. Where should surreals, superreals, and hyperreals appear in such a diagram? Or quaternions, vectors, and matrices? And where does
humble, indispensable 0 belong? You, dear reader, may wish to try to construct a representation of the number universe that is more inclusive, and more enlightening, than Figure 3.3. It is one way to
get a feel for how richly that universe is populated. The attentive reader will note that nowhere in this chapter is there a definition of number. That is because I do not know how to define it. My
sense is that definitions one finds in dictionaries—including even mathematical dictionaries—in effect take an understanding of the concept of natural number as given—essentially indefinable—and use
it as a basis for defining types of numbers that are derivative from it. Whatever a number is, there are few concepts that have been more important to the history of humankind, or more instrumental
to the intellectual development of the species.
CHAPTER
Deduction and Abstraction
Mathematics as a science commenced when first someone, probably a Greek, proved propositions about any things or about some things, without specification of definite particular things. (Whitehead,
1911/1963, p. 54)
A distinguishing task of pure mathematics is to make explicit, by deducing theorems, the relationships that are implicitly contained in the axioms of a given mathematical system. A distinction
between mathematics and empirical science can be made on the basis of the different ways in which they ensure the objective validity of thought. “Science disciplines thought through contact with
reality, mathematics by adhering to a set of formal rules which, though fixed, allow an endless variety of intellectual structures to be elaborated” (Schwartz, 1978, p. 270). Regarding the difference
between mathematical and nonmathematical (commonsense) arguments, Schwartz contends that because of their precise, formal character, the former are sound even if they are long and complex, whereas
the latter, even when moderately long, easily become far-fetched and dubious. Davis and Hersh (1981) refer to Euclidean geometry as the first example of a formalized deductive system and the model
for all subsequently developed systems.
☐☐ Postulational Reasoning Primarily, mathematics is a method of inquiry known as postulational thinking. The method consists in carefully formulating definitions of the concepts to be discussed and
in explicitly stating the assumptions that shall be the bases for reasoning. From these definitions and assumptions conclusions are deduced by the application of the most rigorous logic man is
capable of using. (Kline, 1953a, p. 4)
A major difference between the mathematics of Egypt and Babylonia and that of Greece was that the conclusions in the former were established empirically, whereas those in the latter were established
by deductive reasoning. Kline (1953a) describes the insistence by the Greeks on deductive reasoning as the sole method of proof in mathematics as a contribution of the first magnitude. “It removed
mathematics from the carpenter’s tool box, the farmer’s shed, and the surveyor’s kit, and installed it as a system of thought in man’s mind. Man’s reason, not his senses, was to decide thenceforth
what was correct. By this very decision reason effected an entrance into Western civilization, and thus the Greeks revealed more clearly than in any other manner the supreme importance they attached
to the rational powers of man” (p. 30). As already noted, geometry was once an empirical discipline. Its development was motivated, as its name suggests, at least in part by an interest in measuring
land. The idea of building geometry as an axiomatic system took shape gradually over several hundred years and found its first extensive expression in the work of Euclid of Alexandria, who lived in
the third century BC. Euclid constructed his geometry from a set of 23 definitions, 5 postulates, and 5 common notions. He considered the postulates and common notions (sometimes referred to
collectively as axioms) to be sufficiently obvious that everyone would agree to their truth. The goal was to keep the number of postulates to a minimum, to assume nothing that could be derived. The
high place that deductive reasoning has held in Western culture is probably a consequence, to a significant degree, of the success that Euclid and his followers had in deriving deductive proofs in
geometry. (Some mathematicians make a distinction between postulates and axioms, some do not. In what follows, the terms are used more or less interchangeably, except when in quotes of others’
writings, where, of course, the terms are those used by the quoted authors.) Euclid’s Elements is undoubtedly the most durable text on mathematics ever written. The estimated number of editions that
have been published since the appearance of the first printed version in Venice in 1482 exceeds 1,000. Boyer and Merzbach (1991), to whom I am indebted
for this fact, refer to Euclid’s Elements as the most influential textbook of all times, an assessment that agrees with Wilder’s (1952/1956) surmise that there may be no other document that has had a
greater influence on scientific thought. The entire text was composed of 13 books that deal with plane and solid geometry, the theory of numbers, and incommensurables. Euclid did not claim
originality for most of what he wrote; his intention was to produce a text that covered the elements of mathematics as they were understood in his day, and he drew much from the work of others in the
pursuit of this goal. The deference shown to his product over two millennia attests to the success of his efforts. What gives Elements its lasting importance is not so much the theorems proved in it,
but the method Euclid explicated, which involved starting with a few “self-evident” truths and, using deduction as the only tool, making explicit what they collectively imply. Beckman (1971) refers
to Elements as the first grandiose building of mathematical architecture and describes Euclid’s achievement in metaphorical terms as the construction of an edifice, the foundation stones of which are
the postulates. “Onto these foundation stones Euclid lays brick after brick with iron logic, making sure that each new brick rests firmly supported by one previously laid, with not the tiniest gap a
microbe can walk through, until the whole cathedral stands as firmly anchored as its foundations” (p. 48). Euclid, Beckman contends, is not just the father of geometry but the father of mathematical
rigor. As an aside, we may see in the U.S. Declaration of Independence a reflection of Euclid’s approach of constructing a body of geometric truths composed of deductions from self-evident axioms.
The writers of this document identified four truths that they considered to be self-evident— that all men are created equal, that they are endowed by their Creator with certain inalienable rights,
and so on—and then deduced what they saw to be the implications of these truths, which they concluded justified their declaration of the right of the colonies to be free and independent states.
Meyerson (2002), who makes this comparison, contends that the U.S. Constitution also may be seen as basically a set of axioms, and that it is the business of the Supreme Court to pass judgment on
what is implicit in those axioms—which explains why the Constitution can be as short as it is and yet have such far-ranging influence. The impressiveness of Euclid’s monumental accomplishment
notwithstanding, his logic has not escaped criticism. Even he was not immune to the common problem of making tacit assumptions without recognizing them as such. Critical views as to the tightness of
Euclid’s logic have been expressed by Russell (1901/1956a, pp. 1588–1589) and Bell (1946/1991, p. 332), among numerous others. Bell (1945/1992)
agrees that Euclid’s contribution to mathematics was monumental, not because of the postulates he proved, but because of the “epoch-making methodology” of his work: For the first time in history
masses of isolated discoveries were united and correlated by a single guiding principle, that of rigorous deduction from explicitly stated assumptions. Some of the Pythagoreans and Eudoxus before
Euclid had executed important details of the grand design, but it remained for Euclid to see it all and see it whole. He is therefore the great perfector, if not the sole creator, of what is today
called the postulational method, the central nervous system of living mathematics. (p. 71)
Dunham (1991) contends that Euclid’s sins were sins of omission and that his works are free of sins of commission, which is to say that while some of his proofs are incomplete—by today’s
standards—none of his 465 theorems has been shown to be false. But despite the emphasis on deduction, the Greeks saw their mathematics as dealing with truth, though not necessarily truth that
depended on empirical observation. Given the Euclidean view of axioms as obviously true statements about reality as the point of departure, and deduction as the method of drawing conclusions from
those axioms, the conclusions drawn—the theorems proved—could also be considered true assertions. This view of the matter persisted until relatively recent times. To German physician-physicist
Hermann von Helmholtz, for example, the axioms that comprise the foundation of geometry were unprovable principles that would be admitted at once to be correct by anyone who understood them. Von
Helmholtz (1870/1956) wondered, however, about why there should be such self-evident truths and about the basis of our confidence in their correctness. “What is the origin of such propositions
unquestionably true yet incapable of proof in a science where everything else is reasoned conclusion? Are they inherited from the divine source of our reason as the idealistic philosophers think, or
is it only that the ingenuity of mathematicians has hitherto not been penetrating enough to find the proof?” (p. 649). Von Helmholtz rejected the view held by German philosopher Immanuel Kant that
the axioms of geometry are necessary consequences of an a priori transcendental form of intuition, because it is possible to imagine spaces in which axioms different from those of plane geometry
would be intuitively obvious to the inhabitants, and even we, who live presumably in a Euclidean space, can imagine what it would be like to live in a non-Euclidean one. The axioms of Euclidean
geometry are what they are, according to this view, because of the properties of the space in which we live, but we can imagine them otherwise because we can imagine other types of spaces.
This perspective represents a small departure from that of the Greeks because, while it sees the axioms of geometry as reflections of the way things are, it recognizes the possibility that they could
be otherwise. However, it does not abandon the basic assumption of a direct connection between the axioms and the properties of the physical world; in fact, it reinforces that assumption by allowing
as to how a world with different characteristics would give rise to a geometry with different axioms. Modern mathematicians have a very different attitude toward axiomatic systems. The empirical
truth of a system’s axioms, or postulates, as they are often called, is irrelevant to the mathematical integrity of the system. The axioms are viewed as conventions that mathematicians agree to take
as a system’s foundation, and the only requirements are that they be consistent with each other and that the logic by which theorems are deduced be valid. It is consistency within an axiomatic system
that is required; different systems need not be consistent with each other, nor need they be descriptive of the physical world. What constitutes deductively valid logic is taken to be a matter of
agreed-upon convention. Given these conditions, one can say that if a system’s axioms are empirically true, theorems that are validly deduced from them are true also. It is not a requirement,
however, that either the axioms or the theorems be empirically true. Russell (1901/1956a) expresses this attitude with respect to Euclid’s system in particular. “Whether Euclid’s axioms are true, is
a question to which the pure mathematician is indifferent” (p. 1587). Russell further points out that the question of their truth “is theoretically impossible to answer in the affirmative. It might
possibly be shown, by very careful measurements, that Euclid’s axioms are false; but no measurements could ever assure us (owing to the errors of observation) that they are exactly true” (p. 1587).
This is not to deny that they have proved to be sufficiently accurate to be immensely useful for practical purposes in the physical world. Philosophers have sometimes made a distinction between
analytic and synthetic truths. Analytic truths are not verified by observation; true analytic statements are tautologies and are true by virtue of the definitions of their terms and their logical
structure. Synthetic truths relate to the material world; the truth of synthetic statements depends on their correspondence to how physical reality works. Mathematics, according to this distinction,
deals exclusively with analytic truths. Its statements are all tautologies and are (analytically) true by virtue of their adherence to formal rules of construction.
☐☐ Mathematical Induction Mathematical induction is a method that can be used to prove a countable infinity of statements true in a finite number of steps. (Bunch, 1982, p. 43)
In books on logic and reasoning a distinction is often made between deduction and induction, deduction involving reasoning from more general assertions to more particular ones and induction involving
going from the more particular to the more general. In most everyday situations, and probably in most scientific contexts as well, if one entertained the hypothesis “If A then B” and upon observing a
large number of instances of A found B to be present in every case, one would undoubtedly consider the hypothesis to be supported strongly and would expect to see B in all subsequent observations of
A. This is a form of induction and it works very well most of the time. Mathematicians use induction, sufficiently broadly defined to include guessing, hunch following, intuiting, and trial-and-error experimenting in their efforts to construct, validate, and refine deductive systems, but it is not what is meant by mathematical induction. Kasner and Newman (1940) describe mathematical induction as “an inherent, intuitive, almost instinctive property of the mind. ‘What we have once done we can do again’” (p. 35). The form of reasoning involved may be represented in the following way:

If R is true for n, it is true for n + 1.
R is true for n.
Therefore, R is true for n + 1.

This representation looks very much like a deductive argument, in particular the modus ponens form of the
conditional syllogism, and so it is. What motivates referring to it as mathematical induction is that it must be applied iteratively. To use this form of argument, one must first show that the major
premise holds—that R is necessarily true for n + 1 if it is true for n. With that settled, if one can then demonstrate that R is true for some specific integer value of n, it must be true for all
greater integers. One shows this by iterating on the syllogism with specific integers: If I have shown that R is true for 1, for example, I can conclude that it is true for 2; if it is true for 2, it
must be true for 3; if true for 3, …. This is a very powerful form of argument. As Péter (1961/1976) points out, what mathematical induction allows us to do—demonstrate that something holds for all
natural numbers—would otherwise be impossible for finite brains. This form of argument is sufficiently different from that in which one simply generalizes from a sample to a population, on the
strength of the assumption that inasmuch as the sample reveals no exceptions to a rule, the population is unlikely to have any either, to warrant a different
name—Dantzig (1930/2005) has proposed reasoning by recurrence—but induction continues to be used, although generally with the modifier mathematical to accentuate the distinction. That a conjecture
has been shown to be correct with respect to every specific instance that has been checked is never taken by a mathematician as proof of its truth. Many formulas have been proposed for generating
prime numbers. Some of these have worked impressively well, generating nothing but primes for a long time. One might say they have passed any reasonable test that could be applied to justify the leap
from many consistent observations to the statement of a general law. But mathematicians are rightfully wary of such leaps, because some of the prime number generators that have appeared to be so
promising have eventually proved to be not quite perfect. The formula n2 – 79n + 1,601, for example, produces nothing but primes for all values of n up to 79, but for n = 80, the formula gives 1,681,
which is 412. The numbers 31; 331; 3,331; 33,331; 333,331; 3,333,331; and 33,333,331 are all primes. One might guess on the basis of such regularity that any sequence of 3s followed by a 1 would
yield a prime number, but in fact 333,333,331 is not prime; it is the product of 17 and 19,607,843. Until 1536 people believed that any number of the form 2^p – 1 was prime if p was prime, but in that year Hudalricus Regius discovered that 2¹¹ – 1 = 2,047, the product of 23 and 89. (As far as I can tell, this is the only thing for which Hudalricus Regius is remembered. A search of indexes of
mathematicians failed to find him listed, and all the numerous references to him I was able to find referred to him simply as “a mathematician” and mentioned him only in reference to this one
finding.) Before Fermat's last theorem (xⁿ + yⁿ ≠ zⁿ for n > 2) was proved to be true in the 1990s, it had been shown to be true for all n up to 4 million, but this did not suffice to guarantee it to be true for all n. An obscure conjecture in number theory known as the Mertens conjecture was verified to be true for the first 7.8 billion natural numbers before it was shown to be false in 1983 (Devlin,
2000a, p. 74).
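These cautionary examples are easy to check by direct computation. The short script below is a hypothetical illustration (it comes from none of the cited sources, and the helper function is mine); it verifies, by naive trial division, the three failures just described.

```python
def is_prime(n: int) -> bool:
    """Naive trial-division primality test; adequate for the small numbers below."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# 1. n^2 - 79n + 1,601 is prime for n = 0..79 but composite at n = 80.
poly = lambda n: n * n - 79 * n + 1601
assert all(is_prime(poly(n)) for n in range(80))
assert poly(80) == 1681 == 41 ** 2

# 2. 31; 331; ...; 3,333,331; 33,333,331 are all prime, but 333,333,331 is not.
assert all(is_prime(int("3" * k + "1")) for k in range(1, 8))
assert int("3" * 8 + "1") == 17 * 19_607_843

# 3. 2^11 - 1 = 2,047 is composite even though the exponent 11 is prime.
assert 2 ** 11 - 1 == 2047 == 23 * 89
```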
The Trend Toward Increasing Abstraction

The longer mathematics lives the more abstract—and therefore, possibly, also the more practical—it becomes. (Bell, 1937, p. 525)
The history of mathematics reveals a progression from the more concrete to the more abstract, beginning with the earliest knowledge we have of the origins of counting and number systems and
continuing to the present day. This progression is seen in numerous ways—in the
emergence of the distinction between numbers as properties of things counted and numbers as interesting entities in their own right (between three as an adjective and three as a noun), in the
evolution of systems for representing numbers, in the early attention to mathematics for the sake of practical applications (trade, surveying, construction) spurring exploration of increasingly
esoteric avenues of mathematical thought for its own sake, in the invention or discovery of increasingly abstract and counterintuitive ideas (negative numbers, irrational numbers, imaginary numbers,
infinities of different order, proofs of nonprovability). It is of at least passing interest that students of cognitive development in children report a similar progression from more concrete to more
abstract thinking in the normal course of cognitive maturation— “away from the material and toward the formal, or away from ideas rooted in the here and now and toward ideas addressing events that
are distant in time and space” (White & Siegel, 1999, p. 241). The progression from the more concrete to the more abstract in the history of mathematics is seen clearly in the early development of
methods of counting and representing quantities. Schmandt-Besserat (1978, 1981, 1982, 1984) notes markers of this trend:

• Animal bones and antlers bearing series of notches and dating from prehistoric times, considered to be reckoning or tally devices and illustrative of the use of the principle of one-to-one correspondence.
• Three-dimensional tokens of various shapes used for counting, also on a one-to-one basis, dating from about 8000 BC—the shape of the token indicating what was represented and the number of tokens indicating the number of units.
• Ideograms impressed on clay tablets. Such ideograms were less concrete than the three-dimensional tokens, but were still relatively concrete inasmuch as they were indicative of the things they represented and not just of the numbers of such things.
• Symbols, appearing around 3100 BC, that represented numbers, or quantities, that were independent of the things whose quantities were being represented.

As noted in
Chapter 3, a trend toward increasing abstractness is seen in the evolution of systems for representing numbers over the centuries. The Hindu-Arabic system that is used almost universally today
encodes the basic principles on which the representational scheme is based in a more abstract way than did many of its predecessors. The greater
abstractness has provided greater computational convenience, perhaps at the cost of making the system’s rationale somewhat more obscure. Abstraction in mathematical operations is seen already even in
elementary arithmetic. Alfred North Whitehead (1911/1963a) puts it this way: Now, the first noticeable fact about arithmetic is that it applies to everything, to tastes and to sounds, to apples and
to angels, to the ideas of the mind and to the bones of the body. The nature of the things is perfectly indifferent, of all things it is true that two and two make four. Thus we write down as the
leading characteristic of mathematics that it deals with properties and ideas which are applicable to things just because they are things, and apart from any particular feelings, or emotions, or
sensations, in any way connected with them. This is what is meant by calling mathematics an abstract science. (p. 52)
Algebra represents a higher level of abstraction than do number and arithmetic. Number and arithmetic are abstractions from the entities numbered, or added and subtracted and otherwise manipulated;
algebra, in modern terms, is an abstraction not only from particular numbers but from the concept of number itself. As Bell (1937) puts it, Once and for all Peacock [in his Treatise on Algebra,
published in 1830] broke away from the superstition that the x, y, z, … in such relations as x + y = y + x, xy = yx, x(y + z) = xy + xz, and so on, as we find them in elementary algebra, necessarily
‘represent numbers’; they do not, and that is one of the most important things about algebra and the source of its power in applications. The x, y, z, … are merely arbitrary marks, combined according
to certain operations, one of which is symbolized as +, and another by × (or simply as xy instead of x × y), in accordance with postulates laid down at the beginning. (p. 438)
The abstract nature of mathematics is seen also in the debated question of what it means for some of its simplest constructs to exist. Geometry, for example, deals with shapes, only approximations to
which are found in the physical world; the entities that populate its axioms and theorems are idealizations, figments of the mathematician's imagination. "For even though mathematics teaches us that
there are cubes and icosahedrons, yet in the sense that there are mountains over 25,000 feet high, that is, in the sense of physical existence, there are no cubes and no icosahedrons. The most
beautiful rock-salt crystal is not an exact mathematical cube, and a model of an icosahedron, however well constructed, is not an icosahedron in the mathematical sense. While it is fairly clear what
is meant by the expressions ‘there is’ or ‘there are’ as used in the sciences
dealing with the physical world, it is not at all clear what mathematics means by such existence statements. On this point indeed there is no agreement whatever among scholars, whether they be
mathematicians or philosophers.” (Hahn, 1956, p. 1600)
The question of the existence, or nonexistence, of mathematical entities has motivated much discussion and debate, and proposed answers distinguish several schools of thought, about which more is in
Chapter 13. For the present it suffices to recognize that such debates have seldom, if ever, gotten in the way of the doing of mathematics, and to note that whatever the sense in which mathematical
entities may be said to exist, it differs from the sense in which physical objects may be said to exist. As Kasner and Newman (1940) put it, “A billiard ball may have as one of its properties, in
addition to whiteness, roundness, hardness, etc., a relation of circumference to diameter involving the number π. We may agree that the billiard ball and π both exist; we must also agree that the
billiard ball and π lead different kinds of lives” (p. 61). Geometry illustrates the increasingly abstract nature of mathematics in another way as well. Presumably geometry initially grew out of
practical concerns about measuring physical areas and making calculations that could be useful for purposes of building physical structures. Once geometry was framed by Euclid as deductions from a
set of axioms, it became possible to explore the consequences of changing the axioms, although more than 2,000 years were to pass before such exploration occurred. Why it took this long for the idea
to surface is an interesting question. Apparently, the assumption that the theorems of geometry were descriptive of the physical world was sufficiently strong to preclude the consideration of other
perspectives. The development of non-Euclidean geometries in the 19th century, perhaps more than any earlier event, challenged the prevailing idea that the axioms of mathematics are examples of truths
that would be recognized universally as such by all rational persons. It demonstrated that geometry (one should say any particular geometry) could be treated as an abstract deductive system in which
one states a set of axioms and investigates what logically follows from them—any correspondence to the physical world, or lack thereof, being irrelevant to the enterprise. Instead of being assertions
of obvious truths about the physical world, the axioms of geometry now were better viewed as “conventions,” as Poincaré (1913) put it, and thus at once abstract and arbitrary. Non-Euclidean
geometries have been around long enough now that we easily accept them, but their introduction was greeted with great skepticism and angst. That mathematics generally can be viewed strictly as symbol
manipulation, without any reference to what, if anything, the symbols
represent is a relatively recent idea. Among the more explicit statements of this view is one by De Morgan, who proclaimed that (with the exception of =) the symbols he used in mathematical
expressions had no meaning whatsoever. Algebra to him was nothing more or less than the business of manipulating symbols according to specified rules. French polymath Louis Couturat (1896/1975)
describes what the mathematician does as the laying down of symbols and the prescribing of rules for combining them. He treats the conventions by which mathematical entities are created as arbitrary
and equates the process with that by which chessmen and the rules that govern their moves are defined. Lakoff and Núñez (2000) contend that even such fundamental concepts as point, line, and space
have been transformed in what they refer to as the “discretization” of mathematics. Space, once thought of as continuous as attested by our ability to move smoothly within it, became conceptualized
as a set of points. The latter, less intuitively natural, conceptualization is a reconceptualization of the former, they suggest, constructed to suit certain purposes. According to the earlier
conceptualization, lines and planes exist independently of points; according to the more recent one, lines and planes are composed of points, and a point, defined as a line of zero length, is about
as abstract a concept as one can imagine. Lakoff and Núñez illustrate the abstractness of the concept of a point with the question of whether points on a line touch. One answer that might be expected
is that of course they touch, else the line would not be continuous. Another is that of course they do not touch, because, if they did, there would be no distance between them and they would
therefore be the same point. The latter seems to follow from the definition of a point as a line of zero length, but that does not make the definition invalid. Lakoff and Núñez contend that other
counterintuitive ideas emerge from the discretization of mathematics, and that one just has to get used to it. “In thinking about contemporary discretized mathematics, be aware that your ordinary
concepts will surface regularly and that they contradict those of discretized mathematics in important ways” (p. 278). (We will return to this topic in Chapter 9.) The progression from the more
concrete to the more abstract can be seen over the entire recorded history of mathematics, but the rate of change appears to have increased substantially in relatively recent times. Most of math was
empirically based up until the time of Galileo, with math concepts straightforward abstractions from real-world experience. But things had begun to change rapidly by the 17th century. Wallace (2003)
puts it this way: By 1600, entities like zero, negative integers, and irrationals are used routinely. Now start adding in the subsequent decades’ introductions of
complex numbers, Naperian logarithms, higher-degree polynomials and literal coefficients in algebra—plus of course eventually the 1st and 2nd derivative and the integral—and it’s clear that as of
some pre-Enlightenment date math has gotten so remote from any sort of real-world observation that we and [Ferdinand de] Saussure can say verily it is now, as a system of symbols, ‘independent of the
objects designated,’ i.e. that math is now concerned much more with the logical relations between abstract concepts than with any particular correspondence between those concepts and physical
reality. The point: It’s in the seventeenth century that math becomes primarily a system of abstractions from other abstractions instead of from the world. (p. 106)
Arguably it is the increasing tendency of mathematical concepts to be abstractions from other mathematical concepts, themselves abstractions, that makes much of higher mathematics opaque to
nonmathematicians. Devlin (2002) maintains that “the only route to getting even a superficial understanding of those concepts is to follow the entire chain of abstractions that leads to them” (p.
14). Follow the chain, that is, if you are able. But in many cases of contemporary higher math, one is likely to find that to be a tall order. Devlin (2000a) also contends that an inability to deal
effectively with abstraction is the single major obstacle to doing well at mathematics. Bell (1945/1992) describes the magnitude of the change toward abstractness and generality that occurred during
the first half of the 19th century this way: “By the middle of the nineteenth century, the spirit of mathematics had changed so profoundly that even the leading mathematicians of the eighteenth
century, could they have witnessed the outcome of half a century’s progress, would scarcely have recognized it as mathematics” (p. 169). In 1900, German mathematician David Hilbert proposed a
program, the aim of which was to axiomatize all of mathematics. The trend continues and new areas of mathematics tend to be more abstract than those from which they emerged. Innovative work in
mathematics today is often so abstract, as well as dependent on considerable specialized background knowledge, that few but specialists in the areas of development can follow it. By 1925 Whitehead
(1925/1956) could, without much fear of contradiction, describe mathematics as the science of the most complete abstractions to which the human mind can attain. The history of mathematics can be
viewed as a continuing attempt to extend the limits of what is attainable in this regard. In focusing on the tendency of mathematics to become increasingly abstract over the course of its existence
and realizing that the criteria that a modern mathematical system must satisfy do not include making true statements about the physical world as we understand it, one is led to wonder whether
mathematics would have been of interest—whether
it would ever have been developed—if it were not so obviously applicable to the world of the senses. We should note too that essentially all of the early abstractions were abstractions from the
perceived physical world. Surprisingly, perhaps, that mathematics has become increasingly abstract does not mean that it has become increasingly useless. To the contrary, it can be argued that its
utility, as well as its beauty, has only been enhanced by its tendency to eschew the concrete in its preference for the domain of pure thought. Bell (1945/1992) describes the abstractness of
mathematics as “its chief glory and its surest title to practical usefulness” (p. 9). Kline (1980) has a similar assessment. “Though it [mathematics] is a purely human creation, the access it has
given us to some domains of nature enables us to progress far beyond all expectations. Indeed it is paradoxical that abstractions so remote from reality should achieve so much. Artificial the
mathematical account may be; a fairy tale perhaps, but one with a moral. Human reason has a power even if it is not readily explained” (p. 350). We will return to the topic of the usefulness, some
would say the surprising usefulness, of mathematics in Chapter 12. Implicit in much of the foregoing account of the increasingly abstract nature of mathematics is the equating of increasing
abstraction with progress. Few would question the assertion that over the centuries mathematics has become more and more abstract and mathematics has made progress. It seems only natural to yoke
these two observations in a causal way: Mathematics has made progress because it has become increasingly abstract, or it has become increasingly abstract because it has made progress—or becoming
increasingly abstract and making progress are the same thing. The equation can be challenged. Smith and Confrey (1994) argue that making increasing abstraction the universal standard for mathematical
progress has two unfortunate tendencies. “First, it tends to treat abstraction as an ahistorical concept, that is, it assumes that we can interpret historical mathematical events in terms of some
timeless concept of abstraction. Second, it encourages the creation of an historical record in which only those events that are viewed as part of the story of increasing abstraction are considered
important, while events that do not fit into this framework are often considered superfluous or wrong” (p. 177). The second tendency has the effect of defining progress in terms of increasing
abstraction, and simply ignoring, or not building on, innovations that do not fit that definition. Smith and Confrey argue that progress lies, to some extent, in the eye of the beholder. “As we look
backwards, it is often easy to see what we now understand but which those before us seemingly did not…. However, what is much more difficult to see is what they did understand that we do not and
perhaps cannot, because we cannot enter the historical and social world in which they lived” (p. 178). We may, they argue,
see the understanding of forerunners as confused, when, in fact, the confusion lies in our own inability to imagine the world as they saw it. The development of a new concept or area of mathematics
arguably follows a common trajectory, and increasing abstraction appears to be a major aspect of it. Devlin (2000) describes the trajectory as one that begins with the identification and isolation of
new key concepts and that later is followed by analysis and attempts at axiomatization, which generally means increased abstraction, which in turn leads to generalizations, new discoveries, and
greater connections to other areas of mathematics. It must be pointed out, however, that increasing abstraction does not mean to all mathematicians an abandoning of the concrete. Some argue that no
matter how abstract some aspects of mathematics may become, there will always be a place for the concrete. Kac (1985) forcefully defends this position: By its nature and by its historical heritage,
mathematics lives in the interplay of ideas. The progress of mathematics and its vigor depend on the abstract helping the concrete and on the concrete feeding the abstract. To isolate mathematics and
to divide it means in the long run to starve it and perhaps even destroy it…. The two great streams of mathematical creativity [the concrete and the abstract] are a tribute to the universality of
human genius. Each carries its own dreams and its own passions. Together they generate new dreams and new passions. Apart, both may die—one in a kind of unembodied sterility of medieval scholasticism
and the other as a part of military art. (p. 153)
The existence of abstract mathematics is something of an enigma. What is there in evolutionary history that can explain not only the intense desire to acquire knowledge, even that which has no
obvious practical utility, but the apparent ability of people to do so? From where, in particular, comes the fascination with and propensity for abstract mathematics? As Davies (1992) puts it, “It is
certainly a surprise, and a deep mystery, that the human brain has evolved its extraordinary mathematical ability. It is very hard to see how abstract mathematics has any survival value” (p. 152).
The question of the basis for mathematical ability touches on that of what it means to be human.
Freedom From Empirical Constraints

The mathematician is entirely free, within the limits of his imagination, to construct what worlds he pleases. (Sullivan, 1925/1956, p. 2020)
That the geometry that Euclid systematized stood alone for 2,000 years is testimony both to the influence of his work and to the strength of the
connection in people’s thinking between geometry and the perceived properties of the physical world. The ability to think of geometry as an abstract system, rather than as a description of physical
reality, was essential to the development of geometries other than that of Euclid, and it was a long time in coming. It emerged from efforts of mathematicians to deal with the “parallel postulate” of
Euclid’s geometry, which had been a challenge and frustration to them for centuries. I have put “parallel postulate” in quotes because, as expressed by Euclid, the postulate did not mention
explicitly parallel lines, but rather referred to the angles made by a line falling across two straight lines; what is commonly cited today as Euclid’s parallel postulate is a rephrasing of what
Euclid said by Scottish mathematician John Playfair: Given a line l and a point P not on l, there exists one and only one line m, in the plane of P and l, which is parallel to l.
Many believed this postulate to be derivable from the others, and countless hours were spent on efforts to prove it to be so. Proofs were published from time to time, but invariably they were shown,
sooner or later, to be invalid. The idea that a geometry might be developed that did not contain the equivalent of Euclid’s parallel postulate, either as a postulate or as a theorem, was not
seriously entertained for a long time, because of the prevailing conception of geometry as descriptive of real-world relationships, and it was obvious to anyone who thought about it that parallel
lines could never meet. The strength of this conviction is illustrated by the work of Italian logician-theologian-mathematician Giovanni Girolamo Saccheri. Saccheri demonstrated that Euclid’s
geometry was not the only one possible, but refused to accept his own findings. Bell (1946/1991) refers to Saccheri’s success in convincing himself of the absolutism of Euclid’s geometry as “one of
the most curious psychological paradoxes in the history of reason. Determined to believe in Euclid’s system as the absolute truth, he constructed two other geometries, each self-consistent and as
adequate for daily life as Euclid's. Then, by a double miracle of devotion, he disbelieved both" (p. 344). Kac (1985) speculates that the reason that Saccheri failed to accept his findings as a basis
for the development of a geometry in which the parallel postulate did not hold was that, being convinced that the postulate was correct, he was searching for a contradiction; he never found one, but
he apparently never became convinced that there was not one to be found. One of the new geometries that Saccheri contemplated and dismissed was the one that Russian mathematician Nikolai Lobachevsky
developed 97 years later. Saccheri’s work did not come to the attention of other mathematicians until more than 150 years following his death.
When the possibility of non-Euclidean geometries began to be considered seriously during the 19th century, several of them were introduced within a relatively short period of time by such eminent
mathematicians as Hungarian János Bolyai, German Carl Friedrich Gauss, Nikolai Lobachevsky, and German Georg Friedrich Bernhard Riemann, although some of these individuals had difficulty accepting
this development at first. Bolyai, Gauss, and Lobachevsky explored the implications of a geometry in which the parallel postulate was replaced by one that held it to be possible to draw more than one
parallel to a straight line; Riemann considered the implications of an axiom that permitted no parallels to be drawn to a given line through a point not on the line. In the geometry resulting from
the first approach—hyperbolic geometry—the sum of the angles of a triangle is always less than 180 degrees; in Riemann’s geometry—elliptical geometry—the sum of the angles of a triangle is always
greater than 180 degrees. These strange new ideas opened the door to a new era of innovation in mathematics. The non-Euclidean—hyperbolic and elliptical— geometries that emerged at the beginning of
this era appeared at first to be products of the rarified kind of thinking in which mathematicians indulge for their own intellectual amusement and to have no connection with the physical world. As
has been true time and time again, however, these products eventually proved to be powerful new tools for furthering our understanding of the universe; in this case their usefulness was first
recognized in the area of theoretical physics and notably in Albert Einstein’s work on relativity, the geometry of which is that of Riemann. The creation of non-Euclidean geometries forced a major
rethinking of the nature of mathematics. Kasner and Newman (1940) refer to the development of these geometries as a “sweeping movement” that has never been surpassed in the history of science, and
contend that it “shook to the foundations the age-old belief that Euclid had dispensed eternal truths” (p. 134). Kline (1953) also describes its significance in similarly superlative terms: “It is
fair to say that no more cataclysmic event has ever taken place in the history of all thought” (p. 428). What it did was compel mathematicians, scientists, and others “to appreciate the fact that
systems of thought based on statements about physical space are different from that physical space” (p. 428). It demonstrated, as no prior development had done, the independence of mathematics from
the material world. The change in perspective forced by this demonstration was to many not only profound but profoundly unsettling. “Prior to the coming of non-Euclidian geometry, there was a unity,
a confidence, and a certainty to our knowledge of the world. Afterwards, it was not enough to know that God is a geometer. The one unassailable truth about the nature of the physical world had been
eroded and so, along with it, had centuries
of confidence in the existence and knowability of unassailable truths about the Universe" (Barrow, 1992, p. 14). British mathematician-philosopher William Kingdon Clifford (1873/1956) compared the
revolution represented by the invention of non-Euclidean geometries to that which Copernicus wrought on Ptolemaic astronomy; in his view, the consequence in both cases was a change in our
conception of the cosmos. That is not to say that the change was immediate. Lobachevsky’s work attracted little attention until about 30 years after it was published. The freeing of geometry from
considerations of real-world constraints was advanced considerably by the work of Hilbert, who developed a system of geometry based on a small set of undefined terms and relations and 21 axioms,
which he referred to as assumptions. Far more important than Hilbert’s particular system—which became known as formalism—was his insistence that it is not necessary that the constructs of such a
system represent anything at all in the real world, and that all that is necessary is that the system be internally consistent, which is not to suggest that internal consistency among a set of more
than a very few axioms is readily established. The cost of this newfound freedom was great, and not everyone was willing to pay it. Gauss delayed publication of his own work on non-Euclidean geometry
in the interest of avoiding controversy. Kline (1953) describes the effect of the development of non-Euclidean geometry as that of not only depriving mathematics of its status as a collection of
truths, but perhaps of robbing man of the hope of ever attaining certainty about anything. On the other hand, it also gave mathematicians carte blanche to wander wherever they wished, which is
precisely what they proceeded to do. Kline’s assessment of this result is not positive. The development of non-Euclidean geometries demonstrated also the inappropriateness of characterizing
mathematics as an axiomatic system. Mathematics is not an axiomatic system; it encompasses many such systems. While it is required of any axiomatic system that it be self-consistent, mathematics, as
a whole, need not and does not meet that requirement; it contains many axiomatic systems, each of which is intended to be self-consistent, but it is not essential that the theorems derived from one
set of axioms be consistent with those derived from another, or even that the axioms themselves be consistent across systems. The arbitrariness of the axioms of any mathematical system is seen
perhaps most starkly in the interchangeability of the notions of point and line in projective geometry. Given the axioms of this discipline, these two constructs are entirely interchangeable in the
sense that either can play the role of the fundamental element, and if every mention of point were replaced with line and conversely, the system would remain intact.
Polish-American mathematician Nathan Court (1935/1961) captures the modern mathematician’s indifference to the correspondence between mathematical statements and physical reality in somewhat
whimsical terms. “If a mathematician takes a notion to create a mathematical science, all he has to do is to set up a group of postulates to suit his own taste, postulates which he by his own fiat
decrees to be true, and involving things nobody, including the mathematician himself, knows about, and he is ready to apply formal logic and spin his tale as far and as fast as he will” (p. 24). And,
more soberly, “The postulates of a mathematical science may be laid down arbitrarily. The rest of the doctrine is developed by pure logic and the test of its validity is that it must be free from
contradictions” (p. 26). The essence of an axiomatic system is that all that can be said about the system—the total collection of assertions that can be made—is implicit in the axioms. The challenge
to the mathematician is to make what is implicit explicit by applying agreed-upon rules of inference. That is it in a nutshell. As Whitehead (1898/1963b) puts it, “When once the rules for the
manipulation of the signs of a calculus are known, the art of their practical manipulation can be studied apart from any attention to the meaning to be assigned to the signs” (p. 69). There are many
axiomatic systems in mathematics. Euclidean geometry is one such. Among many others are hyperbolic geometry, elliptical geometry, probability theory as axiomatized by Russian mathematician Andrey
Nikolaevich Kolmogorov, and the set theory of German mathematician Ernst Zermelo and Israeli mathematician Abraham Fraenkel. The push within mathematics to axiomatize has been very strong. The vision
of many mathematicians over the ages has been the development of a single axiomatic system that would provide a foundation for all of mathematics. This dream appears to have been shown by Kurt Gödel
to be unattainable (more on that subject in subsequent chapters). Suffice it to note here that, while Gödel’s work, which showed it to be impossible to have an axiomatic system that was complete even
for arithmetic, was seen by some to be devastating to the mathematical enterprise, it appears not to have slowed mathematical activity appreciably, if at all. If total axiomatization is not possible,
there appear to be plenty of challenges that do not require it. As Moore (2001) puts it: “Whatever the appeal of axiomatic bases, we must not regard them as sacrosanct. After all, people were engaged
in arithmetic for thousands of years before any attempt was made to provide it with one” (p. 182).
CHAPTER

In practice, proofs are simply whatever it takes to convince colleagues that a mathematical idea is true. (Henrion, 1997, p. 242)

There is an infinite regress in proofs; therefore proofs do not prove. You should realize that proving is a game, to be played while you enjoy it and stopped when you get tired of it. (Lakatos, 1976, p. 40)
If we are to understand mathematical reasoning at all, we must understand, at least from the perspective of our culture and time, something about the nature of mathematical proof and the processes
involved in proof construction. What constitutes a proof? What do mathematicians mean when they use the term? Where and when did the idea of a proof originate? How do proofs get built? How can one be
sure that a proposed proof is valid? Who is qualified to judge the validity of a proof? A proof in mathematics is a compelling argument that a proposition holds without exception; a disproof requires
only the demonstration of an exception. A mathematical proof does not, in general, establish the empirical truth of whatever is proved. What it establishes is that whatever is proved—usually a
theorem—follows logically from the givens, or axioms. The empirical truth of a theorem can be considered to be established only to the extent that the axioms from which it is derived can be
considered to be empirically true—to be accurately descriptive of the real world.
Origin and Evolution of the Idea of Proof

The concept of proof perhaps marks the true beginning of mathematics as the art of deduction rather than just numerological observation, the point at
which mathematical alchemy gave way to mathematical chemistry. (Du Sautoy, 2004, p. 29)
The origin of the notion of proof is obscure. Apparently the ancient Egyptians lacked it. They also did not make a sharp distinction between exact relationships and approximations (Boyer & Merzbach,
1991). They did use demonstrations of plausibility such as noting, in the context of claiming that the area of an isosceles triangle is half its base times its height, the possibility of seeing an
isosceles triangle as two right triangles that can be rearranged to form a rectangle with the same height as the triangle and a width of half its base, but they did not prove theorems in a formal
way. Like the Egyptians, the Babylonians appear to have dealt primarily with specific cases and not to have attempted to produce general formulations of unifying mathematical principles. They too
failed to distinguish sharply between exact and approximate results. As Boyer and Merzbach point out, however, that statements of general rules have not been found on surviving cuneiform tablets is
not compelling evidence that no such rules were recognized; the many problems of similar types that are found on Babylonian tablets could be exercises that students were expected to work out using
recognized rules and procedures. It cannot be said with certainty that pre-Hellenic peoples had no concept of proof. The mathematics of many ancient cultures—Egyptian, Babylonian, Chinese, and
Indian—display a mixture of accurate and inaccurate results, of primitive and sophisticated methods, and of the simple and the complex. Bell’s treatment of the question of whether the Babylonians had
the concept of a proof is puzzling. In his Men of Mathematics, published in 1937, he refers to the Babylonians as “the first ‘moderns’ in mathematics” and credits them with “recognition—as shown by
their work—of the necessity for proof in mathematics” (p. 18). He calls this recognition “one of the most important steps ever taken by human beings” (p. 18) and notes that “until recently,” it had
been supposed that the Greeks were the first to have it. However, in his The Magic of Numbers, which was first published in 1946, Bell, after noting that some historians of mathematics considered the
Babylonian algebra of 2000–1200 BC to be superior to any other produced before the 16th century and their work in geometry
and mensuration to be almost as good, says that the work has “no vestige of proof” (p. 27). Whatever the status of the concept of proof, or of precursors to this concept, in the pre-Hellenic world,
there is little doubt that the concept was explicitly articulated by the classical Greeks. Bell (1937) credits Pythagoras with importing proof into mathematics and calls it his greatest achievement,
but probably the name that is most closely associated with the concept of proof is that of Euclid. To be sure, the idea of what constitutes an acceptable proof has changed considerably since Euclid’s
time. Many of the proofs in his monumental Elements that went unchallenged for centuries do not meet current standards. Some theorems use undefined terms that have not been identified as such; some
depend on unstated assumptions or postulates; definitions that are given are sometimes exceedingly vague; and so on. But there is no denying the enormous influence that Euclid’s work had in giving
the idea of a deductive proof center stage in mathematics and focusing the efforts of subsequent generations of mathematicians on the activity of proof making. That contemporary mathematicians find
it easy to point out the deficiencies in Euclid’s proofs may be seen as evidence not so much of weakness in Euclid’s thinking as of changes in the standards of precision, rigor, and proof since his
day, indeed changes that were largely consequences of the thinking that his own work and that of his contemporaries set in motion. Moreover, whatever the shortcomings of Euclid’s proofs, even if
many, it took two millennia of mathematizing to improve much upon them. One of the major changes in perspective that has implications for the nature of proofs was the change from thinking of geometry
as self-evidently descriptive of the way the world is to thinking in terms of an axiomatic system. As was noted in the preceding chapter, according to the prevailing modern perspective, whether (any
particular) geometry describes the physical world is incidental; what is important from a mathematical point of view are the implications of the axioms that constitute the system. As Devlin (2000a)
puts it, “When it comes to establishing the theorems that represent mathematical truths, the axioms are, quite literally, all there is” (p. 163). But even when it is not required of mathematical
theorems that what they assert is descriptive of the physical world, what constitutes a proof may be in some dispute, as may be the validity of specific proofs. Bell (1937) credits Gauss with being
the first to see clearly that a proof that can lead to absurdities is no proof at all, and with being responsible for imposing a rigor on mathematics that was not known before his time.
A Proof as the "Final Word"

Proof has a ring of finality to it. To say that something has been proved is to say, or so it would appear, that we can be certain it is true—in the sense of being
derivable from the system’s axioms. Once an assertion has been proved to be true—once it has attained the status of a theorem—from that point on one can take it as a given and need no longer worry
about it. German-American philosopher of science Carl Hempel (1935/1956a) expresses essentially this idea in contrasting the status of the theories of empirical science with the theorems of
mathematics in the following way: “All the theories and hypotheses of empirical science share this provisional character of being established and accepted ‘until further notice,’ whereas a
mathematical theorem, once proved, is established once and for all; it holds with that particular certainty which no subsequent empirical discoveries, however unexpected and extraordinary, can ever
affect to the slightest extent” (p. 1635). The centrality of the role of proofs distinguishes, probably more than anything else, mathematics from the empirical sciences. Du Sautoy (2004) argues that
a major reason that proofs play a central role in mathematics but not in the empirical sciences is that the subject matter of mathematics is ethereal while that of the empirical sciences is tangible.
In some respects, the ethereal nature of mathematics as a subject of the mind makes the mathematician more reliant on providing proof to lend some feeling of reality to this world. Chemists can
happily investigate the structure of a solid buckminsterfullerene molecule; sequencing the genome presents the geneticists with a concrete challenge; even the physicists can sense the reality of the
tiniest subatomic particle or a distant black hole. But the mathematician is faced with trying to understand objects with no obvious physical reality such as shapes in eight dimensions, or prime
numbers so large they exceed the number of atoms in the physical universe. Given a palette of such abstract concepts the mind can play strange tricks, and without proof there is a danger of creating
a house of cards. In the other scientific disciplines, physical observation and experiment provide some reassurance of the reality of a subject. While other scientists can use their eyes to see this
physical reality, mathematicians rely on mathematical proof, like a sixth sense, to negotiate their invisible subject. (p. 31)
Du Sautoy goes on to note too that perhaps the most compelling reason for the emphasis on proofs in mathematics is that proofs are possible in this domain, and that is not true of the empirical
sciences. “In how many other disciplines is there anything that parallels the statement that
Gauss’s formula for triangular numbers will never fail to give the right answer? Mathematics may be an ethereal subject confined to the mind, but its lack of tangible reality is more than compensated
for by the certitude that proof provides” (p. 32). Many other definitions or descriptions of proof could be quoted that would give it a similar ring of finality.
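Du Sautoy's example, Gauss's formula for the triangular numbers, also illustrates what that certitude rests on: a short argument by recurrence of the kind discussed earlier. The sketch below is the standard textbook proof, included here only as an illustration.

```latex
% Claim: for every natural number n >= 1,  1 + 2 + ... + n = n(n+1)/2.
\textbf{Base case.} For $n = 1$: $1 = \tfrac{1 \cdot 2}{2}$.

\textbf{Inductive step.} Suppose $1 + 2 + \cdots + k = \tfrac{k(k+1)}{2}$ for some $k \ge 1$. Then
\[
  1 + 2 + \cdots + k + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2},
\]
which is the formula with $k + 1$ in place of $k$. By mathematical induction, the formula holds for every $n \ge 1$.
```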
The Relativity of Proofs

To some, the idea that a proof is the "final word" on a mathematical question is an aspect of mathematical reasoning that sets it apart from, and in a nontrivial sense
above, reasoning in other domains. Knorr (1982) refers to mathematicians as intellectual elites among members of both the sciences and the humanities, and surmises that the basis of this status is
the incontrovertible nature of properly reasoned mathematical arguments. The degree of consensus that is attainable in mathematics is found nowhere else. At least this is a common claim. But not
everyone, not even every mathematician, holds this view. Apparently Eric Temple Bell saw things differently. Kline (1980) quotes him as follows: “Experience has taught most mathematicians that much
that looks solid and satisfactory to one mathematical generation stands a fair chance of dissolving into cobwebs under the steadier scrutiny of the next…. The bald exhibition of the facts should
suffice to establish the one point of human significance, namely, that competent experts have disagreed and do now disagree on the simplest aspects of any reasoning which makes the slightest claim,
implicit or explicit, to universality, generality, or cogency” (p. 257). Regarding how drastically views as to what constitutes a proof can change from one generation to another, Bell (1946/1991)
contends that “a proof that convinces the greatest mathematician of one generation may be glaringly fallacious or incomplete to a schoolboy of a later generation” (p. 66). Mathematical proofs are
relative in several ways. First, any mathematical proof exists within some specific axiomatic system. As already noted, the truth that it establishes is relative to the axioms of that system. That is
to say, the proof of a theorem establishes the theorem to be true only to the extent that the axioms of the system are held to be true. To be completely consistent with currently prevailing ideas
about mathematics, we probably should not use the concept of truth at all. What a proof purports to show is that a theorem follows from the axioms of the system of which it is a part. Inasmuch as it
is required of the axioms of a system only that they be consistent with each other, and not that they be true in the sense of accurately reflecting properties of the physical world, it cannot be
required of proved theorems that they be true in this sense either. I shall continue to speak of proved theorems as true as a matter of convenience, as do mathematicians, but it must be borne in mind
that truth in this context has the specific meaning of “following from the axioms.” Second, the history of mathematics is replete with proofs that, after standing for a considerable time, have been
shown to be inadequate in retrospect. I have already noted that many of the proofs in Euclid’s revered Elements fail to meet contemporary standards. Here is the assessment by one prominent
mathematician from the vantage point of more than 2,000 years of intervening work: “Any impartial critic may convince himself in less than an hour—as many did when European geometers began to recover
from their uncritical reverence for the Greek mathematical classics—that several of Euclid’s definitions are inadequate; that he frequently relies on tacit assumptions in addition to the postulates
to which he imagined he had restricted himself; that some of his propositions, as he states them, are false, and that the supposed proofs of others are nonsense…. If it were worth anyone’s trouble,
the entire logical structure of the geometrical portions of the Elements might be destructively analyzed for inexplicit assumptions and defective proofs” (Bell, 1946/1991, p. 332). Hersh (1997) has
similarly harsh words for what he refers to as the myth of Euclid, and for those who perpetuate it: “Today advanced students of geometry know Euclid’s proofs are incomplete and his axioms are
unintelligible. Nevertheless, in watered-down versions that ignore his impressive solid geometry, Euclid’s Elements is still upheld as a model of rigorous proof” (p. 37). Even among mathematicians,
Hersh claims, the Euclid “myth” was universal until well into the 19th century. An abiding challenge to researchers of human cognition is to figure out how it is that what can appear to be a
compelling proof to some mathematicians can be unconvincing—and in some cases even appear to be nonsensical—to others. It has not been unusual for generations of great mathematicians to overlook
specific problems in proofs. As von Mises (1951/1956) puts it, “All followers of the axiomatic method and most mathematicians think that there is some such thing as an absolute ‘mathematical rigor’
which has to be satisfied by any deduction if it is to be valid. The history of mathematics shows that this is not the case, that, on the contrary, every generation is surpassed in rigor again and
again by its successors” (p. 1733). The problems that mathematicians find in the arguments of predecessors often have less to do with what was said than with what was not said—less likely to lie with
what the authors of the arguments knew they had assumed than with what they unconsciously assumed. “Each generation criticizes the unconscious assumptions made by its parents. It may assent to them,
but it brings them out in the open” (Whitehead, 1925/1956, p. 406).
Third, even proofs that are considered sound differ considerably in their ability to convince. There are many theorems that have been proved in a variety of ways. Several hundred different proofs
have been offered of the Pythagorean theorem, which relates the length of the hypotenuse of a right triangle to the lengths of its other two sides (Loomis, 1968). It seems a safe bet that the reader
who will take the trouble to check out a few of them will find some more compelling, or more readily grasped, than others. The amount of attention this theorem has received, as indicated in the great
variety of proofs of it that have been constructed, gives credence to Dunham’s (1991) reference to it as “surely one of the most significant results in all of mathematics” (p. 47). Fourth, proofs,
again even when considered sound, vary also in their simplicity (beauty, elegance, attractiveness to mathematicians). Complex, ugly, inelegant proofs stand as challenges to mathematicians to find
ways to improve upon them. Joseph Bertrand’s conjecture that one can always find a prime between any number, n, and its double, 2n, was proved by Russian mathematician Pafnuty Chebyshev seven years
after it was proposed. Chebyshev’s proof, though accepted as valid, was not seen by all number theorists to be as elegant as possible. Indian mathematician Srinivasa Ramanujan found a way to improve
upon it, as did Hungarian mathematician Paul Erdös independently some years later (Du Sautoy, 2004). There are countless other examples of “proof perfecting,” and many remaining challenges. Fifth,
the concept of proof is itself evolving. Hungarian philosopher-mathematician Imre Lakatos (1976) points out that changes in the idea of what constitutes a rigorous proof have engendered revolutions in
mathematics. The Pythagoreans, for example, held that rigorous proofs have to be arithmetical, but upon discovering a rigorous proof that the square root of 2 is irrational, they had to change this
criterion for rigor. As a consequence, geometrical intuition took the place once held by arithmetical intuition. Newton, Leibniz, Euler, Lagrange, and Laplace, all of whom were great analysts, had
little conception of what is now acceptable as a proof involving infinite processes. And what is now acceptable depends on whom one asks; some mathematicians recognize only constructive proofs, for
example, and rule out indirect proofs such as those that employ the reductio ad absurdum argument. (A constructive proof is one that is derivable—constructible—from integer arithmetic. To prove a
proposition indirectly, one shows that assuming the proposition to be false leads to a contradiction, which, according to Aristotelian logic, means that the proposition must be true.) German
mathematician-logician Leopold Kronecker was a staunch advocate of recognizing only constructive proofs and used this restriction to discredit Cantor’s work. Mathematicians disagree about the
legitimacy of the “law of the excluded middle,” according
to which a mathematical statement is either true or false; Dutch mathematician Luitzen Brouwer, for example, rejected it, whereas Hilbert considered it an essential mathematical tool and likened the
prohibition of its use to prohibiting a boxer from using his fists (Hellman, 2006). That the concept of proof has changed over time should temper harsh judgments of the inadequacy of proofs that were
considered to be sound by earlier generations. It is hardly fair to judge the proofs of Euclid by standards that were developed gradually over hundreds of years after his death. It should also make
us less than completely certain of the finality of current ideas on the matter. Kline’s (1980) observation that “the proofs of one generation are the fallacies of the next” (p. 318) undoubtedly
applies to our own generation not only as one that follows those that have already passed, but also as one that precedes those that are to come. It would be presumptuous to assume that today’s ideas
about what constitutes adequacy of proof will be regarded much more highly by succeeding generations than those of preceding generations are regarded by our own. Sixth, even within a specific time
frame, proof can have different connotations in different contexts. Hersh (1997), for example, distinguishes two meanings of the term as it is used today in mathematics, one that applies in practice
and the other in principle. “Meaning number 1, the practical meaning, is informal, imprecise. Practical mathematical proof is what we do to make each other believe our theorems. It’s argument that
convinces the qualified, skeptical expert…. What is it exactly? No one can say. Meaning number 2, theoretical mathematical proof, is formal…. It’s transformation of certain symbol sequences (formal
sentences) according to certain rules of logic (modus ponens, etc.). A sequence of steps, each a strict logical deduction, or readily expanded to a strict logical deduction” (p. 49). Finally, even
among mathematicians who agree on the rules, disputes regarding the validity of specific proofs continue to arise. Such disputes are resolved—if they are resolved—by consensus. “Real proofs aren’t
checkable by machine, or by live mathematicians not privy to the mode of thinking of the appropriate field of mathematics. Even qualified readers may differ whether a real proof (one that’s actually
spoken or written down) is complete and correct. Such doubts are resolved by communication and explanation” (Hersh, 1997, p. 214). Hersh emphasizes the social nature of mathematics and the importance
of the influence of the culture in which it is done—and of the tentativeness of the proofs that mathematicians develop. “The mathematics of the research journals is validated by the mathematical
community through criticism and refereeing. Because most mathematical papers use reasoning too long to survey at a glance, acceptance is tentative. We reconsider our claim if it’s
disputed by a competent skeptic” (p. 224). Fortunately, as a general rule, mathematicians agree relatively quickly on the merits of most proposed proofs, but sometimes a consensus as to the adequacy
of a complicated proof can be a long time in forming, if it forms at all. To nonmathematicians, formal proofs can be intimidating and sometimes incomprehensible. Rucker (1982, p. 274) illustrates how
cumbersome the process of writing out a formal proof can be with an example of such a proof of (∀y) [0 + y = y], that is, for all y, 0 + y = y. The proof takes 17 steps and uses on the order of—I am
estimating—400 to 500 symbols. Rucker argues that, despite their “nitpicking, obsessive quality,” fully formalized proofs “are satisfyingly solid and self-explanatory. Nothing is left to the
imagination, and the validity of a formal proof can be checked simply by looking at the patterns of symbols. Given the basic symbols, the rules of term and formula formation, the axioms and axiom
schemas, and the rules of inference, one can check whether or not a sequence of strings of symbols is a proof in a wholly mechanical fashion” (p. 275). (Note the difference between Rucker’s claim
that a formal proof can be verified mechanically and Hersh’s insistence, mentioned above, that real proofs are not checkable this way.) Casti (2001) argues that proofs differ in quality, and proposes
that three grades be recognized: The first, or highest quality type of proof, is one that incorporates why and how the result is true, not simply that it is so. … Second-grade proofs content
themselves with showing that their conclusion is true, by relying on the law of the excluded middle. Thus they assume that the conclusion they want to demonstrate is false and then derive a
contradiction from this assumption. … In [third, or lowest-grade proofs] the idea of proof degenerates into mere verification, in which a (usually) large number of cases are considered separately and
verified, one by one, very often by a computer. (p. 137)
Casti points to Appel and Haken’s (1977b) proof of the four-color theorem as an example of the third type. Obviously, from this perspective, the goal is to produce highest quality proofs; second or
third level is to be settled for only when the first one proves to be out of reach. All this being said, the only absolute conclusion we can draw is that proofs are relative. What a mathematical
proof gives us, in Kline’s (1980) words, is “relative assurance. We become quite convinced that a theorem is correct if we prove it on the basis of reasonably sound statements about numbers or
geometrical figures which are intuitively more acceptable than the one we prove” (p. 318). Intuitively acceptable statements in this context must include statements that are acceptable by
virtue of themselves having already been proved. The chain of inferences that gets one back to basic givens, or statements that are intuitively acceptable without proof, is, in some instances, very
long. The history of proof making and proof discrediting dictates caution in accepting new proofs as infallible.
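It is worth pausing over Rucker's example, (∀y) [0 + y = y], to see why so small a statement expands when written out formally. The informal skeleton below assumes the usual recursive definition of addition on the natural numbers, x + 0 = x and x + S(y) = S(x + y); a fully formal proof must, in addition, cite an axiom or rule of inference for every rewriting step, which is what drives Rucker's count of steps and symbols up.

```latex
% Informal skeleton of a proof of (forall y)[0 + y = y], by induction on y.
\textbf{Base case.} $0 + 0 = 0$, by the defining equation $x + 0 = x$ taken with $x = 0$.

\textbf{Inductive step.} Suppose $0 + y = y$. Then
\[
  0 + S(y) = S(0 + y) = S(y),
\]
using the defining equation $x + S(y) = S(x + y)$ and then the induction hypothesis.
Hence $0 + y = y$ holds for every natural number $y$.
```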
A Proof in Process

The nature of proof and the process of proof making, as well as those of proof challenging and proof repairing, have been explored in a delightfully readable work by Lakatos
(1976), published two years after his untimely death at age 52. The aim of Lakatos’s “case study,” as he called it, of the methodology of mathematics was, in his words, “to elaborate the point that
informal, quasi empirical mathematics does not grow through a monotonous increase of the number of indubitably established theorems, but through the incessant improvement of guesses by speculation
and criticism, by the logic of proofs and refutations” (p. 5). (For a concise description of “the method of proofs and refutations,” which he also calls “the method of lemma incorporation,” see
Lakatos, 1976, p. 50.) Lakatos’s essay recounts a lengthy discussion among a group of students and a teacher in a classroom. The classroom and the participants in the discussion are fictitious, but
the discussion tracks the development of mathematical thinking over several centuries as it relates to the problems on which the class focuses. The problem that has the class’s attention at the
outset is the question of whether there is a relationship between the number of vertices (V), the number of edges (E), and the number of faces (F) of a polyhedron. The students discover by trial and
error that for regular polyhedra these variables are related according to the formula V – E + F = 2. (Both Euler and Descartes had observed this relationship, Descartes in 1640 and Euler in 1752.
Euler expressed the relationship with the formula just mentioned.) The teacher proposes a proof that this relationship holds for all polyhedra. The students challenge the validity of the proof (which
actually was believed to be valid by several notable 19th-century mathematicians) by questioning the truth of some of the claims that comprise it. This they do by finding counterexamples to one or
more of these claims. (Lakatos makes a distinction between a local counterexample, which refutes a lemma—a proposition subsidiary to a theorem, proved, or assumed to be true, in order to simplify the
proof of the theorem—of a proof but not necessarily the main conjecture that one is trying to prove, and a global
counterexample, which refutes the main conjecture itself. A global counterexample shows the main conjecture to be false, whereas a local counterexample shows only that some element of the proof is
false, but does not rule out the possibility that the conjecture itself is true.) The teacher concedes that the students have indeed shown the proof to be invalid, but rather than discard it, he
attempts to improve it so that it will be able to stand up to the criticisms. As the dialogue proceeds, alternative proofs are offered and challenged with counterexamples of one or another type.
Counterexamples are sometimes challenged or made to be irrelevant by the “method of monster-barring,” whereby the original conjecture is modified, or the class of interest (in this case polyhedron)
is redefined in such a way that the counterexample becomes a “monster” with respect to that class, which is to say no longer a member of it. This leads to a discussion of the importance of
definitions, and also to the recognition that definitions can often be a focus of debate and disagreement. Lakatos points out that short theorems in mathematics are sometimes obtained at the expense
of long definitions; the definition of ordinary polyhedron in the 1962 edition of Encyclopedia Britannica, for example, takes up 45 lines. Sometimes the response to a demonstration that a conjecture
is false is a modification of the conjecture. For example, when shown not to be true, the original conjecture “For all polyhedra, V – E + F = 2” is replaced with “For all simple polyhedra, V – E + F
= 2,” where simple is meant to rule out polyhedra of the type represented by a picture frame (a polyhedron with a hole in it). When that conjecture is shown to be false, it, in turn, is replaced by
“For a simple polyhedron, with all its faces simply connected, V – E + F = 2.” And so on. Definitions are crucial because of the vagueness of language, especially in its everyday use. Lakatos
contends that one can always find a sufficiently narrow interpretation of the terms of a proposition to make it be true as well as a sufficiently wide interpretation to make it be false. The rigor of
a proof of a theorem may be increased through redefinition or the incorporation of new lemmas at the expense of decreasing the inclusiveness of the theorem’s domain. Lakatos refers to the idea that
the path of discovery is a simple progression from facts to conjecture, and from conjecture to proof as “the myth of induction” (p. 73), noting that basic concepts often become modified substantially
in the making, criticizing, and revising of proofs: “Naive conjectures and naive concepts are superseded by improved conjectures (theorems) and concepts (proof-generated or theoretical concepts)
growing out of the method of proofs and refutations. And as theoretical ideas and concepts supersede naive ideas and concepts, theoretical language supersedes naive language” (p. 91).
Thus as a consequence of the process of attempting to prove a conjecture, the conjecture itself may be modified, as may the concepts that comprise it, so what ends up being proved is something other
than the conjecture that motivated the proof-making effort. The conjecture, as modified, is likely to be considerably more precise, and perhaps narrower in scope—as a result of the introduction of
precise definitions and the delimitation of conditions under which it is claimed to hold—than as originally conceived. Casti (2001) cites Lakatos’s work as a prime illustration of an increasing
tendency to acknowledge the empirical component in the practice of mathematics. He notes that Lakatos’s view—that “the practice of mathematics constitutes a process of conjecture, refutation, growth,
and discovery”—has much in common with Karl Popper’s ideas about the nature of the scientific enterprise. As an example of how a proof can evolve as generations of mathematicians work on it, Barrow
(1991) points to the case history of the prime number theorem, which derives from a conjecture of Gauss and French mathematician Adrien-Marie Legendre—known naturally as either the prime number
conjecture or the Gauss-Legendre conjecture—regarding the proportion of numbers less than any given value that are primes. The conjecture was that the number of primes less than n was approximated by
n/log n ever more closely with increasing size of n. The first proof of the theorem, given by French mathematician Jacques Hadamard (1865–1963) and Belgian mathematician Charles de la Vallée-Poussin
in 1896, involved complex analysis and was very difficult. Somewhat simpler proofs were produced later by German mathematician Edmund Landau and American mathematician Norbert Wiener. In 1948, Erdös
and Norwegian mathematician Atle Selberg gave what Barrow says could be considered an elementary proof of some 50-plus pages in length, which was refined and made truly elementary by Norman Levinson.
Barrow argues that this is characteristic of the evolution of mathematics, and that bona fide original ideas of the caliber of Cantor’s diagonal argument and Gödel’s proof of undecidability (about
both of which more later) are very rare.
Refutations
In the course of political, scientific and everyday disputes, in the process of a court investigation and analysis, in attempts to solve various problems, one must learn not only to
prove, but also to refute. (Bradis, Minkovskii, & Kharcheva, 1938/1999, p. 2)
The give and take in Lakatos’s (1976) account of the role of refutations in proof making and proof improving illustrate that a refutation of a proof
is itself a proof of sorts; it is a proof that the proof that is being refuted is not valid—is not a proof after all. As we have noted, the history of mathematics has many examples of “proofs” that
have survived for some time only eventually to be considered to be faulty. French mathematician Jean le Rond D’Alembert, Euler, and Lagrange all produced proofs of the fundamental theorem of
algebra—according to which every polynomial equation has at least one root—which were later determined to be wrong (Flegg, 1983). (The fundamental theorem is also expressed as follows: Any real
polynomial of degree n can be factored into n linear factors.) The importance of refutations is also seen in the history of attempts to solve some of the famous problems that have tantalized
mathematicians, and many would-be mathematicians as well, over the centuries— squaring the circle, trisecting an angle, proving the four-color conjecture or Fermat’s last theorem. Proposed solutions
or proofs have to be shown to be wrong if they are to be dismissed. In another delightful book, published originally in 1938, Russian mathematicians V. M. Bradis, V. L. Minkovskii, and A. K.
Kharcheva give numerous “proofs” of mathematical absurdities. Examples of what is “proved” include: 45 – 45 = 45; 2 × 3 = 4; every negative number is greater than the positive number having the same
absolute value; all triangles are of equal area; the length of a semicircle is equal to its diameter; π/4 = 0; 1/4 > 1/2; and every triangle is a right triangle. I will reproduce here two of the
faulty proofs from Bradis et al., the first algebraic and the second geometric. (If it is not obvious where these proofs go wrong, the reader is referred for explanations to Bradis et al., pp. 115,
123.)
• Proof of the equality of two arbitrary numbers (p. 80). Take two arbitrary numbers a and b > a, and write the identity:
a^2 – 2ab + b^2 = b^2 – 2ab + a^2   (5.1)
where the algebraic sums in the right- and the left-hand members differ from one another only by the order of the terms. Equation (5.1) we rewrite in the shorter form, making use of the formula for the square of a difference:
(a – b)^2 = (b – a)^2
Extracting the square root from both members, we obtain:
a – b = b – a
whence, upon transferring some terms, simplifying and dividing both members by 2, we have:
a + a = b + b, 2a = 2b, a = b
• Proof that the segments of parallel straight lines bounded by the sides of
an angle are equal. Take an arbitrary angle and intersect its sides by two arbitrary parallel straight lines. Let AB and CD be the segments of the parallels included between the sides of that angle,
and E its vertex (Figure 5.1). As is well known, parallel straight lines intercept proportional segments on the sides of the angle. Consequently, AE : CE = BE : DE and
AE × DE = BE × CE   (5.4)
Multiplying both members of Equation (5.4) by the difference AB – CD, we carry out the following transformations:
AE × DE × AB – AE × DE × CD = BE × CE × AB – BE × CE × CD
AE × DE × AB – BE × CE × AB = AE × DE × CD – BE × CE × CD
AB(AE × DE – BE × CE) = CD(AE × DE – BE × CE)
Figure 5.1 Supporting the “proof” that the segments of parallel straight lines bounded by the sides of an angle are equal.
Dividing both members of the last equality by the difference AE × DE – BE × CE, we obtain the equality AB = CD. Thus, the segments of parallels confined between the sides of a given angle are always
equal (p. 123). For each of the faulty proofs, Bradis et al. provide an explanation of where it goes wrong. I strongly suspect that most readers, including those with a considerable knowledge of
mathematics, will have to work a bit to find the faulty steps on their own in some cases. Bases for faulty proofs that Bradis et al. identify include incorrect usage of words, inaccurate
formulations, neglect of the conditions of applicability of theorems, hidden execution of impossible operations, and invalid generalizations—as in passing from a finite set to an infinite one. What
these authors demonstrate is the ease with which such errors can be made and go undetected. One cannot read their book and ever again accept even the simplest and most transparent of proofs as
“obviously infallible” without some thought of how it might be wrong. The point of course is that one does well to be wary of accepting proofs too quickly. Kline (1980) makes a stronger statement:
“No proof is final. New counterexamples undermine old proofs. The proofs are then revised and mistakenly considered proven for all time. But history tells us that this merely means that the time has
not yet come for a critical examination of the proof” (p. 313). Proofs are accepted, Kline argues, by virtue of being endorsed by the leading specialists of the day or because of employing currently
fashionable principles. But tentative or no, proofs remain the objective, and the ticket for recognition among mathematicians. As someone has said—I cannot remember where I read it—“Of scientists one
asks, what did they discover; of mathematicians, what did they prove.”
Unproved Conjectures
One of the lessons that the history of mathematics clearly teaches us is that the search for solutions to unsolved problems, whether solvable or unsolvable, invariably leads
to important discoveries along the way. (Boyer & Merzbach, 1991, p. 595)
Throughout the history of mathematics, mathematicians have been tantalized by certain conjectures that are believed to be true, but that have resisted all past attempts to prove them to be so. Some
conjectures have been proved true only after existing as conjectures for a very long time. A case in point is the conjecture that every positive whole number is the sum of no more than four squares.
This was known, as a conjecture, to the classical Greeks, and existed as a conjecture until Lagrange proved it in 1770.
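The statement itself is easy to check mechanically for small cases, which is of course no substitute for Lagrange's proof; the following is a minimal brute-force sketch in Python (the function name is mine, not from any source discussed here):

    def four_squares(n):
        # Exhaustive search for a, b, c, d with a^2 + b^2 + c^2 + d^2 == n.
        r = int(n ** 0.5)
        for a in range(r + 1):
            for b in range(a, r + 1):
                for c in range(b, r + 1):
                    d2 = n - a * a - b * b - c * c
                    if d2 < 0:
                        break
                    d = int(round(d2 ** 0.5))
                    if d * d == d2:
                        return a, b, c, d
        return None  # Lagrange's theorem says this is never reached for positive n

    print(four_squares(310))  # -> (0, 2, 9, 15), since 0 + 4 + 81 + 225 == 310
    assert all(four_squares(n) is not None for n in range(1, 1001))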
Some conjectures have been known and have remained unproved for a long time despite countless hours of intense work by many mathematicians seeking the elusive proofs. Among the better known examples,
which were mentioned in Chapter 1, are Goldbach's conjecture that every even number is the sum of two primes, Gauss's conjecture that the number of primes between 1 and n is approximated by n/log_e n, the four-color conjecture according to which four colors suffice to color any map without using the same color for any two bordering areas, and Fermat's "last theorem," according to which integers cannot be found to solve the equation x^n + y^n = z^n for n > 2 and xyz not equal to 0. Many proofs of all of these conjectures have been proposed over the years, only eventually to have been shown to
be invalid. As of 2000, Goldbach’s conjecture, which was put forward by Prussian mathematician Christian Goldbach in 1742, had been shown to be valid for all numbers up to 400,000,000,000,000, but it
has yet to be proved either true for all numbers or not true for all numbers. Gauss’s conjecture was proved (by French mathematicians Jacques Hadamard and Charles Jean de la Vallée-Poussin)
approximately 100 years after it was stated by Gauss. The four-color conjecture has it that four colors are sufficient to color any conceivable map without making any bordering countries have the
same color. (A shared point, like that shared by Arizona, Utah, Colorado, and New Mexico, does not count as a common border.) According to Rouse Ball (1892), the problem was mentioned by German
mathematician-astronomer August Ferdinand Möbius in lectures in 1840, but received little attention until it was communicated to De Morgan around 1850. De Morgan learned of the problem from a student,
Frederick Guthrie, whose brother Francis, a South African botanist and mathematics professor, conceived the question as a consequence of actually coloring a map. May (1965) claims that the evidence
about the origin of the problem is tenuous. In any case, the problem became widely known, and despite many efforts to prove that four colors are enough, which most mathematicians appear to have
believed, the conjecture remained unproved for 150 years, although several people believed, at least for a short time, that they had succeeded. One proof, published by British mathematician Arthur
Kempe in 1879, stood for 11 years until another British mathematician, Percy Heawood, who himself worked on the problem over 60 years, showed Kempe’s proof to be flawed. A proof of the conjecture
that in time became widely accepted as such was produced by American mathematicians Kenneth Appel and Wolfgang Haken with assistance from John Koch and much use of a computer. It was described for
mathematicians in the Illinois Journal of Mathematics in two parts (Appel & Haken, 1977a; Appel, Haken, & Koch, 1977) and for a more general audience in Scientific
American (Appel & Haken, 1977b). An engaging historical account of the development of the proof is given in Appel and Haken (1978), in which the authors credit Kempe with producing an extremely
clever argument containing most of the basic ideas that eventually led to their proof. Appel and Haken point out that much of what became known as graph theory, which now has numerous practical
applications, grew out of the work done in the countless efforts to prove the four-color conjecture. Mathematicians labored in vain for over three centuries to generate a proof of Fermat’s last
theorem. According to Aczel (1996), the year following the announcement in 1908 of the Wolfskehl Prize of 100,000 German marks for a proof of it, 621 proposed solutions were submitted, none of them
sound. Some have believed they have succeeded; however, every “proof” that had been advanced prior to the one announced and improved upon by American mathematician Andrew Wiles in the 1990s was
subsequently found to be invalid. And some have believed that they were very close to producing a proof, but discovered later that they were unable to complete the feat. In 1847, shortly after the
French Academy of Sciences established a gold medal and monetary prize for anyone who first produced a bona fide proof, French mathematicians Gabriel Lamé and Augustin Cauchy each announced to the
assembled academy members that he was on the verge of doing so, but it would be another century and a half before Wiles actually produced one. It is interesting that we read much about the energy
that was put into finding a proof of the theorem but relatively little about attempts to show the theorem to be false, when all that was necessary for the latter was to find one set of values for
which x^n + y^n = z^n is true. And given that there is an infinite set of values for which x^3 + y^3 + z^3 = w^3 is true, one might be excused for thinking that perhaps there should be at least one set for which x^n + y^n = z^n is true. Apparently, many mathematicians—perhaps this is testimony to the esteem in which Fermat was held—believed from the beginning that no such set exists. Over time evidence
supporting the assumption accumulated in the form of demonstrations that it was true for values of n up to a specified limit, which by 1993 was over 4 million.
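The contrast mentioned above is easy to see computationally. A small search (a sketch only; the bounds and function names are arbitrary) immediately turns up solutions of x^3 + y^3 + z^3 = w^3, the smallest being 3, 4, 5, 6, while the corresponding search for x^n + y^n = z^n with n > 2 comes back empty, as the now-proved theorem requires:

    def three_cubes_equal_a_cube(limit):
        # Solutions of x^3 + y^3 + z^3 == w^3 with 1 <= x <= y <= z and w <= limit.
        cubes = {w ** 3: w for w in range(1, limit + 1)}
        return [(x, y, z, cubes[x**3 + y**3 + z**3])
                for x in range(1, limit + 1)
                for y in range(x, limit + 1)
                for z in range(y, limit + 1)
                if x**3 + y**3 + z**3 in cubes]

    print(three_cubes_equal_a_cube(12))  # includes (3, 4, 5, 6): 27 + 64 + 125 == 216

    def fermat_counterexamples(n, limit):
        # Search for x^n + y^n == z^n with 1 <= x <= y and z <= limit.
        powers = {z ** n: z for z in range(1, limit + 1)}
        return [(x, y, powers[x**n + y**n])
                for x in range(1, limit + 1)
                for y in range(x, limit + 1)
                if x**n + y**n in powers]

    print(fermat_counterexamples(3, 200))  # -> []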
Failed Proofs and Mathematical Progress
Attempts to find a general proof of Fermat's last theorem led serendipitously to many other mathematical developments. Indeed, it can be argued plausibly
that major unproved conjectures that have motivated intensive search for proofs (or disproofs) over long periods of time have provided a great service to the advancement of mathematics because of
the unanticipated discoveries that have been made as a consequence of these efforts. We will consider proofs of the four-color theorem and Fermat’s last theorem again below because they provide
further evidence of how difficult it can sometimes be to decide when a proof is a proof. Prominent among failed attempts to prove something are the numerous efforts over several centuries to prove
that Euclid's parallel postulate—the fifth postulate of his geometry—is deducible from the other Euclidean postulates. Generations of mathematicians have been uneasy about it, believing it to be unnecessary by
virtue of being implicit in the other postulates, but no one was able to prove that to be the case. As we have already noted, after many centuries of failed efforts to demonstrate the parallel
postulate to be redundant, several mathematicians began to consider the possibility of geometries in which it did not hold, and such non-Euclidean geometries were formulated in the 19th century. The
classical Greeks described three geometrical construction problems that have challenged professional and amateur mathematicians alike for more than two millennia. The problems may be stated as
follows: Using only a straightedge and compass, (1) construct a square whose area is equal to the area of a given circle, (2) construct a cube the volume of which is twice that of a given cube, and
(3) divide an angle into three equal angles. Countless hours were devoted to efforts to solve these problems and many “solutions” were proposed, all of which were shown eventually to be invalid.
Numerous constructions have been developed over the centuries, some by the early Greeks themselves (see Beckman, 1971), but none that requires only straightedge and compass. (Incidentally, not until
1672 did someone—Danish mathematician Georg Mohr—point out that, given that a line is determined when its two endpoints are specified, any plane construction that can be effected by straightedge and
compass can be effected by compass alone [Boyer & Merzbach, 1991].) Eventually, during the 19th century, all three of these problems were shown to be insoluble (Jones, Morris, & Pearson, 1991). Does
this mean that all of the time spent working on them should be considered wasted? In fact, some of this work produced very useful results. The conic sections as well as numerous other mathematical
phenomena, including many that have proved to have great practical utility, were discovered as a consequence of attempts to solve these problems. Work on analytic curves, cubic and quartic equations,
Galois theory, and transcendental numbers has been attributed, at least in part, to these attempts (Paulos, 1992). More generally, efforts to solve problems that eventually were shown to be insoluble
have often led to unanticipated advances in mathematics. Stewart (1987) refers to Fermat’s last theorem as an example of a problem that is so good that even the failures to solve it have enriched
mathematics greatly. A similar observation might be made regarding other conjectures that have remained unproved for a long time. Failed attempts to prove Riemann’s hypothesis have frequently led to
mathematical discoveries. (Du Sautoy [2004] notes that mathematicians are sufficiently confident that Riemann’s hypothesis is true, despite the nonexistence of a proof, that some have used it in the
production of proofs of other theorems; this is a risky strategy, however, because if the hypothesis is eventually proved to be false, theorems that have been based on it will fall.) This is not to
suggest that all such work is productive in any meaningful sense. Determining a cost-benefit ratio for work on insoluble mathematical problems is undoubtedly an insoluble mathematical problem, for
practical if not for theoretical reasons. It is interesting to note that in 1775 the French Academy resolved to examine no more manuscripts purporting to "square the circle," even though at the time
of the academy’s decision, the impossibility of solving this problem had not yet been proved. Apparently the academy had decided that what was to be gained by continuing to receive such submissions
was not worth the effort of reviewing them. Before Wiles presented his proof of Fermat’s last theorem in 1993, the Göttingen Royal Society of Science had received over 5,000 proposed proofs, all of
which had to be evaluated by a qualified mathematician (Casti, 2001). In 1900 David Hilbert presented to the International Congress of Mathematicians 23 unsolved problems that represented, in his view,
the greatest challenges to mathematicians at the dawn of the 20th century. (Two additional problems not on the list of 23 were mentioned in his introductory remarks.) The Riemann hypothesis and
Goldbach’s conjecture in combination—generally referred to as problems of prime numbers—constituted the eighth item in Hilbert’s list. Accounts of the status of Hilbert’s problems at the end of the
20th century are given by Gray (2000) and Yandell (2002). Some of the problems turned out to be too vague to admit a precise solution, but of those that were sufficiently precise, all but one—the
Riemann hypothesis—had been solved by the end of the century. But there appears to be no end of challenging problems. At a 1974 symposium at which progress on Hilbert’s problems was discussed by
experts in the various relevant areas, a new 23-item set of problems was described (Browder, 1976). The mathematical world was ushered into the 21st century with the announcement, in May 2000, of a
$1 million prize to anyone who could solve any of seven problems then considered by the offerers of the prize to be among the most difficult mathematical problems still unsolved. Prize money—$7
million—was provided by Landon Clay, founder of the Clay Mathematics Institute. Among the Millennium
Problems, as the set of seven is known, is the Riemann hypothesis, the only carryover from Hilbert’s list. A description of all seven problems, written expressly for the interested layperson who is
not an expert mathematician, is provided by Devlin (2002); this is not to say that it is an easy read.
Proofs as Convincing Arguments
In practice, proofs are simply whatever it takes to convince colleagues that a mathematical idea is true. (Henrion, 1997, p. 242)
Devlin (2000a) gives a definition of a proof very similar to Henrion’s just quoted, but with the qualification that who needs to be convinced is “any sufficiently educated, intelligent, rational
person” (p. 51). This is an important qualification inasmuch as to be “sufficiently educated” to understand some proofs (e.g., the four-color theorem, Fermat’s last theorem) means knowing a great
deal of rather esoteric mathematics. For a proof of a theorem to be compelling, every assertion must be either an axiom or a statement that follows logically from the system’s axioms either directly
or indirectly through other already proved theorems. As Nozick (1981) puts it, “A proof transmits conviction from its premises down to its conclusion, so it must start with premises … for which there
already is conviction; otherwise, there will be nothing to transmit” (p. 14). Accepting the axioms as givens is one necessary condition for accepting a proof as a whole; another is believing the
assertions that are derived from them to be valid inferences. But while this combination is essential, it does not suffice to satisfy all inquiring minds. Polya (1954b) notes the possibility that a
mathematician may be convinced that every step in a proof is correct and still be unsatisfied if he does not feel he understands the proof as a whole. “After having struggled through the proof step
by step, he takes still more trouble: he reviews, reworks, reformulates, and rearranges the steps till he succeeds in grouping the details into an understandable whole. Only then does he start
trusting the proof” (p. 167). Kline (1980) makes essentially the same point in noting that an intuitive grasp of a proof can be more satisfying than logic. “When a mathematician asks himself why some
result should hold, the answer he seeks is some intuitive understanding. In fact, a rigorous proof means nothing to him if the result doesn’t make sense intuitively” (p. 313). Penrose (1989)
similarly emphasizes the importance of being able to see the truth of a mathematical argument in order to truly be convinced
of its validity, and insists that mathematical truth is not ascertained merely by use of an algorithm. Gödel’s theorem, more about which will presently be discussed, shows us the necessity for
external insights for deciding the validity of algorithms. Indeed, our ability to be persuaded by Gödel’s argument is itself evidence of this need. “When we convince ourselves of the validity of
Gödel’s theorem, we not only ‘see’ it, but by so doing we reveal the very non-algorithmic nature of the ‘seeing’ process itself” (p. 418). In an algorithmic approach to mathematics, one applies
useful techniques to solve problems, but does not worry much about how the techniques were derived or why they work. This approach, effective though it may be for practical purposes, is unlikely to
satisfy the mathematician who wishes to understand mathematics at a deeper level. Just so, a proof that is accepted as such because no fault can be found with the sequence of steps that comprise it
(a strictly algorithmic proof) is not likely to be as satisfying to a mathematician as an insightful proof that provides, or at least facilitates, an understanding of why the proved relationship is
what it is. Casti (2001) holds that a good proof has three characteristics: It is convincing, surveyable, and formalizable. By convincing, he means convincing to mathematicians—if a proof is a good
one, most mathematicians will believe it. To be surveyable means to “be able to be understood, studied, communicated, and verified by rational analysis” (p. 70). “Formalizability means we can always
find a suitable formal system in which an informal proof can be embedded and fleshed out into a formal proof” (p. 70). These three characteristics, Casti argues, represent, respectively, the
anthropology, epistemology, and logic of mathematics. From a psychological point of view, the first of these characteristics is paramount. If a proof is not convincing to people who are sufficiently
knowledgeable to follow it, there is not much else to be said for it. Some proofs are relatively straightforward in the sense that they involve only a few inferences that most people probably would
have little trouble following. Consider, for example, Euclid’s proof that there is no largest prime number. I have seen two versions, and will give both. Version 1: Suppose there were a largest prime
number, p. Suppose further that we pick a number, p*, that is equal to the product of all the primes, plus 1. Obviously, p* cannot be evenly divided by any prime—it was constructed so as to ensure
that division by any prime would leave a remainder of 1—so p* itself must be prime. This contradicts the assumption that p is the largest prime. And inasmuch as the same reasoning can be applied to
p* and to any larger prime that is found, it follows that there
is no largest prime, or, equivalently, that there are an infinity of primes. Version 2: Let p represent any prime. Construct p! + 1. The result clearly is not divisible by p or any number smaller
than p (other than 1). Either it is itself prime or it is divisible by a prime between p and p! + 1; either possibility implies the existence of a prime larger than p. Euclid's strategy—proof by contradiction, or
reductio ad absurdum, in which one shows that the assumption that the assertion to be proved is false leads to a contradiction and that the assertion therefore must be true—has been widely used in
mathematics. Euclid’s proof that there is no largest prime is often held up as an example of an elegant proof. Another example is the proof traditionally attributed to the Greeks that the square root
of 2 is not a rational number. It starts by assuming that √2 is rational—that it can be represented as the ratio of two integers. If the two integers have any common factors, those factors can be eliminated by dividing each of the integers by them and expressing the ratio in its lowest terms, say p/q. If p/q = √2, then p^2/q^2 = 2 and p^2 = 2q^2, from which it follows that p and q must both be even numbers (p^2 is even, so p is even; writing p = 2k gives q^2 = 2k^2, so q is even as well), and therefore have at least the common factor 2, which contradicts the assumption that p/q is the ratio in its lowest terms. For a charmingly presented collection of richly illustrated
and relatively simple proofs, see Polster (2004). Polster begins with the observation that “proofs should be as short, transparent, elegant, and insightful as possible” (p. 2), and then provides
numerous proofs that meet these criteria, some, of course, better than others.
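Euclid's product-plus-one construction is also easy to experiment with. The sketch below (plain Python; the helper names are mine) takes any finite list of primes and exhibits a prime not on the list. Note that the product plus one need not itself be prime (for 2, 3, 5, 7, 11, 13 it is 30031 = 59 × 509), but every prime factor it has lies outside the original list, which is all the reductio requires once the list is assumed to contain every prime.

    def smallest_prime_factor(n):
        # Trial division; every integer greater than 1 has a prime factor.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return n

    def prime_outside(primes):
        # Euclid's construction: the product of the listed primes, plus 1, leaves
        # remainder 1 on division by each of them, so its smallest prime factor
        # cannot appear on the list.
        product_plus_one = 1
        for p in primes:
            product_plus_one *= p
        product_plus_one += 1
        return smallest_prime_factor(product_plus_one)

    print(prime_outside([2, 3, 5, 7, 11, 13]))  # 30030 + 1 = 30031 = 59 * 509, so prints 59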
Cantor's Proofs
Some especially elegant and ingenious proofs were produced by Cantor in his work on infinity—more accurately, infinities—during the latter part of the 19th century. We will
consider first his proof that the rational numbers are countable—can be put in one-to-one correspondence with the natural numbers—and then his proof that the real numbers, which include not only the
rationals but also the irrationals (nonrepeating infinite decimals), are not. This distinction is the basis of his concept of different infinities. The proof that the rationals can be put in
one-to-one correspondence with the natural numbers requires the construction of a table of fractions. All the fractions in a given row have the same numerator, and the numerator for each row is
increased by 1 in successive rows. All the fractions in a given column have the same denominator, and the
denominator for each column is increased by 1 in successive columns. So the upper left portion of this table is as follows:
1/1  1/2  1/3  1/4  1/5  …
2/1  2/2  2/3  2/4  2/5  …
3/1  3/2  3/3  3/4  3/5  …
4/1  4/2  4/3  4/4  4/5  …
5/1  5/2  5/3  5/4  5/5  …
…
One has to imagine the table being continued indefinitely both to the right and down, which is to say that it contains an infinity of rows and an infinity of columns. Cantor pointed out that the fractions in this table can be put into one-to-one correspondence with the natural numbers by simply progressing through the table in an orderly fashion—in a diagonal-by-diagonal pattern. By starting with the upper left fraction, 1/1, and working through one diagonal at a time (1/1; then 1/2 and 2/1; then 1/3, 2/2, and 3/1; then 1/4, 2/3, 3/2, and 4/1; and so on), one will not miss any numbers. This demonstrates that the rational numbers can be put into one-to-one correspondence with the natural numbers.
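The diagonal-by-diagonal walk translates directly into a short program; the following is a minimal sketch (a Python generator of my own devising) that pairs each entry of the table, duplicates such as 2/2 included, with a natural number:

    from itertools import count, islice

    def fractions_diagonal_by_diagonal():
        # Visit the table one diagonal at a time; every positive fraction p/q
        # eventually appears (some values repeat, e.g., 1/1 and 2/2).
        for total in count(2):              # total = numerator + denominator
            for numerator in range(1, total):
                yield numerator, total - numerator

    # Pair the first ten fractions with the natural numbers 1, 2, 3, ...
    for index, (p, q) in enumerate(islice(fractions_diagonal_by_diagonal(), 10), start=1):
        print(index, f"{p}/{q}")            # 1 1/1, 2 1/2, 3 2/1, 4 1/3, 5 2/2, ...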
To construct the proof that the set of real numbers is not only infinite but, unlike the set of rational fractions, uncountable, Cantor (1911/1955) used the concept, which he originated, of a
diagonal number. Suppose, he argued, that the decimals between 0 and 1 could be exhaustively listed, that is, put into one-to-one correspondence with the natural numbers. Imagine that we listed them all, in no particular order, as follows:
.77358436 …
.84663925 …
.16486902 …
.53932175 …
.35487250 …
.94882604 …
.04327419 …
.36498105 …
A diagonal number may be composed from this set of numbers by making its first digit correspond to the first digit of the first number in the set (7), its second
digit to the second digit of the second number (4), and, in general, making the nth digit in the diagonal number correspond to the nth digit of the nth number in the set. Thus, the first eight digits
of the diagonal number defined on the above set are .74437615 … Now, suppose we construct a new number, say .85548726 … that differs from the diagonal number with respect to every digit—we change the
first digit from 7 to 8, the second from 4 to 5, and so on. We can be sure that the resulting number differs from every number in the original set with respect to at least one digit and so is not a
member of that set. Inasmuch as it would always be possible to define such a number, no matter how the original list was drawn up, our supposition that the decimals between 0 and 1 can be exhaustively listed must be false. It follows from Cantor's demonstration that the real numbers cannot be put in one-to-one correspondence with the integers. Therefore, Cantor concluded that the
“power” of the set of reals is
greater than that of the integers, although the sets are infinite in both cases. Cantor’s diagonal-number proof of the uncountability of the reals is another example of the reductio ad absurdum, in
which one shows that the assumption that something is true (in this case that the reals are countable) leads to a contradiction, which permits one to conclude that the assumption is not true after
all. A propos the conception of a proof as a convincing argument, we may note that French mathematician-physicist-philosopher Jules Henri Poincaré did not find Cantor’s proof of the uncountability of
the real numbers just described to be convincing. He did not accept the inability to devise a way to match the natural numbers with the reals in a one-to-one fashion as compelling evidence that the
latter were more numerous than the former. There are many other examples of mathematicians who have rejected proofs that have been accepted by most of their colleagues. As already noted, there is a
sense in which what constitutes mathematical truth is determined by consensus, though most mathematicians do not promote this aspect of the discipline. Cantor’s genius at proof making is seen (among
other places) in his proof that all the points of a plane can be mapped onto the points of a line. Inasmuch as every point on a plane is defined by two coordinates, whereas every point on a line has
only a single coordinate, the idea that the points of a plane can be mapped—in one-to-one fashion—to the points of a line seems impossible on the face of it. And so it seemed to many mathematicians,
including Cantor, for a long time. Cantor’s stroke of genius was to see that one could merge the two coordinates of any point on the plane in such a way that the resulting number represented a unique
position on the line. Imagine a unit square, with both x and y coordinates going from 0 to 1, and a unit line starting at 0 and terminating at 1. Consider any point on the square, say the point at x
= .379253 … and y = .849016…. If we merge these coordinates, taking the first number from x, the second from y, the third from x, and so on, we generate the blended number .387499205136 …, which
identifies a unique point on the line. We can do this (in our imagination) for every point on the plane, and every blend will identify a unique position on the line. (And the process works for spaces
of more than two dimensions.) Cantor’s proof is reminiscent of a paradox involving infinity described by logician-philosopher Albert Ricmerstop (Albert of Saxony) in his book Sophismata, which was
published in Paris in 1489, a century after his death. Consider a beam of unit width and height and of infinite length. Imagine cutting the beam into 1 × 1 × 1 blocks. Since there would be an
infinite number of these blocks, there would be enough of them to completely fill an infinite three-dimensional space.
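Returning to Cantor's plane-to-line mapping described above, the merging of coordinates can be mimicked for any finite number of digits. A minimal sketch (it works on digit strings, so the well-known subtlety about decimal expansions ending in repeating 9s is simply ignored here):

    def interleave(x_digits, y_digits):
        # Merge two equally long digit strings, taking digits alternately
        # from x and from y, to produce a single decimal between 0 and 1.
        merged = []
        for dx, dy in zip(x_digits, y_digits):
            merged.append(dx)
            merged.append(dy)
        return "0." + "".join(merged)

    print(interleave("379253", "849016"))  # -> 0.387499205136, as in the example above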
Proof of Impossibility and Nonprovability
Now to establish a formula is one thing, but to establish the nonexistence of a formula is a chore of quite a different magnitude. (Mazur, 2003, p. 228)
All that is necessary to show that a conjecture in the form of a universal assertion is false is to find a single case for which it does not hold. There are many famous conjectures in mathematics
that have stood for some time only to have been shown to be false when someone eventually found a counterexample. A case in point is Fermat's conjecture that numbers of the form 2^(2^n) + 1 are always prime. This is interesting because Fermat made it on the basis of knowing of only a very few values of n for which it held. Euler showed it to be false by demonstrating that 2^(2^5) + 1 = 4,294,967,297
is factorable. That he was able to find the factors of a number this size without the assistance of modern computational devices is impressive. A conjecture that is consistent with some, but not all,
relevant cases may remain tenable for a long time if a case that would reveal it to be false is exceptionally difficult to find. There is a great difference between confessing to being unable to find
an answer to a question and demonstrating conclusively that there is no answer to be found. Beginning in the 19th century, mathematicians became interested in the idea that it may be impossible to
solve some problems, or to prove some assertions in mathematics even if they are true (and not axiomatic), and a new challenge became that of developing proofs of impossibility or nonprovability. The
proof of Fermat's last theorem is an example of a demonstration that no integral solution can be found for an equation of the form x^n + y^n = z^n for n > 2. A proof by Norwegian mathematician Niels Abel that no general formula in radicals for solving fifth-degree equations can be found, and one by French mathematician Évariste Galois (1811–1832) that extends the principle to equations of higher degree are other
well-known examples of demonstrations of impossibility. The impossibility of proving Euclid’s parallel postulate by deduction from his other postulates was demonstrated chiefly through the work of
Gauss, Bolyai, Lobachevsky, and Riemann. In 1882 Lindemann proved π to be a transcendental number, from which it follows that the circle cannot be squared by the methods of Euclidean geometry. Only
in the 19th century was it proved that none of the three straightedge and compass construction problems posed by the Greeks are solvable. No one knows the amount of time that was devoted to searches
for solutions to these problems—especially the squaring of the circle—by countless mathematicians, professional and amateur, during the 2,000 years that they were not known to be unsolvable, but the
consensus seems to be that it was enormous.
Arguably the most significant proof of nonprovability in all of mathematics was the demonstration by Austrian mathematician-philosopher Kurt Gödel in 1931 that mathematics cannot be proved to be
internally consistent—that one cannot prove from the axioms of a mathematical system that contradictions will never occur within that system. Other famous proofs of nonprovability that followed hard
on Gödel’s were the proof by British mathematician Alan Turing of the impossibility of solving the “halting problem”—determining, of any computer program, whether it will produce a result and
stop—and the "undecidability" proof by American mathematician Alonzo Church that no general procedure can be specified that will invariably determine the truth or falsity of an arbitrary arithmetical statement; both were given in 1936. Gödel (1930, 1931) published two theorems demonstrating certain limitations inherent to any axiomatic approach to the construction of a deductive
system and, in particular, showed the impossibility of guaranteeing the logical consistency and completeness of a system as complex as elementary arithmetic. The first of the theorems showed that
arithmetic (as axiomatized by Italian mathematician Giuseppe Peano) is incomplete; the second showed that it could not be proved to be consistent. Generally reference is made to Gödel's incompleteness theorem, singular, the two being treated as one inclusive demonstration that any consistent formal theory sufficiently complicated to include arithmetic would necessarily be
incomplete, which is to say that for any such system there are true statements that cannot be derived from the system’s axioms. It is important to note that Gödel did not prove arithmetic to be
inconsistent; he only showed that it cannot be proved to be consistent, but this was enough to be very unsettling to the mathematical world. Central to the proof is the demonstration that a system as
complex as elementary arithmetic is capable of producing assertions of the sort “This assertion cannot be proved”—shades of Russell’s antinomies. To prove the statement to be true would prove it to
be incapable of being proved. To prove it to be false would show it to be provable—and therefore, by its own claim, not capable of being proved. As Stewart (1987) puts it, “Gödel showed that there
are true statements in arithmetic that can never be proved, and that if anyone finds a proof that arithmetic is consistent, then it isn’t!” (p. 214). Much has been written about the significance of
Gödel's theorems. Here are just a few of the claims that have been made:
• "Mathematics was forced to face an ugly fact that all other sciences had come to terms with long before: it is impossible to be absolutely certain that what you are doing is correct" (Stewart, 1987, p. 218).
• "In the axiomatization game, the best you can do is to assume the consistency of your axioms and hope that they are rich enough to enable you to solve the problems of highest concern to you" (Devlin, 2000a, p. 84).
• [Gödel's theorem is] "the deathblow for the Hilbert program" (Singh, 1997, p. 142). Hilbert had challenged mathematicians to provide mathematics with a foundation that was free of doubt and inconsistency.
• "Gödel's theorem is one of the most profound results in pure mathematics. When it was first published, in 1931, it had a devastating impact" (Moore, 2001, p. 172).
• [Gödel's theorem is] "probably the single most profound conceptual result obtained by mankind in the twentieth century" (Ruelle, 1991, p. 143).
• "What it seems to say is that rational thought can never penetrate to the final, ultimate truth" (Rucker, 1982, p. 165).
• "Gödel showed that provability is a weaker notion than truth, no matter what axiomatic system is involved" (Hofstadter, 1979, p. 19).
• [Gödel's theorem] "says something about the possibilities of a certain kind of knowledge; yet it is expressed within that body of knowledge itself. This is the mystery of Gödel's theorem—that within the context of logical thought one can deduce limitations on that very thought" (Byers, 2007, p. 282).
Barrett (1958) argues that Gödel's theorem had repercussions far beyond
mathematics, which, since the time of Pythagoras and Plato, had been considered “the very model of intelligibility” and “the central citadel of rationalism.” Although commentaries tend to emphasize
its implications for mathematics—and sometimes the limitations of human knowledge more generally—the theorem is also lauded for its esthetic appeal. Moore (2001), for example, refers to its sheer
beauty as enough to take one’s breath away. As for its implications for mathematics, Rucker (1982) sees the incompleteness of arithmetic as not entirely a bad thing, because, if it were complete,
there would be no more need for mathematicians; “we could build a finite machine that would answer every question we could ask about the natural numbers” (p. 277). Moore (2001) makes a similarly
positive observation in noting that “the infinite
richness of arithmetical truth is beyond the reach of any finite collection of arithmetical axioms” (p. 182). In demonstrating unmitigated faith in the infallibility of the queen of the sciences to
be unjustified, Gödel’s theorem provided support for the increasingly experienced existential sense of alienation, estrangement, and irrationality. Readily available simplified—not to say
simple—expositions of Gödel’s proof include those of Nagel and Newman (1958/2001), Hofstadter (1979), Bunch (1982), Rucker (1982), Moore (2001), and Meyerson (2002). To my eye, a particularly
enlightening—and entertaining—explication of Gödel’s proof is the beautifully illustrated book-length treatment by Hofstadter (1979). Hofstadter presents the argument as an example of a “strange
loop”—a phenomenon that occurs “whenever, by moving upwards (or downwards) through the levels of some hierarchical system, we unexpectedly find ourselves right back where we started” (p. 10). He
draws analogies between the strange loop that constitutes Gödel’s argument and other strange loops that he sees in the compositions (especially fugues) of J. S. Bach and the etchings of M. C. Escher.
Hofstadter describes Russell and Whitehead’s Principia Mathematica as a mammoth attempt to exorcise strange loops from logic, set theory, and number theory. The exorcising strategy was to propose the
notion that sets form a hierarchy of types, and that any given set can be a member only of a set that is higher than itself in the hierarchy; no set can be a member of a set at its own level, so, in
particular, no set can be a member of itself. Although this strategy works in the sense of getting rid of (disallowing) self-referential paradoxes, it does so, Hofstadter argues, “at the cost of
introducing an artificial-seeming hierarchy, and of disallowing the formation of certain kinds of sets—such as the set of all run-of-the-mill sets. Intuitively, this is not the way we imagine sets”
(p. 21). Russell and Whitehead's theory of types banishes all forms of self-reference. Hofstadter describes this as overkill, inasmuch as the effect is the treatment of many perfectly good
constructions as meaningless. Hofstadter sees Gödel’s proof as the translation of the Epimenides paradox, also known as the liar paradox, into mathematical terms. A modern form of the paradox is
“This statement is false.” “The proof of Gödel’s Incompleteness Theorem hinges upon the writing of a self-referential mathematical statement, in the same way as the Epimenides paradox is a
self-referential statement of language” (p. 17). Gödel invented a coding system in which every possible symbol and statement (sequence of symbols) in number theory would be represented by a unique
number. In this system, numbers could represent not only statements of number theory but statements about statements of number theory.
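A toy version of such a coding makes the idea concrete. In the sketch below the symbol table and the prime-power packing are illustrative choices of mine, not Gödel's actual assignments: each symbol gets a small code number, and a formula, being a finite sequence of symbols, gets the product of successive primes raised to those codes, from which the original sequence can be recovered by factoring.

    SYMBOL_CODES = {"0": 1, "s": 2, "=": 3, "+": 4, "(": 5, ")": 6}  # illustrative only

    def godel_number(formula):
        # Encode a sequence of symbols as 2^c1 * 3^c2 * 5^c3 * ..., where c_i is
        # the code of the i-th symbol; distinct formulas get distinct numbers.
        primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # ten primes suffice for this toy example
        value = 1
        for prime, symbol in zip(primes, formula):
            value *= prime ** SYMBOL_CODES[symbol]
        return value

    print(godel_number(["0", "=", "0"]))  # "0 = 0" becomes 2^1 * 3^3 * 5^1 = 270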
The relationship of Gödel’s work to that of Cantor and that of Turing illustrates the continuity of mathematical innovation involving different mathematicians. In his proof of the incompleteness
theorem, Gödel made use of Cantor’s diagonal number proof of the uncountability of the real numbers (see Moore, 2001, p. 175). Casti (2001) notes the close correspondence between Gödel’s theorem and
Turing’s halting theorem, contending that the latter is really the former expressed in terms of computers instead of the language of deductive logic. Although there can be no doubt that Gödel’s
accomplishment was momentous, destroying forever, as it did, the idea of the perfect certainty of mathematics, Du Sautoy (2004) cautions against overemphasizing its significance. "This was not the
death knell of mathematics. Gödel had not undermined the truth of anything that had been proved. What his theorem showed was that there’s more to mathematical reality than the deduction of theorems
from axioms” (p. 182). Du Sautoy argues that Gödel’s result exposed the dynamic character of mathematics and the importance of the intuitions of mathematicians in its continuing evolution. Byers
(2007) makes a similar point in noting that while Gödel’s theorem marked an end from one point of view, it could be seen as a beginning from another. “The collapse of the hope for a kind of ultimate
formal theory can be seen as a kind of liberation. It is a liberation from a purely formal or algorithmic view of mathematics and opens up the possibility of a view of mathematics that is more open
and filled with creative possibilities” (p. 273). In any case, there is little, if any, evidence that awareness of the theorem and its implications has deterred many mathematicians from the doing of
mathematics. I would guess that Gödel’s demonstration of the incompleteness of mathematics is considerably better known among the general public, if not among mathematicians, than Turing’s
demonstration of the insolvability of the halting problem, or what he referred to as the Entscheidungsproblem (decision problem). It has been argued, however, that Turing, in fact, did more than
Gödel and that his approach was more fundamental: Turing not only got as a corollary Gödel’s result, he showed that there could be no decision procedure. You see, if you assume that you have a formal
axiomatic system for arithmetic and it’s consistent, from Gödel you know that it can’t be complete, but there still might be a decision procedure. There still might be a mechanical procedure which
would enable you to decide if a given assertion is true or not. That was left open by Gödel, but Turing settled it. The fact that there cannot be a decision procedure is more fundamental and you get
incompleteness as a corollary. (Chaitin, 1995, p. 30)
Another example of proof of nonprovability is one constructed by Paul Cohen addressed to the first of the 23 problems that David Hilbert
identified in 1900. Cantor had proved that the set of real numbers (which includes irrationals) is larger than the set of rationals, even though both sets are infinitely large. Hilbert’s question was
whether there is an infinite set that is between these two in size. Cohen proved that it is impossible to tell on the basis of the accepted axioms of set theory, which is to say that it is impossible to construct from those axioms either a proof that such a set
exists or a proof that such a set does not exist. That it is possible to show that something is not provable in mathematics is an interesting and important fact. As it happens, it is also possible to
show that a theorem is provable without actually proving it. Early in the 20th century, Finnish mathematician Karl Frithiof Sundman proved that the famous and recalcitrant “three-body
problem”—determining the behavior of three bodies under mutual gravitational attraction—has a solution, although he did not provide the solution itself (Bell, 1937). More recently, several
collaborating computer scientists and mathematicians from Canada, Israel, and the United States have promoted the idea of “zero-knowledge proofs,” and shown how it can be demonstrated that a theorem
has been proved without providing details of the proof itself (Peterson, 1988, p. 214). Finally, a word of caution regarding proofs of nonprovability. Austrian philosopher Ludwig Wittgenstein (1972)
reminds us that to say that a proposition cannot be proved is not to say that it cannot be derived from other propositions. Any proposition can be derived from others, but it may be that the
propositions from which the one to be proved is derived are no more certain than the one that is derived from them, in which case, the derivation would be a “proof” in a peculiar sense and unlikely
to be of much interest.
The Quest for Generality, Unity, and Simplicity
The development of mathematics has been characterized by a striving for ever higher and higher degrees of generality. (Boyer & Merzbach, 1991, p.
It is one thing to demonstrate the possibility of constructing with straightedge and compass a regular polygon of 17 sides, as Gauss, at 18, was the first to do; it is quite another to discover a
rule that will distinguish all regular polygons that can be constructed from those that cannot be, which Gauss also did, though not at 18. What he discovered was the constructibility of all regular polygons whose number of sides is expressible as 2^(2^n) + 1. The demonstration—no mean feat—pertains to a single regular polygon, whereas the rule relates to all—an infinity—of them. Mathematicians
strive for generality. They seek theorems that state properties of all the members of classes of interest. It is not enough
to know that Goldbach’s conjecture—that every even number is the sum of two primes—holds for all even numbers up to some astronomically large number; one wants a proof either that it holds for all
even numbers or that it does not. The search for generality is seen also in the preference for if and only if relationships over simple if relationships. For example, it is interesting and useful to
realize that if a number is a prime, other than 2, it can be represented either as 4n + 1 or as 4n – 1, but it would be much more interesting and useful to be able to say that if and only if a number
is a prime, other than 2, it can be represented either as 4n + 1 or as 4n – 1. The latter statement is false, however, inasmuch as it is not the case that all numbers that can be represented either
as 4n + 1 or as 4n – 1 are primes. Nine, for example, which is 4 × 2 + 1, is not prime, nor is 15, which is 4 × 4 – 1. Misinterpretation of if (conditional) assertions as if and only if
(biconditional) assertions is a common source of error in mathematical and logical reasoning. The power of general relationships, which motivates the search for them, is seen in the following comment
about the highly abstract concept of a group: Having proved, using only the group axioms, that group inverses are unique, we know that this fact will apply to every single example of a group. No
further work is required. If tomorrow you come across a quite new kind of mathematical structure, and you determine that what you have is a group, you will know at once that every element of your
group has a single inverse. In fact, you will know that your newly discovered structure possesses every property that can be established—in abstract form—on the basis of the group axioms alone.
(Devlin, 2000a, p. 193)
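The force of the quoted point can be seen in any particular group. A minimal sketch that checks uniqueness of inverses in one concrete instance, the nonzero residues modulo 7 under multiplication (one of the infinitely many structures the abstract result covers at a stroke):

    def op(a, b):
        return (a * b) % 7          # group operation: multiplication modulo 7

    elements = range(1, 7)          # nonzero residues mod 7
    identity = 1

    for a in elements:
        inverses = [b for b in elements if op(a, b) == identity == op(b, a)]
        assert len(inverses) == 1   # exactly one inverse, as the abstract result guarantees
        print(a, "has inverse", inverses[0])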
The abstractness of the concept of a group is captured in a definition offered by Newman (1956a). “The theory of groups is a branch of mathematics in which one does something to something and then
compares the result with the result obtained from doing the same thing to something else, or something else to the same thing” (p. 1534). Browder and Lane (1978) refer to the notion of a group as
having become in the 20th century “the fundamental conceptual and formal tool for mathematical descriptions of the physical world” (p. 343). There are many examples in mathematics of relationships
that have been discovered that are tantalizingly close to being general, but still fall a little short. Fermat’s “little theorem” is a case in point. One would like a formula that will distinguish
all prime numbers from all composites. Fermat claimed that a number, n, is not prime if 2^(n−1) ≠ 1 (mod n). So the inequality identifies (some) nonprimes. However, from 2^(n−1) = 1 (mod n), one cannot conclude that n is prime, because the equality holds for some composite numbers as well as for all primes. Another statement of the theorem is: If n is a whole number and p is a prime number that is not a factor of n, then p is a factor of n^(p−1) − 1.
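A few lines of Python make the asymmetry concrete (the snippet and its function names are illustrative additions, not part of the original text): failing the base-2 test certifies compositeness, but the smallest composite that nevertheless passes it is 341 = 11 × 31.

    # Illustrative check: Fermat's base-2 test.
    # If 2**(n-1) % n != 1, then n is certainly composite; passing the test,
    # however, does not guarantee that n is prime.
    def passes_fermat_base2(n):
        return pow(2, n - 1, n) == 1

    def is_prime(n):
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Smallest composite that passes the base-2 test (a "pseudoprime"):
    print(next(n for n in range(3, 1000)
               if passes_fermat_base2(n) and not is_prime(n)))   # 341 = 11 * 31, yet 2**340 = 1 (mod 341)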
Fermat claimed to have a proof for this theorem, but he did not publish it. Euler constructed a proof of it nearly 100 years later. Closely related to the search for generality in mathematics is the
search for unity. Just as scientists seek to unify the forces of nature by accounting for them within a single cohesive theoretical structure, so mathematicians seek a conceptual framework within
which all of mathematics can be viewed. Interest in unifying different mathematical areas has been a continuing one, and the history of mathematics records many efforts to join various combinations
of the “branches”— arithmetic, geometry, algebra, analysis, and probability theory, among others—in terms of which the mathematical tree has developed. Among the reasons for the attention that
topology has received in the recent past is the prospects that some see for unifying much of mathematics within it. A major unifying concept in mathematics, introduced at the turn of the 20th
century, is that of a set. As Rucker (1982) puts it: “Before long, it became evident that all of the objects that mathematicians discuss— functions, graphs, integrals, groups, spaces, relations,
sequences—can be represented as sets. One can go so far as to say that mathematics is the study of certain features of the universe of set theory” (p. 41). Whether all mathematicians would agree with
this assessment of the unifying nature of set theory, Rucker’s statement illustrates the interest that potentially unifying concepts hold. Many proofs in mathematics are simple and comprehensible;
many are not. Undoubtedly some proofs are complex because simpler ones are impossible, but a difficult proof is often taken as a sign that the essential nature of the problem has not been understood
and as a challenge to construct a simpler one. “Cleaning up old proofs is an important part of the mathematical enterprise that often yields new insights that can be used to solve new problems and
build more beautiful and encompassing theories” (Schechter, 1998, p. 59). Schechter quotes Italian-American mathematician-philosopher Gian-Carlo Rota’s observation that “the overwhelming majority of
research papers in mathematics is concerned not with proving, but with reproving; not with axiomatizing, but with reaxiomatizing; not with inventing, but with unifying and streamlining” (p. 59), and
he notes that Pafnuty Chebyshev’s difficult proof of a conjecture by French mathematician Joseph Bertrand that there is
always at least one prime number between every number and its double stood for 80 years before being tidied up by the teenage Paul Erdös. As Casti (2001) points out, there is no reason to expect that
every short, simple, true statement should have a short, simple, true proof. On the other hand, mathematicians attempt to produce proofs that are as simple as they can be, and they strive to simplify
existing complex proofs whenever they believe it possible to do so. One might wonder why one should believe, in any particular case, that a simpler proof than all those that have already been
developed might be found. One suspects that Stewart (1990) expresses a belief that is deeply held by many mathematicians when he claims that “the best mathematics is always simple, if you look at it
the right way” (p. 73). Unhappily, for most of us much of mathematics seems exceedingly complex, no matter how we look at it. Usually anyone with the necessary training will have no difficulty
understanding each of the steps in a complicated mathematical proof or chain of deductions; grasping the entire proof or deductive chain in its entirety is another matter. A proof can be so complex
that understanding it in its entirety may not be possible. Appel and Haken’s proof of the four-color theorem represented the beginning of a new era in proof making. It involved an excessive amount of
computation, which was done by a computer, and the result was so complicated as to be incomprehensible. Stewart (1987) notes that a full proof, if written out, would be so enormous that nobody could
live long enough to read it, and he asks the obvious question of whether such a monstrosity is really a proof. On that question opinions are divided. To accept it as a proof, which many—but not
all—mathematicians do, requires an act of faith, inasmuch as nobody could check it without the help of computers, even in principle. Another proof the complexity of which challenges comprehension is
the more recent one by Andrew Wiles of Fermat’s last theorem. If, as has been claimed, not more than 1 mathematician in 1,000 can understand Wiles’s argument, perhaps we have to say that the theorem
has been proved to a few people—in particular mathematicians conversant with elliptic equations, modular forms, especially the ground-breaking work of Japanese mathematicians Yutaka Taniyama and Goro
Shimura, and numerous other mathematical arcana—but that for the rest of us to consider it proved requires an act of faith, namely, faith in the competence of those few mathematicians who claim they
understand it well enough to vouch for its adequacy. If it is true that very few mathematicians understand Wiles’s proof in its entirety, it does not follow that only those few are in a position to
challenge its authenticity; one need not follow an argument in its entirety to find flaws in parts of it if such flaws exist. Doubts were
raised about specific aspects of Wiles’s proof soon after it was published (Cipra, 1994a, 1994b), but it appears that the necessary repair work was done in collaboration with Richard Taylor, and as
of May 1995, it had been sufficiently accredited among mathematicians who are deemed capable of passing judgment to warrant an entire issue of Annals of Mathematics (Taylor & Wiles, 1995; Wiles,
1995). This does not ensure that no questions of the adequacy of this proof will ever be raised in the future. It can take a long time for the mathematical community as a whole to make up its mind as
to whether to accept a complicated proof or to keep looking for a fatal flaw. Anyone who is disappointed that Fermat’s last theorem has finally been proved, because he or she had hoped to be the
first to that goal, may still aspire to a place beside Wiles in the mathematical pantheon by inventing a proof of the theorem that is within the grasp of garden-variety minds—or the one that Fermat
might have wished he could have written in the margin of his book. The complexity of some proofs and the energy that can be put into evaluating them are illustrated also by Russian mathematician
Grigori Perelman’s recently announced proof of a conjecture by Poincaré involving the topological equivalence of a three-dimensional manifold and a three-dimensional sphere under certain conditions.
Poincaré made the conjecture 100 years ago, but neither he nor any of the many mathematicians who tried between then and now were able to prove it until Perelman came along. Perelman was offered, and
declined to accept, the prestigious Fields Medal for the accomplishment, and appears to have stopped doing mathematics, at least as a professional, since 2003, being disillusioned by the perceived
unseemly behavior—apparently relating at least in part to issues of credit—of some fellow mathematicians. As of 2006, it was not clear whether, if offered it, he would accept the $1 million prize
provided by the Clay Institute for the proof of Poincaré’s conjecture (Nasar & Gruber, 2006). Possibly the most monstrous proof ever is one in group theory—the classification theorem for finite
simple groups. Authored by hundreds of mathematicians and published in pieces in a variety of journals, it has been estimated to be around 15,000 pages long and, not surprisingly, is known among
mathematicians as simply the enormous theorem (Conway, 1980; Davis & Hersh, 1981; Stewart, 1987). Although it is doubtful that anyone understands this proof in its entirety in any very deep sense,
many mathematicians rely on it for various purposes. A considerable effort is being devoted to finding a way to shorten the proof to the point where it could be understood by a single individual;
although progress to this end has been reported, the task is proving to be formidable (Cipra, 1995).
The idea that credence would be given to a mathematical proof that relatively few mathematicians understand may come as a surprise to nonmathematicians. What may be even more surprising is that such
proofs are not rare exceptions. Mathematics is not the homogeneous discipline that it is commonly believed to be; it is a very large territory and individual mathematicians spend their lives
exploring small parts of it. Frontier-extending work often is understood by very few people other than those who are doing it. As Davis and Hersh (1981) put it, “The ideal mathematician’s work is
intelligible only to a small group of specialists, numbering a few dozen or at most a few hundred. This group has existed only for a few decades, and there is every possibility that it may become
extinct in another few decades” (p. 34). That what appears clear to one mathematician may prove to be abstruse to another is illustrated by the fact that French mathematician Siméon-Denis Poisson,
for whom the famed Poisson distribution of probability theory is named, found the paper in which Galois presented what later became widely recognized as the beautiful Galois theory to be
incomprehensible (Aczel, 1996). Why should anyone accept as a mathematical proof an argument that few, if any, people can understand in its entirety? Because “the strategy makes sense, the details
hang together, nobody has found a serious error, and the judgment of the people doing the work is as trustworthy as anybody else’s” is Stewart’s answer (1987, p. 117). Regarding proofs that depend on
many computer computations, Appel and Haken (1978) acknowledge that there is a tendency among mathematicians to prefer proofs that can be checked by hand over those that can only be checked by
computer programs, but argue with respect to long computationally complicated proofs that, even when it is feasible to check them by hand, the probability of human error is likely to be higher than
the probability of computer error. That most known proofs of the past are reasonably short they attribute to the lack of tools for producing extraordinarily long proofs: “If one only employs tools
which will yield short proofs that is all one is likely to get” (p. 178). On the other hand, Paulos (1992) points out that even if the probability that any specific bit of a proof as complicated as
the four-color theorem is in error is minuscule, there are so many opportunities for the small probability event to occur that the most one should conclude is that such a proof is probably true,
which is not the same, in his view, as being “conclusively proved.” At best, such proofs are less than completely satisfying psychologically, because making sure a conjecture is true is only one
reason for wanting proof in mathematics; another is to understand why it is true. Of course, what it means to understand a proof (or anything else, for that matter) is a nontrivial question in its
own right; suffice it to say here that proofs differ in the degree to which they make us believe we
understand, and most of us probably have a preference for those that do a good job of that. Stewart (1987) refers to an illuminating proof as the kind of proof that mathematicians really like, but
notes that what constitutes such a proof is more a matter of taste than of logic or philosophy. So although mathematicians greatly prefer simple proofs to complicated ones, they often have to settle
for the latter. But the search for simplifications continues. Insofar as possible, one wants proofs that are simple enough to be understood in their entirety and simple enough so that one can have
relatively high confidence that they do not conceal undetected faults.
☐☐ Some Notable Proofs
There are millions of proofs in the mathematical literature—some simple, some complex, some elegant, some ugly. Most of us will never see the vast majority of those proofs, and
probably neither will most professional mathematicians. There are a few proofs, however, that receive special attention in books on the history of mathematics. What determines that a proof is
considered worthy of such attention is an interesting question. Possible factors include simplicity, elegance, novelty (as when a proof constitutes an entirely new way of looking at a relationship),
difficulty, and implications (as when a proof opens up a new area of mathematics). We have already encountered some proofs that have received a great deal of attention, among them Georg Cantor’s
(1911/1955) proof that the real numbers are not countable, Kenneth Appel and Wolfgang Haken’s (1977a) proof of the four-color theorem, and Andrew Wiles’s (1995) proof of Fermat’s last theorem. Dunham
(1991) describes, and provides the historical context of, several proofs of what he characterizes as the great theorems of mathematics. These include:
• Hippocrates’ proof of a procedure for quadrature (squaring the area) of a lune
• Euclid’s proof of the Pythagorean theorem
• Euclid’s proof that there are an infinity of prime numbers
• Archimedes’ proof of a procedure for determining the area of a circle to any desired accuracy
• Heron of Alexandria’s proof of a procedure for determining the area of a triangle, given only the lengths of its three sides
• Cardano’s proof of a rule for solving a cubic equation
• Newton’s proof of a method for approximating π
• The brothers (Johann and Jakob) Bernoulli’s proof that the harmonic series, ∑ 1/n for n = 1 to ∞, diverges
• Euler’s proof that the sum of the reciprocals of the squares, ∑ 1/n² for n = 1 to ∞, converges
• Euler’s proof that Fermat’s conjecture that all numbers of the form 2^(2^n) + 1 are primes is false (a quick numerical check of Euler’s counterexample follows this list)
• Cantor’s proof that any set is smaller than its power set (the power set of a set is the set of all its subsets)
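As flagged in the Euler item above, the counterexample that defeats Fermat’s conjecture is the fifth Fermat number, 2^(2^5) + 1 = 4,294,967,297, which Euler factored as 641 × 6,700,417. A two-line numerical check (an added illustration, not part of the original text):

    # Euler's counterexample: the fifth Fermat number is composite.
    n = 2 ** (2 ** 5) + 1          # 4294967297
    print(n == 641 * 6700417)      # True: Euler's factorization
    print(n % 641)                 # 0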
☐☐ The Transitory Nature of Understanding
Understanding a proof, or any other argument, must depend, in part, on having some familiarity with the concepts of which it is composed. But this cannot be
the whole story. Relevant background knowledge may be a necessary condition for understanding, but it is not a sufficient one. Moreover, why one knowledgeable individual finds an argument compelling
and another does not—whether in mathematics or elsewhere—is not always clear. Perhaps even more perplexing is why the same individual will find the same argument persuasive on one occasion and not on
another. Bertrand Russell (1944/1956b) reports being able to remember the precise moment “one day in 1894, as I was walking along Trinity Lane,” when he “saw in a flash (or thought I saw) that the
ontological argument is valid. I had gone out to buy a can of tobacco; on my way back, I suddenly threw it up in the air and exclaimed as I caught it: ‘Great Scot, the ontological argument is sound’”
(p. 386). What happened in Russell’s head that convinced him at that particular moment that this argument, with which he was thoroughly familiar and that, up until that time, he had not found
compelling, was valid? Had his brain been busy weighing the strengths and weaknesses of the argument and suddenly found the scale tipping in favor of the pros? Russell
wrote these words many years after the incident he described and long after he no longer considered the ontological argument to be sound. So we have an example of the interesting case of believing an
argument to be unsound, then believing it to be sound, and then again believing it to be unsound. Even outstanding mathematicians can be unsure of whether an ostensible proof is really a proof.
Cantor sought for years for a compelling proof of his “continuum hypothesis,” and he many times thought he had found it, only soon to become convinced that he had not, or even to be convinced that he
had now found a proof that it was false (Aczel, 2000). Cantor’s repeated unsuccessful attempts to prove the continuum hypothesis have been considered by some to be the cause of, or at least a factor
contributing to, his bouts of depression, which tended to occur when he was working on the problem. Long after his death, the hypothesis was demonstrated to be not decidable. Probably most of us have
had the experience of being convinced on one occasion that a particular argument is compelling and equally convinced on another that it is not. This should make us cautious about taking dogmatically
rigid stands on controversial issues even when we are quite sure (at the moment) that we are right. It also should make us sympathetic to the notion that all proofs are tentative, in a sense. One can
never be certain that what one takes to be an indisputable argument at a given time will be perceived by all others as indisputable, or even that it will still be perceived so by oneself at a later
time. This type of uncertainty is one of the reasons why some theorists discount feelings of conviction as at all relevant to the question of the soundness of proofs or logical arguments. Hempel
(1945), for example, contends that both the idea that the feeling of plausibility should decide the acceptability of a scientific hypothesis and the opinion that the validity of a mathematical proof
should be judged by reference to convincingness have to be rejected on analogous grounds: They involve a confusion of logical and psychological considerations. Clearly, the occurrence or
non-occurrence of a feeling of conviction upon the presentation of grounds for an assertion is a subjective matter which varies from person to person, and with the same person in the course of time;
it is often deceptive, and can certainly serve neither as a necessary nor as a sufficient condition for the soundness of the given assertion. A rational reconstruction of the standards of scientific
validation cannot, therefore, involve reference to a sense of evidence; it has to be based on objective criteria. (p. 9)
But how is one to judge the merits of Hempel’s own argument regarding the need for objective criteria, except by appealing to one’s
intuitions—to one’s feelings of soundness or convincingness? And, assuming one accepts the argument, how is one to decide what the objective criteria should be? Objectivity is the abiding goal of
empiricism, but it is elusive. We want objective criteria, and groups of people may be able to agree as to what those criteria should be in specific instances, but the opinions from which such
agreement is derived must be based, in the final analysis, on feelings or convictions of correctness that are subjective through and through.
CHAPTER
Informal Reasoning in Mathematics
A mathematician’s work is mostly a tangle of guesswork, analogy, wishful thinking and frustration, and proof, far from being the core of discovery, is more often than not a way of making sure that
our minds are not playing tricks. (Rota, 1981)
The characterization of mathematics as a deductive discipline is accurate but incomplete. It represents the finished and polished consequences of the work of mathematicians, but it does not
adequately represent the doing of mathematics. It describes theorem proofs but not theorem proving. Moreover, the history of mathematics is not the emotionless chronology of inventions of evermore
esoteric formalisms that some people imagine it to be. It has its full share of color, mystery, and intrigue. That the process of mathematical discovery is not revealed in the finished proofs that
mathematicians publish was pointed out by Evariste Galois, the brilliant French mathematician who, after inventing group theory, died in a duel at the age of 21. It has been convincingly documented
by Polya (1954a, 1954b) and Lakatos (1976). In addition to deducing the implications of axioms, mathematicians also invent new axiomatic systems, and this cannot be done by deductive reasoning alone.
As Polish-American mathematician Stanislav Ulam (1976) puts it, “In mathematics itself, all is not a question of rigor, but rather, at the start, of reasoned intuition and imagination, and, also,
repeated guessing” (p. 154). Rucker (1982) makes essentially the same point: “In the initial stages of research, mathematicians do not seem to function like
theorem-proving machines. Instead, they use some sort of mathematical intuition to ‘see’ the universe of mathematics and determine by a sort of empirical process what is true. This alone is not
enough, of course. Once one has discovered a mathematical truth, one tries to find a proof for it” (p. 208). Fundamental to an understanding of mathematical reasoning is the distinction between
mathematics as axiomatic systems and the thinking done by mathematicians in the process of defining and refining those systems. The systems themselves are better viewed as the results of mathematical
thinking than as illustrations of it. Theorems make explicit what is implicit in a discipline’s axioms and its theorems, but they do not reveal much of the nature of the reasoning that resulted in
this explication. In somewhat oversimplified terms, the distinction contrasts proofs with proof making. Polya (1945/1957) captures this distinction in noting the difference between mathematics, when
presented with rigor as a systematic deductive science, and mathematics in the making, which he describes as an experimental inductive science. Penrose (1989) makes a similar distinction in noting
that a rigorous argument typically is the last step in the mathematician’s reasoning and that generally many guesses—albeit constrained by known facts and logic—precede it. Casti (2001) contends that
mathematics—the doing of mathematics—is an experimental activity, and points to Gauss’s notebooks as supportive evidence of the claim.
☐☐ Knowledge in Mathematics
A great many problems are easier to solve rigorously if you know in advance what the answer is. (Stewart, 1987, p. 65)
To be a good historian one must know a great many historical facts; to be a competent lawyer one must be familiar with much law and numerous legal cases; physicians are expected to know much of what
there is to know about the human physiological system or at least that part of it on which they specialize. I suspect that similar assumptions are not made about mathematicians; what is assumed to be
required to do well at mathematics is to be able to reason well, not necessarily to know a lot of facts. Mathematics, being the relatively abstract and content-free type of entity that it is, is not
usually thought of as a knowledge-intensive discipline. The well-known fact that mathematicians tend to make their major discoveries while relatively young, often too young to have amassed a large
amount of mathematical knowledge, generally supports this view.
It may be that if the various cognitively demanding professions were ordered in terms of the extent to which success depends on accumulating a large body of factual knowledge, mathematics, especially
theoretical mathematics, would be at, or close to, the bottom of the list. There is perhaps no other profession in which one can accomplish so much on the basis of abstract thought alone. But this is
not to say that knowledge is of no use to mathematicians, or that it never plays a role in mathematical discoveries. Extraordinary mathematicians not only are very good thinkers, but often they know
a lot—have a large storehouse of facts—about mathematical entities and operations. Ulam (1976) considered a good memory to be a large part of the talent of both mathematicians and physicists. British
mathematician John Littlewood once claimed that every positive integer was one of Srinivasa Ramanujan’s personal friends. By way of lending credence to this claim, Littlewood’s colleague and fellow
British mathematician G. H. (Godfrey Harold) Hardy (1940) recounts an occasion on which he informed Ramanujan that the taxicab in which he had just ridden had the “rather dull” number, 1729.
Ramanujan informed him that he was wrong about the dullness of the number, because it was the smallest number that could be expressed as the sum of two cubes in two different ways—12³ + 1³ and 10³ + 9³. On the other hand, because Ramanujan lacked knowledge of what other mathematicians had done, many of his own discoveries were rediscoveries of what others had discovered before him. The
indispensable role of knowledge in the development of some cutting-edge proofs is seen clearly in Wiles’s proof of Fermat’s last theorem, which makes extensive use of the work of numerous preceding
mathematicians (see Chapter 5). Often history credits an individual with a major mathematical innovation, and one can get the impression that the contribution was made single-handedly in a vacuum.
When one looks more closely at the circumstances under which the innovation was made, however, one is likely to find that the innovator benefited greatly from the work of forerunners with which he or
she was thoroughly familiar. The importance of knowledge to mathematical discovery is also illustrated by the numerous occasions in the history of mathematics on which two or more people have come up
with the same innovation independently at the same, or approximately the same, time. (For lists of simultaneous, or near simultaneous, discoveries or inventions in mathematics and science, see Ogburn
[1923, pp. 90–102] and Simonton [1994, pp. 115–122].) One example of simultaneous, or near simultaneous, discoveries in mathematics is that of logarithms by Scottish mathematician John Napier, German
mathematician Michael Stifel, and Swiss clockmaker–mathematician Joost Bürgi.
Another is that of hyperbolic geometry by Lobachevsky, Gauss, and Bolyai. Perhaps the best known instance of a major mathematical innovation that has been credited to two people working independently
is the infinitesimal calculus by Gottfried Leibniz and Isaac Newton. We will have occasion to revisit this event later. The point I wish to make here is that such occurrences would be remarkable
coincidences if it were not the case that mathematical innovations spring from the soil of current mathematical knowledge. Individuals who have added significantly to that knowledge have done so by
building on what already exists. Mathematicians, like lawyers, often find it convenient to refer to similar cases with which they are familiar and for which they know the solution or disposition. A
large mathematical knowledge base permits the mathematician to classify problems as to types, to anticipate the form that a solution is likely to take, and to select an appropriate approach. It may
permit one also to judge whether a problem is reasonable, whether it is well or poorly formed, and whether it is important or trivial. Because mathematical knowledge accumulates and mathematicians
build on the work of their predecessors, much of the work done by mathematicians in any given age could not have been done at an earlier time. There are some notable exceptions to this rule, but, in
the main, it stands. It follows also that the opportunities for mathematical discoveries were never greater than they are now, because mathematicians never had more on which to build. Indeed, we are
living in a period of unusual mathematical inventiveness; by some estimates, more mathematics has been created during relatively few decades in the recent past than in all previous time. Fifty years
ago Kline (1953a) pointed out that the elementary school graduate of that day knew far more mathematics than a learned mathematician of Medieval Europe. This is a sobering and exciting thought.
☐☐ Intuition
We have noted that current attitudes regarding the intuitive status of the axioms of a mathematical system, say geometry, are different from the attitudes of the ancients and from those
of even philosophers of only a couple of centuries ago. No longer is it held that the empirical truth—correspondence to physical reality—of such axioms must be intuitively obvious. Does this mean
that intuition plays no role in modern mathematics, or in the thinking of modern mathematicians? In fact, intuition is a very important factor in the psychology of mathematics, in the sense that
mathematicians spend a great deal of time exploring guesses and
checking out hunches in their efforts to discover and prove new theorems. Proofs and proof making are at the core of mathematics, but what tells the mathematician what to try to prove? Hersh (1977)
refers to intuition as “an indispensable partner to proof,” and argues that one sees it everywhere in mathematical practice. One evidence of intuition at work that has been noted by many writers is
that mathematicians have typically made effective use of concepts and relationships before they have been proved. British mathematician–logician Philip Jourdain (1913/1956) puts it this way: “In
mathematics it has, I think, always happened that conceptions have been used long before they were formally introduced, and used long before this use could be logically justified or its nature
clearly explained. The history of mathematics is the history of a faith whose justification has been long delayed, and perhaps is not accomplished even now” (p. 35). Dantzig (1930/2005) notes that in
the history of mathematics the how has always preceded the why, which is to say that the invention of problem-solving techniques has preceded the development of theoretical explanations of why they
work. He argues that this is particularly true of arithmetic, the technique of counting and the rules of reckoning being established hundreds of years before the emergence of a philosophy of number.
Dantzig notes too that the same history reveals that progress in mathematics has been erratic and that intuition has played a predominant role in it. Kline (1980) contends that the essential idea of
new mathematical work is always grasped intuitively before it is explicated by a rational argument, and that mathematical creation is the special province of those “who are distinguished by their
power of intuition rather than by their capacity to make rigorous proofs” (p. 314). Nasar (1998) emphasizes the importance of intuition in the work of contemporary American mathematician–economist
John Nash, which she likens to that of other great “mathematical intuitionists,” Riemann, Poincaré, and Ramanujan: “Nash saw the vision first, constructing the laborious proofs long afterward. But
even after he’d try to explain some astonishing result, the actual route he had taken remained a mystery to others who tried to follow his reasoning” (p. 12). Intuition also figures in the work-a-day
world of mathematicians in the monitoring and evaluating of their own work. Hadamard (1945/1954) points out, for example, that although good mathematicians frequently make errors—he claims to have
made many more of them than his students—they usually recognize them as errors and correct them so no trace of them remains in the final result. But how do they recognize the errors as such? Hadamard
refers to a “scientific sensibility,” which he calls insight, but perhaps might as appropriately be called intuition, that warns the mathematician that the calculations do not look as they ought to
look. It is not clear how best to account for this in current psychological
terms; possibly what clues one that things are not right is a realization of incompatibility between the “answer” in hand, or the process by which it was obtained, and other relevant things one
knows. But intuition also plays a much more fundamental role in mathematics, considered as a deductive system. If, as Whitehead and Russell (1910) argue, mathematics is reducible to logic, our
acceptance of the basic rules of the former rests on our acceptance of the basic rules of the latter. And our acceptance of the rules of logic is a matter of intuition because there is no way to
justify them without appeal to the very laws one is seeking to justify. One might protest that intuition is an unreliable basis for anything, because intuitions change over time. Ideas that are
intuitively acceptable at one time may be unacceptable at another, and conversely. Intuitions do change as a consequence of learning and developing the ability to see things from new perspectives.
But changes of perspective can occur only to the extent that one becomes convinced, intuitively, of their justification. As Kline (1980) puts it, “We are now compelled to accept the fact that there
is no such thing as an absolute proof or a universally acceptable proof. We know that, if we question the statements we accept on an intuitive basis, we shall be able to prove them only if we accept
others on an intuitive basis” (p. 318). Undoubtedly, intuition can lead one into paradoxes and other types of mathematical quicksand. On the other hand, what one is willing to accept as a resolution
of a paradox, or solutions of other types of problems, with the help of definitions and formalisms must ultimately depend on how intuitively compelling one finds such a resolution or solutions to be,
given the definitions and formalisms proposed. One may find oneself in the situation of saying yes, given such and such a definition or formalism, I accept the conclusion that follows, while
recognizing that the acceptance is provisional on the givens, and one may harbor some reservations about the givens as representing the way things are as distinct from being the specifications of an
arbitrary abstract game. Defining an infinite set as any set the elements of a proper subset of which can be put into one-to-one correspondence with the elements of the set, and defining equality as
one-to-one correspondence answers the question of how it is that the set of all even integers can be said to equal the set of all integers, but only for one who is willing to accept the definitions.
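To make the correspondence concrete (a small added illustration, not the author’s): the pairing n ↔ 2n assigns to each positive integer exactly one even integer and uses every even integer exactly once, which is all the definition asks.

    # Pair each positive integer n with the even integer 2n; no integer is left
    # unmatched and no even integer is used twice.
    pairs = [(n, 2 * n) for n in range(1, 8)]
    print(pairs)   # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10), (6, 12), (7, 14)]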
☐☐ Insight
Numerous mathematicians have reported the experience of suddenly realizing the solution to a problem on which they had labored unsuccessfully for some time, but about which they were not
consciously thinking
when the solution came to mind. Poincaré (1913) reports realizing, as he was about to enter a bus, the solution of a problem relating to the theory of Fuchsian functions with which he had struggled
to no avail for a fortnight. Immediately prior to the insight he had been traveling, and by his own account, the incidents of the travel had made him forget his mathematical work. Poincaré
experienced other moments of insight or “sudden illumination” at unexpected times, and the experiences left him convinced of the importance of unconscious work in mathematical invention. Many other
mathematicians have claimed to have experienced sudden insights at unexpected moments. Gauss reported seeing, “like a sudden flash of lightning,” the solution to a problem on which he had worked
unsuccessfully for years, and confessed to not being able to say “what was the conducting thread which connected what I previously knew with what made my success possible” (quoted in Hadamard, 1945/
1954, p. 15). We have already noted that, when only 18, Gauss discovered how to construct a 17-sided regular polygon with straightedge and compass. According to his account of this discovery, the
enabling insight occurred to him (after concentrated analysis) before getting up from bed one morning while on vacation. The insight involved the expression
−1/16 + (1/16)√17 + (1/16)√(34 − 2√17) + (1/8)√(17 + 3√17 − √(34 − 2√17) − 2√(34 + 2√17))
(Kaplan & Kaplan, 2003)—not your run-of-the-mill initial wake-up thought. Irish mathematician-physicist William Hamilton tried unsuccessfully for 10 years to extend work that he had done on what
later became known as complex numbers to three dimensions by using ordered number triples instead of couples. One day, during a stroll with his wife, it suddenly occurred to him to try using
quadruples instead of triples. By doing so, and dropping the commutative law of multiplication, he created a new type of number—the quaternion. Though still very useful in certain contexts, the
quaternion has, for most purposes, been eclipsed by the invention of vectors and matrices, which have many capabilities in common with it. Spurred by an interest in Gödel’s incompleteness theorem,
and Hilbert’s dream of a machine that could decide, of any statement, whether it could be proved, Alan Turing became intrigued by the question of whether it was possible to prove that such a machine
could not be built. His answer rested on an insight he had while jogging in Cambridge. The insight involved seeing a connection between the proof he was seeking and the approach Cantor had taken to
prove that irrational numbers
outnumber rational numbers. From there he was able to construct the proof he sought. Hungarian-American mathematician Paul Halmos (1985) gives an engaging account of his struggle with, and eventual
solution of, a problem in algebraic logic: I lived and breathed algebraic logic during those years, during faculty meetings, during my after-lunch lie-down, during concerts, and, of course, during my
working time, sitting at my desk and doodling helplessly on a sheet of yellow paper. … The theorem that gave me the most trouble was the climax of Algebraic Logic II. … I remember the evening when I
got over the last hurdle. It was 9 o’clock on a nasty, dark, chilly October evening in Chicago; I had been sitting at my desk for two solid hours, concentrating, juggling what seemed like dozens of
concepts and techniques, fighting, writing, getting up to walk across the room and then sitting down again, feeling frustrated but unable to stop, feeling an irresistible pressure to go on. Paper and
pencil stopped being useful—I needed a change—I needed to do something—I pulled on my trenchcoat, picked up my stick, and mumbling “I’ll be back,” I went for a walk, out toward the lake on 55th
street, back on 56th, and out again on 57th. Then I saw it. It was over. I saw what I had to do, the battle was won, the argument was clear, the theorem was true, and I could prove it. Celebration
was called for. … (p. 211)
Andrew Wiles pays tribute to the role of the subconscious in helping one get by what appears to be an insurmountable impasse in a problem-solving effort, but he stresses also the importance of
prolonged concentrated work on the problem first: “When you’ve reached a real impasse, when there’s a real problem that you want to overcome, then the routine kind of mathematical thinking is of no
use to you. Leading up to that kind of new idea there has to be a long period of tremendous focus on the problem without any distraction. You have to really think about nothing but that problem—just
concentrate on it. Then you stop. Afterwards there seems to be a kind of period of relaxation during which the subconscious appears to take over, and it’s during that time that some new insight
comes” (quoted in Singh, 1997, p. 208). Regarding his own concentrated effort to prove a conjecture (the Taniyama-Shimura conjecture that all elliptic equations are modular) that was essential to his
proof of Fermat’s last theorem: “I carried this thought around in my head basically the whole time. I would wake up with it first thing in the morning, I would be thinking about it all day, and I
would be thinking about it when I went to sleep. Without distraction I would have the same thing going around and around in my mind” (quoted in Singh, 1997, p. 211). Hadamard (1945/1954) also
stresses the role that unconscious thought plays in mathematical invention or discovery: “Strictly speaking, there is
hardly any completely logical discovery. Some intervention of intuition issuing from the unconscious is necessary at least to initiate the logical work” (p. 112). Hadamard recognizes four phases of
mathematical discovery that British psychologist Graham Wallas (1926/1945) had articulated—preparation, incubation, illumination, and verification—and points out that at least the first three of them
had been discussed by others, notably Helmholtz and Poincaré, before Wallas’s treatment of them. Whether in mathematics or elsewhere, invention and discovery take place, in Hadamard’s view, by
combining ideas. But most of the countless combinations that could be made are not useful and are effectively filtered out by a process of which we are not aware, so most of those combinations of
which we are conscious are fruitful or at least potentially so. While a strong believer in the effectiveness of unconscious thought processes, Hadamard cautions that the feeling of certitude that
often accompanies an unexpected inspiration can be misleading, so it is essential that what appear to be insights be verified in the light of reason.
☐☐ Origins of Mathematical Ideas
In mathematics, as in science, it is difficult to trace ideas to their origins. We typically associate one or a few specific names with each major development, but a
close look generally reveals other lesser-known contributors on whose work those we remember built. What appears often to happen is that an idea emerges, perhaps in more than one place, in a vague
and imprecise form and that over a period of time it is explicated and refined sufficiently to become part of the mainstream of mathematical thought. The names that become associated with such ideas
in the historical record are as likely to be those of the individuals who helped to sharpen them or to communicate them to the mathematical community as those of the people who were more responsible
for their initial emergence. The relationship between the hypotenuse and the other two sides of a right triangle that is stated in the theorem that bears the name of Pythagoras was widely known among
the Babylonians long before the time of Pythagoras, and perhaps also among the Hindus of Iran and India, and in China, as well. According to McLeish (1994), it was known even by the builders of
megalithic structures in Britain also long before the time of Pythagoras. It is not clear, however, that the relationship was “known” in the same sense in every instance; knowing that it held in
certain cases—as evidenced by the “Pythagorean triples” shown on the Babylonian tablet, Plimpton 322—does not require recognition of the generality of the relationship. And knowing that a rope marked
in, say, 3, 4, and 5 units could be used to construct right angles, as it is believed that the Egyptians knew, does not require explicit recognition of the Pythagorean relationship at all. “Pascal’s
triangle” of binomial coefficients was described by Arab and Chinese mathematicians several centuries before it was published by French mathematician-philosopher Blaise Pascal in 1665. The
13th-century Chinese mathematician Yang Hui, for example, knew of it (Dunham, 1991). McLeish (1994) credits its discovery to Halayudha, a 10th-century Jaina mathematician who predated Yang Hui. (There
are discrepant accounts of when he lived; but most references I found claimed 10th century AD.) It appeared in several works published in Europe during the 16th century. In short, as a discoverer of
Pascal’s triangle, Pascal had many predecessors, none of whom, incidentally, acknowledged the others. It could be, of course, that most of them did not know of the others. It is true of many
mathematical discoveries, as with this one, that we cannot be sure who first made them—we cannot rule out the possibility that they were made by people whose names will never be known; we can be
sure, however, that the people whom history has credited with them were not the first in many cases, because the evidence of predecessors is clear. That Pascal’s name has been attached to the
triangle is not without some justification. Flegg (1983) describes his 1665 exposition of the triangle’s properties as standing out from all the others for its elegant and systematic thoroughness.
Western histories of mathematics are likely to emphasize the contributions of the Greeks, Indians, and Arabs to the early development of mathematics; apparently the Chinese too did much early work.
McLeish (1994) sees the insights of the Greek and Arab mathematicians as mainly derivative from the discoveries of the Chinese, Indians, and Babylonians, which is not to deny the genius of specific
Greek and Arab individuals. He credits the Chinese with inventing “the decimal system, the place-value concept, the idea of zero and the symbol we use for it, the solution of indeterminate, quadratic
and higher-order equations, modular arithmetic and the remainder theorem” (p. 71) centuries, if not millennia, before they were taken up, or independently discovered, by European scholars. According
to Flegg (1983), simultaneous linear equations were solved in China from as long ago as there are surviving mathematical records, and methods of solution using bamboo rods are known to have been in
existence as early as 1000 BC. McLeish notes also that with respect to arithmetic, the Maya, who flourished in the region of the Yucatan during the European middle ages, were far ahead of Europe in
some respects; their use of zero, for example, predated the adoption of that symbol in Europe by a long time. Quite possibly a great deal of mathematics was done in
various parts of the world that is not generally described in Western accounts of the history of math. The extent to which specific ideas passed from one culture to another, as opposed to arising
independently in different places at different times, is unclear in many cases. The use of a symbol to serve the purpose that zero now serves is an especially interesting case of an invention that
probably occurred independently several times; such a symbol was used by the Babylonians, the Indians, the Chinese, and the Mayans, and conceivably was invented independently in each case. Very
readable histories of this enormously important concept and symbol have recently been provided by Kaplan (1999) and Seife (2000). That Newton and Leibniz quarreled over the question of which of them
had precedence in the development of the infinitesimal calculus is well known. Newton’s thinking about the calculus presumably benefited from lectures of his teacher, British mathematician Isaac
Barrow, who had developed methods for finding areas and tangents to curves. Much less widely known than the work of Newton and Leibniz is that the calculus was invented at about the same time in
Japan by Kowa Seki (sometimes Seki Kowa), who did not publicize the invention (Davies, 1992). Moreover, Fermat anticipated them all with a method of differentiation that is essentially the one that
is used today, but he did not see his method as a general one that was suitable for a whole class of problems and that deserved further investigation and development (Hadamard, 1945/1954). Most of
Fermat’s work in this area, as well as what he did pre-Descartes in analytical geometry, was not published until after his death. The calculus encompasses a number of inventions for which several
mathematicians deserve credit. “By even the simplest accounting, royalties would need to be shared by a good dozen mathematicians in England, France, Italy, and Germany who were all busily ramifying
Kepler and Galileo’s work on functions, infinite series, and the properties of curves” (Wallace, 2003, p. 126). The idea of approximating the properties of curved figures by aggregations of
successively smaller rectangles, which is fundamental to the calculus, goes back at least to the “method of exhaustion” used by the Greek mathematicians-philosophers Antiphon, Eudoxus, and
Archimedes. The method is illustrated by the approximating of the area of a circle by determining the area of an inscribed many-sided regular polygon; what is exhausted is the difference between the
area of the circle and that of the inscribed polygon as the number of sides of the polygon is increased. By starting with hexagons and doubling the number of sides four times to arrive at polygons of
96 sides, Archimedes determined the value of π to be between 3 10/71 and 3 1/7 (Beckman, 1971).
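A short numerical sketch of the side-doubling scheme may be helpful (an added illustration; Archimedes, of course, worked with rational bounds and without trigonometric functions). It tracks the half-perimeters of the circumscribed and inscribed polygons of a unit circle from the hexagon up to 96 sides:

    import math

    # Half-perimeters of the circumscribed (a) and inscribed (b) hexagons of a
    # unit circle; doubling the number of sides four times gives 96-gons.
    a = 6 / math.sqrt(3)           # 6 * tan(pi/6)
    b = 3.0                        # 6 * sin(pi/6)
    sides = 6
    for _ in range(4):             # 6 -> 12 -> 24 -> 48 -> 96
        a = 2 * a * b / (a + b)    # circumscribed bound for the doubled polygon
        b = math.sqrt(a * b)       # inscribed bound for the doubled polygon
        sides *= 2

    print(sides, b, a)             # 96  3.14103...  3.14271...
    print(3 + 10/71, 3 + 1/7)      # Archimedes' bounds: 3.14084... and 3.14285...

The computed bounds fall just inside Archimedes’ slightly looser rational ones.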
We think primarily of Lobachevski, Bolyai, Riemann, and perhaps Gauss as the originators of non-Euclidean geometry, but Giovanni Saccheri, George Klügel, and Johann Heinrich Lambert all did prior
work in this area; even George (Bishop) Berkeley expressed doubts about the absoluteness of the truths of Euclidean geometry. The plane on which complex numbers are represented as points is known
today as the Gaussian plane, but although Gauss did promote this form of representation, the method was described in publications by both Norwegian-born surveyor Caspar Wessel and French accountant
Jean-Robert Argand several years before Gauss published about it. Looking back over the history of mathematics, it is hard to understand why some of the ideas that we take for granted today took so
long to become established. Some observers have argued that the acceptance of certain ideas or attitudes at critical times precluded or postponed the development or wide acceptance of other ideas,
and that the history of both mathematics and science could have been quite different if this had not been the case. Bell (1945/1992) argues that the ancient Greeks, despite their spectacular
accomplishments, delayed the development of mathematics and science for centuries by virtue of the Pythagoreans’ adoption of number mysticism with its abhorrence of empiricism and by their failure to
appropriate the algebra of the Babylonians. “Had the Pythagoreans rejected the number mysticism of the East when they had the opportunity, Plato’s notorious number, Aristotle’s rare excursions into
number magic, the puerilities of medieval and modern numerology, and other equally futile divagations of pseudo mathematics would probably not have survived to this day to plague speculative
scientists and bewildered philosophers. … If on the other hand the early Greeks had accepted and understood Babylonian algebra, the time-scale of mathematical development might well have been
compressed by more than a thousand years” (p. 54). In Bell’s view the Greeks more than made up for these sins against the future, however, by two monumentally great and lasting contributions to
thought: “explicit recognition that proof by deductive reasoning offers a foundation for the structures of number and form” and “the daring conjecture that nature can be understood by human beings
through mathematics, and that mathematics is the language most adequate for idealizing the complexity of nature into apprehensible simplicity” (p. 55). Where do individual mathematicians get their
ideas for original work? The same question may be asked, of course, regarding creative endeavor in any field. Where do creative thinkers get their ideas? Halmos (1985), in considering this question,
emphasizes the catalytic role of specific concrete problems. He categorically rules out the possibility that good ideas generally come from a desire to generalize, and claims that
just the opposite is true: “The source of all great mathematics is the special case, the concrete example. It is frequent in mathematics that every instance of a concept of seemingly great generality
is in essence the same as a small and concrete special case. Usually it was the special case that suggested the generalization in the first place” (p. 324). Kac (1985) distinguishes between two kinds
of mathematical creativity. One, which he likens to the conquering of a mountain peak, “consists of solving a problem which has remained unsolved for a long time and has commanded the attention of
many mathematicians. The other is exploring new territory” (p. 39). Byers (2007) sees ambiguity as a major source of mathematical creativity. “Ambiguity is not only present in mathematics, it is
essential. Ambiguity, which implies the existence of multiple, conflicting frames of reference, is the environment that gives rise to new mathematical ideas. The creativity of mathematics does not
come out of algorithmic thought; algorithms are born out of acts of creativity, and at the heart of a creative insight there is often a conflict—something problematic that doesn’t follow from one’s
previous understanding” (p. 23). Byers makes ambiguity the central theme of his book-length treatment of the question of how mathematicians think. Logic is necessary too, as Byers sees it, but no
more so than ambiguity: “Logic moves in one direction, the direction of clarity, coherence, and structure. Ambiguity moves in the other direction, that of fluidity, openness, and release. Mathematics
moves back and forth between these two poles. … It is the interactions between these different aspects that give mathematics its power” (p. 78). Lakoff and Núñez (2000; Núñez & Lakoff, 1997, 2005)
emphasize the role that metaphor plays as a source of mathematical ideas. Indeed, they see metaphor to be central to all thought: “the basic means by which abstract thought is made possible” (p. 39).
They argue that conceptual metaphor is not just facilitative of understanding mathematics, but essential to it. Much of the abstraction of higher mathematics, they contend, “is a consequence of the
systematic layering of metaphor upon metaphor, often over the course of centuries” (p. 47). All of mathematics, Lakoff and Núñez (2000) argue, is derivable, psychologically, from four grounding
metaphors (4Gs)—object collection (forming collections), object construction (combining objects to form new collections), the measuring stick (using devices to make measurements), and motion along a
line (moving through space). These grounding metaphors are assumed to give rise to additional metaphors, such as a mental rotation metaphor that extends the number system to include negative numbers.
The 4Gs are considered grounding metaphors because they represent direct links between mathematics and sensory-motor experience, and it is in sensory-motor experience, Lakoff and Núñez contend,
that all mathematics finds its roots. Metaphors that are derived from the grounding four are more abstract than the grounding four themselves, and the longer the chain of derivation, the more
abstract they become, but ultimately they all are traceable, in Lakoff and Núñez’s view, to bodily grounding. In addition to grounding metaphors, Lakoff and Núñez describe linking metaphors, the
function of which is to support the conceptualization of one branch of mathematics (arithmetic) in terms of another branch (set theory). “Linking metaphors are different from grounding metaphors in
that both the source and target domains of the mapping are within mathematics itself” (p. 142). As examples of classical branches of mathematics that owe their existence to linking metaphors, Lakoff
and Núñez give analytic geometry, trigonometry, and complex analysis. Insistence that all mathematics is derivative from a few metaphors that are grounded in sensory-motor experience is novel, and
whether it will have a lasting effect on the philosophy of mathematics is yet to be determined. We will encounter the idea again in Chapter 8.
☐☐ Strategies and Heuristics
Both investigators of problem solving by human beings and developers of problem-solving programs for computers stress the importance of strategies and heuristic
procedures, especially in the context of problems for which algorithmic solutions are impractical or unknown. Mathematicians use such procedures in their efforts to solve mathematical problems. One
finds many rule-of-thumb approaches described in the mathematical literature.
Considering Analogous Problems Finding a solution to a difficult problem can sometimes be facilitated by considering a problem that is analogous to the one for which a solution is sought but that is
more tractable, because either one is familiar with it or it is inherently simpler. “Quite often [mathematicians] do not deliver a frontal attack against a given problem, but rather they shape it,
transform it, until it is eventually changed into a problem that they have solved before” (Péter, 1961/1976, p. 73). One is more likely to be able to do this, of course, if one has solved many
problems, and many types of problems, than if one has not.
Bell (1945/1992) sees the approach of transforming problems for the purpose of making them more tractable as a defining characteristic of mathematical thinking. The methodology of transforming
problems and reducing them to standard forms is seen, he suggests, in all the greater epochs of mathematics. “A relatively difficult problem is reduced by reversible transformations to a more easily
approachable one; the solution of the latter then drags along with it the solution of the former and of all problems of which it is the type” (p. 36). Bell credits the ancient Babylonians, who appear
to have solved cubic equations by reducing them to a canonical form, with being the first to use this approach.
Specialization and Generalization One often sees an interplay between specialization and generalization in mathematical thinking. Mason, Burton, and Stacey (1985) emphasize the importance of this
interplay and make it central to their approach to the teaching of mathematical problem solving. In specializing, one considers concrete examples of abstract problems. The hope in doing this is that
one will find solutions to the concrete problems that will be extendable to the general case of which they are particular instances. If, for example, one is trying to solve a problem having to do
with a certain property of parabolas, consideration of several specific parabolas may help one to see what the nature of the general solution must be. Conversely, sometimes turning one’s attention
from a specific concrete problem to the general problem type of which the specific problem is an example can facilitate progress.
Considering Extreme Cases This trick, which is a special case of specialization, is often used to advantage in mathematical problem solving. It is nicely illustrated by Polya (1954a), who strongly
advocated its use. Two men are seated at a table of usual rectangular shape. One places a penny on the table, then the other does the same, and so on, alternately. It is understood that each penny
lies flat on the table, and not on any penny previously placed. The player who puts the last coin on the table takes the money. Which player should win, provided that each plays the best possible
game? (p. 23)
Polya reports having watched a mathematician to whom this puzzle was posed respond by supposing that the table is so small that it is covered by one penny. Obviously in this case the first player
will win (but, of course, only the one penny that he put down). Now imagine the size of the table being gradually increased. If the first player places the first penny precisely in the center, as
soon as the table is large enough to hold a penny beside the first penny on any side, it will be large enough to hold another penny on the opposite side as well. By generalizing this argument, we can
see that, irrespective of the size of the table, if the first player puts his first penny in the middle and after that always precisely matches what the second player does, but on the opposite side
of the table, he will invariably win. It is not necessary to imagine the extreme case to solve this problem, because one could make an argument from symmetry straightaway, but use of the extreme-case
heuristic can help one see the argument that can be made.
Visualization The ability to visualize is considered by some to be of great benefit to both scientific and mathematical thinking (Ulam, 1976; Zimmerman & Cunningham, 1991). There are differences of
opinion, however, regarding just what the role of visualization in mathematics is. Kline (1953a) emphasizes the limitations of visualization, arguing that many of the relationships with which
mathematicians deal are inherently unvisualizable: “Anyone who insists on visualizing the concepts with which science and mathematics now deal is still in the dark ages of his intellectual
development” (p. 447). On the other hand, even inherently unvisualizable concepts or relationships may be developed with the help of visualization. We know, for example, that to develop his theory of
electromagnetism—which was so abstract that it could not be stated in ordinary language—James Clerk Maxwell used visualizations of rotating vortices interconnected and transmitting their rotation by
means of cogs and wheels. There is also the belief that sometimes mathematics can get in the way of visualization, to the detriment of creative thinking. Freeman Dyson (1979) reports a conversation
with Richard Feynman in which the latter argued that Einstein’s failure in later life to match his early spectacular successes was because “he stopped thinking in concrete physical images and became
a manipulator of equations” (p. 62). Perhaps the safest conclusion to draw regarding visualization in mathematics is that some mathematical relationships and problems lend
themselves to visualization while others do not, and that some, but not all, mathematicians find the ability to visualize to be a great asset in their work. Unfortunately, the role that visualization
plays in the development of mathematical proofs is generally obscured in mathematical publications, which typically focus on the proofs themselves and not on the reasoning that went into their
development. We will encounter the question of the importance of visualization ability to the learning of mathematics in Chapter 16.
Diversion Hadamard (1945/1954) reports frequently abandoning a problem for a while with the intention of returning to it again later on and says it is something he always recommends to beginners who
consult him. “One rule proves evidently useful: that is, after working on a subject and seeing no further advance seems possible, to drop it and try something else, but to do so provisionally,
intending to resume it after an interval of some months. This is useful advice for every student who is beginning research work” (p. 55). Apparently Hadamard believed that an especially good
diversion was sleep. “One phenomenon is certain and I can vouch for its absolute certainty: the sudden and immediate appearance of a solution at the very moment of sudden awakening” (p. 8).
Experimentation As already noted, a major difference between mathematics and science is that science looks to experimentation and empirical observation for validation of its theories, whereas
mathematics does not. The relationship represented by the equation 2 + 2 = 4 is not validated by checking to see if two apples plus two apples equals four apples, two oranges plus two oranges equals
four oranges, and so on. The equation is not a theoretical assertion about the physical world that is in principle falsifiable and that launches investigators on a search for a disconfirming case
that will bring the theory down. Experimentation does play a role in mathematical thinking nevertheless, albeit a role that is different from the one it plays in science. Peterson (1988) points out,
for example, that experimentation is important in research in number theory because so many of the key ideas in
this area are resistant to definitive analysis. “Although their work differs from the experimental research associated with, say, test tubes and noxious chemicals, number theorists often collect
piles of data before they can begin to extract the principles that neatly account for their observations” (p. 23). I have mentioned a few of the strategies that are used in mathematical reasoning.
There are many more. Some are used by many, if not most, mathematicians; some are used mainly, or only, by their inventors. For some problems, strategies may be helpful but unnecessary; for others,
finding a solution apart from a strategy would be very difficult, if not impossible.
☐☐ Computing Aids and Devices
Probably for as long as people have counted and computed they have found ways to facilitate doing so. The development of aids to computing strikes me as an especially
interesting aspect of the history of cognitive technology (Nickerson, 1997). The abacus of the Middle East, the soroban of Japan, and the knotted cords (quipu) of the Incas are among the better known
of the devices that have been used by different cultures to facilitate counting and computing. A device that served Europe and America very well for about three recent centuries was the logarithmic
slide rule. Invented, perhaps independently, by British mathematicians Edmund Wingate and William Oughtred around 1630, this device became an essential tool for engineers and others whose work
required frequent calculations. Many variations on the basic logarithmic rule were invented for specific purposes; in his History of the Logarithmic Slide Rule and Allied Instruments, Cajori (1910/
1994) gives a list of 256 different slide rules that were made between 1800 and 1910, the time of his writing. Uses included gauging, ullaging, and the computation of taxes and tariffs. Special rules
were designed for many different purposes: “The change from one system of money, weight, or other measure, to another system, or the computation of annuities, the strength of gear, flow of water,
various powers and roots. There are stadia rules, shaft, beam, and girder scales, pump scales, photo-exposure scales, etc.” (p. 73). Rules other than slide rules—wantage rules, lumber rules, shrink
rules, and so on—were designed with special-purpose scales that, in effect, enabled a user to carry out one or another type of computation in the process of making a measurement.
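The principle that made the logarithmic slide rule so effective is not spelled out above, but it is easy to state: because log(ab) = log(a) + log(b), multiplication reduces to the addition of lengths on logarithmically ruled scales. The following minimal Python sketch, mine rather than anything taken from the devices described here, mimics that reduction:

import math

def slide_rule_product(a, b):
    # A logarithmic slide rule multiplies by adding lengths proportional to
    # the logarithms of the factors and reading the answer back off the scale.
    return 10 ** (math.log10(a) + math.log10(b))

print(round(slide_rule_product(3.2, 2.5), 6))  # 8.0

A physical rule performs the addition mechanically, of course, and delivers only about three significant figures of precision.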
Less well known, but no less useful for specific purposes, are countless devices that have been designed to provide users with answers to computational questions without having to do the actual
calculations. Many of these devices carry product advertisements and have been distributed by companies for promotional purposes. I have a small collection of such devices, mostly old, a few of which
are described in Nickerson (2005):
• A circular celluloid device distributed by Sunkist Oranges for calculating the costs and selling prices of oranges (lemons on the flip side), per dozen, given the cost per box and the number of oranges (or lemons) per box, and assuming a specified markup.
• A similar device produced by Post Cereals for computing the per-package retail sale price of a product, given the wholesale price of a box of packages, the number of packages per box, and the desired profit percentage.
• A device (copyrighted in 1924) advertising Mead’s dextri-maltose, a dietary supplement for babies, that permits one to find the recommended mixture of milk, water, and dextri-maltose for each feeding and number of feedings in 24 hours, given the baby’s age and weight.
• A shop-cost calculator produced by General Electric (reflecting costs in 1953) that calculates the labor costs for operating a shop, given an hourly wage, number of operations performed per minute, and an overhead rate.
• A device distributed by the Esso Corporation (predecessor of Exxon) that can be used to calculate distance traveled by an airplane, given speed and time in flight; gallons of fuel consumed, given gallons consumed per hour and time in flight (or fuel consumed per hour, given total consumed and time in flight); and speed, given distance traveled and time in flight. It allows for making a correction in air speed, given the temperature and altitude, and can compute drift angle and ground speed, given course heading, wind velocity and direction, and air speed.
The list of examples could be greatly extended. My collection also includes devices that, in effect, calculate: (1) the appropriate torque setting on an adjustable torque wrench for a wrench extension of a given
length, (2) a correction factor for a steam flow system, given a calibrated pressure and an operating pressure, (3) the feed rate for a turning, boring, or milling machine, given certain parameters
of the machine, stock, and desired product, (4) relative humidity, given dry-bulb and wet-bulb temperatures, (5) certain motor data, given motor horsepower, and (6) conduit data, given wire size and
composition (copper or aluminum). Perhaps devices of the sort just noted are better described as devices that make computation unnecessary than as aids to computation. In that respect, they are
similar to many of the developments in mathematics— especially perhaps in the development of notational systems—that have had the effect of easing one type of cognitive burden on doers of
mathematics, freeing cognitive capacity for application to other demands of the problem-solving situation. Unburdening shortcuts have always been welcome to the mathematician, provided that they
☐☐ Computers in Mathematical Thinking
The arrival of high-speed digital computers on the scene has affected the doing of mathematics in several ways. It is somewhat ironic that while computers,
including pocket calculators, have arguably made obsolete the learning of certain algorithmic procedures that used to be taught in basic mathematics courses, they have also increased the utility of
an algorithmic approach to some types of mathematical problems by providing the computational power that is necessary to carry them out. Examples of the use of computers in proof making are given in
Chapter 5. Some of the problems on which mathematicians work today could not be approached without their involvement. Number theory provides many examples of problems that are beyond the capabilities
of manual techniques. The problem of determining whether very large numbers (numbers with thousands or tens of thousands of digits) are prime is one case in point. The problem of factoring large
composite numbers—which has practical implications for cryptography and computer system security—is another. Sixteenth-century German mathematician Ludolph van Ceulen spent most of his life
determining the value of π to 35 places; as of 2003, thanks to the use of computers in its calculation, π was known to 1.2 trillion decimal places (Gibbs, 2003), although what “known” means in this
context is open to some question.
Chaitin (1995) describes the effect of computers on the doing of mathematics this way: The computer has enormously and vastly increased mathematical experience. It’s so easy to do calculations, to
test many cases, to run experiments on the computer. The computer has so vastly increased mathematical experience, that in order to cope, mathematicians are forced to proceed in a more pragmatic
fashion, more like experimental scientists. This new tendency is often called “experimental mathematics” … It’s often the case that when doing experiments on the computer, numerical experiments with
equations, you see that something happens, and you conjecture a result. Of course it’s nice if you can prove it. Especially if the proof is short. I’m not sure that a thousand-page proof helps too
much. But if it’s a short proof it’s certainly better than not having a proof. And if you have several proofs from different viewpoints, that’s very good. But sometimes you can’t find a proof and you
can’t wait for someone else to find a proof, and you’ve got to carry on as best you can. So now mathematicians sometimes go ahead with working hypotheses on the basis of the results of computer
experiments. Of course, if it’s physicists doing these computer experiments, then it’s okay; they’ve always relied heavily on experiments. But now even mathematicians sometimes operate in this
manner. (p. 44)
Research on mathematical chaos, the behavior of nonlinear systems, fractal geometry, and cellular automata, among other areas, is very much dependent on the use of computers. Much of this work is
greatly facilitated by computer graphics. Thomas Hales’s proof of Kepler’s conjecture that the face-centered cubic lattice is the densest of all possible three-dimensional sphere packings runs to 250
pages of text in addition to about 3 gigabytes of computer programs and data (Devlin, 2000). The use of computers in mathematical problem solving has led to the distinction between problems that are
likely to be tractable with the help of practically feasible amounts of computing power and those that are not. Most problems grow in complexity with their size, but some grow considerably faster
than others. Consider the problem of determining the minimum number of people who must be invited to a party in order to guarantee that either at least m of the guests will all know each other or n
will be mutual strangers. For m = n = 3, the problem is relatively simply solved: There must be a minimum of six guests in order to guarantee that at least three know each other or that at least
three are mutual strangers. To get an intuitive feel for the problem, try to satisfy the condition (either three people who know each other or three mutual strangers) in a party with five guests. To
decide that this is impossible, one must
Table 6.1. A Party of Five, Lacking 3 Mutual Friends and 3 Mutual Strangers
     B   C   D   E
A    X   X   O   O
B        O   X   O
C            O   X
D                X
X in a cell indicates that the individuals represented by the associated row and column know each other; O indicates that they are strangers
convince oneself that the negation of the condition is true (that it is possible for there to be both fewer than three who know each other and fewer than three mutual strangers). Suppose the guests
are A, B, C, D, and E. It is easy, of course, to have one of these conditions, but is it possible to have both (fewer than three mutual friends and fewer than three mutual strangers) with a group of
five? Suppose the friendship relationships are as shown in Table 6.1, where X means knows and O means does not know, and the relationships are reciprocal: If A knows B, B knows A. According to the
table, A knows B and C, but not D and E; B knows A and D, but not C and E; and so on. With only five guests, it is easy enough to consider exhaustively all possible combinations of three to see if
there are either three mutual friends or three mutual strangers. This is done in Table 6.2, where it is seen that, with this particular set of relationships, there are no combinations either of three
mutual friends or of three mutual strangers. As m and n are increased only a little, however, the problem becomes very difficult. The problem has yet to be solved for m = n = 5. The answer has been
known for over 20 years to be between 43 and 49, but as of 1998, the exact number remained to be determined, and this despite the fact that computers have been used with abandon in efforts to solve the
problem. According to Hoffman (1998), “The most complex party problem that has been solved with the aid of computers, 110 of them running in sync, is the case of the minimum guest list needed to
guarantee a foursome of friends or a fivesome of strangers. In 1993, the answer was found to be 25” (p. 54). The computational complexity of some problems increases as an exponential function of
their size, whereas that of others increases as a polynomial function of their size. Letting N represent the size of a
Table 6.2. Given the Relationships Represented in Table 6.1, There Are No Instances Either of 3 Mutual Friends or 3 Mutual Strangers
Combination    3 Mutual Friends?    3 Mutual Strangers?
ABC            No                   No
ABD            No                   No
ABE            No                   No
ACD            No                   No
ACE            No                   No
ADE            No                   No
BCD            No                   No
BCE            No                   No
BDE            No                   No
CDE            No                   No
problem, say, the number of nodes in a network that is to be analyzed, the complexity of a problem that grows exponentially with size would be given by C^N, whereas that of one that grows as a polynomial function would be given by N^C, where C is constant and relatively small in both cases. For a given small C, it is easy to see that C^N increases with N much faster than does N^C. Given C = 2, for example, C^N increases from 2 to more than 1,000,000 as N goes from 1 to 20, whereas N^C increases from 1 to 400 as N increases over the same range. This type of distinction led to the idea of NP-complete problems, introduced by Stephen Cook (1971), an NP-complete problem being, roughly, one for which a computer algorithm can be constructed but for which no known algorithm can be run to solution in realizable time. When
NP-complete problems are encountered in practical contexts such as scheduling, bin packing, or route planning, the goal of an optimal solution must yield to one of a good-enough solution or one that
can be considered an approximation to optimal. Although the use of computers in proof construction is controversial (Kleiner & Movshovitz-Hadar, 1997), their use in proof (or conjecture) refutation
is not. In mathematics, it takes only one counterexample of a
general assertion to show the assertion to be false. Using a computer to find a counterexample to an assertion that has been widely believed to be true—because no one has yet found a counterexample
despite trying hard to do so—is not controversial, and it has been done on several occasions, especially in the area of number theory.
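To make the exhaustive check behind Tables 6.1 and 6.2 concrete, here is a minimal Python sketch of my own (not from the text). The friendship relation is the one described above—A knows B and C, B knows A and D—with the remaining acquaintances (C–E and D–E) filled in as the argument requires:

from itertools import combinations

guests = "ABCDE"
friends = {frozenset(pair) for pair in
           [("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("D", "E")]}
strangers = {frozenset(pair) for pair in combinations(guests, 2)} - friends

def has_mutual_trio(relation):
    # Is there any trio of guests in which every pair stands in the relation?
    return any(all(frozenset(pair) in relation for pair in combinations(trio, 2))
               for trio in combinations(guests, 3))

print(has_mutual_trio(friends))    # False: no three mutual friends
print(has_mutual_trio(strangers))  # False: no three mutual strangers

The brute-force searches mentioned above for larger parties do essentially the same thing; the number of cases to be examined simply grows explosively with the size of the guest list.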
C H A P T E R 7
Representation in Mathematics
The history of mathematics shows that the introduction of better and better symbolism and operations has made a commonplace of processes that would have been impossible with the unimproved
techniques. (Kline, 1953a, p. 240)
Thinking is greatly aided by representational systems used for purposes of communication. Nowhere is this more apparent than in the area of mathematics. Although physicists, chemists, musicians, and
architects, among others, all have highly specialized representational systems that facilitate their work, no field has a longer or more impressive history than does mathematics with respect to the
development and use of symbol systems. It is very easy for us, who have been introduced to current mathematical symbols and notational conventions in a matter-of-fact way, to assume unthinkingly that
things are as they are because that is the way they should be and always have been. In fact, current symbology and notation are the results of a long history of developments. The basic arithmetic
operations—addition, subtraction, multiplication, and division— that we are likely to consider to be simple and straightforward, requiring only the memorization of a few elementary rules, were not
always so simple and straightforward. Both the representational schemes and operational algorithms that we were taught as children are products of many centuries of inventions and evolutionary
change. An engaging account of much of this history has been provided by Flegg (1983).
Throughout the history of mathematics, the emergence and refinement of new ideas have been accompanied by the invention of new ways of representing those ideas. The introduction of new notational
conventions has provided significant economies of expression and greatly facilitated the performance of mathematical operations. And the notational systems invented to represent new mathematical
ideas have stimulated and made possible further advances in mathematical thinking. So central are representations to mathematics that, according to one view, “mathematics can be said to be about
levels of representation, which build on one another as the mathematical ideas become more abstract” (Kilpatrick, Swafford, & Findell, 2001, p. 19). Mathematicians who have developed new areas of
mathematics have often found it essential to invent new notational schemes in order to make progress. Diophantus, Descartes, Euler, and Leibniz are all remembered for their original contributions to
mathematics; each of them also introduced new notational conventions and did so because the existing ones were not adequate to represent the thinking they wished to do. Jourdain (1913/1956) claims
that Leibniz, who is remembered for numerous contributions to philosophy, science, and mathematics, attributed all his mathematical discoveries to his improvements in notation. As discussed in
Chapter 4, mathematical ideas have progressed from the more concrete to the more abstract. The emergence of new notational conventions often has been forced by the need to represent a new level of
abstraction. This progression is illustrated by the symbols 3, x, and f(x), which represent the increasingly abstract ideas of number, variable, and function. The concept three, as distinct from
three stones or three sheep, is an abstraction; threeness is the property that three sheep and three stones have in common. The concept number is a further abstraction; numberness is what 3, 17, and
64 have in common. Algebra is more abstract than arithmetic because, whereas arithmetic deals with entities (numbers) whose values are constant, algebra deals with entities (variables and functions)
whose values can vary.
☐☐ Beginnings of Algebraic Notation
The limitations of the mathematical accomplishments of the early Greeks have been attributed, in part, to the limitations of their notational system. The Greeks
had a good notation for geometry, but less effective ones for arithmetic and algebra. Their notation was relatively effective for representing relationships among various parts of a figure, which are
static in nature, but not for representing relationships among variable quantities,
which are dynamic. Maor (1994) argues that the lack of an adequate representational system—in particular, the language of algebra—helps explain why, despite the fact that Archimedes managed to apply Eudoxus’s
“method of exhaustion,” which came close to the modern integral calculus, to the finding of the area of the parabola, the Greeks failed to discover the calculus. A classification of representational
conventions in algebra, dating from the middle of the 19th century and attributed to G. H. F. Nesselmann (1842), distinguishes three phases of development: rhetorical algebra, syncopated algebra, and
symbolic algebra. Rhetorical algebra means algebra expressed in ordinary language; syncopated algebra made use of abbreviations, and symbolic algebra is essentially the system we currently use.
Progressing from rhetorical algebra through syncopated algebra to symbolic algebra had the effect of reducing very considerably the amount of cognitive effort that one has to put into the process of
solving any given mathematical problem. On the other hand, it also weakened the connection between the variables in equations and the real-world entities they may be used to represent; as we have
noted, modern algebra really is symbol manipulation, and whether the symbols represent real-world entities is irrelevant. Diophantus of Alexandria made a start during the third century toward the
development of a notational system for algebra. He represented variables with symbols other than words; a limitation of his notation was its use of the same symbol to represent different variables.
In the sixth century, Hindu mathematician Aryabhatta suggested the use of letters to represent unknowns. Like Diophantus, the Hindus and Moslems also used some form of what has been called additive
juxtaposition to represent successive powers of the unknown. A specific letter or abbreviated word was used to represent the square, perhaps a different one to represent a cube, and concatenations of
these to represent higher powers. Although both Hindu and Moslem mathematicians made rudimentary advances toward an operational symbolism, the Moslems eventually veered from this path and chose to
write out everything, including the names of numbers. During the 13 centuries between the time of Diophantus and that of French mathematician and counselor to the king Francois Viète, notational
innovations were made, but none with revolutionary effects. Medieval scholar Jordanus Nemorarius, about whom little is known with certainty, used letters to represent numbers in his book Arithmetica
(published in the 13th century), but he sometimes represented a given number by two letters (suggesting the endpoints of a line segment), and sometimes he used only one (suggesting one endpoint of a
line segment, the other of which was understood).
Before the convention was adopted of using a letter to represent the unknown in algebraic expressions of one unknown, it was represented by a word, often the equivalent in the language of use of
thing in English. Italian friar-mathematician Fra Luca Pacioli, for example, used co, short for cosa (thing), to represent variables of unknown quantity in equations. An economy of expression was
realized by 15th-century French mathematician Nicolas Chuquet, who adopted the convention of omitting an explicit representation of the unknown altogether, using only coefficients and exponents of
the various terms. Thus, what we would now represent as 6x and 4x³, he would have represented as .6.1 and .4.3. A form of notation used to represent algebraic equations by some 15th-century mathematicians in Europe, Pacioli among them, is illustrated by the following expression (Scott, 1958; reproduced in David, 1962, p. 48): 4.p.R6 4.m.R6 Productum 16.m.6, which in modern notation would be written as (4 + √6)(4 − √6) = 16 − 6 = 10. A major advance was made by Viète, who introduced a notational convention for distinguishing unknowns from constants: He used letters to represent
coefficients in polynomial equations, and introduced the convention of using a vowel to represent the unknown and a consonant to represent any quantity that was assumed to be known, thereby making
explicit the distinction between variables and parameters. Although they credit Viète with making a very significant advance in algebraic notation, Boyer and Merzbach (1991) classify his system as
“fundamentally syncopated rather than symbolic” because, along with the letters for variables and parameters and German symbols for the operations of addition and subtraction, he used words and
abbreviations for other constructs (e.g., A quadratus and A cubus for A² and A³, and aequalis for =). Viète is also credited with expressing proofs in strictly algebraic terms, which represented a
departure from the then prevailing use of geometric proofs. Although a form of exponential notation was used over 2,000 years ago by the Greek geometer Apollonius of Perga, more cumbersome schemes
were used to represent powers more recently. Among 16th- and 17th-century ways of representing A raised to the seventh power were AAAAAAA and Aqqc (for A squared squared cubed). Sometimes a Roman
numeral or an encircled number was placed directly above the coefficient of a term of a polynomial to indicate the power to which the variable in that term was to be raised.
The notation of symbolic algebra did not come into general use until the middle of the 17th century, and then through the influence of John Napier, René Descartes, and John Wallis. Galileo made no
use of this or any other special notation. In La géométrie, in which Descartes (1637) presented his invention of analytic geometry, he introduced the convention of denoting variables by letters
toward the end of the alphabet (x, y, z) and constants by letters toward the beginning of it (a, b, c). Descartes also promoted the notation for powers that we now use, x³, x⁴, x⁵, …, except he inexplicably chose to represent what we now write as x² by xx. Viète had used superscripts to represent powers, but Descartes denied that he had read Viète (Watson, 2002). Livio (2002) credits French mathematician Albert Girard with the introduction in 1634 of the use of subscripts to represent numerical position in a sequence. x₃, for example, would represent the third term, xₙ the nth, and xₙ₊ₖ
the kth term following the nth. Despite its advantages, which are obvious from our perspective, Descartes’s notation met considerable resistance from mathematicians of the day and did not become the
norm throughout Europe for several decades. Writing 150 years after Descartes, Laplace notes in his Théorie analytique des Probabilités the importance of notation in mathematics and points especially
to Descartes’s way of representing powers. Flegg (1983) explains the reluctance of mathematicians to adopt quickly the new notational scheme this way: The answer would seem to lie in the habit of
expecting mathematical abbreviations to retain some obvious link with what they were signifying and especially with the spoken word. The older Greek works, which inspired so much of the mathematics
of the Renaissance period and after, were purely rhetorical. Mathematical ideas were explained in words; mathematical arguments were written in words. To adopt abbreviation of words is therefore a
natural step; the change to abstract symbolism demands an intellectual leap of extraordinary magnitude. It is precisely this requirement to write and hence to think in terms of symbols which makes
mathematics a difficult subject in the classroom today unless attention is paid to this particular intellectual demand. (p. 224)
The invention of symbolic algebra represented a very significant step forward, not only for mathematics, but for the history of thought. An algebraic equation provides a means of packing a large
amount of information into a few symbols. Jourdain (1913/1956) puts it this way: “By means of algebraic formulae, rules for the reconstruction of great numbers—sometimes an infinity—of facts of
nature may be expressed very concisely or even embodied in a single expression. The essence of
the formula is that it is an expression of a constant rule among variable quantities” (p. 38). The wide adoption of a standard notational scheme greatly facilitated communication among mathematicians
and the ability of any given mathematician to build on the work of others. A standard symbology was also a practical necessity for the printing of mathematical works, and the emergence of print technology
was an impetus to the development of one. Algebraic notation not only makes possible great economies of expression, but it also facilitates computation. Indeed, one may see the history of
improvements in algebraic symbolism as a shifting of an ever-greater portion of the burden of computation and inference from the person to the symbolism. The symbolism encoded much of what its
developers had learned about mathematical inference and preserved that knowledge so that it would not have to be rediscovered anew each time it was needed. Inferences that would be very difficult to
make without the use of this, or some comparable, notation may become, with its use, matters of straightforward mechanical symbol manipulation. In many cases the need to make inferences was replaced
with the ability to apply an algorithmic procedure. Dantzig (1930/2005) argues that the symbol is not a mere formality but rather the essence of algebra: “Replaced by a symbol the object becomes a
complete abstraction, a mere operand subject to certain indicated operations” (p. 83). Descartes considered algebra to be a means of mechanizing mathematical operations, thus relieving the
mathematician of much mental effort. Kaplan (1956), who says that in algebra the notation is everything, also argues that the power of algebra consists in its allowing the symbolism to think for us.
Mathematical notation is but one of many examples of the development of tools designed to aid human cognition, making certain tasks less cognitively demanding, thus increasing the capacity to deal
with other tasks (Nickerson, 2005; Salomon, 1993). These observations may help account for the much discussed and disheartening finding that students who are able to solve textbook algebra problems
that are already formulated as equations often cannot set up appropriate equations when given verbal descriptions of the problems to be solved. The setting up of an equation requires some thinking
and understanding of the problem, whereas the solving of a preformulated equation may require only the application of memorized rules. Barrett (1958) makes the point that, as a general matter, people
today live on a level of abstraction way beyond that of their forebears. “When the contemporary man in the street with only an ordinary education quickly solves an elementary problem in arithmetic,
he is doing something which for a medieval mathematician—an expert—would have required hours” (p. 26). He cautions, however, that this does not
necessarily mean a higher level of understanding. “No doubt, the medieval man would have produced along with his calculation a rigorous proof of the whole process; it does not matter that the modern
man does not know what he is doing, so long as he can manipulate abstractions easily and efficiently” (p. 27). Barrett may well be overestimating the medieval man’s penchant for developing rigorous
proofs of the validity of his reckoning, but there can be little doubt that many people today are capable of putting mathematics to practical use without a deep understanding of the rationales of the
processes they are using.
☐☐ Mathematical Constants and Variables
Among the more important distinctions in mathematics is that between constants and variables. The words are their own definitions: A constant is an entity
whose value does not change; a variable is an entity whose value may differ from occasion to occasion. There are certain constants that play sufficiently important roles in mathematics that each has
come to be represented by a single symbol, usually a Greek or English letter. Perhaps the two most common ones, already discussed in Chapter 3, are π (the ratio of a circle’s circumference to its
diameter) and e (the base of Napierian, or natural, logarithms). Napier, among others, invented logarithms in the early part of the 17th century, but Euler was the first to use e to represent their
base over a century later. The delay in giving this constant a special symbol was short, however, compared to the case of π; although there is evidence that the significance of the ratio of a
circle’s circumference to its diameter had begun to be appreciated as early as 2000 BC, the symbol π was not used to represent it until the 18th century AD. The symbols π, e, i, ∑, and f(x) all came
from Euler, to whom Boyer and Merzbach (1991) refer as the most successful notation builder of all time. We are so used to the idea of a variable that we are likely to be oblivious to the enabling
power it represents. Tarski (1941/1956), noting the necessity of variables for the writing of equations, characterizes their significance this way: It is to the introduction of variables that we are
indebted for the development of so fertile a method for the solution of mathematical problems as the method of equations. Without exaggeration it can be said that the invention of variables
constitutes a turning point in the history of mathematics; with these symbols man acquired a tool that prepared the way
for the tremendous development of the mathematical sciences and for the solidification of its logical foundations. (p. 1909)
Paulos (1992) likens variables to pronouns—variables are to mathematics what pronouns are to natural language. Just as the same pronoun—she—can represent different individuals on different occasions,
the same variable—x—can represent different values in different contexts. Extending the metaphor, one may think of constants as nouns; 5 is 5 and 6 is 6, no matter what the context in which one
encounters them. Clegg (2003) credits 17th-century English mathematician John Wallis as the first to use ∞ to represent infinity. It appeared in his De sectionibus conicis [On conic sections],
written in 1655, and again in his Arithmetica infinitorum, written in 1656.
☐☐ Operators
In addition to symbols to represent variables, functions, and special constants, there is a need for symbols or notational conventions to represent mathematical operations: addition,
subtraction, multiplication, division, exponentiation, and so on. The origins of all the many such symbols that are used today are obscure; the main design requirements for them, however, are fairly
obvious. They should be convenient to write and sufficiently distinct not to be easily confused with each other. In some cases, a given operation can be represented in more than one way; the division
of a by b, for example, can be represented as a/b, a ÷ b, or ab⁻¹, the choice being strictly a matter of convenience. Having more than one way to represent the same operation is useful because it
sometimes happens that one representation is convenient for some contexts and not for others, but it also increases what the user of the symbology must learn in order to keep things straight. Not
only can some operations be represented in more than one way, it is also the case that some symbols have more than one meaning. The + sign, for example, is used to represent the operation of addition
and to identify positive numbers; similarly, – is used to represent subtraction and to identify negative numbers. By convention, unsigned numbers are assumed to be positive. The dual usage of these
signs can be algorithmically convenient; one learns by rote, for example, that two juxtaposed – signs are equivalent to one +, so subtracting a negative number is equivalent to adding a positive one.
It can make for conceptual difficulties, however; a – b does not distinguish between whether one is subtracting a positive number, a – (+b), or adding a negative one, a + (–b).
One suspects that what it means to subtract a negative number, a – (–b), is less than crystal clear to many people who are able to apply the algorithm, a + b, correctly. Early uses of the plus (+)
and minus (–) signs in Europe occurred during the 15th or 16th century. Boyer and Merzbach (1991) give Rechnun uff allen Kauffmanshafften, published in 1489 by Johann Widman, a German mathematician
and lecturer at Leipzig, as the oldest book in which + and – appear in print, but they note that these symbols were used first to indicate excess and deficiency in warehouse measurements and only
later became associated with arithmetic operations. The + and – signs are also seen in German mathematician Michael Stifel’s Arithmetica integra, which was published in Nuremberg in 1544, and in
German mathematician Christoff Rudolff’s Die Coss of 1525 (David, 1962). Bell (1945/1992) takes Widman’s use of + and – as the event that marks the beginning of algebra becoming more operationally
symbolic than it had been for Diophantus and the Hindus. He notes, too, that there is some evidence that sometime between 700 and 1100, the Hindus may have indicated subtraction with a sign similar
to our plus sign written after the subtrahend. The × was first used to represent multiplication by William Oughtred, who introduced it in his Clavis mathematicae in 1631. Oughtred also invented the
symbol :: to represent proportion (Maor, 1994). Today we use ! to represent the factorial function: n! = 1•2•3• … •n. In the 19th century, what we now write as n! was sometimes written as |n (a
vertical line followed by n). Today we represent roots either by fractional exponents (x^(1/2), 5^(1/3)) or by use of the sign √, either by itself to represent square root (e.g., √x) or with a preceding superscript to represent a root other than square (e.g., ⁵√x). The use of fractional exponents as a way of representing roots began to be adopted early in the 17th century. Before the current
symbology evolved, words were sometimes used to represent the intended operations. In 16th-century Italy, for example, square and cube roots were identified, respectively, by the terms lato and lato
cubico. Lato is Italian for side, so the idea seems to have been that of equating the square root of a number with the length of a side of a square, the area of which would represent the number the
square root of which is being taken. Similarly, lato cubico would represent the length of a side of a cube, the volume of which would represent the number the cube root of which was desired (Mazur,
2003). Another early form of representation of roots, also noted by Mazur (2003), used a symbol something like R (or R with a small slash on the right leg, like the sign that is used for medical
prescriptions). R.q. (short for radice quadrata) and R.c. (short for radice cubica) would represent square
root and cube root, respectively. Mazur speculates that the modern √ might have evolved from the letter r written cursively. Leibniz, after some experimentation with other possibilities, fixed on dx
and dy to represent what we now refer to as differentials, and on ∫, a large stylized s for sum, to indicate the operation of integration. The use of the notation f′(x), f″(x), …, f⁽ⁿ⁾(x), … to represent first-, second-, and nth-order derivatives comes from French mathematician Joseph Lagrange. Boyer and Merzbach (1991) call Leibniz “one of the greatest of all notation builders, being second
only to Euler in this respect,” and point out that in addition to giving us the notation of the calculus, he was the first prominent mathematician to use systematically the dot for multiplication and
to write proportions in the form a:b = c:d. He also developed a way of representing sets of simultaneous equations that anticipated the invention of determinants. Leibniz used something similar to ∩
and ∪ to represent multiplication and division, respectively; whether or not this representational scheme has anything else to recommend it, it does convey the idea that multiplication and division
are inverses of each other. We owe to Leibniz also the use of ~ for “is similar to” and ≅ for “is congruent to.” Leibniz published, when only 20 years old, a treatise entitled Dissertatio de Arte
Combinatoria. Much of it deals with philosophical matters, but it contains some discussion of the problem of finding the number of combinations of n things taken m at a time. To represent a set of
things the members of which are to be taken two, three, or four at a time, he uses, respectively, the designations com2natio (combinatio), com3natio (conternatio), com4natio, and so on (Todhunter,
1865/2001, p. 32). French mathematician Pierre Rémond de Montmort represented the combination of n things taken m at a time by a small rectangle with n above it and m below it. Euler, possibly the
most prolific mathematician who ever lived, recognized the importance of convenient notation and contributed substantively to its development. He was the first to use ∑ to indicate summation, f(x) to stand for function of x, and a notation that became the modern binomial-coefficient symbol to represent n!/(m!(n − m)!), the number of combinations of n things taken m at a time. A symbol that we take for granted today that
did not come into general use until about the 16th century is the equals sign (=). It was introduced by Welshman Robert Recorde, physician to Edward VI and Mary Tudor and amateur mathematician and
astronomer, in The Whetstone of Witte (1557). A sign composed of a pair of parallels of one length was chosen by Recorde to represent equality because, as he put it, “no two things could be more
equal” (Struik, 1969, p. 4). The signs > and <, representing, respectively, “greater than” and “less than,” were introduced
by British mathematician Thomas Harriot in a book, Praxis, published posthumously in 1631. The equals sign has come to have different meanings in different contexts, and failure to recognize the
differences can make for confusion. Consider the following expressions:
x² + 3x – 10 = (x + 5)(x – 2)
x² + 3x – 10 = y
x² + 3x – 10 = 0
1 + 1/2 + 1/4 + ⋯ + 1/2ⁿ + ⋯ = 2
The first expression is a tautology; the right side is simply a restatement of the left. The second one expresses a functional relationship, showing the dependence of the value of one variable on
that of another. The third expression is a constraint equation and can be solved for the values of x. The right side of the expression equals the left in all three instances, but not in the same
sense. In the fourth equation, the sum does not actually equal 2; 2 is the limit the sum approximates ever more closely as the value of n increases. We can add to the confusion by noting that many
computer programming languages permit expressions of the sort x=x+1 which is nonsensical mathematically, but in the context of programming can have the perfectly reasonable interpretation “give the
variable x a new value, namely one more than its current value.” The multiple uses of the equals sign are an unfortunate example of lack of precision in mathematical notation. Presumably, this
ambiguity causes mathematicians no difficulties, because they know immediately from the context what is meant, but it is quite possible that failure to make a clear distinction among these uses may
cause problems for students just learning algebra. Kieran (1989) contends that many students learn to interpret = as “and the answer is.” Although the operation of multiplication is called for
several times in the first three equations above, an explicit symbol representing multiplication is not used. This is consistent with the familiar convention of representing multiplication by
juxtaposition. A recent finding of difficulties
that some people have in interpreting very simple equations involving multiplication raises a question as to the advisability of this convention. It is apparent that there is some degree of
arbitrariness about many of the symbols that have been chosen to represent mathematical operations. Surely, it would have made very little difference to the development of mathematics if – had been
used to represent addition, and + subtraction, or if multiplication had been denoted by ÷ and division by ×. How these particular assignments of symbols to operations came about is not fully known.
One conjecture about ÷ is that the dots above and below the line signified the placement of dividend and divisor in the fraction a/b; a ÷ b has an obvious advantage over a fraction with a written above b for purposes of
typesetting. That the selection of a representational convention is not completely arbitrary, however, is illustrated by the fates of the notational systems proposed by Newton and Leibniz to
represent the differential calculus. Newton’s notation was inferior to that of Leibniz in at least two respects: It did not explicitly identify the independent variable involved in the functional
relationship, and it was not suitable for representing derivatives of higher degree; for these reasons, it was less conducive to the solving of differential equations. It has been claimed that the
progress of mathematics in England was delayed relative to advances made elsewhere in Europe by more than a century as a consequence of the failure of the British to see or acknowledge the
superiority of Leibniz’s notation (Gleick, 2004; Jourdain, 1913/1956; Turnbull, 1929).
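Before leaving the subject of operator symbols, the programming reading of the equals sign mentioned a few paragraphs back is easy to see in a couple of lines of Python (my illustration, not the author's):

x = 5              # "=" here is assignment: give x the value 5
x = x + 1          # sensible as an instruction, nonsensical as an equation
print(x)           # 6
print(x == x + 1)  # False: "==" asks the mathematical question of equality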
☐☐ Functions and Function Graphs
The concept of a mathematical function emerged relatively recently in the history of mathematics, but having emerged, its importance is widely acknowledged. Dubinsky
(1994a) refers to it as perhaps “one of the most important ideas that students must learn in all of their studies of mathematics” (p. 235). Kasner and Newman (1940) make the even more extravagant
claim that the word function “probably expresses the most important idea in the whole history of mathematics” (p. 5). The functional notation, y = f(x), indicates that the value of the variable y
depends on—is a function of—the value of the variable x; similarly y = f(x,z) indicates that the value of the variable y depends jointly on the values of the variables x and z. Typically in algebra
functional relationships are represented without the use of the f() notation; one simply writes y = 3x² to represent y as a particular function of x. An instance in which the f() notation is used to
advantage is that of representing probabilities. The convention of using p to represent probabilities in
19th-century textbooks led to difficulties that are avoided by the current convention of representing probabilities as functions of one or two arguments: p(A) or p(A|B), the latter meaning the
probability of A given B. The concept of a function is closely associated with analytic geometry invented by Descartes (and Fermat, whose work preceded Descartes’s but was published later). The
application of algebraic symbols to geometry permits one to think of figures in a “cartesian space” in terms of equations that define the value of one coordinate (y) as a function of the value of the
other coordinate (x). The graphical representation of functional relationships is used so extensively in mathematics today that we are likely to overlook how powerful it is and how long it took for
the idea to emerge and to be adopted widely. By blending geometry and algebra, analytic geometry makes it possible to solve algebraic problems geometrically and geometric problems algebraically.
Stewart (1990) calls the cartesian coordinates of analytic geometry “a trick to convert geometry into algebra” (p. 41). Some of the groundwork for the development of analytic geometry was done in the
14th century by French polymath and cleric Nicole (sometimes Nicholae) Oresme. His work spurred considerable interest in the graphical representation of functions, which he referred to as the
“latitude of forms” (Boyer & Merzbach, 1991; see also Boyer, 1959; Kaput, 1994). But it is Descartes who is generally credited with clearly articulating the correspondence between plane curves and
equations in two variables, and thereby coupling geometry with algebra in a way that had not been done before. A function graph is a plot of the values of an independent variable for all values of a
dependent variable over some range of the latter. It provides explicitly and graphically information contained implicitly in a functional equation. Most commonly, function graphs are drawn on a
cartesian plane with linear dimensions, which is to say the dimensions are at right angles to each other and are divided into units of equal length. For some purposes, it is more convenient to use
dimensions that are not at right angles or that are divided unequally (e.g., in logarithmic or other nonlinear units). The use of graphs did not become widespread quickly after the possibility was
initially noted; in fact, it took a rather long time for their use to become common. According to Wainer (1992), the only European journal that contained any graphs during the entire 18th century was
the Mémoires de l’Académie Royal des Sciences et Belle-Lettres, and it contained very few. British theologian and philosopher Joseph Priestley, the discoverer of oxygen, “found it necessary [in 1765]
to fill several pages with
explanation in order to justify, as a natural and reasonable procedure, representing time by a line in his charts” (p. 12). In addition to the connotation on which the foregoing has focused, the term
graph has a quite different connotation as well. It sometimes refers to a set of nodes connected by lines. The nodes represent the elements of some set, and the lines show how the elements are
connected. Graphs of this type differ from function graphs in that they are nonmetric and represent connectedness in a topological sense. Figure 7.1 illustrates the distinction between function
graphs and nonmetric graphs, showing the graph of the function y = x/2 + 1 on the left and a graph like one that might be used to represent the semifinals and finals of an elimination tournament on
the right.
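As a small illustration of what a function graph contains (mine, not the author's), the following Python sketch tabulates (x, f(x)) pairs for the function y = x/2 + 1 shown on the left of Figure 7.1; plotting software simply marks such pairs as points on cartesian axes:

def f(x):
    # The function graphed on the left of Figure 7.1
    return x / 2 + 1

points = [(x, f(x)) for x in range(-2, 3)]
print(points)  # [(-2, 0.0), (-1, 0.5), (0, 1.0), (1, 1.5), (2, 2.0)]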
☐☐ Equations
The power of equations can seem magical. Like the brooms created by the Sorcerer’s Apprentice, they can take on a power and life of their own, giving birth to consequences that their
creator did not expect, cannot control, and may even find repugnant. (Wilczek, 2003, p. 132)
An equation is a mathematical expression composed of two parts separated by an equals (=) sign; it expresses what would appear to be the most straightforward of all relationships—equality. This
equals that; the part to the left of the sign is said to equal, or be equivalent to, the part to the
Figure 7.1 A graph of the function y = x/2 + 1 (left) and a nonmetric graph like one that might be used to represent an elimination tournament.
right of it. What could be simpler and more mundane? And yet the concept is an enormously powerful one. If one has a valid equation—valid in the sense that the two parts are indeed equal or
equivalent—then the parts will remain equal or equivalent no matter how many legitimate mathematical operations one performs on them, so long as one always performs the same operation on both parts.
Examples of simplifications in representations in the interest of economy of expression are easy to find in any field with a history. In mathematics the invention of symbolic algebra represented an
enormous economy of expression over the syncopated algebra that preceded it and the rhetorical algebra that preceded that. Consider how the Greeks expressed the idea of a constant ratio in the time
of Plato: Whenever among three numbers, whether solids or any other dimension, there is a mean, so that the mean is to the last term as the first term is to the mean, and when (therefore) the mean is
to the first term as the last term is to the mean, then, the mean becoming both first and last, and the first and last both becoming means, all things will of necessity come to be the same, and being
the same, all will be one. (Bell, 1946/1991, p. 170)
Today, we would express the relationship between the three terms, letting x, y, and z stand for first, second, and last-mentioned numbers, respectively, as x/y = y/z. Tarski (1941/1956) illustrates
that it is possible to represent in a relatively terse mathematical equation relationships that would require many words to describe by contrasting the way an elementary theorem of arithmetic is
represented using the conventional notation of algebra: For any numbers x and y, x³ – y³ = (x – y)(x² + xy + y²) with the way the same theorem might be represented without the use of this notation:
The difference of the third powers of any two numbers is equal to the product of the difference of these numbers and a sum of three terms, the first of which is the square of the first number, the
second the product of the two numbers, and the third the square of the second number. (p. 1908)
The importance of the emergence of an effective algebraic symbolism for the further development of mathematics has been stressed by Bell (1945/1992):
Unless elementary algebra had become “a purely symbolical science” by the end of the sixteenth century, it seems unlikely that analytic geometry, the differential and integral calculus, the theory of
probability, the theory of numbers, and dynamics could have taken root and flourished as they did in the seventeenth century. As modern mathematics stems from these creations of Descartes, Newton and
Leibniz, Pascal, Fermat, and Galileo, it may not be too much to claim that the perfection of algebraic symbolism was a major contributor to the unprecedented speed with which mathematics developed
after publication of Descartes’ geometry in 1637. (p. 123)
Even to the casual observer with little knowledge of mathematics, it is obvious that equations can differ greatly in complexity. What is perhaps less obvious is that even simple equations can differ
qualitatively with respect to the type of equality they connote. We have already noted the distinction among tautologies, functional relationships, and constraint equations. More subtle distinctions
can also be made. Both 2 + 3 = 5 and (x + y)² = x² + 2xy + y² are tautologies, but of rather different kinds. The equation 2 + 3 = 5 states a relationship among specific quantities; it asserts that
adding the quantity 3 to the quantity 2 yields the quantity 5, always, everywhere, without exception (assuming, of course, use of the decimal number system, or at least one with a radix greater than
5). In contrast, (x + y)² = x² + 2xy + y² says no matter what quantities x and y represent, one will get the same result if one takes the sum of the square of x, the square of y, and twice their
product as one will if one adds x and y and takes the square of their sum. Now consider the familiar C/D = 3.14159 …, where C and D represent the circumference and diameter of the same circle
(measured in the same units). This says that the ratio of a circle’s circumference to its diameter is the same—a constant—for all circles. As already noted, this particular constant is so ubiquitous
and useful in mathematics that it is represented by its own universally recognized symbol, π, mention of which prompts notice of another type of equation, namely, a definitional one, illustrated by π
= 3.14159… and e = 2.71828…. A definitional equation does no more than give a name (π and e in the examples) to a value. The equation h² = a² + b², where h represents the hypotenuse of a right
triangle and a and b represent the other two sides—the celebrated Pythagorean theorem—expresses another universal geometric truth; unlike C/D = π, however, there are no constants involved. What is
asserted is that the length of the hypotenuse of a right triangle is a function of the lengths of the other sides forming the triangle, which is to say that if one knows the lengths of the sides
forming the right angle, one can infer the length of the remaining side, and this is true for all right triangles. This type of functional relationship is represented by many equations familiar from
high school geometry and algebra. Examples:
A = LW, where A, L, and W represent, respectively, area, length, and width of a rectangle
D = ST, where D, S, and T represent, respectively, distance, speed, and time
F = MA, where F, M, and A represent, respectively, force, mass, and acceleration

Although the types of equations mentioned so far find many applications to the world of real processes and physical entities, none of them
depends on real-world observations or measurements for its authenticity. The concepts and relationships involved are matters of definition and can be treated in the abstract without reference to the
world of tangible objects and events. C/D = π whether or not the world contains any truly circular objects, and D = ST even in a world where no one goes anywhere. On the other hand, equations that
have been found to be descriptive of physical relationships have enormous practical value for many purposes. The use of equations is so common today that it is easy to forget that they were invented
only a few hundred years ago. The importance of this development to the advance of mathematics is difficult to overstate.
☐☐ Representational Systems as Aids to Reasoning Most thinking in which human beings engage, even in highly mathematical fields like physics or economics, is not rigorous in the sense in which
logicians and pure mathematicians use that term. Words, equations and diagrams are not just a machinery to guarantee that our conclusions follow from their premises. In their everyday use, their real
importance lies in the aid they give us in reaching the conclusions in the first place. (Simon, 1995, p. xi)
One of the major benefits that the use of mathematical symbols provides is the economizing of thought that it makes possible. Jourdain (1913/1956) makes this point and notes that mathematical
symbology permits us to represent many observations “in a convenient form and in a little space,” to remember or carry about “two or three little formulae instead of fat books full of details” (p.
6). Again, “it is important to realize that the long and strenuous work of the most gifted minds was necessary to provide us with simple and expressive notation which, in nearly all parts of
mathematics, enables even the less gifted of us to reproduce theorems which needed the
greatest genius to discover. Each improvement in notation seems, to the uninitiated, but a small thing: and yet, in a calculation, the pen sometimes seems to be more intelligent than the user” (p.
13). Jourdain stresses especially the importance of the representational systems of analytical geometry and the infinitesimal calculus, and argues that their ability to make thinking more efficient
is responsible, to no small degree, for their usefulness as instruments for solving geometrical and physical problems. The secret of these systems, Jourdain argues, is that they make it possible to
solve difficult problems almost mechanically. Diagrams are often used to elucidate mathematical proofs. Understanding Cantor's proof that the set of real numbers is uncountable, for example, is
facilitated by the visual representation of a list of numbers, as shown in Chapter 5. Cantor’s argument can be made without the use of the representation, because the logic does not depend on it and
can be expressed without it. In general, it is probably not correct to say that such elucidating representations constitute proofs. Moreover, diagrams can be misleading. Ogilvy (1984) cautions
against their use in proof making: “If we set out to prove something in mathematics, we must prove it. We are not allowed to say, ‘Well, it’s so because I can see it in the diagram’” (p. 90). That
diagrams can facilitate comprehension of an argument, however, is beyond doubt. The power of representational systems as vehicles of thought is seen not only in mathematics. Natural language, in both
its spoken and written forms, is of course the most obvious example of a representational system without which thinking would be very different indeed. But there are numerous examples of systems that
have evolved to meet special needs—music notation, logic diagrams, chemical transformation equations, blueprints, circuit diagrams, geopolitical maps, and so forth. Such systems are used to great
advantage in countless contexts. The details of any representational system must be constrained by what the system is intended to represent, by the representational medium, and by the capabilities
and limitations of human beings as information processors. Symbols must correspond, in some fashion, to what they symbolize; they must be producible with the media at one’s disposal (stone and
chisel, clay and stylus, papyrus or paper and pen), and they must be discriminable and—at least potentially—interpretable by human observers. But within these broad constraints, there is much
latitude for arbitrariness. We may ask with respect to any representational system why it is what it is and not something else. Why, for example, do we represent numerical concepts the way we do? How
did the system most commonly used for representing Western music come to be what it is? Where did the conventions that rule the production of geopolitical maps come from, and why are they what they are?
The representation of numerical and mathematical concepts is of special interest in part because of the enormous range of applicability of these concepts and in part because of the obvious importance
of symbols and notational conventions for the doing of mathematics. A better understanding of the role of representation in mathematics is one avenue to a better understanding of mathematical
thinking and, quite possibly, to important insights into the nature of thinking more generally as well.
☐☐ Representations as Aids to "Seeing" Relationships According to a well-known theorem in number theory,

\[ \sum_{k=1}^{n} (2k - 1) = n^2, \]

which is to say that the sum of the first n odd positive integers is equal to n². Thus,

1 = 1 = 1²
1 + 3 = 4 = 2²
1 + 3 + 5 = 9 = 3²
1 + 3 + 5 + 7 = 16 = 4²

and so on. When one first learns of this
theorem, it may seem to involve a curious relationship. Why should there be a connection between odd numbers and squares? How can one be sure the relationship holds in general? A simple pictorial
representation dispels the mystery and makes the reason for the relationship perfectly clear. If we represent n² with an n × n square arrangement of dots, we see immediately that in order to go from this arrangement to an (n + 1) × (n + 1) arrangement, representing (n + 1)², we need to add an odd number of dots, in particular 2n + 1 dots, to the picture. Starting at the beginning, we represent 1² with a single dot:
•

In order to go from this arrangement to an arrangement representing 2², we have to add three dots:

• •
• •

To go from this to an arrangement representing 3², we have to add five dots:

• • •
• • •
• • •

and so on: the n × n square array of n² dots represents the sum 1 + 3 + 5 + 7 + … + (2n − 1).
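Readers who like to see such a pattern checked by brute computation as well as by picture can do so in a few lines; the following minimal Python sketch simply counts the dots.

```python
# Check numerically that the sum of the first n odd numbers is n squared,
# and that growing an n-by-n square of dots to (n+1)-by-(n+1) takes
# exactly 2n + 1 additional dots.
for n in range(1, 11):
    odd_sum = sum(2 * k - 1 for k in range(1, n + 1))
    added_dots = (n + 1) ** 2 - n ** 2
    assert odd_sum == n ** 2 and added_dots == 2 * n + 1
    print(f"n = {n:2d}: sum of first {n} odd numbers = {odd_sum:3d} = {n}^2; "
          f"the next square needs {added_dots} more dots")
```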
In general, in order to expand an n × n arrangement of n² dots into an (n + 1) × (n + 1) arrangement of (n + 1)² dots, one must add a row and a column with n + 1 dots in each, but inasmuch as one dot is common to the row and column, the total number of added dots is 2(n + 1) – 1, or 2n + 1. The relationship can be demonstrated, of course, without the use of a diagram. If

\[ \sum_{k=1}^{n} (2k - 1) = n^2, \]

then

\[ \sum_{k=1}^{n+1} (2k - 1) = \sum_{k=1}^{n} (2k - 1) + (2n + 1) = n^2 + (2n + 1) = (n + 1)^2. \]

Given this relationship, if \( \sum_{k=1}^{n} (2k - 1) = n^2 \) holds for any value of n, it holds for n + 1 and, by induction, for all subsequent integer values of n. The equation holds for n = 1; therefore, it
holds for all positive integer values of n. For most of us, I suspect, the diagram provides an intuitively more compelling demonstration of why the sum of successive odd positive integers beginning
with 1 is a square than does the series of equations. Consider the following elegant equality:

\[ \sum_{k=1}^{n} k^3 = \left( \sum_{k=1}^{n} k \right)^2 \qquad (7.1) \]

which says that the sum of the cubes of the first n integers equals the square of the sum of those integers: 1³ + 2³ + 3³ + … + n³ = (1 + 2 + 3 + … + n)². Again, it is not immediately obvious from the
equation why this relationship should hold. We see from Table 7.1 that it does indeed hold, at least for the values considered. This is, of course, no proof that the relationship holds in general,
but simply a demonstration that it holds for a few values of n, which is enough to make us wonder whether it might hold indefinitely. But consider again the sequence of odd numbers 1, 3, 5, 7, 9, 11,
13, 15, 17, 19, 21, 23, 25, 27, 29. We have seen already that the sum of the first n odd integers is equal to n². Greek mathematician Nicomachus of Gerasa, who lived around the end of the first century AD, noticed that when the odd integers are grouped so that the jth group contains j integers—the first one, the second two, the third three, and so on—the sum of the integers in each group equals the cube of the number of integers in that group. Thus, the integers in the first five groups, (1) (3, 5) (7, 9, 11) (13, 15, 17, 19) (21, 23, 25, 27, 29), sum to 1, 8, 27, 64, and 125, or 1³, 2³, 3³, 4³, and 5³, respectively.
Table 7.1. Showing the Sum of the Cubes of the First n Integers Equals the Square of the Sum of Those Integers
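A few lines of code reproduce, for small n, the kind of evidence the table presents (a demonstration, not a proof) and can check Nicomachus's grouping at the same time.

```python
# Compare the sum of the first n cubes with the square of the sum of the
# first n integers, and check that the jth group of j consecutive odd
# numbers sums to j cubed.
print(f"{'n':>2} {'sum of cubes':>13} {'(1+2+...+n)^2':>14}")
for n in range(1, 7):
    sum_of_cubes = sum(k ** 3 for k in range(1, n + 1))
    square_of_sum = sum(range(1, n + 1)) ** 2
    assert sum_of_cubes == square_of_sum
    print(f"{n:>2} {sum_of_cubes:>13} {square_of_sum:>14}")

odds = [2 * k - 1 for k in range(1, 100)]   # 1, 3, 5, 7, ...
start = 0
for j in range(1, 6):
    group = odds[start:start + j]           # the jth group holds j odd numbers
    assert sum(group) == j ** 3
    print(f"group {j}: {group} sums to {j ** 3} = {j}^3")
    start += j
```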
Is there a way to represent the sequence of odd numbers that will make it apparent why the rule noted by Nicomachus holds? Suppose we were to replace each of the numbers in each group with the mean
of the numbers in that group. By definition, the sum of n numbers equals n times the mean of those numbers, so we know that the sums of the numbers in the groups will not be affected by this
substitution.

(1) (3, 5) (7, 9, 11) (13, 15, 17, 19) (21, 23, 25, 27, 29) …
(1) (4, 4) (9, 9, 9) (16, 16, 16, 16) (25, 25, 25, 25, 25) …

What one notices immediately is that each of the numbers in this new arrangement is a square, and in particular each of the numbers in the jth group is j². So the arrangement can be represented as

(1²) (2², 2²) (3², 3², 3²) (4², 4², 4², 4²) (5², 5², 5², 5², 5²) …

And, inasmuch as there are j squares in the jth group, we can represent the sums as

(1 × 1²) (2 × 2²) (3 × 3²) (4 × 4²) (5 × 5²) …
or equivalently, (1³) (2³) (3³) (4³) (5³) … Again, not a proof that the relationship holds in general, but the representation helps one see why it might. There are other intriguing relationships in
the sequence of odd numbers (see Backman, 2007), but to explore them would take us too far afield from the main point of this discussion, which is to illustrate that the “seeing” of a mathematical
relationship may be facilitated by the way in which the relationship is represented. This focus on the equivalence between the sum of the first n cubes and the square of the sum of the first n whole
numbers prompts mention of a report by Tocquet (1961, p. 16, footnote 1) of the empirical discovery by 81-year-old “lightning calculator” Jacques Inaudi (about whom more is in Chapter 10) that the
sum of the first n cubes could be found with the formula

\[ S = \left( \frac{n(n + 1)}{2} \right)^2. \]

Inasmuch as n(n + 1)/2 is the sum of the first n integers, this formula is equivalent to the right-hand term in Equation (7.1). The ancient Greeks, even when solving problems that would be
classified as algebraic today, tended to think in geometric terms. In his discussion of algebraic problems in the Elements, for example, Euclid represents numbers as line segments. Although this may
have been constraining in some respects, it also may have helped them see the reason for certain relationships more clearly than if they had thought strictly in algebraic terms. For example,
comprehension of the distributive law of multiplication, according to which a(b + c + d) = ab + ac + ad, would have been easy for the Greek scholar who would have interpreted it to mean that the area
of the rectangle on a and the sum of line segments b, c, and d is equal to the sum of the areas of the rectangles formed by a and each of the line segments individually, as shown in Figure 7.2.
Figure 7.2 Illustrating geometrically the distributive law of multiplication, that is, that a(b + c + d) = ab + ac + ad.
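The distributive identity just illustrated, and the expansion of (a + b)² taken up next, can also be confirmed symbolically; the small check below is an illustrative aside that uses the sympy library to expand each left-hand side.

```python
# Symbolic confirmation of the identities illustrated geometrically in
# Figures 7.2 and 7.3 (requires the sympy library).
from sympy import expand, symbols

a, b, c, d = symbols("a b c d")

# Distributive law: a(b + c + d) = ab + ac + ad
assert expand(a * (b + c + d)) == a * b + a * c + a * d

# Square of a binomial: (a + b)^2 = a^2 + 2ab + b^2
assert expand((a + b) ** 2) == a ** 2 + 2 * a * b + b ** 2

print("Both identities hold symbolically.")
```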
Figure 7.3 Illustrating geometrically that (a + b)² = a² + 2ab + b².
Similarly, the relationship (a + b)² = a² + 2ab + b² becomes obvious when one thinks in terms of a geometrical representation of it, as shown in Figure 7.3. Other algebraic relationships could be
represented geometrically in a similar way. The following problem also illustrates how representations can be useful, not only in aiding problem solving, but in helping to explain why a solution is a
solution and in clarifying relationships that otherwise might be difficult to see. There are two containers, A and P. Container A has in it 10 ounces of water from the Atlantic Ocean; P contains 10
ounces of water from the Pacific. Suppose that 2 ounces of Atlantic water is removed from container A and added to the contents of container P, and then, after the water in container P is thoroughly
mixed, 2 ounces of the mixture is removed and added to the contents of container A. Which container now has the greater amount of foreign water, the Atlantic water being foreign to P and the Pacific
to A?
The answer is that both have the same amount. One way to demonstrate that this is true is to track the interchange of liquids step by step. After 2 ounces of the Atlantic water from A is added to the
10 ounces of the Pacific in P, P contains 12 ounces of water in the proportion 10 parts Pacific to 2 parts Atlantic. When 2 ounces of this thoroughly mixed mixture is transferred to A, P is left with
2/12 × 10, or 1.67, ounces of foreign (Atlantic) water. Container A, on the other hand, receives 2 ounces mixed in the proportion 10 parts Pacific to 2 parts Atlantic, so the amount of foreign
(Pacific) water that goes into A will be 10/12 × 2, or 1.67, ounces. Thus, each container ends up with the same amount of foreign water. Another way of viewing the problem makes the answer obvious.
Suppose we represent the situation, after the exchanges have been made, with a 2 × 2 table in which the rows represent the two containers and the columns the two liquids, as shown in the left-most
table in Figure 7.4.
Figure 7.4 A way of representing the two-containers problem.
What we want to show in the cells of this table is how much of each liquid is in each container. We can fill in the row and column totals right away (as shown in the middle table of Figure 7.4),
because we know that, assuming no liquid was lost in the exchanges, we end up with the same amount of liquid of both types (Atlantic and Pacific) as we began with (10 ounces), and because we took 2
ounces out of container A and then put 2 back, we also end up with the same amount of liquid (10 ounces) in each container. Now suppose we had not done the calculation to discover how much Pacific
water ended up in A. We know that it is some amount, but not how much, so we represent the amount in our table by X, as also shown in middle table of Figure 7.4. It should be clear at this point that
all the other cells of the table are determined: If there are X ounces of Pacific water in A, A must contain 10 – X ounces of the Atlantic; if X ounces of Pacific water is in A, that means the
remaining 10 – X ounces of the Pacific must be in P; and if P has 10 – X ounces of Pacific water, it must have X ounces of the Atlantic. In other words, given that the total amounts of Atlantic water
and Pacific water do not change, and that both containers end up with the same amount of liquid that they had initially, it follows that whatever amount of the Atlantic is missing from A (and
therefore is in P) must have been replaced by an equal amount of the Pacific (which is missing from P), as shown in the rightmost table of Figure 7.4. This example illustrates that problems often can
be approached in radically different ways. Both of the solutions given above are correct, but they differ in some important respects. Most of the people I have watched try to solve this problem have
taken the first approach of tracking the results of the individual transactions. This approach, although tedious, produces a solution that suffices to answer the specific question that was asked, but
it does not generalize readily to related cases. Moreover, the answer that is obtained seems to lack intuitive force; one’s belief in its accuracy rests on one’s confidence that the sequence of
calculations was performed without error.
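The bookkeeping can also be handed to a few lines of code. The sketch below is illustrative only (the transfer amounts after the first two exchanges are invented for the demonstration); it tracks each liquid through the exchanges and confirms that the two foreign amounts agree whenever both containers end up holding their original 10 ounces.

```python
# Track the Atlantic and Pacific contents of the two containers through a
# sequence of transfers of thoroughly mixed liquid.
def transfer(source, destination, amount):
    total = source["atlantic"] + source["pacific"]
    fraction = amount / total
    for kind in ("atlantic", "pacific"):
        moved = source[kind] * fraction   # thoroughly mixed, so move proportionally
        source[kind] -= moved
        destination[kind] += moved

A = {"atlantic": 10.0, "pacific": 0.0}    # container A: 10 oz of Atlantic water
P = {"atlantic": 0.0, "pacific": 10.0}    # container P: 10 oz of Pacific water

transfer(A, P, 2.0)                       # 2 oz of Atlantic water into P
transfer(P, A, 2.0)                       # 2 oz of the mixture back into A
print(f"Pacific water in A: {A['pacific']:.4f} oz; "
      f"Atlantic water in P: {P['atlantic']:.4f} oz")

# Further, arbitrary transfers: as long as the net flow returns each
# container to 10 oz, the two foreign amounts remain equal.
for source, destination, amount in [(A, P, 3.7), (P, A, 1.2), (A, P, 0.5), (P, A, 3.0)]:
    transfer(source, destination, amount)
assert abs(A["atlantic"] + A["pacific"] - 10.0) < 1e-9
assert abs(A["pacific"] - P["atlantic"]) < 1e-9
print(f"After more transfers, Pacific in A: {A['pacific']:.4f} oz; "
      f"Atlantic in P: {P['atlantic']:.4f} oz")
```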
The second approach produces a solution that has considerable generality. Viewing the problem in this way makes it clear that the “equal amounts of foreign water” answer holds independently of how
many transfers are made, what amounts are involved in each transfer, or how thoroughly the mixing is done, provided each container holds the same amount of liquid in the end as in the beginning. This
representation of the problem solution also is intuitively compelling; one sees the relationships involved and why the answer has to be what it is. The drawback to this approach is that people
typically do not think initially to look at the problem this way; something of an insight seems to be required to put one on this track. Here is a third example of how a simple representation can
help to make clear a relationship that some people may have difficulty seeing without it. The problem is as follows: One morning, exactly at sunrise, a monk began to climb a mountain. A narrow path,
a foot or two wide, spiraled around the mountain to a temple at the summit. The monk ascended at varying speeds, stopping many times along the way to rest. He reached the temple shortly before
sunset. After several days at the temple, he began his journey back along the same path, starting at sunrise and again walking at variable speeds with many pauses along the way. His average speed
descending was, of course, greater than his average climbing speed. Does there exist a particular spot along the path that the monk will occupy on both trips at precisely the same time of day?
(Adapted from Adams, 1974)
Some people have trouble with this problem, imagining first the upward journey and where the monk would be at different times of the day and then the downward trek and how that might progress. The
difficulty seems to be in somehow getting the two journeys into the same frame of reference. A natural way to represent the situation diagrammatically is with a graph showing position (say, distance
from the bottom of the mountain) as a function of time of day. Thus, the upward and downward journeys might be represented as shown in Figure 7.5. This representation makes it clear that there indeed
will be some spot that the monk will occupy at the same time of day. The spot could be at any of many places, depending on the relative speeds of the ascending and descending journeys, but there
obviously must be at least one such spot. (There could be more than one, if he did any backtracking on either trip.) There is no way to draw one line from the bottom to the top of the graph and
another from the top to the bottom without having the lines cross. This story illustrates the power of a graphical representation to facilitate the understanding of a relationship that might be
difficult to
see otherwise.

Figure 7.5 Illustrating that there must be at least one spot along the path that one takes going up and down a mountain on different days that one will occupy on both trips at precisely the same time of day, assuming one starts out at the same time both days. The figure plots place on the mountain (bottom to top) against time of day beginning at sunrise, with one curve for the journey up and one for the journey down.

The problem also presents another opportunity to note the effectiveness of the heuristic of finding an analogous problem that is easier to solve, in the hope that solving the easier
problem will provide some useful hints regarding the solution to the more difficult one. A situation that is analogous to the one just considered is that of two monks, one starting at the bottom and
climbing to the top and the other starting at the top and descending to the bottom, both beginning at the same time and completing their journeys on the same day. In this case, it is intuitively
obvious that the two monks’ paths must cross, so they will be at the same place at some point in the day. To accept this answer as appropriate also for the original problem, one must, of course, be
convinced that the two situations are indeed analogous. Most readers, I suspect, will see the situations as sufficiently similar in the right ways to support the conclusion that what holds in the one
case holds also in the other. What is likely to be the more serious limitation of the heuristic of finding analogous but easier problems is the difficulty most of us may have in coming up with them
when we need them.
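For anyone who wants to see the crossing argument played out numerically rather than graphically, the following toy computation, with entirely made-up speed profiles for the two journeys, locates a time at which the ascending and descending positions coincide.

```python
# Toy version of the monk problem: position on the mountain (0 = bottom,
# 1 = top) as a function of the fraction t of the day, for made-up,
# uneven speed profiles.
import math

def position_up(t):
    # Climbing with an uneven pace; reaches the top by the end of the day.
    return min(1.0, t + 0.15 * math.sin(6 * t) ** 2)

def position_down(t):
    # Descending faster, also unevenly; reaches the bottom before the day ends.
    return max(0.0, 1.0 - 1.6 * t + 0.1 * math.sin(9 * t) ** 2)

steps = 10_000
for i in range(steps + 1):
    t = i / steps
    if position_up(t) >= position_down(t):
        print(f"The two journeys occupy the same spot at about t = {t:.3f} "
              f"of the day, roughly {position_up(t):.3f} of the way up.")
        break
```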
☐☐ Representations in Problem Solving Whatever the details, most would agree that some idea of representation seems to be at the heart of understanding problem-solving processes. (Kaput, 1985, p.
Numerous studies have shown the important role that representations play in problem solving, in mathematics and other domains (Hayes & Simon, 1974; Koedinger & Anderson, 1995; Larkin, 1980, 1983;
Larkin & Simon, 1987; Mayer, 1983; Paige & Simon, 1966). Books giving “how to” advice on problem solving almost invariably stress the finding of a useful representation of the problem as an
indispensable early step in the process—and the finding of a different representation if the one in hand does not appear to be leading to a solution. Studies of the differences in the performance of
expert and novice problem solvers highlight the greater use by experts than by novices of qualitative representations (e.g., diagrammatic sketches) to ensure they understand a problem and to help
plan an approach to it before rushing ahead to attempt to compute or deduce a solution. Heller and Hungate (1985) characterize the difference between expert and novice problem solvers in this regard
this way: “Understanding is viewed as a process of creating a representation of the problem. This representation mediates between the problem text and its solution, guiding expert human and computer
systems in the selection of methods for solving problems. Novices tend to be quite deficient with respect to understanding or perceiving problems in terms of fundamental principles or concepts. They
cannot or do not construct problem representations that are helpful in achieving solutions” (p. 89). In a meta-analysis of a large number of studies of mathematical problem solving by children in
grades K through 4, Hembree (1991; also summarized in Hembree & Marsh, 1993) found that, of the various techniques for problem solving on which instruction was focused, the most pronounced effect on
performance was obtained from the development of skill with diagrams. Hembree and Marsh note, however, that explicit training appeared to be essential inasmuch as performance was not improved as a
result of practice without direct instruction. Sometimes the right representation can greatly reduce the amount of cognitive effort that solving a problem requires. A representation can, for example,
transform what is otherwise a difficult cognitive problem into a problem, the solution of which can be obtained on a perceptual basis. A compelling illustration of this fact has been provided by
Perkins (2000). Consider a two-person game in which the players alternately select a number between 1 and 9 with the single constraint that one cannot select a number that has already been selected
by either player. The objective of each player is to be the first to select three numbers that sum to 15 (not necessarily in three consecutive plays). A little experimentation will convince one that
this is not a trivially easy game to play. One must keep in mind not only the digits one has already selected and their sum, but also the digits one’s opponent has picked and their running sum.
Suppose, for example, that one has already selected 7 and 2
and it is one’s turn to play. One would like to pick 6, to bring the sum to 15, but one’s opponent has already selected 6 along with 4. So, inasmuch as one cannot win on this play, the best one can
do is to select 5, thereby blocking one’s opponent from winning on the next play. In short, to play this game, one must carry quite a bit of information along in one’s head as the game proceeds. One
could reduce the memory load of this game, of course, by writing down the digits 1 to 9 and crossing them off one by one as they are selected. And one could also note on paper the current sum of
one’s own already selected digits and that of those selected by one’s opponent. Better yet, as Perkins points out, the game can be represented by a “magic square”—a 3 × 3 matrix in which the numbers
in each row, each column, and both diagonals add to 15. With this representation, the numbers game is transformed into tic-tac-toe. The player need only select numbers that will complete a row,
column, or diagonal, while blocking one’s opponent from doing so. There is no need now to remember selected digits (one simply crosses them out on the matrix as they are selected) and no need to keep
track of running sums. Students often have difficulty with mathematical word problems even when they are able to do the computations that the solutions of the problems require, if given the
appropriate computational formulas (Hegarty, Mayer, & Green, 1992; Hegarty, Mayer, & Monk, 1995; Lewis & Mayer, 1987). Unquestionably, the ability to solve such problems can be facilitated by
representing the relationships between variables in diagrammatic form. The educational challenge is to find ways to teach students how to construct effective diagrams.
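To return to Perkins's example for a moment, the equivalence between the pick-numbers-to-15 game and tic-tac-toe is easy to verify mechanically; the short sketch below (illustrative only) checks that every line of the familiar 3 × 3 magic square sums to 15 and that these eight lines are exactly the triples of distinct digits from 1 to 9 that sum to 15.

```python
# The classic 3-by-3 magic square: every row, column, and diagonal sums to 15,
# so selecting three numbers that sum to 15 is selecting a tic-tac-toe line.
from itertools import combinations

square = [
    [2, 7, 6],
    [9, 5, 1],
    [4, 3, 8],
]

lines = list(square)                                        # rows
lines += [[row[i] for row in square] for i in range(3)]     # columns
lines += [[square[i][i] for i in range(3)],                 # main diagonal
          [square[i][2 - i] for i in range(3)]]             # anti-diagonal

assert all(sum(line) == 15 for line in lines)

triples = {frozenset(c) for c in combinations(range(1, 10), 3) if sum(c) == 15}
assert triples == {frozenset(line) for line in lines}
print("All eight lines sum to 15, and they are exactly the 1-9 triples that do.")
```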
C H A P T E R
What is man in nature? A Nothing in comparison with the Infinite, an All in comparison with the Nothing, a mean between nothing and everything. Since he is infinitely removed from comprehending the
extremes, the end of things and their beginning are hopelessly hidden from him in an impenetrable secret; he is equally incapable of seeing the Nothing from which he was made, and the Infinite in
which he is swallowed up. (Pascal, 1670/1947, p. 200) The infinite more than anything else is what characterizes mathematics and defines its essence…. To grapple with infinity is one of the bravest
and extraordinary endeavors that human beings have ever undertaken. (Byers, 2007, p. 187)
Some concepts in mathematics constitute more of a challenge to intuition than do others. Some have been accepted only slowly over many years, proving their practical worth before being widely
acknowledged as fully legitimate. Some continue to baffle, especially when one tries to understand at a deep level what they “really mean.” Among those in this category are infinity and
infinitesimals. These concepts have given mathematicians—among others, including many of the world’s most notable philosophers—great pleasure and much trouble for a very long time. They are at once
fascinating and extraordinarily perplexing. They draw any thinking person to questions that engage the mind in exciting excursions that transcend the constraints of the physical world as we know it,
and that, in many cases, do not admit to uncontroversial answers. As Burger and Starbird (2005) put
it, “Long before we reach infinity, we must face ideas beyond our grasp” (p. 233). Byers (2007) contends that the use of infinity brought mathematics from the domain of the empirical to the domain of
the theoretical. “The use of infinity in any specific manner requires considerable mental flexibility. It requires new ways of using the intellect, a certain subtlety of thought, an ease with complex
contradictory notions. In the use of the concept of infinity there is always the danger that things will get out of control and slip into the realm of the purely subjective. That is, there is the
danger that we will not be doing mathematics anymore" (p. 121). Mathematicians and philosophers who have struggled with ideas involving the infinitely large and the infinitesimally small include
Archimedes, Aristotle, Zeno, Descartes, Pascal, Kant, Leibniz, Gauss, Cantor, as well as countless other lesser lights. Few concepts have captured more attention from inquiring minds over the
centuries. It is with some trepidation that I turn to these ideas in this chapter and the next, but their prominence in the history of mathematics makes it imperative that they be considered in any
book that purports to be about mathematical reasoning. A caveat is in order before proceeding. In what follows reference is often made to what can and cannot be done in mathematics. Generally, unless
otherwise stipulated, when I claim that something can be done, I mean that it can be done conceptually, but not necessarily actually—that one can conceive of it being done, even if one cannot do it.
I borrow an illustration from Moore (2001). Suppose there were no practical constraints on how fast one could work. It would be possible then to write an infinity of natural numbers in a minute. One
would take half a minute to write 0, one-quarter of a minute to write 1, one-eighth of a minute to write 2, and so on, halving the time to write each successive integer. One would have written an
infinity of them in a minute—obviously impossible, but conceivable. Moore suggests calling stories in which infinitely many things can be done in finite time “super-task stories.” One encounters such
stories often in discussions of infinity.
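The arithmetic behind the halving trick is easy to make explicit; the small computation below, an illustrative aside, shows that the time consumed after the first n acts of writing is 1 − (1/2)ⁿ of a minute, which approaches but never reaches the full minute.

```python
# Time consumed by the super-task: 1/2 + 1/4 + 1/8 + ... of a minute.
from fractions import Fraction

elapsed = Fraction(0)
for n in range(1, 21):
    elapsed += Fraction(1, 2 ** n)   # the nth act of writing takes 1/2**n minute
    print(f"after {n:2d} numbers written: {float(elapsed):.8f} minutes used")

assert elapsed == 1 - Fraction(1, 2 ** 20)   # partial sums stay short of one minute
```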
☐☐ Origin of the Idea of Infinity This is a most important lesson, namely that the infinite in mathematics is conceivable by means of finite tools. (Péter, 1961/1976, p. 51)
Where did the idea of infinity originate? What prompted its emergence? One possibility is that it originated in the compelling belief that every number has a successor—that there is no largest
number. Where does
this idea come from? Not from logical necessity and certainly not from the immediate experience of our senses. But logical necessity and experimental evidence are not all there is to the objective
world we call reality. Perhaps there is, as Dantzig (1930/2005) argues, “a mathematical necessity which guides observation and experiment, and of which logic is only one phase. The other phase is
that intangible, vague thing which escapes all definition, and is called intuition…. The concept of infinity is not an experiential nor a logical necessity; it is a mathematical necessity” (p. 256).
The ancient Greeks were suspicious of the concept of infinity, in part because of the unsettling effect of the famous paradoxes of Zeno of Elea that yielded, by presumably self-evidently true
assumptions and impeccable logic, startling revelations. One example: If swift Achilles gives a lumbering tortoise a head start in a race, the former will never be able to catch the latter. Another:
An arrow can never leave the archer’s bow. And, more generally: Movement (or change of any sort) is impossible. The concept is difficult to avoid completely, however, especially by minds as fertile
as those of the classical Greeks. The claimed dislike of infinity by the ancient Greeks is sometimes described in strong terms. Dantzig (1930/2005), for example, refers to the horror this concept
held for them. Whether the Greeks actually had such a horror is a contested point; Knorr (1982) calls the idea “a preposterous myth whose demise can only be welcome” (p. 143). Moore (2001) takes the
more moderate position that while the Greeks did not make infinity an important object of mathematical study, they embraced the concept in an indirect way, for example, in taking a line to be
indefinitely extendable and infinitely divisible. Whether or not the Greeks were horrified by infinity, there can be no doubt that the concept was problematic for them and continued to be so for
others for many centuries. Hopper (1938/2000) uses the phrase repugnance of the idea of infinity to describe the attitude during medieval times. Wallace (2003) contends that “nothing has caused math
more problems—historically, methodologically, metaphysically—than infinite quantities” (p. 32). Some have argued that serious grappling with the concept has caused, or at least hastened, the descent
of more than one brilliant mathematician into madness. Descartes (1644), who declined to become involved in “tiresome arguments” about the infinite, famously took the position that inasmuch as the
human mind is finite, we have no business thinking about such matters. Galileo (1638/1914) also considered both infinities and indivisibles to be incomprehensible to finite understanding (the one
because of its largeness and the other because of its smallness), but accepted that “human reason does not want to abstain from giddying itself about them.”
☐☐ Paradoxes of Infinity To the reader whose interest in mathematics is focused exclusively or primarily on its practical applications, a discussion of paradoxes of infinity may seem an unnecessary
digression. However, development of the calculus—the mathematics that is basic to the study of real-world phenomena of motion in space and change in time and without which modern technology could not
exist—involved confrontation of paradoxes of this sort. Much of the philosophical difficulty people, including mathematicians, had with the calculus when it was in its initial stages of development
had to do with what appeared to be absurdities involving infinity that cropped up in efforts to understand time and motion. It would seem, for example, that if time is infinitely divisible, when an
object passes from a state of rest to one of movement at some speed, it must pass through an infinity of speeds in a finite time. But if it spends any time at all, no matter how tiny an instant, at
each speed, how can an infinity of those instants fit within a span of finite duration? The inability to answer this question seems to force one to the conclusion that the body does not spend any
time at all at any of the speeds between rest and the final speed obtained, but this seems equally as absurd as the belief that it spends a very small amount of time at each of the infinity of
intermediate speeds. Questions of this sort challenged many of the better minds of the 17th century, including those of Galileo, Pascal, Leibniz, and Newton. Zeno’s paradoxes (about which there is
more in the next chapter) have provided entertainment and frustration to generations of mathematicians from his time to ours. They have been resolved to the satisfaction of some, and defined out of
existence by some, but they continue to bedevil others. Bertrand Russell (1926) credits them with affording, in one way or another, grounds “for almost all the theories of space and time and infinity
which have been constructed from his day to our own” (p. 183). They continue to inspire serious philosophical treatises. Adolf Grünbaum (1967), who considers Zeno’s arguments to be fallacious,
describes them as “inordinately subtle, highly instructive, and perennially provocative” (p. 40). His own book-length treatment of them cites treatments also by Henri Bergson, William James, Alfred
North Whitehead, Bertrand Russell, Gerald J. Whitrow, Hilary Putnam, and Max Black, among others. In another place (Grünbaum, 1955/2001a), he mentions also Immanuel Kant, Paul du Bois-Reymond, and
Percy W. Bridgman as among the notables who have wrestled with one or another of Zeno’s ideas. An anthology edited by Wesley Salmon (2001) contains writings on the topic by several of those just
mentioned, including Grünbaum, plus J. O. Wisdom, Paul Benacerraf, James Thomson, and G. E. L. Owen.
Many paradoxes involving infinity have been invented since the time of Zeno. Generally credited to Hilbert, the story of an imaginary hotel with an infinity of rooms has been told numerous times in a
variety of versions. The essentials are that Hotel Infinity has an infinity of rooms, and even with an infinity of guests, can always find room for more. If a new guest arrives after infinitely many
have already been booked in, the clerk simply moves the guest that is in Room 1 into Room 2, the one that was in Room 2 into Room 3, and so on, thus making a vacant room for the new guest. If an
infinity of new guests wish to sign in, the clerk moves each of the already-booked guests into a room the number of which is twice the number of his or her existing room—from Room 1 to Room 2, from 2
to 4, from 3 to 6, and so on, which leaves an infinity of odd-numbered rooms for the infinity of new guests. And so the story goes. The following paradox is described by Ross (1976). Imagine an
infinitely large urn and an infinite collection of balls numbered 1, 2, 3,…. At one minute before 12, balls 1 through 10 are placed in the urn and number 10 is removed. At ½ minute before 12, balls
11 through 20 are tossed in and number 20 is removed. At ¼ minute before 12, balls 21 through 30 go in and number 30 comes out. And so on indefinitely. How many balls will be in the urn at 12
o’clock? The answer is an infinite number, inasmuch as any ball whose number is not some multiple of 10 will still be in the urn. But suppose the in-and-out rule is changed slightly so that when
balls 1 through 10 go in, number 1 comes out; when 11 through 20 go in, number 2 comes out; when 21 through 30 go in, number 3 comes out; and so on. Now it appears that the urn is empty at 12
o’clock, because, for any ball, one can say at precisely what time it was removed from the urn. This paradox is discussed also by Paik (1983). Falk (1994) presents a different version of the problem,
which shows that there is still a paradox even if one considers only the case in which balls are tossed out in numerical sequence. An infinite line of tennis balls, numbered 1, 2, 3, …, is arranged
in front of an empty room. Half a minute before 12 o’clock, balls 1 and 2 are tossed into the room and 1 is thrown out. A quarter of a minute before 12:00, balls 3 and 4 are tossed in and 2 is tossed
out. In the next 1/8 minute balls 5 and 6 are tossed in and 3 is tossed out, and so on. The question is how many balls will the room contain at 12:00? Cogent arguments can be made for two diametrically different answers (p. 44):

First answer. There will be infinitely many balls in the room at 12:00. Argument 1: The number of balls in the room increases by one at each tossing event. Hence, for any N you suggest, I can compute an exact time (before 12:00) when the number of balls in the room exceeded N.

Second answer. There will be no ball in the room at 12:00. Argument 2: If you claim that there is any ball there, when you name it, I can tell you the exact time (before 12:00) when it was tossed out.
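A small simulation can only ever run finitely many steps, so it cannot settle the question, but it does make the tension between the two arguments vivid: after k steps the room holds exactly k balls, and yet any particular ball one names has a definite step at which it was tossed out. The sketch below follows the in-two-out-one rule just described.

```python
# Finite prefix of the tennis-ball super-task: at step k, balls 2k-1 and 2k
# are tossed in and ball k is tossed out.
def room_after(steps):
    in_room = set()
    for k in range(1, steps + 1):
        in_room.update({2 * k - 1, 2 * k})
        in_room.discard(k)
    return in_room

for steps in (10, 100, 1000):
    room = room_after(steps)
    print(f"after {steps:4d} steps: {len(room)} balls present; "
          f"lowest-numbered ball still in the room is {min(room)}")

# Any named ball n has already been removed by step n.
assert all(n not in room_after(n) for n in (1, 7, 50))
```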
To resolve the paradox, Falk uses the concept of a limit of an infinite sequence and distinguishes between two possible limits—the limit of the number of balls in the room and the limit of the
sequence of sets of balls in the room. Regarding the definition of the limit of an infinite sequence of sets, Falk cites Ross (1976, pp. 38–39) and Shmukler (1980, pp. 13–14). She argues that both
answers are legitimate, that one can interpret the original question either way, and that the answer one gets from a formal analysis depends on which interpretation is used (the number interpretation
yields infinity, whereas the set interpretation yields 0). The basic problem, from Falk’s point of view, is psychological: In the absence of an authoritative criterion to tell us which is the correct
choice, we have to decide which of the two formal interpretations of the problem we endorse. That decision is hard because both interpretations make sense and we are disturbed by the disparity
between their conclusions. We face a psychological difficulty inasmuch as we intuitively expect the limit of the cardinal numbers of the sets to equal the cardinal number of the limiting set. The
point of difficulty is that cardinality, as a function of sets, is “discontinuous at infinity in the sense that the values of the function are ever increasing but its value at the limit point is
zero” (Paik, 1983, p. 222). (Falk, 1994, p. 49)
Falk notes that one can react to this analysis in either of two ways: “to question the acceptability of the definition suggested by Ross (1976) and Shmukler (1980) for the limit of an infinite
sequence of sets” (p. 49) or “to admit the existence of a contradiction between the answers according to two interpretations, and to understand that the source of the ‘trouble’ is that the same
question, as phrased in natural language, can be translated into two different (mathematical) questions in formal language” (p. 49). There are many paradoxes involving infinity in geometry. One such—
the Koch curve—involves a shape that has finite area but an infinite perimeter. The curve was described by Swedish mathematician Niels Helge von Koch in 1906. Sometimes referred to as a snowflake
curve, the Koch curve is constructed as follows. Start with an equilateral triangle. Replace the middle third of each side with two sides of an equilateral triangle, the base of which is the third of
the side that is being replaced. On each successive step repeat this process on each of the straight-line edges of the figure. Figure 8.1 shows the original triangle and three successive applications
of the transformation rule.
Figure 8.1 The first four steps in producing a Koch “snowflake” object, which is a finite area with an infinite perimeter.
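The growth of the perimeter, and the very different behavior of the enclosed area, can be followed numerically. The sketch below is illustrative only, starting from a triangle of side 1; the perimeter is multiplied by 4/3 at each step, while the area converges to a finite limit (8/5 of the original triangle's area).

```python
# Perimeter and area of the Koch snowflake construction, starting from an
# equilateral triangle of side 1.
import math

side, num_sides = 1.0, 3
area = math.sqrt(3) / 4                    # area of the starting triangle
print(f"step 0: perimeter = {num_sides * side:.4f}, area = {area:.6f}")

for step in range(1, 8):
    # Each side sprouts one new small triangle on its middle third ...
    area += num_sides * (math.sqrt(3) / 4) * (side / 3) ** 2
    # ... and is replaced by four sides, each one-third as long.
    side /= 3
    num_sides *= 4
    print(f"step {step}: perimeter = {num_sides * side:.4f}, area = {area:.6f}")

print(f"limiting area = {(8 / 5) * math.sqrt(3) / 4:.6f}")
```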
It is easy to see that each application of the rule increases the length of the perimeter by one-third. Kasner and Newman (1940, p. 346) give a proof that in the limit the perimeter is infinite. Not
only does the perimeter of the Koch curve become infinite in the limit, so does the distance between any two points on the curve; this follows from the length of the entire perimeter, and
consequently the distance between any two points on it, increasing by one-third at each iteration. Perhaps even more surprising than the existence (conceptually) of finite areas with infinite
perimeters is the existence (again conceptually) of three-dimensional objects with finite volume but infinite surface area. Rotation of the function y = 1/x, 1 ≤ x, about the x axis produces a
trumpet-shaped figure—a hyperboloid; it is known both as Gabriel’s horn and as Torricelli’s trumpet, the latter after its discoverer, 17th-century Italian physicist-mathematician Evangelista
Torricelli. The volume of the figure is finite—in fact it is π—but its surface area is infinite, which means that a horn of this shape could not hold enough paint to cover its surface. Draw a
triangle on a piece of paper (Figure 8.2, left). (The point I wish to make is usually illustrated with an equilateral triangle, but any triangle will do as well.) Construct an enclosed triangle by
joining the
midpoints of the original triangle's sides as shown in Figure 8.2, center.

Figure 8.2 Illustrating the construction of a triangle with central 1/4 removed, and the process repeating by removing the central 1/4 of the remaining nine triangles.

By this construction you have divided the original triangle into four congruent triangles, each of which represents 1/4 of
the original triangle’s area. Imagine that you started with the original triangle colored gray, and that now you make the interior triangle white, signifying its removal. You are left with three gray
triangles, each of which is a miniature replica of the original one; and, in the aggregate, the three have 3/4 of the original area. Now treat each of these triangles as you did the original one:
Construct an enclosed triangle by connecting the midpoints of its sides and whiten the enclosed triangle to signify its removal, as shown in Figure 8.2, right. You now have left nine smaller
triangles, which, in the aggregate, have 3/4 the area of the three with which you began the second cycle and (3/4)², or 9/16, the area of the triangle with which you started. Obviously this process can be continued indefinitely, conceptually. At every step you remove 1/4 of what remains of the original triangle at that point, so after n steps what remains of the original area is (3/4)ⁿ. As n
increases, this number shrinks quite rapidly and, in the limit, approaches zero, but the perimeter of the holes is infinite. Figure 8.3 shows the result
of applying the area-extraction algorithm six times. The remaining area is (3/4)⁶, or about .18 of the original triangle.

Figure 8.3 The result of application of the area-extraction algorithm six times.

The shape produced by this process is known as Sierpinski's gasket or
Sierpinski’s triangle, after Polish mathematician Waclaw Sierpinski, who first described it in 1915. It is representative of many fractal shapes (Mandelbrot, 1977) that are constructed by iteratively
performing a deletion operation on segments of a figure produced by a preceding application of the same operation. Another such shape described by Sierpinski, and known as Sierpinski’s carpet, is
constructed by dividing a square into nine equal squares, removing the central one, and then repeating the division and removal operation on each of the remaining squares at each step in the process
(see Figure 8.4).

Figure 8.4 Sierpinski's carpet, three steps.

The area that remains after n steps is (8/9)ⁿ of the original area. Austrian mathematician Karl Menger described a three-dimensional generalization of Sierpinski's carpet (Menger's
sponge) in which successive removal of a fixed proportion of the remaining volume of a cube at each step yields, in the limit, a structure with no volume but infinite area surrounding its holes. All
of these, and other similar fractals, may be seen as generalizations of the simplest illustration of the process, which is its application to a line segment. In this case the center 1/3 of the line
is removed, and in the next step each of the end segments is treated as the original line and the center 1/3 of it is removed, and so on. This fractal is known as a Cantor set. I have argued that
intuition plays a critical role in the development and the understanding of mathematics. How does one reconcile these and similar paradoxes with that claim? Is there anything more counterintuitive
than the claim of the existence of a finite area with an infinite perimeter, or a three-dimensional shape with a finite volume and an infinite surface area? One accepts these things, if one does,
because one has the choice of either accepting them or rejecting the mathematics that yields them. A person who has no great confidence in the mathematics may well reject them—perhaps should reject
them. But one who has
confidence in the mathematics may find it more acceptable to educate one’s intuition, so the paradoxical result no longer is seen as paradoxical, than to toss out the mathematics. There is an elegant
paradox in the belief that any knowledge about infinity is beyond the capability of human reason: “If we cannot come to know anything about the infinite, then, in particular, we cannot come to know
that we cannot come to know anything about the infinite; if we cannot coherently say anything about the infinite, then, in particular, we cannot coherently say that we cannot coherently say anything
about the infinite. So if the line of thought above is correct, then it seems that we cannot follow it through and assimilate its conclusion. Yet that is what we appear to have done” (Moore, 2001, p.
☐☐ Types of Infinity We have noted that the ancient Greeks had some difficulties with the concept of infinity. Aristotle came to terms with it by making a distinction between the potentially infinite
and the actually infinite and resolving to recognize the reality of only the former. The distinction between potential and actual infinity motivated a great deal of discussion and theorizing for many
centuries following Aristotle and indeed to the present day. In one elaboration of the distinction Moore (2001) partitions concepts relating to infinity into two clusters. “Within the first cluster
we find: boundlessness; endlessness; unlimitedness; immeasurability; eternity; that which is such that, given any determinate part of it, there is always more to come; that which is greater than any
assignable quantity. Within the second cluster we find: completeness; wholeness; unity; universality; absoluteness; perfection; self-sufficiency; autonomy” (p. 1). Moore notes that the concepts that
comprise the first cluster, which he calls mathematical infinity, are more negative and convey a sense of potentiality, whereas those in the second cluster, which he associates with metaphysical
infinity, are more positive and convey a sense of actuality. He contends that the first cluster is likely to inform more mathematical or logical discussions of infinity, while the second is likely to
inform more metaphysical or theological discussions of the topic. Regarding the claim that Aristotle abhorred the idea of the infinite, Moore (2001) argues that “what he abhorred was the
metaphysically infinite, and (relatedly) the actual infinite—a kind of incoherent compromise between the metaphysical and the mathematical, whereby
endlessness was supposed to be wholly and completely present all at once. It was the mathematically infinite that he was urging us to take seriously. Properly understood, the mathematically infinite
and the potentially infinite were, for Aristotle, one and the same. Far from abhorring the mathematical infinite, he was the first philosopher who seriously championed it” (p. 44). It is not the
case, however, that mathematical infinity and potential infinity are the same for everyone. Indeed, the question has been raised as to whether mathematical infinity should itself be considered actual
or only potential. There seems to be fairly general agreement that there is no such thing as infinity in the physical world; as Bernstein (1993) puts it, “True infinities never, as far as we know,
occur in nature; and if a theory predicts them, it can be taken as an indication that the theory is ‘sick’ or, at the very least, is being applied in a regime where it is not applicable” (p. 86). But
the debate about the actuality of mathematical infinity continues. “In mathematics no other subject has led to more polemics than the issue of the existence or nonexistence of mathematical
infinities” (Rucker, 1982, p. 43). Gauss was unwilling to consider mathematical infinity to be an actuality. “I protest against the use of an infinite quantity as an actual entity; this is never
allowed in mathematics. The infinite is only a manner of speaking in which one properly speaks of limits to which certain ratios can come as near as desired, while others are permitted to increase
without bound” (quoted in Clegg, 2003, p. 78). Falk and colleagues (Falk & Ben-Lavy, 1989; Falk, Gassner, Ben-Zoor, & Ben-Simon, 1986) recognize the distinction between potential and actual infinity.
Potential infinity is represented by an unending process, seen in the realization that one can increase numbers indefinitely by always adding 1 to any number, no matter how large it is. The set of
numbers, in their view, constitutes an actual infinity. Falk and colleagues argue that comprehension of the infinitude of numbers requires three insights: everlasting process (that the process of
increasing numbers is interminable), boundless amount (that the set of numbers is actually infinite), and immeasurable gap (that the gap between an infinite set and any finite set is itself infinite,
no matter how numerous the finite set is). Falk (in press) presents evidence that by the age of 8 or 9, children generally have acquired the first two insights—they understand that numbers continue
indefinitely and that they comprise an infinite set—but that appreciation of the immeasurable gulf between infinity and any finite set does not come until later, if at all. Lakoff and Núñez (2000)
also recognize the distinction between potential and actual infinity, and describe the latter as a metaphorical
concept. Potential infinity they see as illustrated by imagined unending processes such as building a series of regular polygons with more and more sides, or writing down more and more decimals of an
irrational number like √2. Actual infinity, in contrast, is a metaphorical concept, and as such, it allows us to treat a process that has no end—no final result—as though it did have an end and a
final result. “We hypothesize that all cases of actual infinity—infinite sets, points at infinity, limits of infinite series, infinite intersections, least upper bounds—are special cases of a single
general conceptual metaphor in which processes that go on indefinitely are conceptualized as having an end and an ultimate result” (p. 158). This metaphor, which Lakoff and Núñez refer to as the
basic metaphor of infinity (BMI), plays a very prominent role in their treatment of several key mathematical concepts and developments. An alternative view of the current status of the distinction
between potential and actual infinity is given by Barrow (2005). As we have already noted, before the discovery of geometries other than that of Euclid, which was generally considered to represent
physical reality, “existence” was more or less equated with physical existence. But subsequent to the discoveries—or inventions—of non-Euclidean geometries, mathematical existence gradually came to
be taken to mean no more (and no less) than logical self-consistency. So in this sense, infinity—infinities, thanks to Cantor—could be seen as actually existing, mathematically if not physically.
Cantor himself distinguished three types of infinity: physical infinity, existent in the physical universe; mathematical infinity, existent in the mind of man; and absolute infinity, the totality of
everything, existent only in the mind of God. Czech theologian-philosopher-mathematician Bernhard Bolzano (1851/1921) argued for the acceptance of the idea of actual infinity and introduced the
scandalous notion (Kasner and Newman [1940, p. 44] call it the “fundamental paradox of all infinite classes”) that a part of an infinite collection can be as numerous as the whole of it. German
mathematician Julius Richard Dedekind and Cantor built on this foundation.
☐☐ Infinity and Numbers
Most of us, I suspect, tend to think of infinity as a very large number. And for many applications of the concept, such as its use in limit theorems, it does not inhibit our
understanding of a relationship if we think of what happens when some quantity is allowed to become arbitrarily large, as opposed to thinking of it being infinite. But in fact infinity is
not a very large number; it is not a number at all, and such phrases as “approaching infinity,” “an almost infinite number,” and “nearly infinite in extent” are contradictions in terms. Think of the
largest number you can imagine. How close is this to infinity? Not close. And it does not matter how large this number is. A googol, so named in 1938, it is claimed, by a nine-year-old nephew of
American mathematician Edward Kasner, is 10^100. A googolplex is 10 raised to the googolth power: 10^(10^100). This is a very large number indeed, but larger ones have been expressed. The number e^(e^(e^79)), which is approximately equal to, and generally represented by, 10^(10^(10^34)), was used by South African mathematician Stanley Skewes in 1933 in a proof regarding the distribution of prime numbers and
has been known since as Skewes’s (often Skewes or Skewes’) number. Graham’s number, larger still—large enough to require special notation to be expressed—was once held by Martin Gardner (1977) and
the editors of the Guinness Book of World Records (1980) to be the largest number ever used in a serious mathematical proof. In the interim even larger numbers have been used. The important point for
present purposes is that none of these unimaginably large numbers is close to infinity. No matter what one does to increase the size of the largest number that one can conceptualize or represent, and
no matter how large that number becomes, it gets no closer to infinity than the humble 1; between it—our largest number—and infinity there will remain a gulf of infinite extent, and there is nothing
one can do to decrease it. The same point may be made with the observation that every number is closer to 0 than to ∞. Consider any number, X. This number is X units from 0. Given X, one can specify
another number, Y, that is more than twice as large as X, say 3X or 1,000X. For all such cases, X < Y – X, which is to say that X is closer to 0 than to Y and therefore is closer to 0 than to ∞. By
similar reasoning one could argue that no number is closer to infinity than any other, or that every number is infinitely far from infinity. In fact, however, the very term closer to infinity is
contradictory; it makes no sense to describe a point as being close to something that has no location. Or think of it this way. Presumably there is some number, call it X (of course no one knows what
it is), that is the largest number that has ever been, or that ever will be, expressed by a human being. If it were possible to select a number at random from all possible numbers (it is not), the
probability of selecting a number smaller than X is essentially 0. In other words, the probability that a number selected at random from all possible numbers would be within the range of all numbers
expressed by human beings is 0, so minuscule is that range relative to infinity. Slote (1986/1990) poses a question that relates to these ruminations in a discussion of rational dilemmas. Imagine a
wine connoisseur
who has been condemned to an infinite life with only finitely much of his favorite wine. For how many bottles should he ask? The point of the story is to illustrate the possibility of a rational
dilemma, because no matter what number is given, one may wonder why it was not bigger. But one might also argue that it really does not matter what the number is; for any finite number it will be the
case that an infinite time will be spent without wine. Imagine participating in a contest in which a very desirable prize is to be given to the contestant who writes the largest integral number, and
assume that the number can be written in any interpretable fashion—as a name, a string of digits, in exponential form. As a contestant, what number should one write? No matter what number one writes,
one knows that there are infinitely many that are larger than it. The concept of “the largest number that one can think of” is strangely frustrating. “In trying to think of bigger and bigger
ordinals, one sinks into a kind of endless morass. Any procedure you come up with for naming larger ordinals eventually peters out, and the ordinals keep coming” (Rucker, 1982, p. 69). Sometimes one
sees references to numbers “selected at random.” Such statements require qualification, inasmuch as it is not possible to select a number at random from the infinite set of all possible numbers. Any
random selection must be from a finite set, so to select a number at random must mean to select at random a number between X and Y, the values of X and Y being either explicit or assumed. (Suppose
one were to claim to have selected a number—a positive integer, to be specific—from the infinite set of positive integers. No matter what integer one selected, there would be an infinite number of
larger positive integers but only a finite number of smaller ones, and this is inconsistent with the idea of random sampling from a set.) Ignoring this proviso and proceeding as though random
selection from an infinite set were possible leads to paradoxes such as Lewis Carroll’s obtuse angle problem (Falk & Samuel-Cahn, 2001). When we use the expressions “as n approaches infinity,”
“letting k go to infinity,” and the like, we perhaps should qualify them with “so to speak” to remind ourselves that what we really mean is that we are imagining n and k becoming indefinitely large,
but that no matter how large they become, they will be no closer to infinity than when they are ever so small. This being said, I need to recognize an important observation made by Falk (1994) in a
very insightful discussion of infinity as a cognitive challenge. Noting that “almost infinite” is a self-contradictory expression, she points out that there is a practical sense in which it can be
meaningful. She cites an explanation by Asimov (1989) that Newton’s theories of motion and gravitation would have been absolutely right only
if the speed of light were infinite, but they were very nearly right in the sense that the error in the time required for light to travel a given distance was very small. “Thus if light traveled at
infinite speed, it would take light 0 seconds to travel a meter. At the speed at which light actually travels, however, it takes it 0.0000000033 seconds. That is why Einstein’s relativistic
equations, taking the finite speed of light into consideration, offered only a slight correction of Newton’s computations” (p. 56, footnote 1). Many assertions in mathematics apply to all numbers. We
may say, for example, that every integer is either even or odd, or that every integer is either prime or nonprime. But what should such statements be taken to mean, given that it is not possible to
produce all integers or even all integers of a given type (odd integers) of which there are infinitely many? In what sense does a very large number that has never been written or thought exist? Is
not the idea of an infinite set itself a contradiction in terms? There is no way to identify all the members of such a set. The notion of “all the members” seems not to apply. We can list as many of
the members of such a set as we wish, but no matter how many we list, the number of unlisted members will still be infinite. How can anything infinite exist in a finite universe? The reader will
think of many other questions of this sort that could be—and probably have been—raised.
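The observation made earlier in this section, that every number is closer to 0 than to ∞, can be compressed into a single line. The following is only a restatement of the argument already given above, using the same symbols X and Y; it adds nothing beyond the text.

```latex
% For any positive X, choose any Y at least 3X (e.g., Y = 3X or Y = 1,000X). Then
%   Y - X >= 2X > X,
% so the distance from X to 0 (namely X) is smaller than the distance from X to Y.
% Since Y may be taken as large as one likes, no choice of X is any "closer to infinity."
\[
  X > 0,\quad Y \ge 3X \;\Longrightarrow\; Y - X \;\ge\; 2X \;>\; X .
\]
```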
☐☐ Modern Conceptions of Infinity
The mathematics of the infinite is a sheer affirmation of the inherent power of reasoning by recurrence. (Kasner & Newman, 1940)
What are we to make of the concept of infinity? It seems to be totally beyond our comprehension. The reward for a few minutes’ pondering it can be a feeling of abject intellectual inadequacy. One is
hardly surprised to learn that the ancients disliked the concept, and that it has continued to frustrate thinkers throughout the ages and to the present time. Is the characterization of something as
infinite—in extent, in number, in duration—simply an admission, as Thomas Hobbes believed, of our inability to conceive of it? Is the claim that the infinite exists a misuse of language, as
Wittgenstein insisted? Is infinity a concept that can be grasped by the intellect but not by the imagination, as Leibniz contended? What does it mean to say that it can be grasped by the intellect?
Is it the case, as Moore (2001) contends, that “anything whose existence we can acknowledge we are bound to recognize as finite, on pain of contradiction and incoherence”? (p. 217).
Is infinity one concept or several? To make sense of it must we distinguish among physical infinity, metaphysical infinity, mathematical infinity, and perhaps other types? Can there possibly be
anything that is infinite in nature? Is the universe itself infinite in space and time? Is there any way of knowing? Is mathematical infinity anything more than a concept invented by mathematicians
to enable certain types of computation, a fiction sustained by its mathematical usefulness? In what sense, if any, might it be held to be real? Is an understanding of the concept of infinity simply
beyond finite minds, so that attempts to deal with it are bound to end in a quagmire of incoherence? Austrian philosopher-logician-linguist Ludwig Wittgenstein argued compellingly that the meanings
of words are determined by their uses; this is of questionable help in the case of infinity because it was, and is, used by different users in different ways. Wittgenstein held that some of the
difficulties with the concept stemmed from careless use of language, as, for example, when one uses language to refer to an infinite collection as though it existed in nature rather than only
conceptually. Despite the attention and ink that have been spent on such questions, answers remain elusive. As Moore (2001) puts it, “The same old puzzles and preoccupations are as relevant as they
ever were to discussion of the infinite. A survey of the current literature reveals a continuing concern with all the perennials: the distinction between the actual infinite and the potential
infinite; the relationship between the infinite and time; Zeno’s paradoxes; the paradoxes of thought about the infinite; and so forth” (p. 142). Moore speaks dismissively of the view that the last
word regarding what infinitude really is has been spoken in the definition of infinity “as a property enjoyed by any set whose members can be paired off with the members of one of its proper subsets”
(p. 198). Nevertheless, we seem to be able to accept infinity more easily today than did people in previous centuries. That may be in part because the idea has become somewhat more clearly understood
as a consequence of the work of Bolzano, Dedekind, and Cantor, among others. It also may be that we simply accept the idea without thinking very deeply about it. Whatever the reason, and despite the
difficulties the concept of infinity has caused thinkers for millennia, we use it quite matter of factly and effectively in even relatively simple mathematics. When we use it, we typically consider a
trend that can be seen when we let the values of an index variable range over a few numbers, 1, 2, 3, 4, …, and then make the, one might say unconscionable, leap to a conclusion regarding what will
happen as this number is allowed to increase indefinitely. And even though we know that nothing “approaches infinity,” we act, when computing, as though values do, and we seem to be able to solve
problems just the same.
☐☐ Sets, Subsets, and One-to-One Correspondence
Scholars sometimes saw in the peculiarities of infinity the basis for conclusions about the nature of the physical world. For example, an argument was
made during the 13th century, perhaps first by the Franciscan cleric St. Bonaventure, that the world cannot have existed from eternity past because if it had, the number of the moon’s revolutions
around the Earth and the Earth’s revolutions around the sun would both be infinite, but the moon would have revolved 12 times as frequently as the Earth (Murdoch, 1982). This argument was anticipated
by seven centuries by the Alexandrian scholar John Philoponus, who observed that, however old the world is, the number of days it has existed is 30 times the number of months, which means that if the
world had always existed, one infinity would be greater than another, which he considered to be absurd. The idea of infinities that, from one point of view, appear to differ in size, but whose items
can be put into one-to-one correspondence, has perplexed thinkers for a long time. In the sixth century, Philoponus also pointed out that if there are infinitely many even numbers, there must be as
many even numbers as odd and even numbers combined, and this he saw as justification for rejecting the idea of countable infinities. Thomas Bradwardine, English scholar and cleric (once Archbishop of
Canterbury), made a similar observation in the 14th century. In the 17th century, Galileo observed that inasmuch as every integer can be squared, there must be as many squares as there are integers.
This observation had the surprising implication that an inexhaustive subset of the set of all integers (the subset of integers that are perfect squares) is as numerous as the entire set. Not only are
the squares an inexhaustive subset of the integers, they are an infinitesimally small subset because, as Galileo also noted, the proportion of the first n integers that are squares gets increasingly
close to 0 as n increases. Galileo held that the infinite is beyond the finite understanding of humans and that the difficulties we have with it come from applying to it concepts that are
appropriately applied only to finite entities. Similar observations were made with respect to other sets and subsets thereof. Consider the simple equation y = 2x, where x and y need not be integers.
This says that for every x there is a y that has twice the value of that x, or conversely that for every y there is an x that has half the value of that y. Imagine the set of all ys that have values
between, say, 0 and 2. According to the equation, there must be, for each of these ys, an x that has a value between 0 and 1. It follows that there must be as many values
between 0 and 1 as there are between 0 and 2, even though the second interval is twice as large as the first. Many illustrations of the possibility of putting the items of an infinite set into
one-to-one correspondence with a proper subset of itself were noted over the long period of time that the concept of infinity was taking shape, and that illustrate the truth of a quip by Hoffman
(1998) that “in the realm of the infinite, things are often not what they seem” (p. 236), and Seife’s (2000) definition of the infinite as “something that can stay the same size even when you
subtract from it” (p. 149). In 1872 Dedekind defined infinity as follows: “A system S is said to be infinite when it is similar to a proper part of itself; in the contrary case S is said to be a
finite system.” Boyer and Merzbach (1991) note that this definition may be rephrased in somewhat more modern terminology as “a set S of elements is said to be infinite if the elements of a proper
subset S’ can be put into one-to-one correspondence with the elements of S” (p. 566). This is the definition that Cantor also used in his classic work on the concept. Dedekind’s defining infinity in
terms of the property of being able to put a set into one-to-one correspondence with another set of which it is a subset was a stroke of genius. It resolved the problems by defining them out of
existence. Given this definition, it is possible to make the subset as small a fraction of the original set as one likes and still have the subset be infinite. Consider, for example, the function y =
1,000,000x, where x and y are integers. For every value of x there is a value of y, despite that x can have any integer value on the number line, whereas y can have only one value in every million.
If one wants a sparser subset, one may make the ratio of y to x as large as one pleases. As long as the two sets can be put into one-to-one correspondence, as they can when y = ax, no matter what the
value of a is, then if x is infinite, y is also. The same is true of any single-valued function of x, such as x^a, a^x, or x^x, despite that with such functions the number of integers between
successive values of the function increases without limit. These ideas about infinity lead to such conclusions as that any two line segments, regardless of length, have the same number of points,
and, in particular, that even the smallest line segment imaginable has as many points as a line of infinite length. A well-known demonstration that two lines of different lengths have the same number
of points is shown in Figure 8.5. The circumference of the outer circle is larger than that of the inner circle, but any radius drawn on the former will intersect the latter at a unique point, so
there must be as many points on the smaller circle as on the larger. A diagram of this sort was used by 13th-century Scottish theologian and philosopher Duns Scotus to support the contention that
lines are not composed of an infinity of infinitesimal points, because if they were, the inner and outer circles—having the same number of such points—should be equal in circumference, which they manifestly are not.
Figure 8.5 Every line from the origin through the larger circle also passes through the smaller one, demonstrating that for every point on one circle there is a corresponding one on the other.
Galileo resolved this conundrum to his own satisfaction by assuming that the larger circle had gaps between
its points that the smaller one did not. Today it is generally held not only that all line segments have an equal number of points, but that every line segment has the same number of points as does a
two-dimensional plane, or a three-dimensional volume. Even though there are an infinite number of points representing rational numbers on a line segment of any length, the proportion of the total
number of points on the segment that represent rational numbers is infinitesimally small. Between any two rationals, no matter how close they are, there are infinitely many irrationals, and between
any two irrationals there are infinitely many rationals. And so on. Thinking about it makes one’s head spin. It seems the more precisely infinity is defined, the more strange the concept becomes.
What could illustrate more clearly the insightfulness of a quip by Wallace (2003): “It is in areas like math and metaphysics that we encounter one of the average human mind’s weirdest attributes.
This is the ability to conceive of things that we cannot, strictly speaking, conceive of” (p. 22).
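The pairings discussed in this section (every integer with its square, every x between 0 and 1 with y = 2x between 0 and 2) can be exhibited explicitly for as many terms as one likes. The short Python sketch below is purely illustrative; the function names are mine, and of course only finitely many pairs of each infinite correspondence can ever be printed.

```python
from fractions import Fraction

def pair_with_squares(n_terms):
    """Pair each natural number k with its square k*k (Galileo's pairing)."""
    return [(k, k * k) for k in range(1, n_terms + 1)]

def pair_intervals(n_steps):
    """Pair sample points x in [0, 1] with the points y = 2x in [0, 2]."""
    xs = [Fraction(k, n_steps) for k in range(n_steps + 1)]
    return [(str(x), str(2 * x)) for x in xs]

if __name__ == "__main__":
    # Every natural number is matched with exactly one square, and vice versa,
    # even though the squares form an ever-sparser subset of the naturals.
    print(pair_with_squares(5))   # [(1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
    # Every sampled x between 0 and 1 is matched with exactly one y = 2x in [0, 2].
    print(pair_intervals(4))      # [('0', '0'), ('1/4', '1/2'), ('1/2', '1'), ('3/4', '3/2'), ('1', '2')]
```

The point of such a pairing is exactly Dedekind's criterion: nothing in either collection is left unmatched, however sparse the subset may look.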
☐☐ Enter Georg Cantor
This is not the end of the story with respect to infinity, of course. One might assume that, given Dedekind's definition, or the modern restatement of it, it should be possible
to put any infinite set in one-to-one correspondence with any other. But the definition had barely been written down when German mathematician Georg Cantor showed that this is not the case. The real
numbers, for example, cannot be put in one-to-one
correspondence with the integers (see Chapter 5). In this respect, the set of reals, which includes irrationals such as e, π, and √2, differs from the set of rationals. The difference illustrated by
the possibility of putting the rational numbers into one-to-one correspondence with the integers and the impossibility of doing so with the reals led Cantor to note that some infinite sets are
countable or denumerably infinite—can be put into one-to-one correspondence with the integers—and some are not, and to distinguish infinite sets of different powers or cardinality. (For some years
Cantor had believed, and tried in vain to prove, that all infinite sets are countable.) He argued that the set of rationals or the set of integers should be considered the same power, but that the
set of reals should be considered a higher one, although the set was infinite in each case. He went on to establish that there are infinitely many powers of infinite sets, which is to say there is no
largest infinite set—given any infinite set, it is possible to describe a larger one. Suppose there were a largest infinite set, perhaps the set of all sets. Cantor showed that the set of all subsets
of any given set A (which would be called the power set of A, or P[A]) contains more members than does the set A. From this it follows that there is no largest set, because, given any set, no matter
how large it is, there is a larger one, namely, the set of all its subsets. Suppose there is a largest set—call it SL; its power set, P[SL], is larger than SL . Moreover, P[SL] is itself a set and
its power set, P[P[SL]], is larger than it. And so on, ad infinitum. So the supposition that there is a largest set is apparently wrong. Although the conclusion that there is no largest set seems
harmless enough when one first encounters it, it led to a conceptual difficulty involving the set of all sets—the universal set. Moore (1995) describes the problem and Cantor’s treatment of it: Given
Cantor’s theorem, this collection [the set of all sets] must be smaller than the set of sets of sets. But wait! Sets of sets are themselves sets, so it follows that the set of sets must be smaller
than one of its own proper subsets. That, however, is impossible. The whole can be the same size as the part, but it cannot be smaller. How did Cantor escape this trap? With wonderful pertinacity, he
denied that there is any such thing as the set of sets. His reason lay in the following picture of what sets are like. There are things that are not sets, then there are sets of all these things,
then there are sets of all those things, and so on, without end. Each set belongs to some further set, but there never comes a set to which every set belongs. (p. 116)
Moore (1991) describes the desire both to and not to admit the existence of a set of all sets as a paradox and contends that the best way to deal with it is to refrain from talking about it. The
final resolution of the problem was to axiomatize set theory in such a way as to define the universal set out of existence.
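The argument just described, that the set of all subsets of any set A has more members than A itself, is usually given in the following compact form. This is the standard textbook sketch of Cantor's theorem, included here only as a reminder of how brief the reasoning is.

```latex
% Cantor's theorem: for any set A there is no onto map f : A -> P(A).
% Given any f : A -> P(A), define the "diagonal" subset D of A:
%   D = { x in A : x is not a member of f(x) }.
% If some a in A had f(a) = D, then a would belong to D exactly when it did not,
% a contradiction; so D is a subset of A that f misses. Hence P(A) cannot be put
% into one-to-one correspondence with A, and is strictly the larger of the two.
\[
  D = \{\, x \in A : x \notin f(x) \,\}, \qquad
  f(a) = D \;\Longrightarrow\; \bigl( a \in D \iff a \notin D \bigr).
\]
```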
Aczel (2000) notes the correspondence between Cantor’s reasoning about the impossibility of the existence of a set containing everything and Gödel’s incompleteness theorem. “The impossibility of a
set containing everything brought Cantor to the conclusion that there was an Absolute—something that could not be comprehended or analyzed within mathematics. Cantor identified the absolute with God
… the impossibility of a universal set, and the unattainable Absolute perhaps lend credence to Gödel’s incompleteness principles: there is always something outside, something larger than any given
system” (p. 196). Cantor’s work on infinities, which included the discovery of infinities of different sizes (orders of infinities, indeed an infinite order of infinities) and the arithmetic of
infinities (transfinite arithmetic), caused more than a little scratching of many heads at the time, including his own. Barrow (2005) contends that Cantor’s discovery that there is an infinity of
infinities of different sizes and that they can be distinguished unambiguously was one of the greatest discoveries of mathematics, and completely counter to the opinion prevailing when it was made.
Cantor went on to develop rules for doing arithmetic with infinite entities. In transfinite arithmetic the rules for basic operations (addition, subtraction, multiplication, division, exponentiation)
differ from the rules that apply to finite arithmetic, and those that apply to transfinite cardinal numbers differ from those that apply to transfinite ordinals. Cantor’s impact on the world of
mathematics was to many perplexing if not deeply troubling. “It was Cantor’s greatest merit to have discovered in spite of himself and against his own wishes in the matter that the ‘body mathematic’
is profoundly diseased and that the sickness with which Zeno infected it has not yet been alleviated” (Bell, 1937, p. 558). Cantor speculated, but was unable to prove, that there does not exist an
infinite set larger than the counting numbers but smaller than the reals. His inability to prove, or disprove, the conjecture—it is claimed that at various times he thought he had done the one or the
other— was a major disappointment to him. The conjecture became known as the continuum hypothesis and appeared as the first of the 23 problems that Hilbert identified as the major unsolved problems
in mathematics as of 1900. The continuum hypothesis continues to be a significant mathematical challenge. Cohen (1963, 1964) showed that whether it is true or false depends on the assumptions with
which one begins, which means that it is independent of the axioms of set theory and thus can be treated as an additional axiom that one is free to accept or reject. Although Cantor is venerated
today as an exceptionally original thinker, his work was severely criticized by some of the mathematical luminaries of the day, notably German mathematician–logician Leopold Kronecker—who believed
that reality was represented by the integers
and detested the very idea of infinity—and Cantor himself died in a mental hospital after spending his last years clinically depressed. The cause of Cantor's illness, which came and went, has been the topic of
much speculation, and it remains unknown. What is known is that his bouts of depression tended to coincide with periods when he was thinking about his continuum hypothesis. While it would be wrong to
infer cause and effect from this fact, one cannot help but wonder. Recent accounts of Cantor’s work on infinity that are accessible to the general reader include those of Love (1989), Aczel (2000),
and Clegg (2003). For his own groundbreaking articles published in 1895–1897, see Cantor (1911/1955).
☐☐ One-to-One Correspondence Versus Same Size
What does it mean for two infinite sets to be the same size, or for one to be larger than another? Rucker (1982) notes that the fact that an infinite set
can have the same cardinality as a proper subset of itself was so puzzling to pre-Cantorian thinkers that “they generally believed it was hopeless to attain a theory of infinite cardinalities much
more sophisticated than: ‘All infinities are equal’” (p. 230). Cantor argued that two infinite sets should be considered the same size if the elements of one can be put into one-to-one correspondence
with those of the other. Sometimes referred to as the “correlation criterion” for size comparisons, this is the fundamental assumption, or definition, that underlies the transfinite mathematics that
he developed. Not everyone accepted, or accepts, the one-to-one matching procedure as the way to compare the sizes of infinite sets. An alternative approach is to consider whether one of the sets is
a proper subset of the other and, if it is, to conclude that the set is larger than the subset on the grounds that when the subset is removed from the set, some elements still remain in the latter.
Leibniz acknowledged that each even number could be paired with each natural number, but he did not see this as a basis for concluding that there are as many even numbers as natural numbers. Lakoff
and Núñez (2000) argue that according to our usual concept of “more than,” there are more natural numbers than even numbers; they resolve the problem by making a distinction between the concepts of
pairability and same number as. They object to the characterization of what Cantor did as proving that there are just as many even integers as natural numbers. "Given our ordinary concept of 'As Many
As,’ Cantor proved no such thing. He proved only that the sets were pairable. In our ordinary conceptual system, there are more natural numbers than there are positive even integers.
It is only by use of Cantor’s metaphor that it is correct to say that he proved that there are, metaphorically, ‘just as many’ even numbers as natural numbers” (p. 144). In effect, Cantor defined
same size in terms of one-to-one correspondence. Lakoff and Núñez contend that Cantor “intended pairability to be a literal extension of our ordinary notion of Same Number As from finite to infinite
sets” (p. 144). In this, they argue, Cantor was mistaken; it is a metaphorical extension only. “The failure to teach the difference between Cantor’s technical metaphorical concept and our everyday
concept confuses generation after generation of introductory students” (p. 144). Byers (2007) makes a similar argument in contending that the notion of equality is ambiguous. According to the notion
that two sets are equal in number if the items of the two sets can be put into one-to-one correspondence, the set of squares is equal to the set of counting numbers. But if two sets are considered to
be equal if and only if they have identical elements, then the set of squares is not equal to the set of counting numbers, because the latter contains elements that the former does not. There is much
more to the story of the concept of infinity as it has evolved in mathematics than this brief discussion conveys. What is important to note for present purposes is that the concept has indeed
evolved, and is still evolving. It has perplexed more than one capable thinker in the past and is likely to remain an intellectual challenge for a long time to come. Despite this, the concept has
proved to be an immensely useful one in mathematics, both pure and applied. Indeed, useful is not sufficiently strong to describe its importance to much of modern mathematics; essential is a more
appropriate word: “Without a consistent theory of the mathematical infinite there is no theory of irrationals; without a theory of irrationals there is no mathematical analysis in any form even
remotely resembling what we now have; and finally, without analysis the major part of mathematics—including geometry and most of applied mathematics—as it now exists would cease to exist” (Bell,
1937, p. 522). Just as different systems of geometry have been developed from different starting definitions and rules, so different concepts of infinity are possible, depending on how one chooses to
define one’s primitives, and the rules of operating on them. Cantor, working with Dedekind’s definition of infinity in terms of the one-to-one correspondence between elements of a set and any of its
proper subsets, was able to develop a system of operations that led to the identification of different levels of infinity, but the definition was crucial as the point of departure. Given a different
definition, the destination would not be the same. But one may ask: What is the real situation? After all is said and done, are even numbers really as numerous as whole numbers? Has Cantor proved
this to be the case? He has shown that every even number can be paired with a natural number, and conversely, but should we take
this as proof positive that the two sets really are equal in size? This is perhaps a good place to remind ourselves that mathematics, according to the modern view, has no obligation to be descriptive
of the real world; it is obliged only to be consistent with the definitions and axioms with which one starts. From this perspective, the question of what the real situation is does not arise. The
conclusion that the even numbers are as numerous as the whole numbers follows from the criterion for the same number being defined as the ability to be put in one-to-one correspondence. If one uses a
different definition, one gets a different result.
☐☐ Infinite Time
So far, I have mentioned only in passing the concept of infinite time—eternity. It too has its share of paradoxes and mind-numbing puzzles. I have always found it easier to imagine
a future that has no end than to imagine a past that has no beginning. Looking forward, I see no great conceptual problem in the idea that the universe, in some form, could go on indefinitely. But
looking back, the idea that the universe—or time, for that matter—always existed seems incomprehensible. If it always existed—if it stretches back forever—then it has already existed for an eternity.
And if it has existed for an eternity, how did it get to be what it is now? Presumably things are changing and are likely to be different in the future than they are now. But if things have been
going on forever, why have they not long since attained whatever state an eternity of evolving would have produced? After writing the preceding paragraph, I learned that puzzlement of this sort has a
history. Moore (2001) refers to a “curious asymmetry” according to which “an infinitely old world strikes us as more problematical than a world with an infinite future, though it is very hard to say
why” (p. 91). This curious asymmetry was the basis of the “kalam cosmological argument” that the universe must have had a beginning. If its existence extended to the infinite past, the argument goes,
getting to the present would have required crossing an infinite gap, which is impossible. But the assumption that the universe had a beginning does not really solve the puzzle, in my view, because if
the universe began at a certain point in time, one still has the question of how that point in time came to be, if time itself extended to an infinite past; getting from infinity past to the point at
which the universe came into existence would also have required the crossing of an infinite gap. Perhaps the answer is that time itself had a beginning. This of course is one interpretation of the
big bang theory of the origin of the universe. According to it, asking what was going on before the big bang makes no sense because there was no "before" before the big bang; neither time nor space
existed. This may solve the puzzle for some; it does not quite do it for me.
☐☐ Acquiring the Concept of Infinity
Falk (1994) reviews evidence that individuals acquire the concept of infinity gradually over several years. The idea that there is no largest number—that no
matter what number one gives, someone can always give one that is larger—appears to be graspable, if not spontaneously expressible, by a majority of children by the time they are about five to seven
years old (Evans, 1983, cited in Falk, 1994). Falk et al. (1986) found that most eight-year-olds recognized the advantage of being the second person to name a number in a game of two players in which
the player who names the larger number wins, and were able to verbalize the idea that no matter how large the number named by the first player, the second player can always name a larger one.
Eight-year-old children were likely to consider the natural numbers to be more numerous than the number of grains of sand on earth; younger children were likely to consider the grains of sand to be
the more numerous. But many of the children, even as old as 12 or 13, who considered the numbers to be more numerous than the grains of sand, thought the latter to be almost as numerous as the
former. “Roughly speaking, children of ages 8–9 and on seem to understand that numbers do not end, but it takes quite a few more years to fully conceive, not only the infinity of numbers, but also
the infinite difference between the set of numbers and any finite set” (p. 40). Results obtained in two of the studies reviewed by Falk (1994)— Fishbein, Tirosh, and Hess (1979) and Moreno and
Waldegg (1991)—suggest that a large majority of primary and junior high school students are likely to believe that the set of natural numbers (1, 2, 3, …) contains more elements than the set of all
even numbers (2, 4, 6, …). Falk found in a study of her own that about 55% of approximately 100 college students who had not taken college courses in higher mathematics also considered the set of
natural numbers to be the more numerous of the two. Anticipating the topic of the next chapter, we may note too that high school students are likely to consider numbers to be infinitely divisible
(Tirosh & Stavy, 1996). Smith, Solomon, and Carey (2005) found that many elementary school students (third through sixth grades) also are likely to consider numbers to be infinitely divisible, and to
consider physical quantities to be infinitely divisible as well. Children who considered infinite divisibility to be the case in either the domain of numbers or that of matter tended to consider it
to be possible in both.
****
This discussion of infinity has focused on the concept primarily as it relates to mathematics. Before leaving the topic, we should note that the concept
is also highly relevant to science, especially physics, astronomy, and cosmology. Whether the universe is finite or infinite in either space or time is a question that continues to be debated. The
widely accepted view that it had a beginning with the big bang and that it is limited in extent is not universally espoused, and even the big bang theory leaves open the possibility of an infinite
sequence of oscillations between expansion and contraction and of the existence of an infinity of universes (small u) comprising the Universe—all there is (multiverse?)—about which we know only what
little we can learn from the one in which we live. One may take the position that theorizing about such matters is not science but metaphysics. And well it may be. But it is no less fascinating for
that. And not without practical consequences. The argument has been made often that, in an infinite universe, anything that has nonzero probability of happening—you, for example—will happen infinitely many
times. It is something to reflect upon when you are having trouble getting to sleep. Barrow (2005) provides a thoughtful, and thought-provoking, discussion of this topic, including its relevance to
ethics. “Unusual consequences seem to follow,” he notes, “if we take seriously the idea that there exists an infinite number of possible worlds which fill out all possibilities” (p. 208). The
consequences follow, in large part, because beliefs influence behavior, and what we as individuals believe about the universe in which we live conditions how we treat it, including, importantly,
people in it.
CHAPTER
The infinitesimal has a fascinating history. At least as far back as Archimedes, it’s been used by mathematicians who were perfectly aware that it didn’t make sense. (Hersh, 1997, p. 289)
Equally as problematic as the concept of infinity, or the infinitely large, is the idea of infinitesimals or the infinitely small. Among the ancient Greeks, both Eudoxus of Cnidus and Archimedes of
Syracuse used the idea of quantities as small as one wished to find areas and volumes. As noted in Chapter 6, Antiphon, Eudoxus, and Archimedes used the method of exhaustion, which involved
determining properties of curved figures by approximating them, ever more closely, with increasingly many-sided polygons. Dantzig (1930/2005) goes so far as to identify Archimedes as the founder of
the infinitesimal calculus, and to suggest that the failure of other Greeks to extend his work in this direction was due in part to lack of a proper symbolism and in part to their horror of the
infinite. (I have already noted that whether the Greeks really had a horror of the infinite is a debated question.) Bell (1946/1991) credits Zeno’s invention of his paradoxes with being “partly
responsible for the failure of the Greek mathematicians to proceed boldly to an arithmetic of infinite numbers, an arithmetical theory of the continuum of real numbers, an analysis of motion, and a
usable theory of continuous change generally” (p. 224). The “partly responsible” in this comment reflects that Bell, like Dantzig, considered the lack of an efficient symbolism for representing
numbers to be another serious limitation of the time.
The extent to which Greek mathematicians were influenced by the philosophers’ arguments about infinity and indivisibles is unclear. One gets the impression from some accounts that many of the most
challenging problems with which the mathematicians struggled were first articulated by philosophers. However, there is also the view that the mathematicians were not much influenced by the thinking
of the philosophers, but that the problems on which they focused reflected their own autonomous interests (Knorr, 1982). In any case, questions of the infinite and infinitesimals were on the minds of
many philosophers and mathematicians in ancient Greece, as well as on those of some theologians in later centuries (Stump, 1982; Sylla, 1975). It seems intuitively obvious that the number line is
infinitely divisible. Given any two real numbers, no matter how close they are, one can find a number (their mean) between them. Inasmuch as the operation of finding a mean between two numbers can be
iterated endlessly, it follows that between any two real numbers there are an infinity of real numbers. This leads to the mildly unsettling conclusion that all numbers lack nearest neighbors. If the
number line, with these properties, is considered a legitimate analog of space and time—in the sense of distances between spaces or between times being faithfully represented by differences between
numbers—then, as we shall see, many questions arise regarding the nature of motion and how it is possible.
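The iterated-mean argument in the preceding paragraph is easy to act out with exact rational arithmetic. The small Python sketch below (the function name is mine) simply keeps taking means, producing at every step a number strictly between the two given ones that has not appeared before.

```python
from fractions import Fraction

def midpoints_between(a, b, steps):
    """Starting from the pair (a, b), repeatedly replace the upper value with the
    mean of the two current values; every mean produced lies strictly between a and b."""
    lo, hi = Fraction(a), Fraction(b)
    found = []
    for _ in range(steps):
        hi = (lo + hi) / 2   # the mean of two distinct numbers lies strictly between them
        found.append(hi)
    return found

if __name__ == "__main__":
    # Between 0 and 1 (or any two distinct rationals) the process never runs out.
    print([str(m) for m in midpoints_between(0, 1, 6)])
    # ['1/2', '1/4', '1/8', '1/16', '1/32', '1/64']
```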
☐☐ Infinite Series
Among the more powerful instruments in the applied mathematician's tool kit, particularly relative to the study and description of continuous change, are infinite series.
Especially useful are series that, though infinite, converge on a finite value. Convergence here means that as the number of terms of the series increases, the value of the sum gets increasingly
close to an ultimate value. Not all infinite series converge, and distinguishing between those that do and those that do not can be very difficult. Consider the two series ∑_{n=0}^{∞} 1/2^n and ∑_{n=0}^{∞} 1/(n+1).

Table 9.1. The (Approximate) Values of ∑_{n=0}^{∞} 1/2^n and ∑_{n=0}^{∞} 1/(n+1) for the First Few Values of n

n    1/2^n       Sum of 1/2^n    1/(n+1)     Sum of 1/(n+1)
0    1.000000    1.000000        1.000000    1.000000
1    0.500000    1.500000        0.500000    1.500000
2    0.250000    1.750000        0.333333    1.833333
3    0.125000    1.875000        0.250000    2.083333
4    0.062500    1.937500        0.200000    2.283333
5    0.031250    1.968750        0.166667    2.450000
6    0.015625    1.984375        0.142857    2.592857
The first of these series, which is a geometric series in that each term in the sum is the same multiple of the preceding term, converges to 2; the second series, generally known as the harmonic
series, does not converge, but increases indefinitely as n increases. Table 9.1 compares the values of the terms and the sums for the first few values of n. It is fairly obvious even from these few
values that the first series is converging to 2. It is not clear from the table that the second series is not converging to any finite value; indeed, it is easy to believe that it is converging given
that the difference between successive sums is decreasing. If it is diverging, it is doing so very slowly. In fact, the sum does not exceed 10 until one has added 12,367 terms, and to get to more
than 100 requires 10^43 terms. Nevertheless, if one could continue the series indefinitely, one would find that it does not converge. This series was proved to be divergent by Oresme in the 14th
century; the proof, simple and clever, is reproduced by Maor (1987, p. 238). A different proof, published by Swiss mathematician Jakob Bernoulli and credited to his mathematician brother Johann in
1689, is reproduced by Dunham (1991, p. 196).
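The behavior summarized in Table 9.1, and the figure of 12,367 terms quoted above, can be checked directly with a few lines of code. The sketch below is only a numerical illustration; the helper names are mine, and ordinary floating-point arithmetic is accurate enough at this scale.

```python
def partial_sums(term, n_terms):
    """Return the running sums of term(0), term(1), ..., term(n_terms - 1)."""
    total, sums = 0.0, []
    for n in range(n_terms):
        total += term(n)
        sums.append(total)
    return sums

def geometric(n):
    return 1 / 2 ** n    # terms of the series that converges (in the limit) to 2

def harmonic(n):
    return 1 / (n + 1)   # terms of the slowly diverging harmonic series

if __name__ == "__main__":
    print(partial_sums(geometric, 7)[-1])   # 1.984375, matching the last row of Table 9.1
    print(partial_sums(harmonic, 7)[-1])    # about 2.592857, and still growing

    # How many terms before the harmonic sum first exceeds 10?
    total, count = 0.0, 0
    while total <= 10:
        count += 1
        total += 1 / count
    print(count)                            # 12367, the figure quoted in the text
```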
I have spoken of the number to which a converging series converges as the sum of the series, but does this make sense, given that the series never actually reaches the sum but only gets increasingly
close to it? Sometimes use of the term sum is justified in the following way. Consider the geometric series ∑_{n=1}^{∞} 1/2^n = 1, which can be written as S = 1/2 + 1/4 + 1/8 + 1/16 + … = 1. If we subtract
from this (1/2)S = 1/4 + 1/8 + 1/16 + …, we have left (1/2)S = 1/2, from which it follows that S = 1. Series with alternating positive and negative terms can converge as well as series with all positive
terms. The series 1 - 1/2 + 1/3 - 1/4 + 1/5 - … converges to ln 2, the natural logarithm of 2. Series with alternating positive and negative terms can be tricky, however. The terms of the preceding
series can be reordered, for example, to produce a series that converges to (3/2) ln 2 (see Byers, 2007, p. 142). The comparison of the geometric and harmonic series illustrates that the appearance of
convergence, as evidenced by a series’ decreasing rate of growth, is not compelling evidence of actual convergence. To be sure that a series is converging, one needs a formal proof. An interesting
aspect of the problem of proving convergence of a series is that it is possible to know that a series converges without being able to determine the value to which it converges. The series ∑_{n=1}^{∞} 1/n^2,
which we have already seen in Chapter 1, was shown by Jakob Bernoulli to converge to some number less than 2 several decades before Euler proved that it converges to π^2/6 = 1.6449…. In Chapter 3,
some series approximations to π and e were noted in the context of a discussion of numbers. Series approximations have been developed for a great variety of functions, including binomial,
exponential, trigonometric, and logarithmic. The trigonometric sine function, for example, is approximated by the power series
sin x = x - x^3/3! + x^5/5! - x^7/7! + x^9/9! - …
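As a concrete check on the power series just quoted, the few lines below compare its partial sums with the library sine function. This is a sketch only (the helper name is mine), but it shows how quickly the first few terms already give a good approximation for a moderate value of x.

```python
import math

def sin_series(x, n_terms):
    """Partial sum of the power series x - x^3/3! + x^5/5! - x^7/7! + ..."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

if __name__ == "__main__":
    x = 1.0
    for n in (1, 2, 3, 4, 5):
        print(n, sin_series(x, n))   # successive partial sums approach math.sin(1.0)
    print(math.sin(x))               # 0.8414709848078965
```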
Series approximations of numerous functions are readily available in engineering handbooks and books of standard mathematical tables. Although few would question the usefulness of infinite series
today, many difficulties were encountered by their early users. Kline (1980) describes those difficulties in some detail. “As Newton, Leibniz, the several Bernoullis, Euler, d’Alembert, Lagrange, and
other 18th-century
men struggled with the strange problem of infinite series and employed them in analysis, they perpetrated all sorts of blunders, made false proofs, and drew incorrect conclusions; they even gave
arguments that now with hindsight we are obliged to call ludicrous” (p. 142). Of course, hindsight makes experts of us all, in our own eyes at least, but only with respect to intellectual struggles
of the past; it is of no help while a struggle is going on, let alone about the future. The kinds of problems encountered are illustrated by a series that was the subject of a correspondence between
the Italian Jesuit priest and mathematician Luigi Grandi and Gottfried Leibniz, and became known as the Grandi series, or sometimes the Grandi paradox. Consider the ratio
1/(1 + x) = 1 - x + x^2 - x^3 + x^4 - x^5 + …
From the left side of the equation, it is clear that if x = 1, the ratio equals 1/2. But the right side gives us 1 – 1 + 1 – 1 + 1 – 1 + … which can be written as (1 – 1) + (1 – 1) + (1 – 1) + … the
sum of which apparently is 0. But the original series can also be written as 1 – (1 – 1 + 1 – 1 + 1 – 1 + …) and grouped as 1 – [(1 – 1) + (1 – 1) + (1 – 1) + …] the sum of which appears to be 1.
Leibniz, among others, argued that the sum of this sequence should be taken to be 1/2, the mean of the results of the two groupings, and also the answer one would get from just the left side of the
original equation. One also gets 1/2 as the answer if one writes the series as Sum = 1 – 1 + 1 – 1 + 1 – 1 + … = 1 – (1 – 1 + 1 – 1 + 1 – 1 + …) = 1 – Sum, and Sum = 1 – Sum only if Sum = 1/2. The
resolution that is generally accepted today is to say that the sequence has no sum. Kline notes that almost every mathematician of the 18th century made some effort to provide a logical foundation
for the calculus, all to no avail. “The net effect of the century’s efforts to rigorize the calculus, particularly those of giants such as Euler and Lagrange, was to confound and mislead their
contemporaries and successors. They were, on
the whole, so blatantly wrong that one could despair of mathematicians’ ever clarifying the logic involved” (p. 151). Kline’s indictment of 17th- and 18th-century mathematicians’ handling of the
concepts of continuity and differentiability, which he calls the basic concepts in all of analysis, is severe: “One can only be shocked to learn how vague and uncertain mathematicians were about
these concepts. The mistakes were so gross that they would be inexcusable in an undergraduate mathematics student today; yet they were made by the most famous men—Fourier, Cauchy, Galois, Legendre,
Gauss—and also by a multitude of lesser lights who were, nevertheless, leading mathematicians of their times” (p. 161). Again, “from the standpoint of the logical development of mathematics, the
principle of continuity was no more than a dogmatic ad hoc assertion intended to justify what the men of the time could not establish by purely deductive proofs. The principle was contrived and
invoked to justify what visualization and intuition had adduced” (p. 164). Byers (2007) describes the idea of continuity as one of considerable complexity and one the understanding of which has
evolved over many years: “The idea is not a single, well-defined object of thought but a whole process of successively deeper and deeper insights” (p. 239). These observations prompt two thoughts.
First, the rigor by which 19th-century mathematics was distinguished was perhaps, at least in part, a reaction against what was perceived to be the lack of rigor of previous times. Second, what is
cognitively very difficult for one generation may be readily accepted, and found to be easy, by a subsequent one; intuitions are malleable and, while familiarity may sometimes breed contempt, it can
also facilitate acceptance and assimilation. It is important to bear in mind that the sum of a convergent infinite series gets increasingly close to its limit as terms are added to the series, but it
never actually reaches the limit. For many purposes, it is convenient to treat the sum of a series as though it were equal to the limit, but my sense is that failure to bear in mind that the sum and
limit are not the same is the basis of considerable confusion.
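The claim made earlier in this section, that 1 - 1/2 + 1/3 - 1/4 + … tends to ln 2 while a reordering of the very same terms can be made to tend to (3/2) ln 2, is easy to observe numerically. The sketch below uses the standard rearrangement that takes two positive terms for every negative one; the function names are mine, and the partial sums only approach these limits, which is precisely the distinction between sum and limit emphasized just above.

```python
import math

def alternating_harmonic(n_terms):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ...  (tends to ln 2)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

def rearranged(n_blocks):
    """The same terms reordered: two positive terms, then one negative, repeatedly:
    (1 + 1/3 - 1/2) + (1/5 + 1/7 - 1/4) + ...  (tends to 1.5 * ln 2)."""
    total = 0.0
    for b in range(n_blocks):
        total += 1 / (4 * b + 1) + 1 / (4 * b + 3) - 1 / (2 * b + 2)
    return total

if __name__ == "__main__":
    print(alternating_harmonic(100_000))   # about 0.693142; ln 2 = 0.693147...
    print(rearranged(100_000))             # about 1.039718; 1.5 * ln 2 = 1.039721...
    print(math.log(2), 1.5 * math.log(2))  # the two limits, for comparison
```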
☐☐ Zeno Again Zeno’s paradoxes pose a compelling temptation for philosophers; few can resist the urge to comment upon them in some fashion. (Salmon, 2001, p. 42)
In trying to understand the struggles of 17th- and 18th-century mathematicians with concepts relating to the phenomenon of continuous
change, it may help to remember that the famous paradoxes invented by Zeno of Elea had stood unexplained for over 2,000 years. (Nor have they yet been resolved to the satisfaction of everyone who has
written or thought about them.) Most of what is known about Zeno’s paradoxes is based on writings of philosophers (Plato, Aristotle, and others) other than Zeno himself. Of Zeno’s own writings, very
little survives. Zeno proposed paradoxes of extension and of motion, both of which involve space and time. Grünbaum (1967) characterizes the paradoxes of extension this way: Zeno challenged geometry
and chronometry to devise rules for adding lengths and durations which would allow an extended interval to consist of unextended elements. Specifically, Zeno challenged physical theory to devise
additivity rules for length and duration which permit physical theory to assert each of the following assumptions without generating a paradox: (1) a line segment of physical space, whose length is
positive, is a linear mathematical continuum of points, each of which is of length zero, (2) the time interval corresponding to a physical process of positive duration is a linear mathematical
continuum of instants, each of which is of zero duration. (p. 3)
How, in short, can extended entities (space and time) be composed of entities of zero extension (points and instants)? In his paradoxes of motion, Zeno argued that motion is impossible. In perhaps
the best known of them, he contended that if, in a race between Achilles and a tortoise, Achilles gives the tortoise a head start, he can never overtake it, no matter how much faster he may be.
Suppose Achilles is twice as fast as the tortoise and he gives the tortoise a head start of a specified distance. During the time it takes Achilles to cover the distance of the head start, the
tortoise will have advanced by half that distance; during the time it takes Achilles to cover that distance, the tortoise will have advanced by half of that distance, and so on ad infinitum. So
Achilles will not only fail to pass the tortoise, but he will never even catch up with it. (Falk [2009] engagingly draws a parallel between Achilles’ race with the tortoise and the relationship
between age and life expectancy as seen in life expectancy tables. As current age increases, remaining life expectancy decreases, but never actually goes to zero; no matter what the current age,
there is a remaining life expectancy of some duration.) Of course, everyone—presumably including philosophers—knows that Achilles will catch and pass the tortoise. And with knowledge of the speed
with which both Achilles and the tortoise run and the head start that the tortoise is given, one can calculate precisely how long it will take him to do so. To use an example from Black (1950/2001),
if Achilles runs
10 times as fast as the tortoise and the latter is given a 100-yard head start, Achilles will catch up with the tortoise when he (Achilles) has run 111 1/9 yards. If Achilles runs at a speed of 10
yards/second, it will take him 11 1/9 seconds to do it. Black notes that this type of calculation was seen as a resolution of the paradox by no less personages than Descartes, Peirce, and Whitehead,
but he himself expresses doubt that it “goes to the heart of the matter. It tells us, correctly, when and where Achilles and the tortoise will meet, if they meet; but it fails to show that Zeno was
wrong in claiming that they could not meet” (p. 70). So given that we know that Achilles will catch up to, and pass, the tortoise, what is the flaw in Zeno’s argument that he cannot do so? This is
one way of stating the challenge that has enticed numerous mathematicians and logicians and more than a few garden-variety folks over the two-and-a-half millennia or so since Zeno exposed his
perplexing musings to his contemporaries and to posterity. An argument closely related to that of the race between Achilles and the tortoise has it that in order to get from A to B, Achilles must
first get halfway, then 3/4 of the way, then 7/8, then 15/16, and so on, always having to travel half of the remaining distance before completing the trip. In a variation on this theme, Rucker (1980)
has mountain climbers climbing an infinitely high mountain composed of a series of ever-higher cliffs in two hours. The trick is to scale the first cliff in one hour, the second in half an hour, the
third in a quarter hour, and to continue halving the time to scale each successive cliff, which will permit them to do an infinity of cliffs in two hours. Grünbaum (1969/2001) distinguishes between a
legato run, in which Achilles runs in an uninterrupted fashion toward his goal, and a staccato run, in which he runs for a quarter of a minute at twice the speed of his first half minute of the
legato run, pauses for the same amount of time, runs for an eighth of a minute and pauses also for an eighth of a minute, and so on, halving the duration of the run and the pause, at each step. If we
imagine Achilles Legato and Achilles Staccato racing each other, Achilles Staccato would be racing ahead of Achilles Legato at each segment and then resting while the latter catches up. If Achilles
Staccato performs some task during each pause, he will have performed an infinity of them in a minute. As Moore (2001) points out, if he writes one of the digits of π during each pause, he will have
done the complete expansion. “We are loath to admit this as a conceptual possibility,” Moore notes, “although we seem bound to do so” (p. 4). According to the dichotomy, another of Zeno’s paradoxes
of motion, in order to get from point A to point B, Achilles must first cover half the distance; in order to cover half the distance, he must first cover one-fourth
of the total distance (half of the half); in order to cover one-fourth, he must first cover one-eighth; and so on. Inasmuch as covering each of these partial distances may be considered an act, in
order to move any distance at all, Achilles must have already performed an infinite number of acts, which, the argument goes, is clearly impossible. The conclusion that appears to be forced by the
dichotomy is that poor Achilles not only cannot catch the tortoise, but cannot even leave the starting line—of course, according to this argument, neither can the tortoise; racing, or any other act
that requires motion, is impossible. The argument appears to rule out the possibility of anything at rest ever beginning to move, and it prompts the question: How, in fact, does something at rest
begin to move. At one instant in time it is at rest and at a subsequent one (I did not say the immediately following one) it is moving; how did the transition from resting to in motion take place?
Still another of Zeno’s paradoxes of motion involved the flight of an arrow. At any instant of time, the argument goes, the arrow occupies— that is, is stationary within—a region of space that is
precisely equal to its length, and the same can be said of every instant of its flight. If, as claimed in this argument, it is stationary at every instant, its motion must be an illusion. Many
resolutions of one or another of Zeno’s paradoxes have been proposed. Among them are the following: • Space and time are not infinitely divisible. The universe is discrete. Both space and time have
irreducible units—hodons and chronons, say— and these irreducible units have extent (are not points of zero extent or moments of zero duration). A piece of chocolate can be divided into parts only so
long; eventually one comes to something (a molecule of chocolate) that, upon further division, is no longer chocolate. And upon further subdivision, one eventually comes to something (a quark?) which
is not (yet) divisible. The idea that matter is composed of indivisible particles goes back at least to the classical Greek philosophers Leucippus of Miletus, Democritus of Abdera, and Epicurus of
Samos, and it remains a viable possibility today. The application of mathematics to many problem areas in physics— celestial motion, electromagnetism, quantum theory—assumes the continuous nature of
both space and time, but that space and time are continuous in reality is not beyond doubt (Casti, 2001; Smolin, 2001, 2004; Wheeler, 1968). Smolin has proposed a theory of quantum gravity that rests
on the assumption that both space and time are quantal in nature. Space, according to this theory, comes in units of Planck length (about 10⁻³³ cm), so the smallest admissible area is about 10⁻⁶⁶ cm² and the smallest admissible volume is about 10⁻⁹⁹ cm³. Time, according to Smolin’s theory, moves by discrete jumps, each about the duration of Planck time, or about 10⁻⁴³ s. One may wonder what is gained
by hypothesizing that space (or time) is discrete, while allowing the unit that makes it discrete to have length, width and height (or duration), which presumably could be continuous entities. The
important point in the present context is the idea that space and time are discrete for observational purposes; what goes on within a quantum being not determinable. In any case, Heisenberg’s
uncertainty principle rules out the simultaneous determination of an object’s position and momentum with perfect accuracy. Zeno would have found all of this interesting. According to Salmon (2001),
William James, Alfred North Whitehead, and Henri Bergson all held that, while not proving the impossibility of motion, Zeno’s paradoxes reveal the inadequacy of the mathematical account of continuity
for the description of temporal processes. James and Whitehead, he argues, saw in these paradoxes a proof that temporal processes are discontinuous. It is one thing to say that the number line,
which, after all, is a figment of the mathematician’s imagination, has an infinity of points between any two points, and quite another to hold that physical space and time, each of which presumably
exists independently of what the mathematician thinks, have an infinity of locations or moments between any two locations or moments. • Motion should be conceptualized as a functional relationship.
“A function is a pairing of elements of two (not necessarily distinct) classes, the domain of the function and its values. On the basis of this definition, if motion is a functional relation between
time and position, then motion consists solely of the pairing of times with positions. Motion consists not of traversing an infinitesimal distance in an infinitesimal time; it consists of the
occupation of a unique position at each given instant of time. This conception has been appropriately dubbed ‘the at-at theory of motion.’ The question, how does an object get from one position to
another, does not arise” (Salmon, 2001, p. 23). • Describing the movement of an object from one point to another as the completing of an infinite sequence of tasks is a misuse of language. Thomson
(1954/2001b) makes an argument of this sort in addressing the question of whether it is possible to perform a “super-task” (e.g., complete an infinite number of tasks in a finite time). Thomson
states this way the argument that it is not: “To complete any journey you must complete an infinite number of journeys. For to arrive from
A to B you must first go from A to A’, the mid-point of A and B, and thence to A’’, the mid-point of A’ and B, and so on. But it is logically absurd that someone should have completed all of an
infinite number of journeys, just as it is logically absurd that someone should have completed all of an infinite number of tasks. Therefore it is absurd to suppose that anyone has ever completed any
journey” (p. 89). Noting that philosophers have differed with respect to whether the first or the second of the premises of this argument should be considered false, Thomson contends that the
disagreement is moot because the argument is invalid in any case. It commits, he claims, the fallacy of equivocation, there being more than one connotation that can be given to the reference to
completing an infinite number of journeys. • The appearance of paradox is based on an unfounded assumption that properties of space and time are analogous in specific respects to those of the real
number line. The real number line is infinitely divisible and consequently, as we have seen, no number has a nearest neighbor. If space and time are like the real number line in being infinitely
divisible, then we may say that no point in space has a nearest neighboring point in space and no instant in time has a nearest neighboring instant in time, and that to get from one point to another,
or from one instant to another, one must cross an infinity of intermediate points or instants. But are space and time like the number line in this respect? Why should we assume they are? According to
Bergson (1911/2001), we should not. Supposing that what applies to the line of a movement—its divisibility into as many parts as we wish—applies also to the movement per se leads, he argues, to “a
series of absurdities that all express the same fundamental absurdity” (p. 65). Black (1950/2001), if I understand him correctly, similarly holds that the paradox stems from misapplication of
characteristics of the number line to space and time. “We can of course choose to say that we shall represent distance by a numerical interval, and that every part of that numerical interval shall
also count as representing a distance; then it will be true a priori that there are infinitely many ‘distances.’ But the class of what will then be called ‘distances’ will be a series of pairs of
numbers, not an infinite series of spatio-temporal things” (p. 80). Again, “Achilles is not called upon to do the logically impossible; the illusion that he must do so is created by our failure to
hold separate the finite number of real things that the runner has to accomplish and the infinite series of numbers by which we describe what he actually does. We create the illusion of the infinite
tasks by the kind of mathematics that we use to describe space, time, and motion” (p. 81). Wisdom (1951/2001) also
distinguishes between mathematical distance and physical distance and objects to the use of the former to represent the latter. Unlike a mathematical point, a physical point, Wisdom argues, has some
size, however small it may be. It follows that, unlike a mathematical distance, which can consist of an infinity of (mathematical) points, a physical distance can consist of only a finite number of
(physical) points. So, in this view, Zeno’s arguments apply to mathematical entities but not to physical ones. • An object in motion is not equivalent to an object being at rest in a sequence of
positions. Bergson (2001) dispenses with the paradox of the arrow this way: “The arrow never is in any point of its course. The most we can say is that it might be there, in this sense, that it
passes there and might stop there. It is true that if it did stop there, it would be at rest there, and at this point it is no longer movement that we should have to do with” (p. 63). Movement, in
Bergson’s view, is not decomposable: “The arrow which goes from A to B displays with a single stroke, although over a certain extent of duration, its indivisible mobility” (p. 63). “A single movement
is entirely, by the hypothesis, a movement between two stops; if there are intermediate stops, it is no longer a single movement” (p. 64). This argument is similar to one that might have been made by
Aristotle, which, as paraphrased by Owen (1957/2001) in an imagined dialogue between Zeno and Aristotle, goes as follows: “If there is no time in a moment for the arrow to move there is no time for
it to be stationary either. Movement involves having different positions at different moments, and accordingly rest involves having the same position at different moments. But we are considering only
one moment, so neither of these ideas applies. In making either of them apply you [Zeno] treat the single moment as a period of time itself containing different moments” (p. 158). Owen argues that
“talk of movement is appropriate only when we have in mind periods of time within which movements could be achieved … it is absurd either to say or to deny [that movements can be achieved within
moments], for moments are not pieces of time such that within them any process can either take place or lack the time to take place. But this certainly does not show that the arrow is not moving at
any moment. It is, of course: we have seen the sense in which it is. Whether it is, is a question of fact and not of logic” (p. 162). • Rucker (1982) suggests a way out of the arrow paradox, in which
the arrow is said not to be moving in any of the successive instants of time, which he believes not to have been published before. “According
to Special Relativity, an arrow in motion experiences a relativistic length contraction proportional to its speed. So, in fact, the arrow’s state of motion is instantaneously observable!” (p. 244).
So has Zeno met his match in Dr. Einstein? The reader must judge for himself or herself; it is beyond me. • Russell (1929/2001) argues that the arrow paradox rests on our strong tendency to assume
that at any given instant when an arrow is in flight, there is a next position in which the arrow must be located in the next instant, and that the appearance of a paradox disappears when one
realizes that there is neither a next position nor a next instant. It appears that Russell here tacitly accepts the infinitely divisible property of the number line as descriptive of physical space
and time. Seeing the infinite divisibility of space and time as the resolution of Zeno’s paradoxes is in stark contrast to the first view mentioned above, which sees the resolution in the assumption
that space and time are not infinitely divisible. • Some simply dismiss the paradoxes as nonsensical. French physicist Edme Mariotte (1678/1992), for example, makes the case that the answer to the
argument that a man who runs twice as fast as another could never catch the slower runner if the latter had a head start of a league (about 5.56 kilometers) is the counterargument that if the faster
one does a league in an hour, he will have covered three leagues in three hours, during which time the slower one will have covered only one and a half, which means the faster will overtake and pass
the slower. The arguments on which the paradoxes are based are, in Mariotte’s view, sophistical: Bodies obviously change positions (even if we do not understand how), and do so at different rates; to
claim otherwise is nonsense. • Zeno’s own resolution of the paradox was that space, time, and motion are all illusory. This seems unlikely to appeal to many contemporary minds. It will be clear to
the reader that these “resolutions” are not all mutually consistent. I strongly doubt that any of them, or others that might have been included, will be compelling to everyone. What would constitute
a compelling resolution is an interesting psychological question; the only safe conjecture, in my view, is that people who think about such things are likely to disagree on the matter for some time
to come. As Salmon (2001) points out, “Each age, from Aristotle on down, seems to find in the paradoxes difficulties that are roughly commensurate with the mathematical, logical, and philosophical
resources then available” (p. 44). Contemporary discussions
of the paradoxes often raise the question of what is conceivable in view of the laws of physics as currently understood, and include references to such concepts as kinematics, dynamics, Newtonian
mechanics, relativity theory, quantum mechanics, limitations imposed by the speed of light, minimal necessary conditions for emitting photons from a light bulb, and so on. How should we view the
paradoxes? Are they frivolous puzzles, mind games devoid of substance and unworthy of sober thought? Or do they present profound questions about the nature of such fundamental concepts as space,
time, and motion? There are undoubtedly a range of views on this matter. I am inclined to agree with the assessment of Russell (1929/2001), who contends that although Zeno’s paradoxes do not prove
that motion and change are impossible, they are not “on any view, mere foolish quibbles: they are serious arguments, raising difficulties which it has taken two thousand years to answer, and which
even now are fatal to the teachings of most philosophers” (p. 47). Salmon (2001) describes Zeno’s paradoxes as having an onion-like quality: “As one peels away outer layers by disposing of the more
superficial difficulties, new and more profound problems are revealed” (p. 43).
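To see the arithmetic behind Black’s example spelled out (this is only a restatement of his figures, not an addition to his argument), note that the successive distances Achilles must cover form a geometric series:

$$100 + 10 + 1 + \tfrac{1}{10} + \cdots \;=\; \sum_{k=0}^{\infty} 100\left(\tfrac{1}{10}\right)^{k} \;=\; \frac{100}{1 - \tfrac{1}{10}} \;=\; \frac{1000}{9} \;=\; 111\tfrac{1}{9} \text{ yards},$$

and at 10 yards per second the corresponding times sum to (1000/9)/10 = 11 1/9 seconds. Every finite partial sum falls short of these values; only the limit of the partial sums equals them, which is why the calculation tells us where and when the runners meet without, by itself, answering Zeno’s contention that the infinity of sub-runs cannot all be completed.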
The Dilemma of Divisibility
Questions of the nature of space, time, and motion—especially relating to whether they are to be considered discrete or continuous and infinitely divisible—have amused,
bewildered, and tormented thinkers from Zeno’s time to ours. Are things divisible indefinitely, or only within limits, beyond which no further division is possible? Miller (1982) refers to the
question as the dilemma of divisibility and identifies its two horns as the nihilistic horn, which “starts from the proposition that magnitude is everywhere divisible and argues to the conclusion
that the magnitude is thereby reduced to no extension or, more dramatically, to nothing at all,” and the atomistic horn, which “starts from the premise that magnitude is not everywhere divisible,
leading to the positing of extended but indivisible magnitudes” (p. 89). Thomson (1954/2001b) makes a distinction between asserting that something is infinitely divisible—which means that “the
operation of halving it or halving some part of it can be performed infinitely often”— and asserting “that the operation can have been performed infinitely often” (p. 91). He suggests that people
have “confused saying (1) it is conceivable that each of an infinity of tasks be possible (practically possible) of performance, with saying (2) that it is conceivable that all of an infinite number
of tasks should have been performed” (p. 92).
To clarify this distinction, Thomson describes a reading lamp that has an on-off button that if pushed when the lamp is off, turns it on, and if pushed when the lamp is on, turns it off. Suppose, he
asks, that the lamp is off and the button is pushed once in a minute, again in the next half minute, once again in the next quarter minute, and so on, so that it is pushed an infinity of times within
two minutes. What will be the state of the lamp at the end of two minutes, on or off? Thomson contrasts the question of what the consequence of the last push of the button would be and that of what
the whole infinite sequence of button pushes would produce; the first question has no answer, he contends, because there is no last button push, but in his view the second question would seem to be a
fair one. Benacerraf (1962/2001), who sees the point of Thomson’s lamp to be to demonstrate that the idea of completing a super-task is self-contradictory, contends that the argument is flawed. The
flaw is that Thomson’s description of the on-off states of the lamp applies only to instants of time before the two-minute mark; it says nothing about the state of the lamp at that time. “He does not
show that to occupy all the points in an infinite convergent series of points logically entails occupying the limit point” (p. 120). Benacerraf’s argument here has an analog in the observation that a
convergent mathematical sequence never reaches the limit value. As already noted, to say that the series ∑_{n=1}^{∞} 1/2^n converges to 1 is not to say that it eventually actually reaches 1, but only
that it gets ever closer to 1. Benacerraf highlights the distinction between psychological and logical considerations by following the observation that it is not possible to imagine a circumstance in
which one would be justified in saying that an infinite sequence of tasks had been completed with insistence that the inability to imagine something does not make it logically impossible. In a
response to Benacerraf’s critique of his argument, Thomson (2001a) acknowledges the validity of the critique and expresses an inclination “to think that there are no simple knock-down arguments to
show that the notion of a completed ω-task [what Thomson earlier had referred to as a super-task] is self-contradictory” (p. 131). The dilemma of divisibility, which has been a challenge to
philosophers and mathematicians at least from the time of the ancient Greeks, remains a challenge to this day. Some of the associated problems have been discussed in the foregoing; there are many
others. Here I will just mention a few of them: • Imagine a cone being divided into two sections by a cut parallel to its base. How does the size of the bottom of the top section compare with the
size of the top of the bottom one? If one says they are the same,
this seems to imply that the shape is that of a cylinder, not a cone, because by slicing all the way up we find no instance in which the bottom of the top and the top of the bottom differ in size. If
one says that the bottom of the top is smaller than the top of the bottom, this seems to require that a discrete jump has occurred from the one plane to the other, despite that there is no distance
between them. • Can a line be divided at every point? If points are not contiguous—if between any two points there are other points—how is it possible, even conceivable, to divide a line at any
point? But suppose a line can be divided at any point. It would seem to follow that a line can be divided at every point. But if a line is divided at every point, what is left? Apparently nothing. If
something is found to be left, this can only mean that the line was not divided at every point. But if division at every point leaves nothing, this seems to imply that a line—which is generally
considered to be composed of points—is the sum of many nothings. • Imagine a line segment bounded at one end by 0 and at the other by 10. Remove from it the subsegment bounded by 5 and 10. What is
the largest value of the remaining subsegment bounded at the bottom by 0? If the line is infinitely divisible, the answer is that it has none. By removing a closed line subsegment from a closed
segment in which it was contained, we have created a subsegment that has no maximum value. Is that not strange? • What is a point? Does it exist physically? And if it does not, what constitutes the
center of a circle or a sphere? And what do we call the place where a circle and a tangent to it touch? • Can indivisible entities touch? By definition, indivisible entities have no parts (e.g., no
edges); does it follow that to be juxtaposed means to be superposed or coincidental—or the same entity? If indivisibles cannot touch, can they lie in an ordered sequence with nothing between them? •
Does an instant of time have duration? If it does, then Aristotle’s dictum that a thing cannot be and not be at the same time does not hold, because it could be during one part of an instant and not
be during another part. But if an instant does not have duration, what is it? In what sense can it be said to exist? • What is “now?” The past and future appear to abut. Does the present not exist,
except perhaps as an illusion? Is time itself an illusion? If, as Aristotle argued, the past no longer exists, the future does not yet exist, and the present has no duration, what is left?
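To make concrete the distinction between the partial sums of the series ∑_{n=1}^{∞} 1/2^n and its limit, and to show what can and cannot be computed about Thomson’s lamp, here is a minimal sketch in Python. The function names are mine, and the press-time schedule is just one natural reading of Thomson’s description; nothing in the code adds to his or Benacerraf’s argument.

```python
from fractions import Fraction

# Partial sums of 1/2 + 1/4 + 1/8 + ...: they get ever closer to 1,
# but no finite partial sum ever equals 1.
def partial_sum(n):
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

for n in (1, 2, 5, 10, 30):
    s = partial_sum(n)
    print(f"n = {n:2d}   sum = {s}   gap to 1 = {float(1 - s):.2e}")

# Thomson's lamp, on one natural schedule: the k-th press of the button
# occurs at 2 - 1/2**(k - 1) minutes, so every press happens strictly
# before the two-minute mark.
def time_of_press(k):
    return 2 - Fraction(1, 2**(k - 1))

# After any finite number of presses the lamp has a definite state
# (it starts off, so an odd number of presses leaves it on) ...
def state_after(presses):
    return "on" if presses % 2 else "off"

print(time_of_press(10), state_after(10))
# ... but the schedule assigns no press to the two-minute instant itself,
# which is the gap Benacerraf points to in Thomson's argument.
```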
The Continuity or Discontinuity of the Number Line
The question of how to define the continuum of the number line has drawn the attention of some of the world’s greatest mathematicians. The
challenge has been to explain how the notion of a continuum can be made coherent. When a class of elements (points on a line, instants in time) has the property that any two elements, no matter how
close they are, have elements between them, it is said to be dense with respect to the ordering relationship of betweenness. Denseness is not to be confused, however, with continuity, or the absence
of gaps: The rational numbers are dense, inasmuch as one can always find another rational number between any two of them; however, the rationals do not completely fill up the number line. Indeed,
although both the rationals and the irrationals are infinite in number, the rationals, which are countable, are outnumbered by the irrationals (e, π, √2, √3, …), which are not. The question of whether
the number line is continuous was addressed by Dedekind with his famous “cut” or “partition.” Dedekind’s cut divides the real numbers into two subsets, the members of one of which are all less than
those of the other. A cut of the real number line need not occur at a rational number; if it occurs elsewhere than at a rational number, it can be used to define an irrational, a fact that Dedekind
used to argue the continuity of the real number line. From the notion of a cut, it is a short step to the distinction between intervals that include one or both endpoints and those that do not. This
allows an interval on the number line from, say 5 to 10 that includes the end points (5 and 10) to abut an interval from 10 to 15 that does not include the endpoint at 10. The latter interval is
considered open at its lower end; it includes all values as close as one wishes to 10, but not 10 itself. Lakoff and Núñez (2000) see Dedekind’s work on continuity as illustrative of the
discretization of mathematics (mentioned in Chapter 4). Dedekind’s work, they argue, represented a major departure from the way continuity had been conceptualized for millennia, by introducing a
metaphor that treated continuity as numerical completion rather than as a special concept. “Continuity no longer comes from motion but from the completeness of a number system. Since each number is
discrete and each number is associated one-to-one with the points on the line, there is no longer any naturally continuous line independent of the set of points. The continuity of the line— and
indeed of all space—is now to come from the completeness of the
real-number system, independent of any geometry or purely spatial considerations at all” (p. 299). The discretization illustrated in the work of Dedekind was continued by others, Lakoff and Núñez
suggest, notably French mathematician Augustin Cauchy and German mathematician Karl Weierstrass, who further severed the conceptual dependence of arithmetic and calculus on geometry. The result was a
discretized account of continuity. In Lakoff and Núñez’s words: “Just numbers and logic. There is nothing here from geometry—no points, no lines, no planes, no secants or tangents. In place of the
natural spatial continuity of the line, there is just the numerically gapless set of real numbers…. The function is not a curve in the Cartesian plane; it is just a set of ordered pairs of real
numbers” (p. 313).
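A small computational sketch may help make Dedekind’s idea tangible. The code below is an illustration only, not Dedekind’s construction verbatim, and the function names are mine. It represents the cut associated with √2 by a membership test on the rationals and shows that the lower set has no greatest member, which is precisely the situation in which the cut itself is taken to define an irrational number.

```python
from fractions import Fraction

def in_lower_set(q: Fraction) -> bool:
    # Lower set of the Dedekind cut for sqrt(2): all negatives and zero,
    # plus every positive rational whose square is less than 2.
    return q <= 0 or q * q < 2

def slightly_larger_member(q: Fraction) -> Fraction:
    # Given q in the lower set, return a strictly larger rational that is
    # still in the lower set, showing the lower set has no maximum.
    step = Fraction(1, 10)
    while not in_lower_set(q + step):
        step /= 10
    return q + step

q = Fraction(7, 5)              # 1.4, comfortably below sqrt(2)
for _ in range(5):
    q = slightly_larger_member(q)
    print(q, float(q))          # climbs toward sqrt(2) without ever reaching it
```

No rational is the largest member of the lower set and none is the smallest member of the upper set; the “gap” between the two sets is exactly what the cut identifies with √2.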
The Continuity or Discontinuity of Space, Time, and Motion
The questions of the continuity (infinite divisibility) or discontinuity (discreteness) of the number line and that of the continuity or
discontinuity of space, time, and motion are different, although they have been coupled tightly in treatments of both. Experiments have shown that many people do not make a clear distinction between
mathematical and physical objects with respect to the question of their divisibility, believing either that division can be continued indefinitely in both cases or that there will come a point, again
in both cases, at which further division will prove to be impossible (Stavy & Tirosh, 1993a, 1993b; Tirosh & Stavy, 1996). The absence of nearest neighbors on the number line is the kernel of Zeno’s
dichotomy paradox: Inasmuch as to get started, Achilles must go first from his starting point to the next point, but there is no next point to which to go, so he is stuck on start. It is also the
basis for numerous other paradoxes and puzzles. The following is from Normore (1982). It is now time t. At t + 1 second a light will be on, at t + ½ second it will be off, at t + ¼ second it will be
on, and so on. Will the light be on or off immediately after t? Normore notes that an account by the medieval English logician Walter Burley has it that the statements “Immediately after t the light
will be on” and “Immediately after t the light will be off” are both true. I leave it to the reader to consult Normore’s chapter to see how this conclusion can be justified. (The reader will see a
similarity between Normore’s question and that of Thomson, which was discussed earlier, but whereas Thomson asks what the state of the light will be after it has been turned on and off an infinity of
times within a finite interval, Normore asks how the state of the light can be changed at all.)
The dilemma represented by the notion of infinitely divisible space is illustrated by a contest in which the winner is the one who can stand closest to a building. Imagine (unrealistically) that it
is possible to stand as close as one wishes. No matter how close one stands, it can be asked why one does not stand closer. The temporal analog is a contest in which the winner is the one who starts
a process soonest following a specified instant, again supposing it is possible to start as soon as one wishes; in this case, no matter how soon after the instant one starts, it can be asked why one
did not start sooner. It was in part to answer questions like these that some of the ancient Greeks, notably Leucippus, Democritus, and Epicurus, argued that matter, time, and motion are
discrete—composed of indivisible entities—atoms. But as Zeno demonstrated, atomism was not free of difficult questions. For example, if time is discrete—composed of a sequence of time atoms—how is
motion, or change more generally, possible? If motion occurs, a thing in motion must be in different locations in space during different atoms of time. But if it is stationary during any given time
atom, as the atomists held, how does it get from one space atom to another in successive time atoms? One proposed answer was that it does so by jerks or jumps. Another resolution of this problem was
to deny that motion, as such, occurs: One does not say that something “is moving,” only that something “has moved” (Diodorus, quoted in Sorabji, 1982, p. 61). “Diodorus accepts from Aristotle the
idea that motion at indivisible times entails motion through indivisible places and that motion through indivisible places entails having moved without ever being in the process of moving” (Sorabji,
1982, p. 64). Aristotle devoted much energy to refuting the idea, promoted by the atomists, that space, time, and motion are discrete. One of his several arguments against atomism was that
indivisible entities could not be arranged so as to constitute a continuum, because contiguity or succession could not be achieved. A point in Aristotle’s view has no extent; he defined it, 2,500
years in anticipation of Dedekind, as a cut or division of the line. It marks the beginning or end of a line segment, but has no substance (Miller, 1982). Aristotle’s handling of the question of how
change takes place, which—it appeared to some—involved the idea that something could be both x and not-x at the same instant, gave rise during the 14th century to what Kretzmann (1982a) refers to as
“quasi-Aristotelianism.” Kretzmann dismisses the proposal of quasi-Aristotelianism for dealing with Aristotle’s problem as a nonsolution of a pseudoproblem and attributes it to a misreading of
Aristotle. Spade (1982), who also notes problems with quasi-Aristotelianism, is less conclusively dismissive of it than is Kretzmann, arguing that it is interesting and not entirely outlandish, and
that it deserves fuller investigation. For present purposes, the point
is that Aristotle’s views of the infinite persisted and motivated animated discussion and debate throughout the Middle Ages and continue to do so to this day. Atomism also had the challenge of being
clear about the nature of the atom. Is a point an atom (or a quark)? Is an instant? If an atom has extent or duration, however small, it would be divisible, at least conceptually, in which case, it
does not really solve the various dilemmas it was invented to solve. And if it does not have extent or duration—is not divisible, even conceptually—the question is how to get something with extent or
duration from extentless points or durationless instants—how to get something from nothing. There is the question too of how to deal with the fact that objects move at different speeds. Suppose A
travels twice as far as B during the same time. Does A jump twice as far as B during each instant? Or does A jump twice as frequently as does B, but covers the same distance on each jump? Miller
(1982) puts the dilemma the atomist faces in dealing with travel at different speeds this way. “A pure atomist will hold that an atom A moves in an indivisible jerk over an indivisible magnitude in
an indivisible time. If this atomist concedes that another atom B could move more slowly than A and agrees that a slower body covers a smaller magnitude in the same time, he will be driven to the
conclusion that there is a smaller magnitude than ‘the smallest magnitude.’ The pure atomist can avoid self-contradiction only by refusing to concede that it is always possible to move faster or
slower than any given moving body” (p. 110). This atomistic dilemma is reminiscent of Aristotle’s argument for the infinite divisibility of space and time. Aristotle distinguished between something
that could be said to be (potentially) infinite by addition and something that could be said to be (potentially) infinite by division. His argument that space and time are infinite by division goes
as follows: Consider two moving objects A and B, A being the faster. In the time t1 that A moves a given distance d1, B must move a shorter distance d2. And in the shorter time t2 that A moves the
distance d2, B must move a still shorter distance d3. And so on ad infinitum. (Moore, 2001, p. 41)
Zeno, of course, held that his paradoxes prove that motion is impossible, but even if one does not accept his arguments to that effect, one might still see in them some relevance to the question of
whether space, time, and motion are to be considered continuous or discrete. Although they prompt many questions that relate directly to this one, I do not see that they answer it. Perhaps space and
time are not infinitely divisible. Perhaps they are quantal in nature and the appearance
of continuity is due to the limited resolving power of our instruments of observation. If this is so, Zeno’s paradoxes, resting as they do on the infinite divisibility of the number line, are not
applicable to physical reality, because the number line is not descriptive of physical reality with respect to divisibility. It is conceivable that, at some time, it will be possible to determine
that space and time are discrete, that they are both particulate. It is not clear, however, how it would ever be possible to determine that space and time are continuous. The most that could be
determined is that they are effectively continuous within the limits of the resolving power of the instruments of observation and measurement at the time. That would not rule out the possibility of
discreteness at a more precise level of observation. (The idea that space is quantal brings its own puzzles. Hermann Weyl (1949) points out, for example, that if we imagine a square area being
composed of tiny indivisible square tiles, and if distance between two points is a function of the number of tiles between them, the diagonal of the area would be considered the same length as a side
of it.) It seems reasonably certain that questions of continuity and discreteness will fuel speculation for a long time to come. Dantzig sees the challenge as that of reconciling the “symphony of
number,” which plays in staccato, with the “harmony of the universe,” which knows only the legato form. But whether the universe really is best considered spatially and temporally continuous or
discrete and which perspective—physical or mathematical—will have to adjust to effect a grand resolution appear to be open questions.
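Weyl’s tile puzzle mentioned above can be made concrete in a few lines of code. This is a toy illustration under the stated assumption that “distance” means the number of unit tiles a straight path passes through; the lattice-counting formula used is a standard one and is not taken from Weyl.

```python
from math import gcd, sqrt

def tiles_crossed_by_diagonal(m, n):
    # Standard lattice-counting result: the diagonal of an m-by-n rectangle
    # of unit tiles passes through m + n - gcd(m, n) of the tiles.
    return m + n - gcd(m, n)

for n in (1, 10, 100):
    side_tiles = n                                # a side of an n-by-n square crosses n tiles
    diag_tiles = tiles_crossed_by_diagonal(n, n)  # also n, since gcd(n, n) = n
    print(n, side_tiles, diag_tiles, round(n * sqrt(2), 3))

# Counting tiles makes the diagonal of the square exactly as "long" as a side,
# whereas Euclidean geometry makes it longer by a factor of sqrt(2); that is
# the oddity Weyl points out for a tiled, quantized space.
```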
Infinitesimals Persist
There are few ideas in the history of mathematics that have proved to be more controversial and, at the same time, useful than the concept of an infinitesimal. The idea is
closely related to that of infinite divisibility. One conception of an infinitesimal is that of a quantity that is infinitely close to zero, but not equal to it. The elusiveness of the infinitesimal
is seen in the common practice of treating this entity as though its value were either zero or not, depending on the demands of the occasion. This dual-personality treatment of the infinitesimal was
critical to the development and use of the calculus. It was also the focus of some of the harshest criticisms. Commenting on it, the noted Anglican cleric and philosopher Bishop George Berkeley
asked, “And what are
these fluxions [of which mathematicians speak]? The velocities of evanescent increments. And what are these same evanescent increments? They are neither finite quantities, nor quantities infinitely
small, nor yet nothing. May we not call them the ghosts of departed quantities…?” (1734/1956, p. 292). The “ghosts of departed quantities” is a reference to Newton’s fluxions, today’s differentials,
and alludes to the practice of mathematicians of giving them values only to take them away again by assuming they go to zero. Davis and Hersh (1972) refer to Berkeley’s logic as unanswerable, but it
did not prevent mathematicians from continuing to use infinitesimals to good effect. Infinitesimals, as well as the allied concepts of the derivative and the definite integral, remained very
difficult and subject to severe criticism and even ridicule for a long time during the 17th and 18th centuries. Like many other mathematical constructs, they were used effectively for practical
computational purposes long before anything approaching a consensus as to what they meant was attained. That Newton and Leibniz used infinitesimals does not mean that they were comfortable with the
concept, but only that they recognized its utility. Leibniz referred to the infinitesimal as a façon de parler, and defended the use of it strictly on practical grounds; even if one considers such a
thing impossible, he argued, it is an effective tool for calculation. The concept of infinitesimals proved to be so problematic and resistive to rigorous justification that many 19th-century
mathematicians refused to use it and made it superfluous with the development of the theory of limits. “So great is the average person’s fear of infinity that to this day calculus all over the world
is being taught as a study of limit processes instead of what it really is: infinitesimal analysis” (Rucker, 1982, p. 87). Ogilvy (1984) attributes early antagonism to the work of Newton and Leibniz
to the concept of the infinitesimal, which, he says, was exorcised during the 19th and 20th centuries. The exorcists, in this case, were Cauchy and Weierstrass. Cauchy introduced the notion of a
limit as the value that a variable approaches and from which it eventually differs by as small an amount as one wishes. Weierstrass formalized the idea by defining the limit, L, of f(x) as x
approaches a, as follows: for any ε > 0, there exists a δ > 0 so that, if 0 < |x − a| < δ, then |f(x) − L| < ε. Something of the relief with which many mathematicians greeted the introduction of the
concept of a limit is captured in a comment by Kasner and Newman (1940): “Because Weierstrass disposed of the infinitesimal djinn, the calculus rests securely on the understandable and
nonmetaphysical foundations of limit, function, and limit of a function” (p. 332).
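As a simple worked instance of the Weierstrass definition quoted above (the particular function is chosen only for illustration), consider the claim that lim_{x→1}(3x + 2) = 5. Given any ε > 0, take δ = ε/3; then

$$0 < |x - 1| < \delta \;\Longrightarrow\; |(3x + 2) - 5| = 3\,|x - 1| < 3\delta = \varepsilon .$$

The argument quantifies only over ordinary positive numbers ε and δ; no appeal to an infinitely small increment is made anywhere, which is the sense in which the definition “disposed of the infinitesimal djinn.”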
With respect to the claim of the subsequent exorcism of the very concept of infinitesimals, it undoubtedly is the case that, as currently taught, calculus makes much greater use of the concept of a
limit than either that of an infinitesimal or of infinity. Moore (2001) describes the result: “What the calculus seems to do, once it has been suitably honed, is to enable mathematicians to proceed
apace in just the sort of territory where the actual infinite might be expected to lurk, without having to worry about encountering it. They can uphold claims ostensibly about infinitesimals or about
infinite additions, and they can even use the symbol ‘∞’, knowing that they are only making disguised generalizations about what are in fact finite quantities. They still need not look at the actual
infinity in the face” (p. 73). On the other hand, it appears that the discreditation of infinitesimals has been less than complete and terminally effective; although many mathematicians avoid their
use, infinitesimals continue to live and to be the focus of some attention. The angst about them has been felt primarily by pure mathematicians; physicists and engineers never stopped using them.
Moreover, they have been returned to respectability among at least some mathematicians thanks to the work of American logician Abraham Robinson (1969) on nonstandard analysis and, in particular, his
introduction of the concept of hyperreal numbers. Rucker (1982) argues that “Robinson’s investigations of the hyperreal numbers have put infinitesimals on a logically unimpeachable basis, and here
and there calculus texts based on infinitesimals have appeared” (p. 87). Explanations of Robinson’s work (building on that of others, including logicians Thoralf Skolem, Anatoli Malcev, and Leon
Henkin) and the significance of nonstandard analysis are provided by Davis and Hersh (1972) and Nelson (1977). Nelson (1977) defines an infinitesimal as a number that lies between zero and every
positive standard number, which is to say between zero and the smallest number one could conceive of writing. Doubt as to whether such entities exist does not preclude defining them. Infinitesimals,
defined thus, are, as McLaughlin (1994) puts it, “truly elusive entities.” They are to mathematics what quarks are to physics, only more so; quarks, although elusive in practice, are presumably
observable in principle. Infinitesimals are, by definition, unobservable, immeasurable. Their elusiveness rests on the mathematical fact that two concrete numbers—those having numerical
content—cannot differ by an infinitesimal amount. The proof, by reductio ad absurdum, is easy: the arithmetic difference between two concrete numbers must be concrete (and hence, standard). If this
difference were infinitesimal, the definition of an infinitesimal as less than all standard numbers would be violated. The consequence
of this fact is that both end points of an infinitesimal interval cannot be labeled using concrete numbers. Therefore, an infinitesimal interval can never be captured through measurement:
infinitesimals remain forever beyond the range of observation. (p. 87)
More simply, there is no smallest number greater than zero; name any number ever so small, and ever so close to zero, and one can always find another number closer to zero than that. Infinitesimals
are not all clustered around zero; “every standard number can be viewed as having its own collection of nearby, nonstandard numbers, each one only an infinitesimal distance from the standard number”
(p. 87). Do infinitesimals exist? Does it matter whether they do? One position that some mathematicians have taken, either explicitly or implicitly by their effective use of them, is that the
important thing is not whether infinitesimals exist but whether one can get correct solutions to mathematical problems by proceeding as though they do. Davis and Hersh (1972) contend that nonstandard
analysis evades the question of whether infinitesimals really exist in some objective sense. “From the viewpoint of the working mathematician the important thing is that he regains certain methods of
proof, certain lines of reasoning, that have been fruitful since before Archimedes” (p. 86). Within the theory of measurement, which is a specialized subarea of abstract algebra, one issue of some
importance is how to exclude infinitesimals from consideration. Typically, the algebraic structure is sufficiently strong so that one can define equally-spaced intervals. Any sequence of
equally-spaced successive intervals is called a standard sequence. The structure is called Archimedean provided that every bounded standard sequence is finite, which property in effect rules out
infinitesimals. See Luce and Narens (1992) for greater detail. Despite their elusiveness—perhaps because of it—McLaughlin (1994) contends that infinitesimals, as made respectable by Robinson’s
introduction of hyperreals and subsequent developments—in particular a nonstandard analysis of Nelson, known as internal set theory (IST)— provide the basis for resolution of Zeno’s paradoxes. The
theory resolves the arrow paradox, in McLaughlin’s view, by allowing the motion to occur inside infinitesimal segments, where it would be unobservable. The ineffability of such segments provides, he
contends, a kind of screen or filter. Whether the motion inside an infinitesimal interval is uniform or discrete is indeterminate, because an infinitesimal is not observable. McLaughlin contends that
all of Zeno’s paradoxes of motion are resolved as a consequence of basic features of IST, but one expects opinions to be divided as to whether the paradox has really been dispatched; unanimity among
mathematicians on this point would be surprising.
Friedlander (1965) argues that the use of infinitesimals can involve mathematics in an uncertainty principle that is analogous to that of Heisenberg in physics. Consider the problem of finding a
tangent to a curve: In plane geometry a tangent is, by definition, a straight line which has only one point in common with the curve in question, without intersecting the curve. As a tangent has
direction—and it is the direction we are particularly interested in—one point is insufficient for its determination. Two points are necessary to fix a straight line in a certain direction. This is
done in analytical geometry and its offshoot, calculus, by assuming two points, infinitesimally close to each other, but not contiguous to each other. Two geometrical points cannot be contiguous to
each other, because they would coincide and would no longer be two points. … If you are able to determine the direction of the tangent you cannot tell which one of the two points on the curve is the
point of contact. If you are able to tell the point of contact you are unable to determine the direction as long as you don’t employ the second point on the curve. The tangent presents a problem
basically no different from the problem of Heisenberg’s principle of uncertainty or Bohr’s principle of complementarity. (p. 26)
Still tangents are used, and their slopes determined, to good effect. Rucker (1982) notes that the dt of differential calculus, which is an infinitesimal, is considered close enough to zero to be
ignored when added to a regular number, but sufficiently different from zero to be usable as the denominator of a fraction. As to the strange rules that this schizoid character follows: “Adding
finitely many infinitesimals together just gives another infinitesimal. But adding infinitely many of them together can give either an ordinary number, or an infinitely large quantity” (p. 6). Noting
also the apparently dual (or indeterminate) nature of dt in the development of the differential calculus, Moore (2001) describes the reasoning involved as fundamentally flawed and ultimately
incoherent, resting as it does “on a certain notion of an infinitesimal difference (as not quite nothing, but not quite something either)” (p. 63). One can easily imagine what Bishop Berkeley might
have said to all this. Between the time of Zeno and that at which mathematicians began to attempt to develop useful approaches to dealing with (presumably) continuous variables, people moved around
in the world and the phenomenon of one racer passing another was witnessed more than once; it was clear to perception that arrows do indeed fly through the air— motion manifestly occurs—but no widely
accepted refutations of the paradoxes were forthcoming. Treating motion mathematically as a succession of states of rest, as suggested by Jean le Rond d’Alembert in the 18th century, does not quite
satisfy the philosophical mind. As Dantzig (1930/2005) notes, “The identification of motion with a succession of
contiguous states of rest, during which the moving body is in equilibrium, seems absurd on the face of it” (p. 132). No more absurd, but not less, Dantzig argues, “than length made up of
extensionless points, or time made up of durationless instants” (p. 132). Mathematicians made whatever concessions to absurdities were necessary to permit them to make progress on the solutions to
mathematical and physical problems that captured their interests. Isaac Newton, working in the 17th century, ignored Zeno’s paradoxes, if he ever heard of them, and he created the mathematics he
needed to deal with continuous change. More generally, before the 19th-century emphasis on foundations and rigor, most notably by Cauchy, when the operations that were used by mathematicians produced
presumably correct results, logical difficulties with them were of greater concern to philosophers than to the mathematicians who used them to advantage (Kaput, 1994). Bunch (1982) reflects the
pragmatic attitude that most working mathematicians took with respect to the paradoxes that perplexed the philosophers. “What to do with a paradox? If you are sure that no contradiction results,
incorporate the paradox into mathematics and declare it a paradox no longer” (p. 115). Lakoff and Núñez (2000) see in the study of infinitesimals a lesson about mathematics that is deep and
important—“namely, that ignoring certain differences is absolutely vital to mathematics” (p. 251). This lesson, they note, is contrary to the widely accepted view of mathematics as the science that
is characterized by precision, and that never ignores differences, no matter how small. Calculus, they contend, is defined by ignoring infinitely small differences. More generally, “ignoring
infinitesimal differences of the right kind in the right place is part of what makes mathematics what it is” (p. 253). What makes this acceptable without creating serious conceptual difficulties, in
their view, is recognition of the metaphorical nature of the key concepts—especially infinity—that are involved.
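The dual treatment that Berkeley ridiculed, and the “ignoring of infinitely small differences” that Lakoff and Núñez describe, can both be seen in the textbook infinitesimal computation of the derivative of x², offered here only as a generic illustration rather than as a quotation from any of these authors:

$$\frac{(x + dx)^2 - x^2}{dx} \;=\; \frac{2x\,dx + (dx)^2}{dx} \;=\; 2x + dx \;\approx\; 2x .$$

The increment dx must be treated as nonzero in order to serve as a divisor in the first step and is then discarded as negligible in the last; it is exactly this before-and-after treatment that prompted the phrase “ghosts of departed quantities.” In the limit formulation the same answer, 2x, is reached without the dual role, and in Robinson-style nonstandard analysis dx is kept as a genuine infinitesimal and the “standard part” of 2x + dx is taken at the end.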
Psychology and the Paradoxes
Most of the discussion and debate about Zeno’s paradoxes and their implications have centered on matters of philosophy and mathematics. However, the paradoxes are a
rich source of grist for the psychologist’s mill as well. The very fact of the persisting interest of thinkers in the paradoxes over two and a half millennia is an interesting phenomenon from a
psychological point of view. What does the persistence of the paradoxes, despite countless attempts to dispatch them—and claims that they have been dispatched—tell us about the nature of human
thought? What, from a psychological point of view, explains the amply demonstrated fact
that some deep thinkers have found Zeno’s arguments that motion cannot occur compelling, all the while moving about in the world like everyone else? There are many psychological questions pertaining
to specific aspects of the reasoning that the paradoxes prompt. Grünbaum (1955/2001a) asks, for example: “What is the basis for the view that the very meaning of temporal succession involves that
events follow upon one another seriatim, like the consecutive beats of the heart, and not densely?” (p. 173). His speculation is that the answer is to be found in the way time-order is experienced in
human consciousness: Since each act of thought takes a minimum positive amount of time rather than a mere instant of zero duration, it is inevitable that when we analyze the stream of consciousness
into a succession of constituent moments or ‘nows,’ these elements are experienced as occurring in a discrete sequence. No wonder therefore that on such an intuitively grounded meaning of temporal
succession, there is an ever present feeling that if physical events are to succeed one another in time, their order of occurrence must also be discrete, if it is to be a temporal order at all. (p.
Grünbaum argues that refutation of Zeno’s paradoxes of motion requires that this psychological understanding of temporal sequence be replaced by a conception based on a “strictly physical criterion
of ‘event B is later than event A’ that does not entail a discrete temporal order, but allows a dense order instead” (p. 173). The critical thing to note is that event B occurring later than event A
does not entail that event B follows event A immediately (or that any event follows event A immediately). What is hard, but necessary, according to Grünbaum’s conjecture, is to replace one’s
introspection-based belief that every event has an immediately successive event with a conception of temporal sequence for which that is not the case. “Upon freeing ourselves from the limitations of
the psychological criterion of time-order by means of the constructive elaboration of an alternative, autonomous physical criterion, it becomes clear that the dense temporal ordering of the
constituent point-events of a motion is no obstacle whatever to either its inception or its completion in a finite time” (p. 174). From here it is a short step to the conclusion that the inability to
identify the location of Achilles the instant before he reaches his goal (in the dichotomy) or catches the tortoise (in the race) is no warrant for contending that it is impossible for him to do the
one or the other. Grünbaum argues that the inability to identify a final subinterval in a progression during which a motion is completed does not preclude the existence of an instant that occurs
after the motion has been completed.
In effect, Grünbaum dismisses both the psychological and the logical considerations as valid reasons for accepting Zeno’s arguments. “In summary, Zeno would have us infer that the runner can never
reach his destination, just because (1) in a finite time, we could not possibly contemplate one by one all the subintervals of the progression, and (2) for purely logical reasons, we could not
possibly find the terminal instant of the motion in any of the ℵ₀ [Cantor’s first-order infinity] subintervals of the progression, since the terminal instant is not a member of any of them. But it is
altogether fallacious to infer Zeno’s conclusion of infinite duration from these two premises” (p. 209). Grünbaum (1969/2001b) argues that there is a threshold, or minimum, duration of time people
can appreciate experientially, and that this minimum plays a role in several fallacies that people commit in reasoning about Zeno’s paradoxes of motion. For example, because there is assumed to be a
lower bound on the duration of any run that one can imagine, an infinity of runs, even of steadily decreasing lengths, would last forever. Some of the difficulty people have with infinity,
infinitesimals, and related concepts undoubtedly stems from the imprecision of natural language. Consider, for example, the word more in the context of the question of whether there are more points
in a line segment extending from 0 to 2 than in one extending from 0 to 1. If B having more points than does A is taken to mean that B contains all the points in A and others besides, then the answer
is yes, a line segment from 0 to 2 has more points than one from 0 to 1, inasmuch as the former has all the points of the latter and others as well. But if having more points is taken to mean having
a greater number of points, the answer is no, because—it is generally agreed that—the number of points in any two line segments is the same (the same order of infinity; the mapping x ↦ 2x, for instance, pairs the points of the two segments one to one) and independent of their
length. Another common word that requires careful definition in the context of discussion of Zeno’s paradoxes is task. Several writers have raised the question of whether it is conceivable that an
infinite number of tasks can be completed in a finite time (Grünbaum, 1967; Moore, 2001; Thomson, 1954/2001b). Benacerraf (1962/2001) notes that to show that the idea of an infinite number of tasks
being performed in a finite time is self-contradictory, it would suffice to agree that, by definition, a task is something the performance of which takes some time and there is a lower bound on how
little time a task can take. A psychological question of considerable practical importance that is raised by the numerous treatments of Zeno’s paradoxes in the literature is the following: How is it
that some highly intelligent thinkers can consider claims to be compelling that others, equally intelligent, hold to be absurd? Discussions of the paradoxes are not the only context in which this
question arises, of course, but it is one of them; absurd is a
frequently used modifier in this literature. The literature is also replete with charges of inconsistency in argument and with claims and counterclaims of logical error and non sequitur reasoning.
Why do writers, highly intelligent and presumably well versed in logic, often disagree about what does or does not follow from specific claims—about what does or does not constitute a valid argument
when expressed in natural language (rather than in abstract form)? There are, in short, many questions of psychological interest that are prompted by consideration of Zeno’s paradoxes, and especially
of the many efforts to resolve them, the critiques of those efforts and the countercritiques of the critiques. To date, the paradoxes have not received a lot of attention from psychologists, but
there are many opportunities for exploration.
☐☐ A Personal Note I will end this chapter with a personal anecdote. Consider the situation represented by Figure 9.1. The shortest path from A to C is the straight line, the length of which is z = √(x² + y²). An alternative, “city block” path, as shown in the leftmost diagram, would be from A to C by way of B. The length of this path is x + y, considerably longer than √(x² + y²). Or one could
take the zigzag path shown in the next-to-leftmost diagram, going north from A halfway to B, then traveling the same distance east, before turning north again, and so on. It should be clear that the
length of this zigzag path is exactly the same as the length of the city block path with only two legs, namely, x + y. As illustrated by the next two diagrams, one could repeat the process of halving
the distance traveled north before turning east and halving the distance traveled east before turning north again, and in each case one will have traveled the same distance, x + y, by the time one gets to C.
Figure 9.1 Illustrating the problem of the vanishing distance.
No matter how many times this process is repeated, the length of the zigzag path to C remains x + y. But in the limit
the zigzag path becomes the straight-line path from A to C, the length of which is √(x² + y²), does it not? What happened to the difference between x + y and √(x² + y²)? Note too that the area
enclosed by the steps and the diagonal is reduced by half with each successive doubling of the number of steps; this value clearly gets ever closer to zero as the doubling is repeated indefinitely. This problem occurred to me on the occasion of driving from one point to another that is well represented by the two leftmost diagrams. I was at A and the destination was C.
I knew it was possible to go from A to B to C, but a passenger, who knew the area better than I, suggested that we take a shortcut by turning right on a street that was halfway between A and B, then
left and right to arrive eventually at C. Of course, the “shortcut” saved no distance and introduced two unnecessary turns. When it occurred to me that the shortcut maneuver could be repeated
indefinitely without shortening the distance traveled, I believed for a time that I had stumbled onto a new mathematical puzzle. But alas, I eventually discovered that the problem, or at least the
problem type, is well known. Friend (1954, p. 72) describes a version of it, without resolution, under the heading “The Field of Barley.” Lakoff and Núñez (2000) describe a different version that
they refer to as “a classic paradox of infinity.” The latter version begins with a semicircle with diameter 1 drawn on a line from (0,0) to (1,0), as shown in Figure 9.2, top. The next step is to
draw two semicircles, each with diameter 1/2 within the semicircle with diameter 1 (Figure 9.2, next to top). This process is repeated indefinitely, at each step drawing within each of the
semicircles produced at the preceding step two semicircles with diameter reduced by half. Consider now the semicircle with which the process begins. The length of its perimeter is π/2. Inasmuch as
each of the semicircles drawn within the original one (Figure 9.2, next to top) has diameter 1/2, the length of its perimeter is π/4 and the sum of them is π/2. Every time the process of drawing
within the existing semicircles two with diameter half that of those drawn on the preceding step is repeated, one produces a set of semicircles, the sum of the lengths of the perimeters of which is π/2. As the
process is repeated indefinitely, the total area under the semicircles approaches 0, and the semicircles, in the aggregate, approach a straight line. Nevertheless, the sum of the lengths of their
perimeters remains constant at π/2. How can this be?
Figure 9.2 Illustrating again a vanishing area under a constant-length perimeter.
Lakoff and Núñez argue that there is more than one possible resolution of the semicircle paradox, and that what one accepts as a resolution is something of a matter of preference. Their own resolution (pp. 329–333) makes use of the concept of metaphor. The best I can do with respect to the city block paradox is to contend that
the city block distance does not diminish, ever—that, conceptually, the doubling of the number of steps can be continued indefinitely and the zigzag route never does become a straight line.
Similarly, neither do the semicircles become a straight line; they remain semicircles indefinitely, even though the area they encompass becomes vanishingly small. I leave it to the reader to decide
whether these are acceptable resolutions. That the area in both the city block and semicircle cases approaches 0 even though the lengths of the perimeters of the figures remain constant should not
be difficult to accept. It is obvious that figures with the same length perimeter can have different areas (a 2 × 2-ft square and a 3 × 1-ft rectangle each has an 8-ft perimeter, while the area of
the first
is 4 ft² and that of the second is 3 ft²). If one wants to decrease the area encompassed by a perimeter of given length, the most obvious way to do it is to increase the figure’s length-to-width
ratio while holding the length of the perimeter constant—to stretch the figure in one direction and compress it in the orthogonal one. But this is not what is happening with city block and semicircle
figures. If we think of each of these figures as two-sided (the diagonal being one side of the city block figure and the steps the other; the diameter of the original semicircle being one side of the
semicircle figures and the arc sides of the semicircles the other), as the areas within their perimeters are reduced with each iteration of the process, the ratio of the lengths of the two sides in
each case is unchanging. Peculiar, but apparently not impossible.
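For readers who want to see the numbers, here is a minimal sketch (Python) of the two constructions just discussed. The city-block dimensions x = 3 and y = 4 and the unit diameter for the semicircles are arbitrary choices made only for illustration; at every stage the path length and the perimeter sum stay constant while the enclosed area is halved.

```python
import math

# City-block (staircase) construction.
x, y = 3.0, 4.0                                      # arbitrary side lengths
print(f"diagonal length: {math.hypot(x, y):.4f}")    # sqrt(x^2 + y^2)
for n in (1, 2, 4, 8, 16, 1024):                     # n = number of staircase steps
    path_length = x + y                              # n runs of x/n plus n rises of y/n
    area = 0.5 * x * y / n                           # area between staircase and diagonal
    print(f"{n:5d} steps: length = {path_length:.4f}, area = {area:.6f}")

# Semicircle construction: stage k has 2**k semicircles of diameter 2**-k.
for k in range(6):
    count, diameter = 2 ** k, 2.0 ** -k
    perimeter_sum = count * math.pi * diameter / 2          # always pi/2
    area_sum = count * 0.5 * math.pi * (diameter / 2) ** 2  # halved each stage
    print(f"stage {k}: perimeter sum = {perimeter_sum:.6f}, area sum = {area_sum:.6f}")
```

The constancy of the length and the vanishing of the area drop straight out of the arithmetic; what such a sketch cannot decide, of course, is the conceptual question of whether the limiting figure is ever actually reached.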
CHAPTER 10
Predilections, Presumptions, and Personalities
Mathematicians are finite, flawed beings who spend their lives trying to understand the infinite and perfect. (Schechter, 1998, p. 47)
Mathematicians represent as variable and colorful an assortment of personalities as do the people who comprise any disciplinary group. There is Archimedes (c. 287 BC–c. 212 BC), who, legend has it,
could be so engrossed in a mathematical problem as to ignore the threat of a Roman soldier and to pay for the engrossment with his life. There is Girolamo Cardano (1501–1576), lawyer, physician,
mathematician—self-proclaimed survivor of an attempted abortion. There is Blaise Pascal (1623–1662), who, after making seminal contributions to both mathematics and science, devoted the remainder of
his short life to philosophy and religion. There is Isaac Newton (1643–1727), unsurpassed in his influence on mathematics and science, but with an ego bigger than life and a chronic need for
stroking. There is Abraham de Moivre (1667–1754), who, upon noting that his sleeping time was increasing by about 15 minutes a night, predicted the day of his own death by calculating how long it
would be before his sleeping time reached 24 hours, and then obligingly died on the predicted day. There is Marie-Sophie Germain (1776–1831), correspondent with Gauss and winner of a prize from the
French Academy for work done under the pseudonym Antoine LeBlanc. There is political activist Evariste Galois (1811–1832), who lost his life at 20 in a duel of obscure instigation. There is
Bernhard Riemann (1826–1866), recognized as a genius but without a decent job and destitute for much of his adult life. There is Georg Cantor (1845–1918), who tamed infinities and succumbed to the
frailty of his own mind. There is Srinivasa Ramanujan (1887–1920), who was unable to pass his school exams in India, but whose extraordinary mathematical intuition was recognized and cultivated by G.
H. Hardy and J. E. Littlewood until his untimely death at 32. There is Norbert Wiener (1894–1964), a Harvard PhD at 18, widely recognized as the father of cybernetics, who worried in later years
(Wiener, 1964) about some of the possible trajectories of machine intelligence. There is F. N. David (1909–1993), pioneering statistician and namesake of Florence Nightingale, who is generally
recognized as the founder of the modern nursing profession and was herself a statistician of sufficient stature to be elected a Fellow of the Royal Statistical Society. There is Alan Turing
(1912–1954), code breaker in World War II, computer visionary, and inventor of the universal Turing machine, dead by his own hand at 42. There is Paul Erdös (1913–1996), the peripatetic Hungarian
mathematician whose long life appeared to be totally consumed by mathematics. One or more book-length biographies have been written on most of the more famous mathematicians. Short biographical
information is provided for many of them by several writers, including Turnbull (1929/1956), Bell (1937), Asimov (1972), Boyer and Merzbach (1991), Pappas (1997), and Fitzgerald and James (2007).
Short biographies of notable female mathematicians have been provided by Henrion (1997). A few mathematicians have provided revealing and fascinating autobiographical accounts of what it is like to
be a mathematician. Notable among these first-person accounts are G. H. Hardy’s (1940/1989) A Mathematician’s Apology, Norbert Wiener’s (1953) Ex-Prodigy: My Childhood and Youth and (1956) I Am a
Mathematician: The Later Life of a Prodigy, Stanislav Ulam’s (1976) Adventures of a Mathematician, and Mark Kac’s (1985) Enigmas of Chance. An index of biographies of essentially all notable
mathematicians in history is given at http://www-groups.dcs.st-and.ac.uk/~history/BiogIndex.html. Generalizations are risky, but there are, I believe, some observations that can be made about
attitudinal aspects of doing mathematics that are sufficiently in evidence among mathematicians to be viewed as characteristic of the field.
☐☐ Conservatism in Mathematics New ideas…are not more open-mindedly received by mathematicians than by any other group of people. (Kline, 1980, p. 194)
Mathematicians have sometimes been remarkably resistant to new ideas. Georg Cantor’s work on the mathematics of infinity, for example, was
attacked and ridiculed mercilessly by fellow mathematicians. Poincaré predicted that future generations would see it as a disease from which they happily had recovered. Kronecker was so upset by
Cantor’s work on the theory of sets that he prevented Cantor from getting any appointment in a German university and from publishing a memoir in any German journal. The seminal work of Lobachevsky
and Bolyai on non-Euclidean geometries was ignored by colleagues for three decades after it was first published. Gauss was reluctant to publish his work in this area, which predated that of
Lobachevsky and Bolyai, because of concern for the ridicule it would evoke. Girolamo Saccheri narrowly missed inventing non-Euclidean geometry a century before the time of Lobachevsky and Bolyai
because he could not accept his own results—which he published in Euclid Vindicated From All Defects in the year of his death—when they suggested the possibility of a geometry other than Euclid’s.
He, like countless others, put much effort into an attempt to prove Euclid’s parallel postulate. His method led to the derivation of many theorems that were inconsistent with the parallel postulate
and consistent with all the rest, but he could not bring himself to believe that a geometry could be constructed for which the parallel postulate did not hold. An amateur mathematician who apparently
developed, but did not publish, a non-Euclidean geometry—which he referred to as astral geometry— before the work of Lobachevsky and Bolyai, was German professor of jurisprudence Ferdinand
Schweikart, who corresponded with Gauss about his work. French mathematician Gérard Desargues, who is remembered today as one of the founders of projective geometry, was ridiculed in his day and
sufficiently discouraged by this treatment that he gave up his explorations in this undeveloped area of mathematics. Because every printed copy of his book published in 1639 was lost, much of what he
did had to be rediscovered 200 years later. The tenability of specific ideas often changes over time and differs from one culture to another. This is as true of mathematical ideas as of ideas in
other domains of thought. Difficulties with such concepts as negative, irrational, and imaginary numbers stemmed at least in part from the assumption that the purpose of mathematics was to represent
aspects of the physical world. When mathematics is seen as a body of rules for manipulating abstract symbols, of which numbers are examples, the same difficulties are less likely to be encountered;
from this perspective the question of what -1 “really means” does not arise. Surprisingly, however, even those extensions of the number system and other mathematical concepts that have seemed most
abstract and philosophically problematic when first introduced have often found some physical interpretation in time and have proved to be useful for practical
purposes. And they eventually have become sufficiently widely accepted to be taken for granted. Is the resistance that mathematicians have shown to new ideas just another example of the general
conservatism that human beings appear to have toward the unfamiliar, or does it rest on something distinctive about mathematics as a discipline? Whatever else it may signify, it points out the
humanness of mathematicians. Kline (1953) describes this conservatism in less than complimentary terms: “Mathematicians, let it be known, are often no less illogical, no less closed-minded, and no
less predatory than most men. Like other closed minds they shield their obtuseness behind the curtain of established ways of thinking while they hurl charges of madness against the men who would tear
apart the fabric” (p. 397).
☐☐ Faith in Mathematics One would normally define a “religion” as a system of ideas that contains statements that cannot be logically or observationally demonstrated. Rather, it rests either wholly
or partially upon some articles of faith. Such a definition has the amusing consequence of including all the sciences and systems of thought that we know; Gödel’s theorem not only demonstrates that
mathematics is a religion, but shows that mathematics is the only religion that can prove itself to be one! (Barrow, 1990, p. 257)
Faith may seem a strange quality to associate with mathematics, but without it a mathematician would not get far. An obvious role that faith plays is that of permitting mathematicians to build on the
work of others. “Mathematicians in every field rely on each others’ work, quote each other; the mutual confidence which permits them to do this is based on confidence in the social system of which
they are a part. They do not limit themselves to using results which they themselves are able to prove from first principles. If a theorem has been published in a respected journal, if the name of
the author is familiar, if the theorem has been quoted and used by other mathematicians, then it is considered established. Anyone who has use for it will feel free to do so” (Davis & Hersh, 1981, p.
390). Today teams of mathematicians may produce proofs that are sufficiently complex that no individual can vouch for the correctness of them in their entirety. When that is the case, each member of
the team must rely on the competence and integrity of the others if they are to have any confidence in the aggregated results of their combined efforts. There are roles that faith plays in
mathematics that are more fundamental than those associated with the willingness of mathematicians
to rely on the work of other mathematicians. Among the more forceful writers to make this point was Bishop Berkeley (1734/1956). His derisive commentary on the willingness of mathematicians to use
what he considered nonsensical concepts in their development of the calculus were noted in the preceding chapter. His purpose was to show that religion is not alone in demanding faith on the part of
its adherents, which he did by pointing out the ease with which some mathematicians accepted ideas that he saw as having less of a rational basis than some of the religious ideas that they just as
easily dismissed. One might ask, Berkeley suggests, “whether mathematicians, who are so delicate in religious points, are strictly scrupulous in their own science? Whether they do not submit to
authority, take things upon trust, and believe points inconceivable? Whether they have not their mysteries, and what’s more, their repugnances and contradictions?” (p. 293). It is much too easy with
the benefit of hindsight to see Berkeley’s objections to the calculus as the protestations of a man who had a vested interest in putting it down, but there is little reason to doubt the sincerity of
his skepticism. Although the invention of Newton and Leibniz proved to be enormously useful, it had little in the way of a theoretical foundation until the concept of a limit was developed by Cauchy
and Weierstrass nearly 200 years later. “Nobody could explain how those infinitesimals disappeared when squared; they just accepted the fact because making them vanish at the right time gave the
correct answer. Nobody worried about dividing by zero when conveniently ignoring the rules of mathematics explained everything from the fall of an apple to the orbits of the planets in the sky.
Though it gave the right answer, using calculus was as much an act of faith as declaring a belief in God” (Seife, 2000, p. 126). We are unlikely to have trouble with the specific concepts Berkeley
mentioned, and we may not wonder as he did about the calculus because familiarity with it as a practically useful tool has deadened our curiosity about its foundations or rational justification.
There can be little doubt, however, that his general observations are true, if not of mathematicians, at least of those of us who have a passing acquaintance with some aspects of mathematics and use
them to advantage on occasion. We accept that the product of two negative numbers is a positive number, that although it is all right to multiply by 0 it is forbidden to divide by it, that 0! is 1 as
is 1!, but the fact that we do so does not mean that we have a clear understanding of the bases of these rules. We use imaginary numbers to good effect, although we may not be able to imagine what they mean,
and we readily include in our equations symbols that represent infinitely small or infinitely large magnitudes or quantities, although we cannot conceive of their referents.
Even our assumption that the real number system is appropriate to the description of natural phenomena is an article of faith. Penrose (1989) puts it this way: “The appropriateness of the real number
system is not often questioned, in fact. Why is there so much confidence in these numbers for the accurate description of physics, when our initial experience of the relevance of such numbers lies in
a comparatively limited range? This confidence—perhaps misplaced—must rest (although this fact is not often recognized) on the logical elegance, consistency, and mathematical power of the real number
system, together with a belief in the profound mathematical harmony of Nature” (p. 87). Barrow (1992) describes our acceptance of the whole enterprise of mathematics and its scientific applications
in a similar fashion: We have found that at the roots of the scientific image of the world lies a mathematical foundation that is ultimately religious. All our surest statements about the nature of
the world are mathematical statements, yet we do not know what mathematics “is”; we know neither why it works nor where it works; if it fails or how it fails. Our most satisfactory pictures of its
nature and meaning force us to accept the existence of an immaterial reality with which some can commune by means that none can tell. There are some who would apprehend the truths of mathematics by
visions; there are others who look to the liturgy of the formalists and the constructivists. We apply it to discuss even how the Universe came into being. Nothing is known to which it cannot be
applied, although there may be little to be gained from many such applications. And so we find that we have adopted a religion that is strikingly similar to many traditional faiths. (p. 297)
There is a deeper sense still in which faith is necessary to mathematics and indeed to any activity in which reasoning plays a critical role. I am speaking of the need to accept the adequacy and
unchanging nature of the rules of inference and to believe that the human mind is capable of comprehending and applying these rules. It is not clear that we could work on any other assumption,
because if the rules of inference were to change from time to time, or if they were completely beyond our comprehension, we could have little hope of knowing anything. So the assumption is critical,
but it is not demonstrably correct; it is an article of faith and can never be anything but. Mathematicians are constantly making claims about all numbers, all triangles, all circles, and other sets,
the members of which are assumed to be infinite in number. Such claims cannot be verified empirically—we cannot check to see if what is claimed of the infinity of members of any set is true in every
case. Nevertheless, we accept many such claims with complete confidence, and we can do so only because of our faith in rules of logic and in our ability to reason according to them.
☐☐ Passion in Mathematics Contrary to popular belief, mathematics is a passionate subject. Mathematicians are driven by creative passions that are difficult to describe, but are no less forceful than
those that compel a musician to compose or an artist to paint. (Pappas, 1997, p. i)
To nonmathematicians, mathematics may seem to be the most dispassionate of activities; what could be less exciting than manipulating symbols on a piece of paper, or in one’s head? In fact, it appears
that at least the better mathematicians that the world has produced have found mathematics to be a totally absorbing activity and a source of intense pleasure, if, at times, also enormous
frustration. Biographers of great mathematicians have often described them as being obsessed with mathematics, as being unable sometimes not to think about mathematical problems, as becoming so
engrossed in their thinking about mathematical matters as to be oblivious to what is going on around them. Asking how it was possible for one man to accomplish the colossal mass of highest order work
that Gauss did, Bell (1937/1956) suggests that part of the answer was Gauss’s “involuntary preoccupation with mathematical ideas.” “As a young man,” Bell writes, “Gauss would be ‘seized’ by
mathematics. Conversing with friends he would suddenly go silent, overwhelmed by thoughts beyond his control, and stand staring rigidly oblivious to his surroundings” (p. 326). Tenaciousness was also
one of Gauss’s characteristics: Once engrossed in a problem, he stayed with it until he solved it. Gauss himself attributed his prodigious output to the constancy with which he thought about
mathematics and suggested that if others reflected on mathematical truths as deeply and continuously as did he, they would make the same types of discoveries. Archimedes and Newton are both said to
have often been so engrossed in a problem as to neglect to eat or sleep. Whether true or not, the wellknown story of the death of Archimedes conveys a sense of how complete and uncompromising a
mathematician’s concentration on a problem of interest can be. In his autobiography, Mark Kac (1985) describes being “stricken,” at the age of 16, “by an acute attack of a disease which at irregular
intervals afflicts all mathematicians … I became obsessed by a problem” (p. 1). The problem that gripped him was the—self-imposed—challenge to derive the method for solving cubic equations that had
been discovered, but not derived, by Girolamo Cardano in 1545. Kac reports having a number of such bouts with the “virus of obsession” during his life, but credits the first such experience, as a
teenager in 1930, with establishing his lifelong
commitment to mathematics. Here is how he describes this experience. “I rose early and, hardly taking time out for meals, I spent the day filling reams of paper with formulas before I collapsed into
bed late at night. Conversation with me was useless since I replied only in monosyllabic grunts. I stopped seeing friends; I even gave up dating. Devoid of a strategy, I struck out in random
directions, often repeating futile attempts and wedging myself into blind alleys” (p. 3). Kac’s perseverance paid off and he was able to solve the problem on which he had been obsessing. “Then one
morning—there they were! Cardano’s formulas on the page in front of me.” From that moment, Kac says, “having tasted the fruits of discovery,” he wanted to do nothing but mathematics, and mathematics
became his lifelong vocation. Ulam (1976) stresses the importance to mathematical creativity of what he refers to as “‘hormonal factors’ or traits of character: stubbornness, physical ability,
willingness to work, what some call ‘passion’” (p. 289). In describing his own transformation from being an electrical engineer to being a mathematician, he notes that it was not so much that he
found himself doing mathematics, but rather that mathematics had taken possession of him. He estimated that after starting to learn mathematics he spent on the average two to three hours a day
thinking about mathematics and another two to three hours reading or conversing about it. Keynes (1946/1956) suggests that Newton’s peculiar gift was the ability to concentrate intently on a problem
for hours, days, or weeks, if necessary, until he had solved it. Newton himself claimed that during the time of his greatest discoveries in mathematics and science (when he was in his early 20s) he
thought constantly about the problems on which he was working. Even if we discount somewhat the reports of extreme commitment to reflection by first-rank mathematicians, allowing for the possibility
of some exaggeration in reports of the characteristics and behavior of individuals who have become legendary figures, we can hardly escape the conclusion that mathematics is unusual in its ability to
capture the attention of certain minds and to hold it for long periods, sometimes a lifetime. This applies not only to the more glamorous aspects of mathematics, but sometimes to tedious ones as
well. How else could we account for Napier’s willingness to devote 20 years to calculating his table of logarithms, or for the expenditure of a similar amount of time by French astronomer Charles
Delaunay to produce an equation that gives the exact position of the moon as a function of time, or for the dedication of German mathematician Ludolph van Ceulen of most of his life to the
determination of the value of π to 35 places? This is not to suggest that mathematics is unique in this regard; some people devote their lives to thinking about music,
philosophy, science, theology, medicine—or chess. It may be that in order to be first-rate at anything, one must have the kind of commitment to it that leading mathematicians have appeared to have to mathematics.
☐☐ Capabilities, Personalities, and Work Habits of Mathematicians If today some earnest individual affecting spectacular clothes, long hair, a black sombrero, or any other mark of exhibitionism,
assures you that he is a mathematician, you may safely wager that he is a psychologist turned numerologist. (Bell, 1937, p. 9)
Contrary to popular stereotypes, mathematicians span a very considerable range with respect to capabilities, personality characteristics, and work habits. Some, like Gauss, have remarkable
computational ability; others, like Poincaré, have difficulty doing simple arithmetic. Some, like Gödel, are reclusive and taciturn; others, like American polymath John von Neumann, are sociable and
gregarious. Some, like Erdös, work with numerous collaborators; others, like Hardy, work with only a very few; and still others generally work alone. British mathematician Arthur Cayley read the work
of other mathematicians extensively; James Sylvester, also a British mathematician and Cayley’s close collaborator, disliked learning what other mathematicians had done. Some mathematicians have had
extraordinary memories—Ramanujan was a case in point—but unusual memory ability has not been considered a requirement for mathematical prowess. Some mathematicians maintain highly regular work
schedules, and rather regular daily routines more generally. Others lead much less highly structured lives. Some seem to be able to mix a certain amount of habitual routine with a good bit of less
organized time. Von Neumann, for example, had the habit of spending some time writing before breakfast every day. A very sociable person, he enjoyed parties and informal gatherings, but he would
sometimes drop out of conversations or withdraw from social affairs (even those he hosted) in order to work on a problem that had presented itself to his mind. Descartes, it is said, made a habit of
staying in bed, thinking, every day until noon, a habit, if true, that was rudely ignored by Queen Christina of Sweden, who insisted, late in his life, on his appearance as her tutor in mathematics
at 5:00 a.m. Poincaré appears to have done his mathematical problem solving in his head, while pacing, writing down what he had done only after having done it, and unlike most mathematicians, he
remembered theorems and
formulas primarily by ear. Believing that much mathematical problem solving occurred subconsciously during sleep, he also made it a point to get regularly a good night’s rest. Mark Kac (1985)
describes Stanislav Ulam’s way of doing mathematics as “by talking … wherever he found himself he talked mathematics day in and day out, throwing out ideas and generating conjectures at a fantastic
rate” (p. xxi). Some mathematicians have been immensely productive; the vast majority have been much less so. Bell (1937) puts Euler, Cauchy, and Cayley in a class by themselves in terms of sheer
productivity, ranking Poincaré, who published almost 500 papers on new mathematics, and over 30 books, in addition to his popular essays and work on the philosophy of science, as a distant second.
(Bell made this observation when Erdös was in his mid-20s, still early in his career as a publication machine.) While mathematicians differ in the mentioned respects and others, there are
characteristics that appear to be, if not essential, at least highly conducive to mathematical eminence. The ability to concentrate intensely is a case in point, and persistence appears to be an
asset if not also a requirement for doing creative mathematics. Confidence in one’s ability to see problems through to solution would seem to be necessary to keep one working when progress is proving
to be very difficult. A certain intellectual independence, or toughness of mind, appears also to be an asset if not a requirement for work that departs from well-trodden paths and takes a field off
in new directions. Perhaps it was the lack of this toughness of mind that cost Saccheri the opportunity, as noted in Chapter 4, to develop non-Euclidean geometry 100 years before others did so.
The ability to invent new approaches to problems has been seen as one that distinguishes good from mediocre mathematicians. King (1992), for example, characterizes good mathematicians as those to
whom the problem is paramount, who “once attracted to a problem of significance and elegance … learn or create whatever mathematical methods are necessary to solve it.” In contrast, mediocre
mathematicians, in King’s view, “are characterized by their tendency to use only the mathematics they already know and to search for problems that can be solved by these methods” (p. 35). That
mathematics can be, and often is, such an engrossing activity may be the basis of a common misconception about great mathematicians, which is that all they think about is mathematics. There may be
mathematicians among the greats that fit this model, or come close to it, but there are also many who do not. Some of the more prominent mathematicians of history had exceptionally broad education
and interests. Euler, arguably the most prodigious mathematician who ever lived, produced about 800 pages of mathematical output a year. His publications numbered almost 900 books and mathematical
memoirs, in Latin, French, and German, many of which appeared only after his death. It is hard to
imagine how he could have managed this if he spent much time thinking about anything other than mathematics. Nevertheless, in addition to mathematics, Euler studied theology, medicine, astronomy,
physics, and oriental languages (Boyer & Merzbach, 1991). Euler was a devout Christian and family man, and a prolific correspondent; he is believed to have written some 4,000 letters, of which nearly
three-quarters have been preserved (Beckman, 1971). Newton’s work in mathematics and science, which made him famous, occupied a relatively small part of his life, perhaps not more than about 10
years, and accounted for a relatively small percentage of the 1.3 million words that he wrote and left to posterity. At least as strong as his commitment to mathematical and scientific investigations
was his lifelong interest in theology. Leibniz was a philosopher, theologian, linguist, and historian as well as a mathematician and logician, and was engaged for much of his life as a professional
political advisor and diplomat. Pascal made lasting contributions to both mathematics and physics before devoting the last decade or so of his short life (dead at 39) to a study of scripture and
vigorous defense of Christianity. Although Gauss devoted a large percentage of his waking hours to mathematics, he also found time to master several languages (learning Russian by himself in two
years, beginning at the age of 62) and to keep abreast of world affairs, to which he devoted an hour or so each day. His keen interest in languages and world affairs is especially noteworthy in view
of the claim that the longest trip he is known to have taken was 27 miles from his home in Göttingen (Bell, 1946/1991). The prototypical mathematician is undoubtedly a mythical character; as a group,
mathematicians include as great a diversity of personalities as one can imagine. Some have led conservative—some might say boring—social lives; others have been very sociable party-loving folk; and
some have been almost scandalously colorful. Some have been sickly; others have been noted for their physical vigor. Some have been political activists; others have showed little or no interest in
the political world. And so on. The range of personalities, capabilities, and work habits adds considerable human interest to the story of mathematics.
☐☐ Extraordinary Numerical or Computational Skills There are numerous reports in the literature of persons who are able to perform extraordinary computational feats. “Lightning calculators” or
“calculating prodigies”—people who are able to produce almost instantly
and without the use of paper and pencil answers to computational questions that even competent mathematicians would take considerable time to produce—have been of great interest to students of human
cognition and the general public as well (Binet, 1894/1981; Hermelin & O’Connor, 1986, 1990; Hope, 1987; Hunter, 1962, 1977; Jensen, 1990; O’Connor & Hermelin, 1984; Pesenti, 2005; Rouse Ball, 1892;
Smith, 1983; Tocquet, 1961; Treffert, 1988, 1989). Their feats include multiplying numbers with 10 or more digits, finding cube roots of very large numbers, solving cubic equations, finding
logarithms, distinguishing prime numbers from composites, and identifying the day of the week on which a date in the distant past fell. Jensen (1990) describes the case of an otherwise intellectually
unremarkable Indian woman, Shakuntala Devi, who, according to the Guinness Book of World Records, was able to find the product of two 13-digit numbers in 30 seconds. Devi (1977) has written an
engaging book in which she describes many of the shortcut methods that can be used to facilitate the doing of mental arithmetic. In it she also reports having fallen in love with numbers at the age
of three. “It was sheer ecstasy for me—to do sums and get the right answers. Numbers were toys with which I could play. In them I found emotional security; two plus two always made, and would always
make, four—no matter how the world changed” (p. 9). Although a few great mathematicians, including Ampère, Euler, Gauss, Hamilton, and von Neumann, have been extraordinarily good calculators, most
have not. As a general rule, lightning calculators have not shown exceptional mental ability in other ways; many have had below-average intelligence; several have been autistic. The incongruity
between the extraordinary ability of lightning calculators to calculate and their average or sometimes below-average abilities to perform other cognitively demanding tasks inspired the epithet idiot
savant, a regrettable term often used in the past (much less often now) in reference to individuals who showed unusual abilities in specific other areas (notably music) as well. One capability that
many lightning calculators appear to have in common is an extraordinary short-term memory capacity, visual in most cases, auditory in some, and tactile in at least one documented case— the
blind-from-birth Louis Fleury. Euler is said to have been able to do calculations mentally that required retaining in his head up to 50 places of accuracy. He appears to have had an extraordinary
memory for all kinds of information, as evidenced by the claim that he memorized the entire text of Virgil’s Aeneid as a boy and could still recite it 50 years later (Dunham, 1991). Extraordinary
memory for numbers is not a guarantee of similar capability for retention of nonnumerical information. Tocquet (1961)
reports that a Mlle. Osaka—who could immediately give the 10th power of a two-digit number and the sixth root of an 18-digit number, and could repeat a string of 100 digits immediately after it was
read to her, and again, both as given and in reverse order, when unexpectedly asked to do so after a 45-minute interval filled with conversation about other matters—was unable to learn the correct
order of the letters of the alphabet. Others have presented evidence that lightning calculators are unlikely to have unusually large working memory capacities for nonnumerical information (Pesenti,
Seron, Samson, & Duroux, 1999). Among the more prodigious feats of memory for numbers is the widely publicized case of Rajan Mahadevan, who secured a place in the 1984 Guinness Book of World Records
by memorizing π to 31,811 digits. Although committing such an amazingly long sequence to memory took considerable time and effort, Mahadevan has also demonstrated an ability to repeat a string of
several dozen random digits upon hearing it once. Mahadevan’s spectacular memory feats appear to rest on strategies he has developed specifically for recalling numbers, and his memory for nonnumeric
information appears not to be extraordinary (Ericsson, Delaney, Weaver, & Mahadevan, 2004; Lewandowsky & Thomas, in press). Tocquet (1961) gives several specific examples of problems solved by
Jacques Inaudi, a computational prodigy who was studied by Alfred Binet, Paul Broca, and a committee of the French Académie des Sciences, whose members included Jean-Gaston Darboux and Henri
Poincaré. Rouse Ball (1892) and Pesenti (2005), among others, also give numerous examples of the types of problems solved by lightning calculators, many of which are quite astounding. Besides having
a prodigious memory, lightning calculators generally give evidence of their extraordinary ability relatively early in life, and many, though by no means all, tend to lose it later, sometimes after
only relatively few years. There have been attempts to attribute the extraordinary feats of lightning calculators to some combination of unusual memory, the previous commitment to memory of certain
numerical facts, and learned or invented shortcut procedures for doing computations. The shortcut procedures usually have the effect of decreasing the amount of information that must be carried along
in memory during the process of a lengthy computation. There can be no doubt that it is possible to stage what appear to be spectacular computational feats that are in reality done with the help of
scripted events, accomplices, and various forms of trickery. Moreover, there are facts about numbers that can be memorized and used to good effect in doing mental computations, and numerous learnable
techniques (not normally taught in conventional mathematics courses) that can, once well learned, greatly simplify many computations. Tocquet (1961) describes several such techniques, for example,
a shortcut memory-dependent method for mentally extracting cubic and higher roots from large numbers that are perfect powers. Is there something unusual left to explain when all such possible factors
are taken into account? Tocquet contends that the answer to this question is yes. He points out, for example, that the methods he describes for extracting roots work only for certain roots (cubic,
5th, 9th, 13th, 17th, and 21st) and only when the number for which the root is wanted is a perfect power. Lightning calculators are not constrained to deal only with such cases. Tocquet favors the
view that these people have somehow been able to use abilities that lie below the level of consciousness and that most of us have not been able to tap. “What seems essentially to characterize the
lightning calculator is that to a greater degree than ordinary mortals he can use faculties which are in some way innate and which probably exist in a latent state in every human being” (Tocquet,
1961, p. 43). Dehaene (1997) argues that the feats of lightning calculators can be attributed, at least for the most part, to an unusually large memory capacity (emphasized also by Binet), a passion
for numbers, and great familiarity with many of them as a consequence of years of focusing on them—“Each calculating genius maintains a mental zoo peopled with a bestiary of familiar numbers” (p.
147)—and the learning of various shortcut mathematical procedures, some of which are described by Flansburg (1993), a proficient user of them. He rejects the idea that lightning calculation ability
represents a mystery that defies a natural explanation. “A talent for calculation thus seems to arise more from precocious training, often accompanied by an exceptional or even pathological capacity
to concentrate on the narrow domain of numbers, than from an innate gift” (p. 164). I think it fair to say that, to most of us who struggle to do two-digit multiplications without the crutch of paper
and pencil, the documented feats of lightning calculators—whether mysterious or not— are impressive indeed.
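To give one concrete instance of the kind of learnable shortcut mentioned above, here is a sketch of the well-known trick for announcing the cube root of a perfect cube of a two-digit number; the particular formulation below is my own illustration, not a procedure attributed to any specific calculator. Because the cubes of 0 through 9 all end in different digits, the last digit of the cube fixes the last digit of the root, and the size of the leading group of digits fixes the tens digit.

```python
# Mental shortcut for the cube root of a perfect cube of a two-digit number.
# Like the methods Tocquet describes, it works only for perfect powers.

# Cubes of 0-9 end in ten distinct digits, so the cube's final digit
# determines the root's final digit.
LAST_DIGIT_OF_ROOT = {(d ** 3) % 10: d for d in range(10)}

def mental_cube_root(n: int) -> int:
    """Cube root of n, assuming n is the cube of a two-digit number."""
    units = LAST_DIGIT_OF_ROOT[n % 10]   # read off from the last digit
    leading = n // 1000                  # drop the last three digits
    tens = 1
    while (tens + 1) ** 3 <= leading:    # largest t with t**3 <= leading group
        tens += 1
    return 10 * tens + units

print(mental_cube_root(79507))    # 43, since 43**3 = 79507
print(mental_cube_root(592704))   # 84, since 84**3 = 592704
```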
☐☐ Mathematical Disabilities Equally interesting as the question of the existence of extraordinary mathematical capabilities is that of whether there are specifically mathematical disabilities. The
evidence is clear that people can lose mathematical ability as a consequence of brain injury, and losses of specific abilities can follow from lesions in specific cortical areas (Dehaene, 1997;
Dehaene, Spelke, Pinel, Stanescu, & Tsivkin, 1999; Deloche & Seron, 1987; Gruber, Indefrey, & Kleinschmidt, 2001). This does not mean that any specific mathematical operation—counting, addition,
comparison—is accomplished by a specific area of the brain. It appears from results of cortical mapping studies that even the simplest of mathematical operations engage numerous cortical areas
(Dehaene, 1996; Dehaene et al., 1996); nevertheless, that brain trauma can result in the loss of specific mathematical skills seems not in doubt. Brain injury aside, is there such a thing as
mathematical disability, as distinct from more general cognitive disability? Are there people who are perfectly capable of reasoning well in other contexts but lack the ability to learn to do
mathematics? Belief that the answer is yes is sufficiently strong among some researchers and educators to have ensured a variety of names for such a disorder, including dyscalculia, acalculia, and
anarithmia (Vandenbos, 2007). Estimates of the prevalence of school-age children with some form of mathematical disability vary, mostly from about 5% to about 8% (Badian, 1983; Fleishner, 1994; Kosc,
1974; Shalev, Auerbach, Manor, & Gross-Tsur, 2000). Children can have difficulty learning basic mathematics for a variety of reasons other than an underlying cognitive deficit of some kind. Geary
(1994) makes this point, and notes that in his own studies (e.g., Geary, 1990; Geary, Bow-Thomas, & Yao, 1992), only about half of the children who had been identified as having difficulty learning
mathematics give evidence of a cognitive deficit. The goal of identifying learning disabilities in mathematics is complicated by the fact that poor achievement in mathematics can have a variety of causes other than
an actual cognitive disability. Geary and Hoard (2005) note that in the absence of measures specifically designed to diagnose mathematical disability, which have not yet been developed, what is
commonly taken as evidence of such a disability is the combination of a score lower than the 25th or 30th percentile on a mathematics achievement test and a low-average or higher IQ. The intent of
this combination criterion is to identify people whose mathematical disability is distinct from a general cognitive deficit, which might manifest itself not only in difficulty with mathematics but in problems
with other cognitive activities as well. Especially important is the dissociation of difficulty with mathematics from difficulty with reading, inasmuch as the two often occur together, and an
inability to read comprehendingly would limit one’s ability to deal with mathematical word problems (Aiken, 1972; Kail & Hall, 1999). For defenses of the position that the more general notion of
learning disability is more of a political category than a scientific one, see Farnham-Diggory (1980) and Allardice and Ginsburg (1983). Distinctions have been made both on the basis of the assumed
origin of any particular mathematical deficit and on that of the nature of the disability. Disabilities that are assumed to be innate are sometimes referred to collectively as developmental
dyscalculia (Butterworth, 2005; Kosc, 1974; Shalev & Gross-Tsur, 1993, 2001), and those that are known
or believed to be the result of some type of trauma are covered by the term acquired dyscalculia. That mathematical disabilities differ in their nature is reflected in the variety of designations
that one finds in the literature, which include number-fact dyscalculia, procedural dyscalculia, spatial dyscalculia, alexia, and agraphia, among others (Badian, 1983; Hécaen, 1962; McCloskey,
Caramazza, & Basili, 1985; Temple, 1989, 1991). Overviews of these and other dysfunctions are provided by Geary (1994), Geary and Hoard (2005), and Macaruso and Sokol (1998). Several dissociations
have been found in which one or more specific aspects of mathematical ability are impaired while others are not. In their review of work on developmental dyscalculia, Macaruso and Sokol (1998)
mention the following: Dissociations between numeral processing and calculation (e.g., Grewel, 1969; McCloskey et al., 1985), between numerical comprehension and numeral production (e.g., Benson &
Denckla, 1969; McCloskey et al., 1986; Singer & Low, 1933), and between Arabic and verbal numeral processing (e.g., Grafman, Kampen, Rosenberg, Salazar, & Boller, 1989; Macaruso et al., 1993, Noel &
Seron, 1993). Within the domain of calculation, dissociations have been reported between operation symbol comprehension and other calculation abilities (Ferro & Botelho, 1980), between retrieval of
arithmetic facts and execution of calculation procedures (e.g., Cohen & Dehaene, 1994; Sokol et al., 1991; Warrington, 1982), and between retrieval of arithmetic facts associated with different
operations (e.g., Dagenbach & McCloskey, 1992; Lampl, Eshel, Gilad, & Sarova-Pinhas, 1994). (p. 208)
Dowker (1998), also with substantiating references, identifies some of the same dissociations as well as some not mentioned by Macaruso and Sokol, including between different arithmetical operations
(Cipolotti & Delacycostello, 1995; Dagenbach & McCloskey, 1992; McNeil & Warrington, 1994), between oral and written presentation modes (Campbell, 1994; McNeil & Warrington, 1993), between conceptual
and procedural knowledge (McCloskey, 1992; Warrington, 1982), and between calculation and estimation (Dehaene & Cohen, 1991; Warrington, 1982). The conclusion that such a diversity of dissociations
seems to force is that mathematical competence is multifaceted and that different facets depend on the proper functioning of different neurological structures. Apparently, it is possible to lose (or
fail to develop) certain capabilities while retaining (or developing) certain others in a remarkable array of combinations. This lends support to Dowker’s (1998) contention that “there is no such
thing as arithmetical ability; only arithmetical abilities. The corollary is that arithmetical development is not a single process, but several processes” (p. 275). How much of the difficulty that
many students have with beginning mathematics can be attributed legitimately to either genetic or
environmental causes is a question of continuing interest and research. Twin and sibling studies suggest that siblings (especially twins) of children who have a math disability are more likely to
have such a disability themselves than are siblings of children who do not have a math disability (Alarcon, Defries, Gillis Light, & Pennington, 1997; Shalev et al., 2001). One example of a genetic
disorder that appears to result in difficulties in learning mathematics without necessarily affecting other cognitive abilities, such as reading and verbal reasoning, is Turner syndrome, which is
caused by a chromosomal abnormality that occurs about once in 2,000 to 5,000 female births (Bender, Linden, & Robinson, 1993; Butterworth et al., 1999; Mazzocco, 1998; Mazzocco & McCloskey, 2005;
Rovet, 1993; Rovet, Szekely, & Hockenberry, 1994; Temple & Marriott, 1998). A similar syndrome—the Martin-Bell, or fragile X, syndrome—also caused by a chromosomal abnormality, occurs with about the
same relative frequency as Turner syndrome. It occurs in both males and females but tends to have more severe consequences in males. This too is associated with difficulties with math, but often with
other cognitive problems as well (Mazzocco & McCloskey, 2005). Spina bifida myelomeningocele, a spinal cord defect that occurs about once in 1,000 to 2,000 live births and affects both males and
females, produces both physical and cognitive impairments, difficulty with mathematics among them (Barnes, Smith-Chant, & Landry, 2005; Dennis & Barnes, 2002; Friedrich, Lovejoy, Shaffer, Shurtleff,
& Beilke, 1991; Wills, 1993; Wills, Holmbeck, Dillon, & McLone, 1990). Alzheimer’s disease and other forms of dementia typically are accompanied by mathematical disability, some of which, perhaps, is
specific to particular aspects of numerical or mathematical functions (Duverne & Lemaire, 2005; Kaufmann et al., 2002). Although it has been possible to trace some incidences of mathematical
disabilities to genetic origins, like those that produce Turner and fragile X syndromes, to date such traces account for a small percentage of children and adults who have difficulties with
mathematics. According to one view, very little of the poor performance of many students in their beginning (or subsequent) courses in mathematics can be attributed to genetic and preschool
environmental causes in combination. Allardice and Ginsburg (1983) express this view, for example, in noting that there are several reasons why studies attempting to demonstrate a neurological basis
for difficulties in learning mathematics are inconclusive and in claiming that “the most obvious environmental cause of poor mathematics achievement is schooling that is especially inadequate in the
case of mathematics” (p. 330). Finally, the identification of specific types of mathematical disabilities is complicated by the unquestioned fact that many students experience anxiety in the study of
mathematics, sometimes referred to, in
extreme cases, as mathophobia. Such anxiety may, in some cases, be rooted in bona fide cognitive limitations peculiar to mathematics, but even when it is not—which could represent the large majority
of cases—the fear of doing poorly can become a self-fulfilling prophecy, and early failures can reinforce the anxiety, more or less ensuring increasing difficulties at later stages of instruction.
☐☐ Collaboration and Recognition Among Mathematicians No matter how isolated and self-sufficient a mathematician may be, the source and verification of his work goes back to the community of
mathematicians. (Hersh, 1997, p. 5)
A popular stereotype of mathematicians is that of somewhat reclusive individuals who spend most of their time secluded in the privacy of their own thoughts. Perhaps there have been, and are,
mathematicians who fit this image; Cantor may have done so, and until he met British mathematician G. H. Hardy, the Indian genius Srinivasa Ramanujan appears to have worked pretty much by himself.
Certainly much of the creative work that any mathematician does is intensely private. Ulam (1976) gives some credence to the stereotype of the mathematician as a withdrawn thinker by his observation
that for some, mathematics can be an escape from the cares of the everyday world: “The mathematician finds his own monastic niche and happiness in pursuits that are disconnected from external
affairs. Some practice it as if using a drug” (p. 120). On the other hand, collegial interactions and collaborations often play an indispensable role in creative mathematics. Ulam makes this point
also and notes that much of the development of mathematics has taken place around small groups of mathematicians: Such a group possesses more than just a community of interests; it has a definite
mood and character in both the choice of interests and the method of thought. Epistemologically this may appear strange, since mathematical achievement, whether a new definition or an involved proof
of a problem, may seem to be an entirely individual effort, almost like a musical composition. However, the choice of certain areas of interest is frequently the result of a community of interest.
Such choices are often influenced by the interplay of questions and answers, which evolves much more naturally from the interplay of several minds. (p. 38)
Ulam’s own career bears testimony to the possibility of fruitful collaborations in mathematics, as do those of such different personalities as Hardy and Erdös. Hardy worked almost exclusively with
two other mathematicians, John Littlewood and Ramanujan. Erdös is famous for the large number of people with whom he coauthored papers—507 according to Du Sautoy (2004). For many mathematicians, and
not a few nonmathematicians, one’s “Erdös number” has taken its place as a vital statistic along with one’s age, height, weight, and, perhaps, IQ. One’s Erdös number is the number of steps one must
traverse in a coauthor chain to get from oneself to Erdös. Anyone who coauthored a paper with Erdös has an Erdös number of 1. Anyone who coauthored a paper with someone who coauthored a paper with
Erdös has an Erdös number of 2, and so on. If you have coauthored a few papers, your Erdös number may be smaller than you realize. Du Sautoy (2004) has the number of mathematicians who have an Erdös
number of 2 as more than 5,000. Hoffman (1998) and Schechter (1998) have provided very readable accounts of the life of this colorful man of numbers. Other notable collaborations in the history of
mathematics include those of Pascal and Fermat on the beginnings of probability theory, and of Cayley and Sylvester on the theory of invariants. Aczel (2000) argues that mathematical research is best
done in a community, where ideas can be shared. “Working in isolation is hard and slow going, and there are many blind alleys into which a mathematician can stray when there is no possibility of
sharing ideas with colleagues” (p. 99). Sharing ideas can not only help people avoid blind alleys, but interactions among colleagues can spark insights that might not have occurred had the
interactions not taken place. Collaboration, when effective, can also undoubtedly increase output. Erdös authored (or coauthored) over 1,500 papers in his lifetime, a number that puts him second in
output only to Euler (Du Sautoy, 2004). Collaboration among mathematicians, as judged by the authorship of their publications, has increased considerably in recent years. According to figures
compiled by Jerrold Grossman and cited by Schechter (1998), “In 1940 about 90 percent of all mathematical papers were solo efforts; today that number has dropped to around 50 percent. Fifty years ago
papers written with more than two people were almost unheard of, while today such multiple collaborations account for almost 10 percent of all published articles” (p. 182). Schoenfeld (1994b) argues
that mathematics, like science, is a social rather than a solitary activity and that, as a consequence, the ability to communicate ideas with colleagues is an asset. Because of the nature of some of
the problems that mathematicians are working on today and because the only feasible approach to these problems involves breaking them into parts that can be dealt with
simultaneously by different individuals or groups, usually with the help of considerable computing power, a great deal of collaborative mathematics is being done at the present. With few, if any,
exceptions, mathematicians who make major contributions to the advancement of the discipline build on the work of other mathematicians, living and dead. Andrew Wiles worked pretty much alone and
somewhat secretly for several years on his proof of Fermat’s last theorem, but he drew heavily from the work of predecessors and contemporaries in a variety of areas. He was the one who put it all
together, which no one else had done, and therefore deserves the recognition he has received for his monumental feat, but without the work on which he drew, his proof would not have been possible. It
is for this reason that Aczel (1996) could claim that the proof of Fermat’s last theorem was the achievement of many mathematicians who lived between the time of Fermat and when the proof was finally produced.
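The Erdös number described earlier in this section is, in effect, a shortest-path distance in the coauthorship graph, and it can be computed with an ordinary breadth-first search. The following sketch is purely illustrative: the function name and the toy coauthorship data are invented for the example and are not taken from any of the sources cited here.

```python
from collections import deque

def coauthor_distance(coauthors, source="Erdös"):
    """Breadth-first search over a coauthorship graph.

    coauthors maps each author to the set of people with whom he or she
    has coauthored at least one paper. Returns a dict giving the length
    of the shortest coauthor chain from `source` to each reachable author.
    """
    distances = {source: 0}
    queue = deque([source])
    while queue:
        author = queue.popleft()
        for colleague in coauthors.get(author, ()):
            if colleague not in distances:   # first visit is via a shortest chain
                distances[colleague] = distances[author] + 1
                queue.append(colleague)
    return distances

# Toy, entirely hypothetical coauthorship data.
graph = {
    "Erdös": {"A", "B"},
    "A": {"Erdös", "C"},
    "B": {"Erdös"},
    "C": {"A"},
}
print(coauthor_distance(graph))
# {'Erdös': 0, 'A': 1, 'B': 1, 'C': 2}  -> C has an Erdös number of 2
```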
Competition, Rivalry, and Credit

Despite the unworldly nature of mathematics, mathematicians still have egos that need massaging. Nothing acts as a better drive to the creative process than the thought of the immortality bestowed by having your name attached to a theorem. (du Sautoy, 2004, p. 171)
Du Sautoy (2004) describes mathematical research as “a complex balance between the need for collaboration in projects which can span centuries and the longing for immortality” (p. 171). It is easy to
see how the longing for immortality can foster competition and rivalry and thereby be a hindrance to collaboration. Although I do not mean to equate either competition or rivalry with ill will, there
are numerous examples in the history of mathematics of rivalries or disputes about precedence that became acrimonious. Flegg (1983) recounts one such dispute involving Girolamo Cardano, his pupil
Ludovico Ferrari, and self-taught Italian mathematician Nicola Fontana (aka Tartaglia) that included accusations and counteraccusations of plagiarism and bad faith. “So unpleasant was the whole
atmosphere generated by this dispute that Tartaglia was lucky not to have been murdered by Cardano’s supporters” (p. 201). (Violence was not uncommon to the place and time. Tartaglia’s father had
been murdered in 1505, and he himself was maimed by a soldier’s sword blow that slashed his jaw and palate when many citizens of Brescia were massacred by an invading army in 1512. Tartaglia’s wound
left him unable to speak normally, hence his nickname, Tartaglia, which means stammerer.)
The acrimonious dispute between Newton and Leibniz over the origination of the differential calculus, which involved many supporters of both men, is perhaps the best known of rivalries among the
greats. (For details regarding claims and counterclaims, see Hall, 1980.) Historians of mathematics appear to be generally agreed that—despite the charges of plagiarism from both sides and
considerable poetic license in retrospective accounts of relevant events—both Newton and Leibniz did original and immensely influential work in establishing the calculus as a major field of
mathematics. Perhaps in part because of the prominence of the dispute over precedence between Newton and Leibniz, important work by others leading to the calculus may get less recognition than it
deserves. Laplace (1814/1951) considered the true discoverer of the differential calculus to be Fermat, who had invented a method for finding minima and maxima before either Newton or Leibniz did
their work. Dantzig (1930/2005) contends that had Fermat been more inclined to publish his work, he would have been remembered by posterity as the creator of both analytic geometry (for which we
credit Descartes) and the calculus, and “the mathematical world would have been spared the humiliation of a century of nasty controversy” (p. 136). (As noted in Chapter 7, some credit for analytic
geometry should also be given to Nicole Oresme, who used ordinate and abscissa to represent functions considerably before Descartes did so.) Unhappily, the dispute between two of the more fertile
minds of all time had some negative effects on later mathematical progress, especially in England, for many decades. Kronecker’s incessant attacks on Cantor and his ideas are believed to have played
some role in Cantor’s recurring bouts of mental depression. How much of Kronecker’s behavior in this regard can be attributed to a zealous desire to protect mathematics against Cantor’s ideas, which
he found profoundly disturbing, and how much can be attributed to some other type of motivation is not clear. That his attacks, including his blocking of Cantor’s chance for a desired position at a
Berlin university, had a devastating effect on Cantor personally appears clear. The story of the long-lasting feud between British statisticians Ronald Fisher and Karl (né Carl) Pearson, as well as
that between Fisher and British statistician Egon Pearson (Karl’s son) and Polish statistician Jerzy Neyman, has been engagingly told by Salsburg (2001). Issues of precedence and claims of credit
stealing tarnished even the reputation of the Bernoulli family, arguably one of the most mathematically gifted and productive families on record (paralleling in mathematics what the Bachs did in
music). French mathematician Gilles de Roberval was an unremitting critic of Descartes, and Descartes was not above ridiculing Roberval in personal terms, expressing to French monk-mathematician
Marin Mersenne, for
example, astonishment that he (Roberval) could pass among others as a rational animal (Watson, 2002). Descartes was quite capable of using flattery and expressions of adulation or insult and
duplicity when it suited his purposes. A very readable account of 10 of the major disputes between notable mathematicians, often involving many of their supporters, is provided by Hellman (2006).
This and other accounts make it clear that many disputes have been motivated by personal concerns about recognition for work done. These give the lie to the romantic notion—if anyone has it—that
great mathematicians do mathematics only for the satisfaction they derive from the activity. To be sure, they derive much satisfaction, but as a rule, they are not indifferent to recognition for
their accomplishments. A possible exception was the prolific Euler, who is said to have sometimes withdrawn papers from publication in order to allow younger mathematicians to publish first. It seems
also that some disputes have been motivated, at least in part, by concern about the implications that certain work could have for the future development of mathematics. This arguably could apply to
contentions involving developments—such as Cantor’s work on transfinite numbers and Russell’s logicism—that had serious implications for the “foundations” of mathematics. The question of the
appropriateness of credit to mathematicians for specific discoveries is not always answered by accounts of explicit rivalries and attending evidence of the justification of claims of precedence. As
is true also in science, the history of mathematics has its cases of misappropriated credit that have not been subjects of highly visible controversy. Dantzig (1930/2005) gives the example of the
discovery of what he refers to as Harriot’s principle, named for its discoverer, British astronomer-mathematician Thomas Harriot. The principle involves the procedure of writing polynomial equations,
P(x), in the form P(x) = 0, that is, with all nonzero terms of the equation on one side of the equals sign and 0 on the other. Given the equation 2x² + 5x = 12, for example, Harriot’s principle calls for transposing 12 from the right of the equals sign to the left of it so the equation becomes 2x² + 5x – 12 = 0. Dantzig describes this innovation, simple though it seems, as “epoch-breaking” because, among other things, it reduces the task of solving the equation to that of factoring a polynomial—in this case, 2x² + 5x – 12 = (2x – 3)(x + 4) = 0, so x = 1.5 and x = –4. He notes, however,
that because Descartes’s book on analytic geometry, in which Descartes used Harriot’s ideas without ascription, appeared soon after the publication of Harriot’s Praxis, it was Descartes who was
credited with the idea for nearly a century. Correctly attributing credit is complicated in mathematics, as in science, by major developments being very seldom due exclusively to the work of a single
individual, although they may become associated by posterity
with a single name. There are many examples of people whose creative work in mathematics has been obscured by the glare of the light shone on the work of their eminent successors. Kline (1980) makes
the point: “No major branch of mathematics or even a major specific result is the work of one man. At best some decisive step or assertion may be credited to an individual” (p. 84). As Boyer and
Merzbach (1991) put it: “Great milestones do not appear suddenly but are merely the more clear-cut formulations along the thorny path of uneven development” (p. 332). The recorded history of
mathematics, as the recorded history of science, is necessarily a gross simplification of the actual history. Such simplification is necessary, if for no other reason, to accommodate the limitations
of our minds; a complete detailed account would be more than we could absorb.
☐☐ The Practice in Publishing Mathematics In developing and understanding a subject, axioms come late. Then in the formal presentations, they come early. (Hersh, 1997, p. 6)
Historically, publication practices have been strongly influenced by two factors. One was very practical. Before the days of tenure, contracts for university positions and renewals thereof often
depended on winning public competitions in solving mathematical problems. The holder of a desirable position could be challenged to a contest at any time by another mathematician who coveted his
chair. In such contests, both challenger and challenged could pose problems for the other to solve. Such a system provided strong motivation for a person who had developed an effective approach to
the solution of some difficult problem to keep that knowledge to himself as a means of job security. If challenged by someone with an eye on his position, he could meet the challenger with a
challenge of his own. The dispute already mentioned involving Cardano, his pupil Ferrari, and Tartaglia illustrates the seriousness attached to the guarding of mathematical secrets. The system of
gaining or keeping academic positions by winning public competitions is a thing of the past. That does not mean that competition is no longer in play, but only that it operates in less explicit and
visible ways. A second factor that has greatly influenced publication by mathematicians is the strong preference among mathematicians to publish polished proofs but not the reasoning that produced
them. Perhaps this is due in part to a sense of esthetics according to which well-constructed proofs are beautiful, whereas the often convoluted reasoning processes that lead to those constructions
are not. Bell (1937) describes Gauss’s
attitude toward publishing this way: “Contemplating as a youth the close, unbreakable chains of synthetic proofs in which Archimedes and Newton had tamed their inspirations, Gauss resolved to follow
their great example and leave after him only finished works of art, severely perfect, to which nothing could be added and from which nothing could be taken away without disfiguring the whole. The
work itself must stand forth, complete, simple, and convincing, with no trace remaining of the labor by which it had been achieved” (p. 229). As a matter of principle, Gauss published only
mathematical works that he considered perfect, and deliberately refrained from providing any clues to the thought processes that had brought him to the completed work. It has been said that his
slowness to publish was problematic for his contemporaries because they could never be sure, when they were about to go to print with a discovery, that he had not already made it and simply failed yet
to publish it. In Bell’s view, Gauss’s reluctance to expose work in progress may have delayed the development of mathematics by decades. Having quoted Bell’s mention of Archimedes, Newton, and Gauss
in the same sentence, I should note that Bell considered these three to be the greatest mathematicians the world had yet produced. They, he says, “are in a class by themselves … and it is not for
ordinary mortals to attempt to range them in order of merit” (p. 218). It is of interest too to note the attitudes of these giants toward pure and applied mathematics. “All three started tidal waves
in pure and applied mathematics: Archimedes esteemed his pure mathematics more highly than its applications; Newton appears to have found the chief justification for his mathematical inventions in
the scientific uses to which he put them, while Gauss declared that it was all one to him whether he worked on the pure or the applied side” (Bell, 1937, p. 218). All three had their feet in both
pure and applied worlds. If Archimedes esteemed pure mathematics more highly than its applications, this did not deter him from making some of the most heralded practical applications of all time,
and if Newton was highly focused on applications, this did not get in the way of his making seminal contributions to the advancement of mathematical theory. An aversion to exposing the psychology of
mathematical thinking, as distinct from the results of such thinking, is seen in G. H. Hardy’s (1940/1989) observation that the mathematician’s function is to prove new theorems and not to talk about
what he or other mathematicians have done. The distinction between doing mathematics and explaining how it is done was to Hardy a qualitative one; he saw the two kinds of activities as existing on
different intellectual planes. In his view, the doing requires creativity; the describing or explaining requires something less. “There is no scorn more profound, or on the whole more justifiable,
than that of the
men who make for the men who explain. Exposition, criticism, appreciation, is work for second-rate minds” (Hardy 1940/1989, p. 61). Hardy was apologetic—contrite—about the writing of his Apology: “If
then I find myself writing, not mathematics, but ‘about’ mathematics, it is a confession of weakness, for which I may rightly be scorned or pitied by younger and more vigorous mathematicians. I write
about mathematics because, like any other mathematician who has passed sixty, I have no longer the freshness of mind, the energy, or the patience to carry on effectively with my proper job” (p. 63).
Notable among the giants of mathematics for his somewhat unique attitude toward publishing his findings was Pierre de Fermat. Not a professional mathematician—he was a lawyer—Fermat nevertheless was
a consummate doer of mathematics, and had an enormous impact on the development of the field, but published almost none of his findings in the conventional sense. It appears that he derived great
pleasure from simply solving problems and felt no need to seek fame or recognition for his successes. On occasion he would let contemporaries know that he had solved a problem without revealing how,
apparently as a challenge to them to do the same. Illustrative of this behavior was the announcement that he had proved that 26 is the only number that is sandwiched between a square (25) and a cube
(27), without divulging the complicated proof. What is known about Fermat’s discoveries comes from his correspondence and notes in his belongings, many of which were in the margins of Arithmetica, a
multivolume book written by Diophantus of Alexandria in the third century AD. Fermat had a copy of a 1621 Latin translation by Claude Gasper Bachet of a portion of Arithmetica (several volumes had
been lost when much of the great library of Alexandria was destroyed in 389 and again in 642). This book, more than any other, stimulated Fermat’s mathematical thinking, and many of the results of
that thinking were recorded in notes written in the book’s margins. Fortunately for posterity, Fermat’s eldest son, Clément-Samuel, recognized his father’s genius and saw to it that much of his work
became widely available, in part by arranging the publication in 1670 of an edition of Arithmetica that contained Fermat’s marginal notes, including the “last theorem.” Whether Fermat really had a
proof of this theorem is not known. There are several possibilities. Perhaps he did have a proof. But if so, his proof certainly could not have been anything like the one for which Andrew Wiles
became famous, which depended on several very complicated mathematical developments that occurred over the centuries since Fermat’s time. The possibility that Fermat had a simpler proof will
undoubtedly keep many people searching for it, or another relatively simple one, for a long time. Another possibility is that Fermat thought he had a proof, but that what he had would not have stood
up under the
scrutiny of other mathematicians. This possibility gains credence from many post-Fermat mathematicians, some eminent ones among them, having thought they had a proof, only to have its flaws pointed
out by colleagues. Is it conceivable that Fermat’s claim to have a proof was a deliberate hoax? He appears to have gotten considerable enjoyment from tantalizing mathematicians with problems that he
had been able to solve but the solutions of which he did not divulge. Singh (1997) says of Fermat’s personal notes that they contained many theorems, but that they typically were not accompanied by
proofs. “There were just enough tantalizing glimpses of logic to leave mathematicians in no doubt that Fermat had proofs, but filling in the details was left as a challenge for them to take up” (p.
63). In fact, most, if not all, of Fermat’s theorems were proved by others over time. Did Fermat imagine that his notes in Arithmetica would be made public after his death? And is it possible that
his marginal comment about his “last theorem” was a mischievous challenge to mathematicians to find a proof for it? It seems unlikely, but not entirely out of the question. Borrowing a distinction
made by Goffman (1963) in his observations of human behavior in social situations, Hersh (1997) distinguishes between “front” and “back” mathematics. “Front mathematics is formal, precise, ordered,
and abstract. It’s broken into definitions, theorems, and remarks. Every question either is answered or is labeled: ‘open question.’ At the beginning of each chapter, a goal is stated. At the end of
the chapter, it’s attained. Mathematics in back is fragmentary, informal, intuitive, tentative. We try this or that. We say ‘maybe,’ or ‘it looks like’” (p. 36). The practice of publishing only front
mathematics, Hersh suggests, is responsible for perpetuating several myths about mathematics. “Without it, the myths would lose their aura. If mathematics were presented in the style in which it’s
created, few would believe its universality, unity, certainty, or objectivity” (p. 38). An interesting exception to the rule that mathematicians have tended to be reluctant to introspect, at least
publicly, about their doing of mathematics is the legendary Archimedes. In The Method, the manuscript which was discovered only in 1906, he describes in detail a “mechanical” approach that he took in
mathematical problem solving. It involved the imaginary balancing of lines as one might balance weights in mechanics. The approach led him to several beautiful mathematical discoveries regarding
areas and volumes of curved planar figures and solids. Notable among more recent expositors of the type of thinking that often lies behind published mathematical proofs are George Polya (1954a,
1954b) and Imre Lakatos (1976).
Whatever their reasons for doing so, mathematicians’ habit of publishing only the results of their thinking—the polished versions of proofs and not the unpolished methods by which they arrived at
them—has had the unfortunate consequence of obscuring the nature of mathematical thought. And it has not prevented the publication of erroneous results. Court (1935/1961) notes that the list of names
in Lecat’s 1935 Erreurs des Mathématiciens looks pretty much like a Who’s Who in Mathematics. The nonmathematician whose primary exposure to the discipline is via high school or college textbooks
sees only a highly sanitized representation of what the thinking of mathematicians has produced, and gets little, if any, hint of the character of the thinking itself. Although perhaps to a lesser
degree than is true of mathematicians, scientists too tend to publish primarily the results of their thinking without revealing much about the processes that led to those results. Merton (1968)
points this out: “The scientific paper or monograph presents an immaculate appearance which reproduces little or nothing of the intuitive leaps, false starts, mistakes, loose ends, and happy
accidents that actually cluttered up the inquiry” (p. 4). Or, as Lindley (1993) puts it, scientists have a habit of covering their tracks when they publish. They do not often reveal all the false
starts and sojourns down blind alleys that preceded the attainment of some publishable result. Even the reporting of “thought experiments” is less revealing than one might hope of scientific thinking
in progress; Nersessian (1992) points out that by the time a thought experiment is presented, it always works, and the false starts that may have preceded the final result are seldom reported. This
is fortunate in some respects—what are of lasting interest for most purposes are the reliable results that have been obtained and not the muddling that led to them, and publication of extensive
records of the thought processes behind those results would, from one point of view, add an enormous amount of noise to the scientific literature. But the absence of such records makes the
mathematical and scientific enterprises appear to be much more logically neat than they really are.

****

This chapter has focused on mathematicians as persons—their personalities, preferences,
capabilities, work habits, sometimes quirks. Do mathematicians, as a group, share any characteristics that distinguish them from people in other professions? Clearly, to be a productive
mathematician, one must be able to reason well, and for many areas of mathematics, to reason well about abstract entities. But the ability to reason well, and even to reason well abstractly, is
called for by many other endeavors. Many of the other characteristics that are generally considered descriptive
of especially productive mathematicians—the ability to focus intently on a problem, perseverance, commitment to one’s work, satisfaction derived from solving problems—are not unique to
mathematicians. Perhaps the most obvious thing that study of the lives and work of major mathematicians reveals is the great diversity of personalities, lifestyles, capabilities, and interests that
this community contains. Whatever one’s concept of the prototypical mathematician is, there are likely to be very few real live mathematicians who will fit it.
CHAPTER 11

Esthetics and the Joys of Mathematics
The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colors or the words, must fit together in a harmonious way. Beauty is the first test: there is
no permanent place in the world for ugly mathematics. (Hardy, 1940/1989, p. 85)
No one knows the extent to which the earliest attempts to count, to discover or construct patterns, to measure, and to compute were motivated by purely esthetic as opposed to practical interests. It
seems not unreasonable to assume that both types of factors have been important determinants of mathematical thinking from the beginning, just as they are today. Tracing major developments in
mathematics to their origins has proved to be very difficult, and in many cases impossible, because surviving evidence of the earliest efforts is sparse. Many of the written documents known to have
been produced by the Egyptians, Mesopotamians, Greeks, Chinese, and other ancient cultures have not survived, so constructing a coherent representation of the course of development even during
recorded history requires a considerable amount of conjecture, and there are many questions on which experts are not in agreement. A few aspects of this history, however, are reasonably clear. There
seems to be little doubt, for example, that esthetics, mysticism, and a general interest in matters philosophical energized much of the mathematical thinking of the ancient Greeks. In contrast, the
Egyptians were
motivated, in large part, by the mathematical demands of land measurement and pyramid building, and the study of algebra in Arabia may have been stimulated to some degree by the complicated nature of
Arabian laws governing estate inheritance (Boyer & Merzbach, 1991). According to Bell (1945/1992), the Greeks made a distinction between logistic and arithmetica, the former having to do with
computation for practical purposes and the latter with the properties of numbers as such. Bell dismisses their accomplishments in logistic as “nothing that is not best forgotten as quickly as
possible by a mathematician” (p. 50). The Greeks themselves appear to have held logistic in contempt, whereas they considered arithmetica to be worthy of the better minds among them. Their work in
mathematics, exemplified notably by that of Euclid and Pythagoras, put great emphasis on deductive proofs and treated mathematics as an abstract discipline that existed independently of the material
world. The Pythagoreans considered mathematics to be more real, and more nearly perfect, than the world of the senses. This emphasis on deduction and abstraction was an immensely important
contribution to the development of mathematics, but the downside of the Greek perspective was that it provided little incentive for finding practical applications of what they were developing, and
perhaps as a consequence of their intense focus on abstract realities, they tended not to be keen observers of nature. They were nearly oblivious, for example, of the part played by curves of various
types in the world about them. “Aesthetically one of the most gifted people of all times, the only curves that they found in the heavens and on the earth were combinations of circles and straight
lines” (Boyer & Merzbach, 1991, p. 157). An exception to the rule that the Greeks were not much interested in quantitative description was the great mathematician Archimedes, who applied his
mathematics to the physical world and especially to practical engineering problems. He is reputed to have stalled the Romans’ siege of Syracuse for two or three years by his inventions of various devices and instruments by means of which he thwarted their efforts to take the city. It can be argued that Archimedes’ work in physics was unrivaled until the time of Galileo.
Number Mysticism

The origins of number mysticism, or numerology, are obscure. But if they did not originate it, the early Greeks, especially the Pythagoreans and Plato in his Timaeus, at least contributed substantially to it and helped ensure
its long-term survival (Butler, 1970). To the Greeks, individual numbers, especially the first few integers, had great symbolic significance. Each was endowed with specific qualities, inferred
sometimes from numerical or structural relationships that could be found within them. Six, for example, was associated with perfection because it was both the sum and the product of its divisors,
which happened to be the first three integers: 1 + 2 + 3 = 6 = 1 × 2 × 3. Four was associated with justice, because it could be factored into two equal numbers and equality was seen to be a defining
property of justice. The same could be said of any square number, of course, but the Greeks tended to focus on the smallest number for which a property of interest held, and for some purposes 1 was
not considered a number. The fascination with numbers was deep, and the identification of their distinguishing properties was a serious undertaking by many ancient and medieval scholars. Publius
Nigidius Figulus established a neo-Pythagorean school of philosophy in Rome that promoted numerology. Hopper (1938/2000) gives the following example of number fascination by Plutarch, whom he calls a
superb demonstrator of the operations of Pythagorean mathematics: “He finds that 36 is the first number which is both quadrangular (6 × 6) and rectangular (9 × 4), that it is the multiple of the
first square numbers, 4 and 9, and the sum of the first three cubes, 1, 8, 27. It is also a parallelogram (12 × 3 or 9 × 4) and is named ‘agreement’ because in it the first four odd numbers unite
with the first four even 1 + 3 + 5 + 7 = 16; 2 + 4 + 6 + 8 = 20; 16 + 20 = 36” (p. 45). Hopper describes also the roles played by the Gnostics, philosophers originally in and around Alexandria, in
perpetuating number mysticism through their efforts to integrate Greek philosophy, science, and Eastern religion, as well as those of the early Christian writers—notably St. Augustine—and medieval
scholars in keeping number mysticism alive. The picture includes the development of gematria, in which letters of the alphabet are given numerical values and texts are searched for hidden meanings
derived from those values, and arithmology, which involves the believed significance of, and powers ascribed to, certain integers. Gematria was central to the interpretation of Hebrew scriptures by
the Kabbalists in medieval Europe (Aczel, 2000). The pervasiveness of the effect of number mysticism on the thinking of philosophers and scholars from the classical Greeks through the middle ages is
easy for us today to fail to recognize; Hopper points out that no branch of medieval thought entirely escaped its influence. Number mysticism was rooted in the idea that the world is understandable
in quantitative terms because it was built according to mathematical principles. To the number mystic, that many aspects of nature
could be described mathematically was less than surprising; it would have been surprising if this were not the case. Today the desire to describe the universe in mathematical terms appears to be at
least as strong as it was in the days of the early Greeks, which is not to say that the motivation for doing so is necessarily the same. We see vestiges of numerology today in various superstitions
regarding numbers. The idea that individuals have lucky numbers is one such, and undoubtedly one that has been responsible for the loss of considerable money on lotteries and other games of chance.
The widespread fear of certain numbers—13 in particular—is another case in point. The fear of 13, known clinically as triskaidekaphobia, has very real and substantial economic effects in terms of
people absenting themselves from work, appointments, and business transactions on days of the month numbered 13. Ellis (1978) raises the question of how 13 came to be considered unlucky and, while
not providing a definitive answer, discusses a variety of possible contributing factors. Whatever the answer, the superstition apparently goes back at least to around 1780 BC; the Code of Hammurabi contains 281 numbered laws—1 through 12 and 14 through 282; there is no law numbered 13, presumably because 13 was considered an unlucky or evil number (http://leb.net/~farras/history/hammurabi.htm). Although number mysticism is often the object of ridicule among mathematicians and scientists today, and much about it is very easy to criticize, it should not be assumed that it has
ever been the province only of benighted cranks. As Brainerd (1979) puts it, “We would do well to remember that gematria, transcendental arithmetic, St. Isadore’s dictionary, and the rest, no matter
how far-fetched they may seem, were sober attempts by the leading scholars of the period to come to grips with what they believed to be fundamental problems. The mere fact that these efforts did not
succeed or that they were predicated on a wildly improbable methodology does not make them any the less serious” (p. 16). Number mysticism has been taken very seriously, or at least some aspects of
it have, by some of the most productive mathematicians and scientists the world has seen and by intelligentsia in other fields as well (Yates, 1964). Other equally productive and intelligent people
have found it easy to dismiss it in its entirety. Still others may give it more credence than they realize, because they know it by other names. Many of the properties of numbers discovered by
ancient and medieval thinkers are not qualitatively different from the properties that get attention from number theorists today. What makes numerology different from number theory is the imputation
by numerologists of mystic significance to numbers and the relationships among them. Number
theorists find numbers and the relationships among them interesting in their own right, independently of either practical or metaphysical applications that might be made of their discoveries.
The Pure Versus Applied Distinction

The pure mathematician is much more of an artist than a scientist. He does not simply measure the world. He invents complex and playful patterns without the least regard for their practical applicability. (Watts, 1964, p. 37)
Pythagoras is often credited with originating the belief that mathematics is worth studying for its own sake, quite apart from any practical applications it might have, or, if not originating it, at
least promoting it and giving it a form that would last until the present day. Euclid’s contempt for the idea that the reason for pursuing mathematics is its practical value is seen in the often-told
story of his response to a question from a student regarding what advantage he would gain by learning geometry. Euclid is said to have instructed his slave to “give him three pence, since he must
make profit out of what he learns.” There is more than a hint of intellectual snobbery in the way the distinction between pure and applied mathematics has been articulated, at least by those who
consider themselves on the pure side of this divide. The very use of the word pure, with its connotation of untainted and its moral overtones, to designate the doing of mathematics for its own sake
reveals this prejudice. Plato deserves to share the credit with Pythagoras for the establishment of this view—a view that is not unique to mathematics, it must be said—because of his general
association of usefulness with ignobility. The disdain that some mathematicians have for the idea that mathematics gets its primary justification from its practical usefulness is seen also in a
comment by Dantzig (1930/2005). “The mathematician may be compared to a designer of garments, who is utterly oblivious of the creatures whom his garments may fit. To be sure, his art originated in
the necessity for clothing such creatures, but this was long ago; to this day a shape will occasionally appear which will fit into the garment as if the garment had been made for it. Then there is no
end of surprise and of delight!” (p. 240). Court (1935/1961) makes a similar observation, and dismisses the idea that the solving of practical problems motivates most mathematicians. “He [the
mathematician] likes to exercise his inventiveness and to display it before others, namely before those who can get as
excited about it as he does himself. That is the best that can be said in defense of most of mathematics” (p. 216). These sentiments are echoed in a comment by G. H. Hardy (1940/1989) in his Apology
that is often quoted to illustrate the attitude that pure mathematicians sometimes convey about the possibility of using practical utility as an appropriate measure of the merit of their work: I have
never done anything “useful.” No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. I have helped to train
other mathematicians, but mathematicians of the same kind as myself, and their work has been, so far at any rate as I have helped them to do it, as useless as my own. Judged by all practical
standards, the value of my mathematical life is nil; and outside mathematics it is trivial anyhow. (p. 150)
In an even more sweeping statement, Hardy gave essentially the same verdict with respect to mathematics generally: “If useful knowledge is, as we agreed provisionally to say, knowledge which is
likely, now or in the comparatively near future, to contribute to the material comfort of mankind, so that mere intellectual satisfaction is irrelevant, then the great bulk of higher mathematics is
useless” (p. 135). It is difficult to know to what extent Hardy intended that these comments be taken at face value. He in fact did some mathematical work that was useful in the sense of being
descriptive of real phenomena. A case in point is his work on population statistics, which led to the Hardy-Weinberg equilibrium, a law of genetics that relates gene frequencies across generations.
It seems clear that Hardy was not apologizing, in the common sense of that word, for the way he had chosen to spend his life. He did not equate usefulness, as he used the term, with worth. He claimed
of his own work that it was not useful. He did not claim that it lacked worth; indeed, quite the contrary. His judgment of his own life was that he had added something to knowledge, and that what he
had added had a value similar to that of the creations of “any of the other artists, great or small, who have left some kind of memorial behind them” (p. 151). Hardy contended that what distinguishes
the best mathematics is its seriousness. As for what constitutes seriousness: “The ‘seriousness’ of a mathematical theorem lies, not in its practical consequences, which are usually negligible, but
in the significance of the mathematical ideas which it connects. We may say, roughly, that a mathematical idea is ‘significant’ if it can be connected, in a natural and illuminating way, with a large
complex of other mathematical ideas” (p. 89). Dantzig (1930/2005) expresses a similar perspective in arguing that mathematical achievement is not to be measured by the scope of its
applicability to physical reality but rather by standards that are peculiar to itself. “These standards are independent of the crude reality of our senses. They are: freedom from logical
contradiction, the generality of the laws governing the created form, the kinship which exists between this new form and those that have preceded it” (p. 240). Hardy acknowledged that it is difficult
to be precise about what constitutes mathematical significance, but suggested that essential aspects include generality and depth. A general mathematical idea is one that figures in many mathematical
constructs and is used in the proofs of different kinds of theorems. Hardy considered the concept of depth, in this context, to be very difficult to explain, but he believed it to be one that
mathematicians would understand. Mathematical ideas, in his view, are arranged in strata representing different depths. The idea of an irrational number is deeper, for example, than that of an
integer. Sometimes, though not always, in order to understand relationships at a given level, it is necessary to make use of concepts from a deeper level.
The Joy of Discovery

Mathematicians do mathematics for a variety of reasons—to earn a living, to attain recognition, to contribute to the shared knowledge of the species. Not least among their
motivations is the satisfaction they derive from working on and solving challenging problems. Byers (2007) expresses this reason somewhat rhapsodically: “Why do mathematicians work so hard to produce
original mathematical results? Is it merely for fame and fortune? No, people do mathematics because they love it; they love the agony and the ecstasy. The ecstasy comes from accessing this realm of
knowing, of certainty. Once you taste it, you can’t but want more. Why? Because the creative experience is the most intense, most real experience that human beings are capable of” (p. 341). The
passion that mathematicians can have for their subject is seen in their descriptions of the sense of elation that solving a recalcitrant problem can bring. There are many accounts of this type of
experience. The story of Archimedes running naked through the streets of Syracuse shouting “eureka” after solving the problem of determining whether the gold in the king’s crown had been diluted with
baser metal is presumably apocryphal, but it is a fine metaphor for the feeling that many mathematicians have described having upon finding the solution to a problem that had been consuming them.
Aczel (1996) reports Andrew Wiles’s reaction upon finally seeing how to make his proof of Fermat’s last theorem (that there exists no integral solution of xⁿ + yⁿ = zⁿ for n > 2) work: “Finally,
he understood what was wrong. ‘It was the most important moment in my entire working life,’ he later described the feeling. ‘Suddenly, totally unexpectedly, I had this incredible revelation. Nothing
I’ll ever do again will …’ at that moment tears welled up and Wiles was choking with emotion. What Wiles realized at that fateful moment was ‘so indescribably beautiful, it was so simple and so
elegant … and I just stared in disbelief’” (p. 132). Singh (1997) has chronicled Wiles’s epic struggle—his obsession, as Wiles referred to it—with Fermat’s last theorem, from his introduction to it
as a 10-year-old boy through the 7 or 8 years of solitary concentration on this single problem as a professional mathematician, the exhilaration and acclaim that attended his announcement of a proof
in 1993, the discouragement and depression that came with the discovery that the proof was flawed, and the elation that came again with the final repair of it and acceptance of its authenticity by
the mathematical community. It is a gripping, emotionally charged story. Du Sautoy (2004) quotes contemporary French mathematician Alain Connes describing his discovery of the joy of mathematics as a
young boy—“I very clearly remember the intense pleasure that I had plunging into the special state of concentration that one needs in order to do mathematics”—and his comment as an adult that
mathematics “affords—when one is fortunate enough to uncover the minutest portion of it—a sensation of extraordinary pleasure through the feeling of timelessness that it produces” (p. 305). This is
not to suggest, of course, that mathematicians live in a perpetual state of euphoria. Like everyone else, they have their ups and downs, but by their own testimony, many of them derive moments of
extraordinary pleasure from working on, and especially from solving, challenging mathematical problems. It appears that, at least for some mathematicians, the joy of discovery is sufficiently strong
to motivate long stretches of intense work between episodes of experiencing it. American mathematician Lipman Bers puts it this way: I think the thing which makes mathematics a pleasant occupation
are those few minutes when suddenly something falls into place and you understand. Now a great mathematician may have such moments very often. Gauss, as his diaries show, had days when he had two or
three important insights in the same day. Ordinary mortals have it very seldom. Some people experience it only once or twice in their lifetime. But the quality of this experience—those who have known
it—is really joy comparable to no other joy. (Quoted in Hammond, 1978, p. 27)
Poincaré limited to a privileged few the full experience of the esthetic pleasure that mathematics can provide: “Adepts find in mathematics delights analogous to those that painting and music give.
They admire the delicate harmony of numbers and of forms; they are amazed when a new discovery discloses to them an unlooked-for perspective, and the joy they thus experience has it not the esthetic
character, although the senses take no part in it? Only the privileged few are called to enjoy it fully, but is it not so with all the noblest arts?” (quoted in Court, 1935/1961, p. 127). Bers
surmises too that the enjoyment of mathematics experienced by mathematicians is largely unrealized by nonmathematicians, and expresses disappointment that mathematics does not compare favorably with
music and art in this regard, which he assumes can be enjoyed by people other than musicians and artists. That this state of affairs is not of great concern to mathematicians generally is suggested
in a responding comment by French mathematician Dennis Sullivan: “I don’t particularly feel that it’s important that mathematics be enjoyed by a lot of people; that would be very nice, but it’s more
important that mathematicians work on good problems and pursue mathematics” (quoted in Hammond, 1978, p. 33).
Beauty and Elegance

Mathematics, rightly viewed, possesses not only truth but supreme beauty—a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous trappings of painting or music, yet sublimely pure, and capable of a stern perfection such as only the greatest art can show. (Russell, 1910, p. 73)
Many mathematicians have, like Russell, lauded the beauty or elegance that is to be seen in mathematics, at least by the discerning eye, and have described mathematics as a form of art. Here are a few of the numerous examples of such expressions that can be found.

• “An elegantly executed proof is a poem in all but the form in which it is written” (Kline, 1953a, p. 470).
• “I am inclined to believe that one of the origins of mathematics is man’s playful nature, and for this reason mathematics is not only a Science, but to at least some extent also an Art” (Péter, 1961/1976, p. 1).
• “The esthetic side of mathematics has been of overwhelming importance throughout its growth. It is not so much whether a theorem is useful that matters, but how elegant it is” (Ulam, 1976, p. 274).
• “The motivation and standards of creative mathematics are more like those of art than like those of science. Aesthetic judgments transcend both logic and applicability in the ranking of mathematical theorems: beauty and elegance have more to do with the value of a mathematical idea than does either strict truth or possible utility” (Steen, 1978, p. 10).
• “Mathematics always follows where elegance leads” (Kaplan, 1999, p. 71).
• [Regarding a proposal to apply a particular set-theory concept to transfinite numbers] “The proposal is a piece of mathematical legislation, to be assessed, if at all, in terms of its power, elegance and beauty” (Moore, 2001, p. 151).
• “The poetry of science is in some sense embodied in its great equations” (Farmelo, 2003b, p. xi).

Kac (1985)
describes his reaction upon reading a recommended book for the purpose of understanding Dedekind cuts. “As I read, the beauty of the concept hit me with a force that sent me into a state of euphoria.
When, a few days later, I rhapsodized to Marceli [Marceli Stark, an older fellow student who had recommended the book] about Dedekind cuts—in fact, I acted as if I had discovered them—his only
comment was that perhaps I had the makings of a mathematician after all” (p. 32). The esthetic appeal of mathematics and the desire to produce mathematical results that will be seen as beautiful or
elegant both by themselves and by others presumed to be qualified to have an opinion on the matter, appear to be very strong among mathematicians, especially those who have been most influential in
shaping the field. Mathematicians often speak of beauty as a criterion by which the results of a mathematician’s work should be judged, as in the comment of Hardy quoted at the beginning of this
chapter. Du Sautoy (2004) sees an attraction to beauty as something innate to the mathematical mind. “The esthetic sensibilities of the mathematical mind are tuned to appreciate proofs that are
beautiful compositions and shun proofs which are ugly” (p. 210). Hadamard (1945/1954) also expressed a similar sentiment in arguing that in mathematics the sense of beauty is not just a drive for
discovery, but “almost
the only useful one” (p. 103); the idea of the usefulness of beauty in this context invites reflection. Noting how remarkable it is that the mathematical world so often favors the most esthetic
construction, Du Sautoy (2004) points out that Riemann’s hypothesis (see p. 4), a proof of which has been sought by many top-grade mathematicians, “can be interpreted as an example of a general
philosophy among mathematicians that, given a choice between an ugly world and an esthetic one, Nature always chooses the latter” (p. 55). The assumption appears to be that the beauty that is to be
found in mathematics is a reflection of the beauty of physical reality. Many scientists have also taken the position that beauty should be a goal in the construction of scientific theories. What is
the basis of this interest in beauty? Is it the same in both mathematics and science? Is it rational, in either case, to expect or demand that the products of the discipline satisfy such a criterion?
Is there an underlying assumption that the proper business of mathematics and science is to discover what can be discovered about reality and that truth—mathematical and physical—when seen as clearly
as possible, must be beautiful? If the demand for beauty stems from some such assumption, is the assumption itself an article of blind faith? If such an assumption is not its basis, what is? Whatever
its basis, interest among mathematicians in beauty is undeniable and appears to be very strong. But what is beauty in this context? Indeed, what is beauty generally? The concept is exceedingly
difficult to pin down with an objective definition. The dictionary and thesaurus are of remarkably little help. An observation by Mortimer Adler (1981), who includes beauty in his list of six great
ideas, is telling in this regard. “The test of the intelligibility of any statement that overwhelms us with its air of profundity is its translatability into language that lacks the elevation and
verve of the original statement but can pass muster as a simple and clear statement in ordinary, everyday speech. Most of what has been written about beauty will not survive this test. In the
presence of many of the most eloquent statements about beauty, we are left speechless—speechless in the sense that we cannot find other words for expressing what we think or hope we understand” (p.
103). There can be little doubt that beauty is subjective to a large extent: People differ greatly on what they consider beautiful or ugly. Arguably, though, there is likely to be greater agreement
about what is beautiful in any particular context among people who are highly familiar with that context than among people who are not. Students of 17th-century Dutch paintings are more likely to
agree on what constitutes a beautiful painting from this period and place than are people who have little knowledge
of art; baseball fans are likely to be more consistent in distinguishing between beautiful and commonplace swings of a baseball bat than are people who have little knowledge of, or interest in,
baseball. Similarly, one would expect to find greater agreement among mathematicians than among nonmathematicians regarding what should be considered beautiful in mathematics. Adler (1981) makes a
distinction between enjoyable beauty and admirable beauty. Enjoyable beauty is purely subjective, totally “in the eye of the beholder;” in contrast, admirable beauty is beauty that, by consensus of
those presumably qualified to judge by virtue of their familiarity with the appropriate domain, meets some objective standards. The latter kind of beauty deserves to be appreciated; one should strive
to appreciate it, and this may require acquiring some expertise in the domain. This line of thinking, if one accepts the distinction, prompts the question of what may constitute admirable beauty in
mathematics. Is it simplicity? Regularity? Symmetry? Rigor? What is it about the perfect solids that makes them perfect in the eyes of those who gave them that label? That all the faces of a
particular one are identical? That all the vertices of any given one will lie on a superscribed sphere? That the center of each one coincides with the center of its superscribed sphere? What prompted
Moore (2001) to say of analytic geometry, which he describes as the casting of one whole body of mathematics in terms of another, that it is one of the greatest monuments to mathematical excellence
because of its beauty, depth, and power? Can beauty be described or defined in a noncircular way? And in such a way that nonmathematicians will be able to recognize it when they see it? What is it
about the equation, attributed to Euler, $e^{i\pi} + 1 = 0$, that many mathematicians find so beautiful? King (1992) points out that it satisfies what he refers to as the “aesthetic principle of minimal
completeness” (p. 86). It contains the five most important constants of mathematics—e, i, π, 1, and 0—as well as the “paramount operations” of addition, multiplication, and exponentiation, and the
“most vital relation” of equality, and nothing else. Some may find this to be an adequate explanation of why the equation appears to many to be beautiful; others may not. Another answer is that, to
one who must ask, there is unlikely to be a persuasive answer. Wilczek (2003) more or less makes this point in contending that “it is difficult to make precise, and all but impossible to convey to a
lay reader, the nature of mathematical beauty” (p. 158).
Although the foregoing equation is often presented as though it were invented, or discovered, by Euler in a vacuum, it follows from the relationship $e^{i\pi} = \cos\pi + i\sin\pi$, but it is nonetheless
beautiful for that. For the reader who may be unfamiliar with the equivalence between the exponential function and the sum of two trigonometric functions, it is easily seen when each of the component
expressions is represented in its power-series form:
$$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
$$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots$$
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \frac{x^5}{5!} + \frac{x^6}{6!} + \frac{x^7}{7!} + \cdots.$$
Given that $i^2 = -1$, we can write
$$e^{ix} = 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \cdots$$
$$= 1 + ix - \frac{x^2}{2!} - i\frac{x^3}{3!} + \frac{x^4}{4!} + i\frac{x^5}{5!} - \frac{x^6}{6!} - i\frac{x^7}{7!} + \cdots$$
$$= \left(1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\right) + i\left(x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots\right)$$
$$= \cos x + i\sin x.$$
When x = π, cos x = –1 and sin x = 0, so we can write $e^{i\pi} = \cos\pi + i\sin\pi = -1$, or equivalently, $e^{i\pi} + 1 = 0$.
Do mathematicians agree among themselves as to what is beautiful or elegant in mathematics and what is not? In fact, although it is claimed that mathematicians will generally agree in their
judgments about which mathematical results deserve this description, there appears to be no well-supported account of what determines beauty or elegance in this context. (For examples of what many
would consider unbeautiful equations that work, see du Sautoy, 2004, pp. 143, 200.) King (1992) makes the interesting observation that although there exists a voluminous philosophical literature on
esthetics, numerous
works on the role of mathematics in art, and a few mathematical models of esthetics (a notable pioneering example of which is that of American mathematician George Birkhoff [1933]), very little
attention has been given to mathematics as art—to the esthetics of mathematics per se. It appears, he suggests, that estheticians, like other educated people outside the mathematical aristocracy,
have little concept of mathematical beauty or even awareness that such a thing exists. Mathematicians have no doubt of the beauty of their subject, but they have been content, for the most part, to
enjoy it privately without taking the trouble—and trouble it would undoubtedly be—to shine the light in such a way that the rest of us could get a glimpse of it too. As King puts it, “Mathematicians
have talked mathematics only to each other and about mathematics to no one at all” (p. 224). King suggests that mathematicians judge the esthetic quality of mathematical ideas in terms of two
principles—those of minimal completeness and maximal applicability: A mathematical notion N satisfies the principle of minimal completeness provided that N contains within itself all properties
necessary to fulfill its mathematical mission, but N contains no extraneous properties. A mathematical notion N satisfies the principle of maximal applicability provided that N contains properties
that are widely applicable to mathematical notions other than N (p. 181).
King sees these statements as somewhat more precise representations of the principles that Hardy was hinting at in his discussion of elegance in A Mathematician’s Apology, in which he used such
descriptors as seriousness, depth, economy, and generality. The principle of minimal completeness, which King associates with Occam’s razor, requires that an elegant mathematical construct be
complete but free of extraneous notions. For a proof to be elegant, for example, nothing essential can be missing, but nothing unessential can be contained. King’s second principle articulates the
idea that, other things being equal, the greater the range of applicability of the construct, the more elegant it is. Here elegance is more or less equated with conceptual power. Weyl (1940/1956) was
much impressed with symmetry as a mark of beauty: “Symmetry, as wide or as narrow as you may define its meaning, is one idea by which man through the ages has tried to comprehend and create order,
beauty, and perfection” (p. 672). (For a very readable discussion of the concept of symmetry and its importance in mathematics and science, see Stewart and Golubitsky, 1992.) Consistent with this
is Weyl’s description of the discoveries of the existence of the two most complex regular polyhedra—the dodecahedron (the 12-sided solid, each face of which is a regular pentagon) and the icosahedron
(the 20-sided solid bounded by equilateral triangles)—as among the most beautiful in the history of mathematics. Kline’s (1956/1953) appraisal of projective geometry, which he sees as unique in many
ways among the subareas of mathematics, references a number of characteristics that are at least suggestive of beauty: “No branch of mathematics competes with projective geometry in originality of
ideas, coordination of intuition in discovery and rigor in proof, purity of thought, logical finish, elegance of proofs and comprehensiveness of concepts” (p. 641). Among the more surprising searches
for beauty in mathematics is that which involves prime numbers. To the nonmathematical eye the sequence of primes 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, … seems the epitome of haphazardness and
unpredictability, hardly anything that could be described as elegant. But the attempt to “tame” the primes, to find a way to predict specific properties of the sequence, which was proved by Euclid to
go on indefinitely, has engaged many of the greatest mathematicians, including Gauss, Riemann, Hilbert, and Hardy. There appears to be a deep belief among many mathematicians that there is a unique
kind of beauty—du Sautoy (2004) calls it “the music of the primes”—in the prime sequence and it is just a matter of discovering precisely what it is. Davis and Hersch (1981, p. 198) point to
unification—“the establishment of a relationship between seemingly diverse objects”—as not only one of the great motivating forces in mathematics, but also one of the great sources of esthetic
satisfaction in this field. The search for uniformity in what appear to be diverse phenomena—unity in diversity— has been a theme common to mathematics and science over the history of both. Although
there are some similarities in these references to mathematical elegance and beauty, and in others that could be cited, it is difficult to infer a definition to which all mathematicians would
subscribe, King’s thought-provoking proposal notwithstanding. One cannot expect to see beauty in mathematics in the absence of some minimal understanding of mathematics as a discipline.
Mathematicians may differ among themselves as to precisely what determines whether a particular mathematical result should be considered beautiful, but my sense is that the more one understands of
mathematics, the more likely one is to see beauty not only in many specific mathematical results, but also in the enterprise as a whole.
The following algebraic identity, which has been known at least since the publication of Fibonacci’s Liber abaci in 1202, strikes me as beautiful, independently of any usefulness (unknown to me) that
it may have:
$$(a^2 + b^2)(c^2 + d^2) = (ac \pm bd)^2 + (ad \mp bc)^2,$$
where a, b, c, and d are any integers. What this identity says is that the product of two sums of two squares of integers is always expressible, in two different ways, as the sum of two squares of integers. The relationship may be written alternatively as $(a^2 + b^2)(c^2 + d^2) = u^2 + v^2 = x^2 + y^2$, where $u = ac + bd$, $v = ad - bc$, $x = ac - bd$, and $y = ad + bc$. Try it with four random numbers, for a, b, c, and d. I just did it with 2, 34, 19, and 7: $(2^2 + 34^2)(19^2 + 7^2) = 475{,}600 = 276^2 + (-632)^2 = (-200)^2 + 660^2$. Bell (1945/1992) claimed that “with the appropriate restrictions as
to uniformity, continuity, and initial values when a, b, c, d are functions of one variable, the identity contains the whole of trigonometry” (p. 103). What he meant by that, I cannot say.
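A few lines of Python suffice to check the identity numerically; the sketch below is illustrative only, and the range of random integers is an arbitrary choice.

import random

# Check that (a^2 + b^2)(c^2 + d^2) = (ac + bd)^2 + (ad - bc)^2
#                                   = (ac - bd)^2 + (ad + bc)^2
for _ in range(5):
    a, b, c, d = (random.randint(1, 100) for _ in range(4))
    product = (a * a + b * b) * (c * c + d * d)
    first = (a * c + b * d) ** 2 + (a * d - b * c) ** 2
    second = (a * c - b * d) ** 2 + (a * d + b * c) ** 2
    assert product == first == second
    print(a, b, c, d, "->", product)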
☐☐ Wonder
Dante Alighieri was fascinated by the fact that any angle inscribed inside a semicircle, no matter where on the semicircle its vertex lies, is a right angle; to him, this was wonder
evoking. This relationship, illustrated in Figure 11.1, is known to us as the theorem of Thales, but the Babylonians were aware of it over a millennium before Thales of Miletus stated it.
Figure 11.1 Illustrating the theorem of Thales, according to which any angle inscribed in a semicircle, such as ABE, ACE, and ADE, is a right angle.
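The theorem is also easy to confirm numerically: place the ends of the diameter at (-1, 0) and (1, 0), take any point on the upper half of the unit circle, and the two chords meeting at that point are perpendicular. The following short Python sketch is illustrative only; the sample angles are arbitrary.

import math

# Thales's theorem: an angle inscribed in a semicircle is a right angle.
A, E = (-1.0, 0.0), (1.0, 0.0)          # endpoints of the diameter
for theta in (0.3, 1.0, 2.0, 2.8):      # arbitrary vertex positions on the upper semicircle
    B = (math.cos(theta), math.sin(theta))
    u = (A[0] - B[0], A[1] - B[1])      # chord from the vertex to one end of the diameter
    v = (E[0] - B[0], E[1] - B[1])      # chord from the vertex to the other end
    dot = u[0] * v[0] + u[1] * v[1]
    print(f"theta = {theta:.1f}, dot product = {dot:.1e}")  # essentially zero every time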
Socrates, in his youth, wondered about why the sum and product of two 2s are the same. (Socrates did not give much thought to mathematics in his adult life, claiming to find the results of certain
fundamental arithmetic operations incomprehensible.) The contemporary mathematician Richard Hamming (1980) confesses to being amazed by the possibility of abstracting numbers from the things
numbered—that the universe is constructed in such a way that three things and four things always add to seven things, no matter what the things are that are counted. Peter Atkins (1994) calls
mathematics “the profound language of nature, the apotheosis of abstraction and the archenabler of the applied” and argues that realization of this “should stir us into wondering whether herein lies
the biggest hint of all about our origin and the origin of our understanding” (p. 95). The childlike wonder that Einstein maintained about space and time is well documented, as is the fact that he
himself attributed to it, at least in part, his formulation of the theory of relativity (Holton, 1973). Bertrand Russell (1955/1994) speaks of Einstein’s expression of “‘surprised thankfulness’ that
four equal rods can make a square, since, in most of the universes he could imagine, there would be no such things as squares” (p. 408). What evokes wonder, or even awe, in one person may appear
mundane to another. This could be because the person who wonders lacks the knowledge that reveals the mundaneness of the object of wonderment, or it could be because the one who does not wonder lacks
the sensitivity or insightfulness to appreciate that a mystery is involved. A question that has not received much attention from researchers, to my knowledge, is whether a plausible objective basis
for wonderment can be identified. What inspires wonder? Is it possible to specify the conditions under which wonder should be evoked?
These are important questions from an educational point of view. The teaching and learning of mathematics are discussed in later chapters of this book, but it seems appropriate to note here that
wonderment is seldom mentioned in the literature on the teaching and learning of mathematics as a motivating factor in those contexts. This seems to me to indicate missed opportunities to provide
young students with a sense of what a rewarding intellectual adventure the acquisition of mathematical knowledge can be. Whether wonder can be taught is perhaps itself a question for research. My
suspicion is that, like other attitudes or perspectives, its most likely form of transmission is by contagion. Unhappily, there is little reason to doubt that a sense of wonder can be stifled if put
down whenever it spontaneously appears.
☐☐ The Pythagorean Theorem
Every school child who has studied basic algebra or plane geometry has encountered the Pythagorean theorem, according to which the sum of the squares of the lengths of the
two sides forming the right angle of a right triangle equals the square of the length of the side opposite that angle—the hypotenuse. That is, if X and Y are the two sides forming the right angle and
Z is the hypotenuse, $X^2 + Y^2 = Z^2$. Although discovery of this relationship, which must be one of the best known of all mathematical theorems, is credited to the Pythagoreans, the relationship
expressed by it was known by people in other cultures long before the time of Pythagoras. The Pythagoreans may have discovered it independently, however, and, in any case, they found the relationship
to be awe inspiring and wondered much about it. Kepler referred to the Pythagorean theorem as one of “two great treasures” of geometry. (Kepler’s other great treasure is discussed in the next
section.) In a series of articles in the American Mathematical Monthly, B. F. Yancey and J. A. Calderhead (1886, 1887, 1888, 1889) presented 100 proofs of the Pythagorean theorem. More recently,
Elisha Loomis (1968) collected and classified 367 proofs of the theorem. A collection of 79 proofs, some with interactive illustrations, can be seen at http://www.cut-the-knot.org/pythagoras/index.shtml. It is doubtful if any other mathematical theorem has been proved in a greater number of ways than has the theorem of Pythagoras. Few, if any, other theorems have held the attention of so
many for so long. While proving the theorem is not difficult, constructing a proof that differs from all those that have already been produced is a considerable challenge.
Figure 11.2 Geometric demonstration that a square erected on the hypotenuse of a right triangle equals the sum of the squares of the triangle’s other two sides.
A particularly elegant proof, in my view, is represented in Figure 11.2. To prove: that given a right triangle with sides X, Y, and Z, Z the hypotenuse, $X^2 + Y^2 = Z^2$. Draw a square on Z and a
circumscribing square with X and Y representing the corners of one side, as shown in Figure 11.2, left. It should be clear that the triangles around the inner square are all congruent, that is, have
sides X, Y, and Z, as also represented in Figure 11.2, right. Inasmuch as the area of the inscribed square equals the area of the circumscribing square minus the areas of the four triangles, we have
$$Z^2 = (X + Y)^2 - 4\left(\frac{XY}{2}\right) = X^2 + 2XY + Y^2 - 2XY = X^2 + Y^2.$$
A similarly simple and elegant demonstration is shown in Figure 11.3, from which we get the equation
$$Z^2 = (Y - X)^2 + 4\left(\frac{XY}{2}\right) = Y^2 - 2XY + X^2 + 2XY = X^2 + Y^2.$$
Figure 11.3 Another simple geometric demonstration that the square of the hypotenuse of a right triangle equals the sum of the squares of the other two sides.
Figure 11.4 Illustrating with nonsquare rectangles (upper left), trapezoids (upper right), rhombuses (lower left), and semicircles (lower right) that when similar figures are erected on the sides of
a right triangle, the area of the figure on the hypotenuse is equal to the sum of the areas of the figures erected on the other two sides. (The numbers in this figure and the following three are approximate.)
Although the Pythagorean theorem, as represented by the preceding equation, is well known, what is undoubtedly much less well known is that if any similar figures are erected on the three sides of a
right triangle, the combined areas of the two smaller figures will equal the area of the largest one. Figure 11.4 illustrates this with rectangles, trapezoids, rhombuses, and semicircles. Figure
11.5 shows the same relationship with a variety of triangles. It even holds with circles, the diameters of which are equal to (or a fixed multiple of) the sides to which the circles are tangent, as
illustrated in Figure 11.6. These relationships may seem peculiar when first encountered, but they all follow from the simple facts that 1. if $X^2 + Y^2 = Z^2$, then $kX^2 + kY^2 = kZ^2$, where k is any
constant; 2. the area of any regular planar figure, say W, can be expressed as a multiple of the area of a specified square; and
3. if the figure W is erected on each side of the triangle such that the ratio of its area on any given side of the triangle to the area of the square on that side is the same for all three sides, the sum of the areas of W on the sides that form the right angle will be equal to the area of W erected on the hypotenuse.
Figure 11.5 In each of these figures, the sides of the individual triangles are in the ratio of 3:4:5. In the upper left figure, the sides of the center triangle serve as the shorter of the sides forming the right angle of each of the abutting triangles. In the upper right figure, the sides of the center triangle serve as the longer of the sides forming the right angle of each of the abutting triangles. In the bottom figure, the sides of the center triangle serve as the hypotenuse of each of the abutting triangles. In all cases, the area of the largest abutting triangle is equal to the sum of the areas of the other abutting triangles.
In the illustrations in Figure 11.4, the values of k (the ratios of the
areas of the figures to the areas of squares erected on the same sides) are 0.5, 0.611 (approximately), 0.393 (approximately), and 0.7 for the rectangles, trapezoids, semicircles, and rhombuses,
respectively. For the triangles in Figure 11.5, the values of k, clockwise from the upper left, are 0.667 (approximately), 0.375, and 0.24. For the circles in Figure 11.6, k = π/4 = 0.785 (approximately).
Figure 11.6 The diameter of each of the abutting circles is equal to the length of the side of the triangle that it abuts. The area of the largest circle is equal to the sum of the areas of the other two circles (area numbers are approximate).
In these
illustrations, k < 1 in all cases, but that is not essential. If any regular polygon of more than four sides is erected on each side of a right triangle, using the side of the triangle as one of the
sides of the polygon, the area of the polygon erected on the hypotenuse will equal the sum of the areas of those erected on the other sides. The values of k in each case will be greater than 1; for
pentagons (see Figure 11.7) it will be 1.72; for hexagons, 2.60; for heptagons, 3.63 (all approximately); and so on. For convenience I have used 3-4-5 triangles in all of the
illustrations, but what is illustrated holds for all right triangles. Also, the figures in the illustrations are all simple geometric shapes, but this too is unnecessary. Imagine the outline of a
bust of Pythagoras being drawn on the hypotenuse of a right triangle, and suppose that the ratio of the area enclosed by this figure to the area of a square erected on the hypotenuse to be k. If
similar busts of Pythagoras are drawn on the other sides of the triangle such that the ratio of the areas enclosed to the areas of squares drawn on those sides is also k, then the sum of the areas
within the bust outlines on the smaller sides will equal the area within the bust outline on the hypotenuse.
Figure 11.7 Illustrating that the area of a regular pentagon erected on the hypotenuse of a right triangle is equal to the sum of the areas of regular pentagons erected on the other two sides. The ratio of the areas of the pentagons to the areas of squares erected on the sides of the same triangle is approximately 1.72.
One is led to wonder if the relationship represented by the Pythagorean theorem would be seen in any different light if the original discovery had been that the area of an arbitrary figure erected on the hypotenuse of a right triangle would equal the sum of the areas of similar figures erected on the other two sides. Would realization of the generality of this
relationship have astonished Kepler, or made the relationship with squares mundane?
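The generalization itself is easy to test numerically. In the illustrative Python sketch below, each similar figure erected on a side of a 3-4-5 right triangle is assigned area k times the square of that side, with k = π/8 for semicircles and k = n/(4 tan(π/n)) for regular n-gons; in every case the area on the hypotenuse equals the sum of the areas on the legs.

import math

# Similar figures erected on the sides of a right triangle: the figure on the
# hypotenuse has the same area as the two figures on the legs combined.
def area(k, side):
    return k * side * side              # area of a figure similar across all three sides

legs, hyp = (3.0, 4.0), 5.0
shapes = {
    "semicircle": math.pi / 8,                      # k for a semicircle on a side
    "pentagon": 5 / (4 * math.tan(math.pi / 5)),    # approx. 1.72, as noted above
    "hexagon": 6 / (4 * math.tan(math.pi / 6)),     # approx. 2.60
    "heptagon": 7 / (4 * math.tan(math.pi / 7)),    # approx. 3.63
}
for name, k in shapes.items():
    print(name, sum(area(k, s) for s in legs), area(k, hyp))   # the two values agree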
☐☐ The Golden Ratio
The second great treasure of geometry, in Kepler’s view, was what Euclid had referred to as the “division of a line into extreme and mean ratio”; it is known by various names,
including golden ratio, golden section, and divine proportion. Consider a line segment of unit length. There is a unique point such that if the segment is divided at that point, the ratio of the
larger segment—call it x—to the smaller is the same as the ratio of the whole segment to the larger segment, which is to say $\frac{x}{1-x} = \frac{1}{x}$. The value of this point, x, is approximately 0.618033988…. For
this (and only this) value of x, not only is the ratio of the larger segment to its complement (the smaller segment) the same as the ratio of the whole to the larger segment, as the preceding
equation indicates, but, as is easily verified, the ratio of the smaller segment to the larger is equal to the larger,
$$\frac{1-x}{x} = x,$$
the value of the larger segment plus 1 is equal to the reciprocal of the larger segment,
$$x + 1 = \frac{1}{x},$$
and the square of the larger segment is equal to its complement,
$$x^2 = 1 - x.$$
The ratio of the larger segment to the smaller, $\frac{x}{1-x}$, is conventionally represented by φ, has the value $\frac{\sqrt{5}+1}{2}$, or approximately 1.618, and is referred to as the golden ratio, golden section, golden number, golden mean, divine proportion, or, in Euclid’s terms, extreme and mean ratio. Golden ratio is a
misnomer, in a way, inasmuch as the ratio in this case is an irrational number, the value of which has been worked out to thousands of decimal places. Here 1.618 will suffice for our purposes. The
ratio has a unique relationship, one might say, with itself; add 1 to it and you get its square, $\varphi + 1 = \varphi^2$, or $\varphi = \varphi^2 - 1$; subtract 1 from it and you get its reciprocal, $\varphi - 1 = \frac{1}{\varphi}$, or $\varphi = 1 + \frac{1}{\varphi}$,
and so on. These relationships provide a basis for some remarkable equations for the value of φ. The following examples are provided by Livio (2002). Consider
$$y = \sqrt{1 + \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}}.$$
Inasmuch as the square of the right-hand term is $1 + \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}$, or 1 + y, we have $y = y^2 - 1$. So the value of the golden ratio is the limit of this, some might say elegant, expression composed entirely of 1s:
$$\varphi = \sqrt{1 + \sqrt{1 + \sqrt{1 + \sqrt{1 + \cdots}}}}.$$
Or consider the continuing fraction
$$y = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}}.$$
Inasmuch as the denominator of the second term on the right-hand side is y, we have $y = 1 + \frac{1}{y}$. So another elegant expression for φ, composed of nothing but 1s, is
$$\varphi = 1 + \cfrac{1}{1 + \cfrac{1}{1 + \cfrac{1}{1 + \cdots}}}.$$
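Both expressions converge quickly, as a few iterations make plain; the Python sketch below is illustrative only, and the starting value of 1 is an arbitrary choice.

import math

phi = (1 + math.sqrt(5)) / 2            # the golden ratio, for comparison

radical, fraction = 1.0, 1.0
for _ in range(40):
    radical = math.sqrt(1 + radical)    # one more layer of the nested radical
    fraction = 1 + 1 / fraction         # one more layer of the continued fraction

print(radical, fraction, phi)           # all three agree to machine precision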
The reader who finds such relationships fascinating will get much pleasure from Livio’s exploration of this most unusual number. To say that many people have found the golden ratio to be intriguing
is an understatement. It has fascinated mathematicians at least since the days of the classical Greeks. It has been the subject of numerous books over the ages (the Italian mathematician Luca Pacioli
published a book about it in 1509 that contains drawings by Leonardo da Vinci), and it continues to be a popular topic for writers on mathematics (e.g., Dunlap, 1997; Herz-Fischler, 1998; Huntley,
1970; Livio, 2002; Runion, 1990). Livio (2002) surmises that the golden ratio has probably inspired
thinkers across disciplines more than any other number in the history of mathematics. Basic to many of the fascinating properties of the golden ratio is that the line-sectioning process is
self-perpetuating. If we take the longer of the two segments into which the original line segment was divided, which is to say the section of length x, and mark on it a point at a distance equal to
the length of the smaller of the segments (i.e., 1 – x) from one end, we will find that a division of the section of length x at this point will yield subsections with the same properties as those of
the original line segment. The process can be continued indefinitely, and each sectioning will produce two segments such that the ratio of the shorter to the longer will be the same as the ratio of
the longer to the whole. A similar process of successive subdivisions can be applied in two dimensions. If we begin by drawing a rectangle, the longer and shorter sides of which are in the ratio 1.618
to 1, and then divide this figure into a square and a smaller rectangle by drawing a line segment 0.618 units from one of the shorter sides and parallel to it, we will find that the longer and
shorter sides of the resulting smaller rectangle again have the ratio 1.618 to 1, or, if you prefer, 1 to 0.618. If we repeat this operation several times, each time dividing the small rectangle that
resulted from the previous division into a square and a residual rectangle, the ratio of the longer and shorter sides of the resulting rectangles will remain constant at 1.618 to 1 (see Figure 11.8).
Figure 11.8 Showing the division of the golden rectangle within the golden rectangle into another square and smaller golden rectangle, and the result of repeating the process a few times.
Given x = 0.618…, that is, 1/φ, the sequence $x^0, x^1, x^2, x^3, \ldots, x^{n-1}$ has the property that every number except the first two is the difference between the two preceding numbers. Similarly, the sequence $1/x^0, 1/x^1, 1/x^2, 1/x^3, \ldots, 1/x^{n-1}$ has the property that every number except the first two is the sum of the two preceding numbers. This is
the defining property of the famous Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, …, $x_n$, …, where $x_n = x_{n-1} + x_{n-2}$. The limit of the ratio of two adjacent numbers in this series as the length of the series increases, $\lim_{n \to \infty} x_n/x_{n-1}$, is the golden ratio, φ. What is perhaps more surprising is that this ratio is the limit of a sequence in which each number except the first two is the sum of
the preceding two numbers, no matter what numbers are used to start the sequence. Suppose one starts, say, with the two numbers 17 and 342. This beginning produces the sequence 17, 342, 359, 701,
1,060, 1,761, 2,821, 4,582, 7,403, 11,985, …. The sequence of (approximate) ratios of each number with its immediate predecessor, beginning with 342/17, is 20.118, 1.050, 1.953, 1.512, 1.661, 1.602,
1.624, 1.616, 1.619. Successive ratios alternate between being larger and smaller than φ, and the magnitude of the difference gets progressively smaller. The Fibonacci sequence is a very interesting
phenomenon in its own right, and pops up unexpectedly in a great variety of contexts both in mathematics and in the physical (and biological) world (Garland, 2000; Livio, 2002; Stevens, 1974). The
sequence is generated by adding numbers in Pascal’s triangle (the coefficients of the expansion of the binomial $(a + b)^n$) in a certain order (Pappas, 1989, p. 41). Also, the sum of the squares of two
successive Fibonacci numbers is always another Fibonacci number. The first few cases are shown in Table 11.1. The general formula is $(f_{n+1})^2 + (f_{n+2})^2 = f_{2n+3}$, where n is the ordinal position of the number in the conventional Fibonacci series, 1, 1, 2, 3, 5, 8, 13, ….

Table 11.1. Illustrating That the Sum of the Squares of Successive Fibonacci Numbers Is Another Fibonacci Number
Numbers    Sum of Squares
1, 2       5
2, 3       13
3, 5       34
5, 8       89
8, 13      233

Figure 11.9 Showing the many instances of φ in a star inscribed in a pentagon.

An aspect of the golden ratio that fascinated the ancient Greeks was its appearance in certain geometric patterns. If a five-pointed star is created by joining
alternate vertices of a regular pentagon, as shown in Figure 11.9, the golden ratio is seen practically everywhere one looks. Considering only the area defined by the triangle ABC, the ratios AC/Ab,
Ab/Aa, Aa/ab, AC/AB, and AB/Aa all are φ. By symmetry the same ratio is seen repeatedly also in triangles BCD, CDE, DEA, and EAB. Note that within the star that sits in the pentagon there exists
another pentagon, abcde, within which a star can be drawn by connecting its vertices. That star, too, has a pentagon within it, within which a still smaller star can be drawn, and so on ad infinitum.
At each level, there occurs the same abundance of instances of φ as at the top level. Figure 11.10 shows the results of carrying the process to three levels. Much has been written about the frequent
appearance of dimensions of the golden ratio in ancient architectural structures and in classical paintings (Bergamini, 1963; Bouleau, 1963; Ghyka, 1927/1946). The idea is that many architects and
artists constructed or painted structures the dimensions of which were related by the golden ratio, either because they intentionally made use of the ratio in proportioning, or because what they
found to be pleasing to the eye turned out to have these proportions. The celebrated Swiss-French architect Charles-Édouard Jeanneret-Gris (aka Le Corbusier) based a theory of architecture on this ratio.
Figure 11.10 Nested stars and pentagons.
Numerous masterpieces have been analyzed with the hope of finding evidence of this notion. Livio (2002) contends that much of the evidence of the widespread influence of the golden ratio in
architecture and painting is weak. Some exceptions involve a few relatively recent artists who have explicitly made use of the ratio in some abstract paintings. Livio also recounts numerous efforts
to find evidence of the influence of the golden ratio in music and poetry and again concludes that much (though not all) of what has been seen has really been in the eye of the observer. Among the
earliest experiments performed by psychologists, notably German pioneer psychophysicist Gustav Fechner, were some designed to explore whether rectangles with sides in the golden ratio are
esthetically distinct and preferred to those with other length-width ratios. Much of this work was flawed by a failure to consider the role that procedural artifacts could play in determining the
experimental results. The judgmental context in which preferences were stated (e.g., the range of lengths and widths used and where the “golden” rectangle sat within that range) could affect the
choices made, for example; and the analysis of aggregate data could lead to conclusions of a preference for golden rectangles, even if none of the individuals comprising the group showed such a
preference (Godkewitsch, 1974). This work did not resolve the question one way or the other, but simply lost the attention of experimenters, who moved on to other things. Hardly less interesting than
the golden rectangle, though much less often discussed, is the golden triangle—a right triangle with sides in the
ratio 1:2:√5. It is formed by dropping a line segment from an upper vertex of a 2 × 2 square to the center of the base, as shown in Figure 11.11.
Figure 11.11 The golden triangle.
If a sector of a circle with radius √5 is formed from
the apex of the triangle to an extension of the square’s base, the rectangle that is formed by extending the top of the square to match the extension of the base and connecting the two extensions, as
shown in Figure 11.12, is the golden rectangle. The rectangle to the left of the square is itself a golden rectangle, which can be divided into a square and a still smaller rectangle, as shown in
Figure 11.13. The hypotenuse of the golden triangle constructed in the small square is the radius of a circle that is tangent to one side of the smaller golden rectangle. The smallest rectangle in
this figure is also a golden rectangle and can be divided into a square and smaller rectangle still, and so on indefinitely, producing an endless series of golden rectangles.
Figure 11.12 Illustrating the construction of a golden rectangle from a square and an enclosed golden triangle.
Figure 11.13 Showing the division of the golden rectangle within the golden rectangle into another square and smaller golden rectangle by use of the hypotenuse of a golden triangle.
Despite the attention that the golden ratio has received through the ages, it has not lost its charm or its ability to stimulate whimsical speculations about the numerical nature of reality. Without denying the possibility that unchecked, Pythagorean speculation can lead to nowhere, or worse, I confess to not being able to understand how anyone who
stumbles onto this number can fail to wonder about it. Its geometric properties strike me as beautiful and fascinating; its penchant for appearing in unpredictable abstract places (the Fibonacci
series being perhaps the best known among these) and its commonality (especially via the Fibonacci series) in nature are intriguing. (For an account of why the Fibonacci series and the golden ratio
are encountered so frequently in botany, see Stewart, 1995b.)
☐☐ A Surprising Connection
There is nothing obvious about either the Pythagorean theorem or the Fibonacci series that would lead one to believe that one is related to the other. However, in 1948
Charles W. Raine pointed out that if for any four consecutive Fibonacci numbers the product of the outer two of these numbers and twice the product of the inner two are taken as two legs of a right
triangle, the hypotenuse of that triangle is also a Fibonacci number. Consider, for example, the four successive Fibonacci numbers 5, 8, 13, 21. According to Raine’s rule, the lengths
of the two legs of the triangle are 5 × 21 = 105 and 2 × 8 × 13 = 208, and the hypotenuse is $\sqrt{105^2 + 208^2} = 233$, a Fibonacci number. The general expression for the relationship is $(a_n a_{n+3})^2 + (2a_{n+1}a_{n+2})^2 = (a_{2n+3})^2$, where $a_n$ represents the nth number in the Fibonacci sequence. [This assumes determining n by counting from the first 1 in the series; if the first number in the series is considered to be 0, the correct formula is $(a_n a_{n+3})^2 + (2a_{n+1}a_{n+2})^2 = (a_{2n+2})^2$.] As already noted, the ratio of the nth to the (n – 1)th Fibonacci number approaches the golden ratio with
increasing n. In a discussion of
Raine’s discovery, Boulger (1989) points out that as one goes farther in the Fibonacci sequence, forming triangles from four consecutive terms as described in the preceding paragraph, the golden
ratio appears as the ratio of the sum of the shorter leg and the hypotenuse to the longer leg,
$$\lim_{n \to \infty} \frac{\text{shorter leg} + \text{hypotenuse}}{\text{longer leg}} = \varphi.$$
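Both Raine’s rule and Boulger’s limit are easy to confirm with a short computation; the Python sketch below is illustrative only and indexes the sequence from its first 1, as in the bracketed note above.

import math

# Raine's right triangles from four consecutive Fibonacci numbers, and Boulger's
# observation that (shorter leg + hypotenuse) / longer leg approaches phi.
fib = [1, 1]
while len(fib) < 40:
    fib.append(fib[-1] + fib[-2])
phi = (1 + math.sqrt(5)) / 2

for n in range(1, 12):
    a, b, c, d = fib[n - 1:n + 3]       # four consecutive Fibonacci numbers
    leg1, leg2 = a * d, 2 * b * c
    hyp = math.isqrt(leg1 ** 2 + leg2 ** 2)
    assert hyp ** 2 == leg1 ** 2 + leg2 ** 2    # a genuine Pythagorean triple
    assert hyp in fib                           # the hypotenuse is a Fibonacci number
    ratio = (min(leg1, leg2) + hyp) / max(leg1, leg2)
    print(a, b, c, d, "->", leg1, leg2, hyp, round(ratio, 6), round(phi, 6))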
☐☐ Recreational Mathematics
Playing around with numbers has been a pastime for the past 3000 years and perhaps for much longer. (Flegg, 1983, p. 225)
To many people, the idea that mathematics can be fun would be curious if not incomprehensible. Such book titles as Amusements in Mathematics (Dudeney, 1958), The Pleasures of Math (Goodman, 1965),
Recreations in the Theory of Numbers (Beiler, 1966), The Joy of Mathematics (Pappas, 1989), and Trigonometric Delights (Maor, 1998) would perhaps sound oxymoronic. But unquestionably some people
derive great pleasure from mathematics— from discovering or simply contemplating mathematical relationships, from solving mathematical problems, from creating or experiencing mathematical patterns.
What else but the unadulterated pleasure of solving problems or discovering relationships could motivate concerted efforts to find ever larger perfect numbers (numbers that equal the sum of their
proper divisors; a proper divisor of n being any divisor of n, including 1, other than n itself) or amicable or friendly pairs of numbers (numbers each of which is equal to the sum of the proper
divisors of the other). The first few perfect numbers—6, 28, 496, 8128—were probably known to the ancients. Others have been discovered gradually over the centuries. The search has been aided greatly
by Euclid’s proof that if, for p > 1, 2^p – 1 is a prime number (a Mersenne prime, named for Marin Mersenne, the French theologian-mathematician who later studied primes that are one less than a power of 2), then 2^(p–1)(2^p – 1) is a perfect number. Before the middle of the 20th century, the largest known prime was 2^127 – 1, a 39-digit number, which, according to Friend (1954), French mathematician Édouard Lucas spent 19 years checking; the perfect number that can be calculated from this prime, 2^126(2^127 – 1), is 77 digits long. (Kasner and Newman [1940/1956] note that there is reason to
believe that some 17th-century mathematicians had a way of recognizing primes that is not known to us. As evidence of this possibility they cite an occasion on which Fermat, when asked whether
100,895,598,169 was prime, was able to say straightaway that it was the product of two primes, 898,423 and 112,303.)
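Euclid’s recipe is easy to run for small exponents. The Python sketch below is illustrative only; it uses a deliberately naive primality test and divisor sum, adequate for small numbers, and recovers the first four perfect numbers mentioned above.

# If 2**p - 1 is prime (a Mersenne prime), then 2**(p - 1) * (2**p - 1) is perfect.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

for p in range(2, 8):
    mersenne = 2 ** p - 1
    if is_prime(mersenne):
        candidate = 2 ** (p - 1) * mersenne
        print(p, mersenne, candidate, proper_divisor_sum(candidate) == candidate)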
The appearance of high-speed digital computers on the scene accelerated the search for Mersenne primes and perfect numbers. The Great Internet Mersenne Prime Search (GIMPS) is an organized approach
that taps the collective power of thousands of small computers to find ever larger Mersenne primes. As of October 2008, the largest known prime, found as a consequence of this effort, was 2^43,112,609 – 1. This means that the largest known perfect number as of the same date was 2^43,112,608(2^43,112,609 – 1), which would require almost 26 million digits to write out. All known perfect numbers are
even, but the nonexistence of odd ones has not been proved. A curious consequence of the search for perfect numbers is the finding of many numbers that are almost perfect in that their divisors add
to one less than the number itself, and the failure to find any number that is almost perfect by virtue of its divisors adding to one more than itself, despite the lack of a proof of the nonexistence
of the latter type of near miss. Although the concept of amicable numbers was known to the ancient Greeks, they were perhaps aware of the existence of only the single pair 220 and 284. Fermat found a
second pair—17,296 and 18,416—in 1636, and Descartes a third—9,363,584 and 9,437,056—two years later. Then, about the middle of the 18th century, Euler discovered a technique for generating such
numbers and added 58 new pairs to the existing very short list (Dunham, 1991). A 20th-century extension of the idea of amicable numbers is that of sociable numbers, defined as three or more numbers
that, when treated as a loop, have the property that each number is the sum of the divisors of the preceding number. Singh (1996) gives, as an example of such a loop, 12,496; 14,288; 15,472; 14,536;
14,264. Each number is the sum of the divisors of the preceding number, and treating the set as a loop, the number that precedes 12,496 is 14,264. Fascination with mathematical puzzles is as old as
mathematics, and, as evidenced by the steady stream of books on the subject, it appears not to have abated over time. Claude Gaspar Bachet, a French nobleman and writer of books on mathematical
puzzles, published in 1612 Problèmes plaisants et délectables qui se font par les nombres, which presented most of the more famous numerical or arithmetic puzzles known at the time. At least five
subsequent, and greatly enlarged, editions of the book were published, one as late as the mid 20th century. Although the number of different puzzles that have been published is surely very large,
many of the puzzles are variations on a few generic themes or scenarios, including weighing, river crossing, liquid pouring, dice tossing, and drawing colored balls from an urn. One might be tempted
to assume that puzzles are given little attention by serious mathematicians, but such an assumption would
be wrong. Kasner and Newman (1940/1956) mention Kepler, Pascal, Fermat, Leibniz, Euler, Lagrange, Hamilton, and Cayley as among the many major mathematicians who have devoted themselves to puzzles.
“Researches in recreational mathematics sprang from the same desire to know, were guided by the same principles, and required the exercise of the same faculties as the researches leading to the most
profound discoveries in mathematics and mathematical physics. Accordingly, no branch of intellectual activity is a more appropriate subject for discussion than puzzles and paradoxes” (p. 2416). Magic
squares—square tables of numbers in which all rows, columns, and diagonals add to the same sum—go back to antiquity. Flegg (1983) gives as the oldest known magic square one that appears in a Chinese
work on permutations that may have been written in the 12th century BC, but notes that, according to tradition, the idea of the magic square goes back to around 2200 BC. At one time, magic squares
were believed by many people really to have magical powers; charms on which they were engraved were worn for protection against disease or other types of adversity. Flegg (p. 236) describes an
interesting method of constructing magic squares of various orders, based on the way a knight moves on a chessboard. Ellis (1978) also gives rules for constructing magic squares of different order
and shows a 16 × 16 square, holding the numbers 1 through 256, reproduced here as Table 11.2, that he describes as “one of the most ingenious ever devised.” The numbers in every row and every column
add up to 2,056, as do those in every 4 × 4 block anywhere in the square; this includes overlapping blocks and blocks that wrap around either side-to-side or top-to-bottom. That is, a 4 × 4 block can
be composed from cells in the right-most columns followed by cells in corresponding rows in the left-most columns, or from cells in the bottom rows followed by cells in the corresponding columns in
the top rows. Every half column and every half row adds to 1,028, and every 2 × 4 and 4 × 2 block does so as well, again even when the square is treated as a wraparound device laterally and
vertically. Ellis also points out regularities involving chevron patterns in the square. There may be a way of conceptualizing the production of such a square with the numbers 1 through 256 that
makes it unremarkable, but, if so, I am unaware of it; it strikes me as a truly astounding accomplishment.
Table 11.2. 16 × 16 Magic Square
Source: From Ellis (1978, p. 140).
Sudoku is a puzzle with some similarities to magic squares that has become very popular, first in Japan and then elsewhere, including the United States, over the last 20 years or so. In the standard form, Sudoku is a 9 × 9 grid and the objective is to fill in the rows and columns in such a way that each of the numbers 1 through 9 appears once and only once in each row, in each column, and in each of the nine nonoverlapping 3 × 3 subgrids. The grid presented to the puzzle-doer has a few of the cells already filled in. Although no mathematics is required to solve Sudoku puzzles—only logic is required—there are some interesting mathematical questions associated with the game. One question, not
easy to answer, is how many ways can a Sudoku grid be filled in according to the rules of the game. Delahaye (2006) gives a number of about 6.7 × 10^21 as an estimate produced with the help of a
computer; he identifies a variety of strategies that people use to solve Sudoku puzzles and describes several variations on the Sudoku theme. Mathematics that has been engaged strictly for its own
sake—for fun, one may say—has frequently led to serious mathematical inquiry and notable discoveries with important practical applications. Kasner and Newman (1940) include the theory of equations,
probability theory, calculus, the theory of point sets, and topology among the areas of mathematics that have grown out of problems first expressed in puzzle form.
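For the reader who wants to experiment with the purely logical character of Sudoku, a minimal backtracking solver takes only a few lines. The Python sketch below is a generic textbook approach, not any particular published algorithm; run on an empty grid, it simply produces one of the roughly 6.7 × 10^21 legal completions mentioned earlier.

# A minimal backtracking Sudoku solver: 0 marks an empty cell.
def allowed(grid, r, c, v):
    if any(grid[r][j] == v for j in range(9)):      # value already in the row?
        return False
    if any(grid[i][c] == v for i in range(9)):      # already in the column?
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)             # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if allowed(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0              # undo and try the next digit
                return False                        # nothing fits here: backtrack
    return True                                     # no empty cells remain

grid = [[0] * 9 for _ in range(9)]                  # an empty grid; any puzzle works
solve(grid)
for row in grid:
    print(row)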
However, that playing with mathematics could lead to such discoveries and applications is not necessary to justify the playing; the doing of mathematics is seen by many people—professionals and
amateurs alike—as its own reward. Something of the pleasure that playing with mathematics can provide has been described by many writers, among them Kraitchik (1942), Friend (1954), Gardiner (1956,
1959, 1961), Dudeney (1958), Beiler (1964), Rouse Ball and Coxeter (1987), Pappas (1989, 1993), and Burger and Starbird (2005).
☐☐ Surprised by Simple Elegancies: Confessions of a Nonmathematician
Few of us will ever scale the Himalayas of mathematics but we can all enjoy a stroll in the foothills where the dipping, twisting
pathways of our basic number system lead to a bonanza of unexpected delights. (Ellis, 1978, p. 122)
The Pythagorean theorem has been mentioned several times in this book. That the square of the length of the hypotenuse of a right triangle is equal to the sum of the squares of the lengths of the
other two sides is a useful bit of knowledge. The relationship appears also to be intrinsically fascinating to many people; how else are we to account for the extraordinarily large number of proofs
of it that have been produced? One possible basis for the fascination is the surprising nature of the relationship. As Dunham (1991) puts it, “There is no intuitive reason that right triangles should
have such an intimate connection to the sums of squares … the Pythagorean theorem establishes a supremely odd fact, one whose oddness is unrecognized only because the result is so famous” (p. 53).
Dunham quotes Richard Trudeau, who observes in his book The Non-Euclidean Revolution, “When the pall of familiarity lifts, as it occasionally does, and I see the Theorem of Pythagoras afresh, I am
flabbergasted” (p. 53). Another formula that is familiar to everyone who has had a high school algebra course, and one that we accept without surprise only because of its familiarity, is that which
expresses the area of a circle as a function of its radius, $A = \pi r^2$. That π expresses the ratio of a circle’s circumference to its diameter is unlikely to evoke wonder by anyone who knows anything about mathematics, because π is defined as that ratio. But how about the fact that the area of a circle is obtained by multiplying by π the area of a square whose side is equal to the circle’s
radius? This is not a matter of definition. Why should the constant that represents the ratio of a circle’s circumference to its diameter turn out to represent also the ratio of the circle’s area to
a square erected on
its radius? Archimedes constructed a proof that the area of a circle, Acir, is equal to the area of a triangle, Atri, whose base and height are equal, respectively, to the circle’s circumference, c,
and radius, r, that is, $A_{cir} = A_{tri} = \frac{1}{2}cr = \pi r^2$, and also described a geometric procedure for approximating the value of π, which bracketed the value between 3 1/7 and 3 10/71. So the relationship is established, but one is still left with a sense of wonder as to why it is what it is. One wonders too about the extent to which knowledge of this relationship may have encouraged the numerous
people who have tried over the years to “square the circle” to believe that it should be doable. The curve that describes the trajectory of a point on the edge of a disc as the disc is rolled along a
straight line is known as a cycloid. The curve that describes the trajectory of fastest frictionless descent of an object from one point, A, to a lower one, B, is called a brachistochrone. What these
two curves have to do with each other is anything but clear. The brachistochrone has to do with the time required to get from one point to another. The phenomenon the cycloid represents has nothing
to do with time; it describes the path taken by a point on a rolling disc independently of how fast the disc is rolled. Nevertheless, the cycloid and the brachistochrone are, in fact, the same curve.
One would like to be able to explain coincidences like these in terms of some overarching theory from which the relationship in both contexts can be deduced. The situation is analogous to that faced
by physicists attempting to make sense of the fact that gravitational mass (which involves a relationship of attraction between two bodies possibly separated by a great distance) and inertial mass
(which involves acceleration of a body from the effect of force acting on it) should have the same value. Einstein, who called this coincidence astonishing, eventually deduced the equivalence
principle from the non-Euclidean geometry of space assumed by his general theory of relativity. There are many coincidences in mathematics that await comparable enlightening. I am not a
mathematician, but even with my very limited mathematical knowledge, I find much to wonder about in the elegant mathematical relationships that one sees on every hand. Sometimes it is hard to tell
whether a relationship is worthy of wonderment or not. Consider, for example, the way in which equal-radius circles pack. If I draw one such circle, and then surround it with as many abutting circles
as I can, I find that six fit precisely. Having placed five, there is exactly room enough for the sixth. Similarly, in three-dimensional space, a sphere can be abutted by 12 other spheres of the same
radius, and the fit is precise. Is this a mundane fact or an interesting one? I confess to being surprised by it and provoked to wonder why it should be so. Here I want to focus on some very simple
mathematical relationships that evoke wonder just because they are simple. One would not expect
them, I think, to be so, and one is surprised to discover that they are.
Figure 11.14 A square within a circle within a square.
Consider the situations shown in Figures 11.14 and 11.15. In the first case, a square circumscribes a circle, which
circumscribes a square. In the second case, the roles of the square and circle are reversed. It is easy to show that, in both cases, the ratio of the areas of the outer and inner figures is precisely
2 to 1—not 1.98 to 1 or 2.02 to 1, but 2 to 1 exactly. Is this an interesting fact? Why should the ratios be integral? What does a circle have to do with a square that dictates such an elegantly
simple relationship? That the area of the inner square is half that of the outer square is readily seen by rotating the inner square 45 degrees and sectioning it with diameters of the circle as shown
in Figure 11.16. It should be clear that each edge of the inner square divides each quadrant of the outer square into two areas of equal size. That the same relationship holds when the roles of
squares and circles are reversed is shown in Figure 11.17. Let R represent the radius of the outer circle and r the radius of the inner one. If r is set to 1, then, by the Pythagorean theorem, R = √2,
from which it follows that the area of the inner circle is π and that of the outer one is 2π.
Figure 11.15 A circle within a square within a circle.
Figure 11.16 Showing that the area of the inner square is 1/2 the area of the outer square.
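Both 2-to-1 ratios follow directly from the radii involved, and a few lines of Python confirm them numerically; the sketch below is illustrative only, with r the radius of the circle in the first nesting and of the inner circle in the second.

import math

r = 1.0
# Square circumscribing a circle (radius r) circumscribing a square:
outer_square = (2 * r) ** 2                      # side 2r
inner_square = (r * math.sqrt(2)) ** 2           # diagonal 2r, so side r*sqrt(2)
print(outer_square / inner_square)               # 2.0 (up to rounding)

# Circle circumscribing a square circumscribing a circle (radius r):
inner_circle = math.pi * r ** 2
outer_circle = math.pi * (r * math.sqrt(2)) ** 2 # radius = half the square's diagonal
print(outer_circle / inner_circle)               # 2.0 (up to rounding)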
Figure 11.18 shows the comparable situations with the squares replaced by equilateral triangles. In the first instance, an equilateral triangle circumscribes a circle, which circumscribes an
equilateral triangle. In the second, the roles of triangle and circle are reversed. As the reader may wish to verify, in each case the ratio of the area of the outer and inner figures is precisely 4
to 1. Suppose I draw a right triangle with sides 3, 4, and 5 and an inscribed circle (see Figure 11.19). Is it not interesting that the circle has a radius of 1? Inasmuch as the circle’s diameter is
2, the first five integers are represented in the basic measures of this figure. Six also is there as the area of the triangle, and with only a small stretch, one can see 7, 8, and 9 as the sums of
the three possible pairings of the triangle’s sides: 3 + 4, 3 + 5, and 4 + 5. π is there, too; do you see it? (If not, see the end of this chapter.) I find these relationships surprising, although am
not entirely sure that I should. What I find surprising about them is the integral relationships.
Figure 11.17 Showing that the area of the inner circle is 1/2 the area of the outer circle.
Figure 11.18 A triangle within a circle within a triangle, and a circle within a triangle within a circle. In each case the area of the innermost figure is 1/4 the area of the outer one.
Figure 11.19 The radius and diameter of the circle are, respectively, 1 and 2; the sides of the right triangle are 3, 4, and 5.
If one inscribes an equilateral triangle within a circle, a circle within that triangle, a square within that circle, a circle within the square, a pentagon within that circle, and continues in this fashion, inscribing in each successively smaller circle a polygon with one
more side than the previous one, the radius of the circle one obtains in the limit is approximately 1/12 of the radius of the original circle (Kasner & Newman, 1940, p. 311). The important word here
is approximately. This is what one would expect, in my view, when one inscribes regular geometric figures within other geometric figures; so I am surprised to find the area of a square inscribed in a
circle that is inscribed within a square to be precisely 1/2 the area of the square within which the circle is inscribed. Perhaps I am missing something that, if I could see it, would make the
mundaneness of this and similar relationships obvious. But, in the absence of that insight, I am awestruck by their simplicity and elegance. Certainly such relationships are well known among
mathematicians; nevertheless, I confess to a sense of joy in discovering them, as a nonmathematician, for myself. I have no expectations of ever discovering anything in mathematics that is not
already well known, but this does not preclude my experiencing a feeling of delight upon occasionally discovering a relationship that is new to me. My experience leads me to give some credence to a
claim made by Kaplan and Kaplan (2003): “Anyone who can read and speak (which are awesomely abstract undertakings) can come to delight in the works of mathematical art, which are among our kind’s
greatest glories” (p. 2). Of course, upon becoming aware of these relationships, a real mathematician would want to know whether they generalize. For example, do the integral relationships between
the areas of inscribed and circumscribed circles and triangles and squares generalize to all regular polygons? Alas, the answer is no. The area of a circle circumscribed around a regular pentagon is
approximately 1.53 times the size of a circle inscribed in the same pentagon. The same ratio holds between the area of a pentagon enclosing a circle and that of a pentagon within the same circle. Nor do
the integral relationships between the areas of inscribed and circumscribed circles and triangles and squares generalize to forms of higher dimension. The volume of a cube that circumscribes a sphere
of radius r is $V_{CC} = (2r)^3$, while the volume of a cube inscribed in a sphere of radius r is
$$V_{IC} = \left(\frac{2r}{\sqrt{3}}\right)^3.$$
So the ratio of the volume of a circumscribing cube to that of an inscribed cube is
$$\frac{V_{CC}}{V_{IC}} = \frac{(2r)^3}{\left(\frac{2r}{\sqrt{3}}\right)^3} = 3\sqrt{3}.$$
Given a sphere of radius 1, the volumes of the circumscribing and inscribed cubes are, respectively, 8 and $\frac{8}{3\sqrt{3}}$. Similarly, if we start with a sphere of radius r, inscribe within it a cube (a side of which would measure $\frac{2r}{\sqrt{3}}$), and then inscribe within that cube another sphere, the inner sphere will have a radius of $\frac{r}{\sqrt{3}}$, so the ratio of the volume of the circumscribing sphere to that of the inscribed sphere will be
$$\frac{V_{CS}}{V_{IS}} = \frac{\frac{4}{3}\pi r^3}{\frac{4}{3}\pi\left(\frac{r}{\sqrt{3}}\right)^3} = 3\sqrt{3}.$$
Again, given a sphere of radius 1 circumscribing a cube, which in turn circumscribes a sphere, the volumes of the circumscribing and inscribed spheres are, respectively, $\frac{4}{3}\pi$ and $\frac{4\pi}{9\sqrt{3}}$. Despite their failure to extend to the areas of all regular polygons or to the volumes of three-dimensional shapes, I still
find the integral relationships involving the areas of circles, triangles, and squares to be surprising and elegant. There are numerous relationships that are surprising, at least to some observers,
because there is nothing that would lead one to expect the ratio between the two measures to be integral. Dunham (1991) notes several of them, all of which were proved by Archimedes. It is almost
enough to make one a numerologist!

****

The examples of elegant or surprising mathematical relationships noted in this chapter are drawn primarily from basic algebra or geometry and are relatively
simple, reflecting my own limited knowledge of mathematics. However, the testimony of mathematicians working in highly abstract and esoteric areas of mathematics makes it clear that there are
elegancies, often surprising, to be found in their regions of operation as well, although appreciation of them may require levels of mathematical sophistication that few people attain. (π in Figure
11.19 is the area of the circle.)
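As an aside, the area relationships discussed above are easy to check numerically. The following is a minimal sketch (Python is used here purely for illustration; it is not part of the text), assuming the standard fact that the circumscribed and inscribed circles of a regular n-sided polygon have radii in the ratio 1 : cos(π/n), so that the ratio of their areas, and likewise the ratio of the areas of the outer and inner polygons sharing a circle, is sec²(π/n).

```python
import math

# Ratio of the areas of the circumscribed and inscribed circles of a regular
# n-gon (equivalently, of the outer and inner n-gons sharing a circle),
# using the standard relation circumradius / inradius = 1 / cos(pi / n).
for n in (3, 4, 5):
    ratio = 1 / math.cos(math.pi / n) ** 2
    print(f"n = {n}: outer area / inner area = {ratio:.3f}")

# n = 3 -> 4.000  (the 1/4 relationships of Figure 11.18)
# n = 4 -> 2.000  (the 1/2 relationship of Figure 11.17)
# n = 5 -> 1.528  (the "approximately 1.53" pentagon ratio noted above)
```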
Chapter 12
The Usefulness of Mathematics
There is one qualitative aspect of reality that sticks out from all others in both profundity and mystery. It is the consistent success of mathematics as a description of the workings of reality and
the ability of the human mind to discover and invent mathematical truths. (Barrow, 1991, p. 173)
To say that mathematics is enormously useful is not to say that all of mathematics is equally useful, or even that all of mathematics is useful at all. To be sure, Florian Cajori (1893/1985) says, in
his classic The History of Mathematics, that “hardly anything ever done in mathematics has proved to be useless” (p. 1). On the other hand, Davis and Hersh (1981) speculate that most of the millions
of theorems contained in the mathematical literature are useless dead ends. And King (1992) makes the startling claim that “an ordinary mathematics research paper is read by almost no one except the
author and the journal ‘referee’ who reviewed it prior to publication” (p. 38). Whatever the merit of any of these claims—it is easy to believe that the last one is true of most disciplines—they are
of little moment to the pure mathematician, for whom neither the usefulness nor the popularity of theorems is the source of motivation for proving them. And practical minded souls who do see the
value of mathematics in its applicability to real-world problems can comfort themselves with the thought that the practical value of that part of mathematics that is useful is sufficiently great to
justify easily the entire enterprise on strictly pragmatic grounds.
Whatever the motivations that drive mathematicians to do what they do, there can be no doubt that mathematics, in the aggregate, has been essential to the development of civilization as we know it.
Stewart (1995b) puts the matter this way: “If mathematics, including everything that rests on it, were somehow suddenly to be withdrawn from our world, human society would collapse in an instant” (p.
28). Even those areas of mathematics that have appeared most arcane or frivolous when first being explored have surprisingly often proved in time to have applications, sometimes in many domains. This
seems like incredibly good fortune, especially in view of the fact that interest in applications is not what drove many of the most productive mathematicians to do their work. If, as we have seen
that many modern mathematicians believe, mathematics can be pursued as an abstract discipline independently of its relationship to the physical world, why do esoteric developments in mathematics that
are pursued for purely theoretical reasons so often turn out to have completely unanticipated practical applications? Indeed, why, if mathematics is an abstract discipline that is independent of the
physical world, should it be useful in describing the world at all? Although the usefulness of mathematics may be most obvious in science and technology, it extends to many other areas as well. In
the ancient world the principal area of applications was probably trade. Over the centuries, mathematics has served, in addition to the purposes of science, technology, business, and commerce, those
of art, war, mysticism, gambling, and religion. In short, while the role of mathematics in science has been well documented, “the Hand Maiden of the Sciences has lived a far more ravish and
interesting life than her historians allow” (Davis & Hersh, 1981, p. 89). Mathematics has played an especially important role in the arts through the centrality of such mathematical concepts as
proportion, order, symmetry, harmony, and rhythm. This holds not only for music and architecture, where the influence may be most apparent, but in painting, sculpture, and poetry as well. Butler
(1970) attributes great importance to Plato, Plotinus, Augustine, and Aquinas as purveyors of numerological ideas that had major influence on the art of the Renaissance. A recent and perhaps
surprising application of mathematics to art is that of fractal geometry to an analysis of the paintings of Jackson Pollock (Taylor, 2002); objectives include those of determining the fractal
dimensionality of his (and other abstract) designs, and of discriminating between genuine Pollocks and imitations. Why mathematics is so useful is a mystery. Why should it be that, as Berlinski
(1997) puts it, “By means of purely mathematical operations on purely mathematical objects—numbers, after all—the mathematician is able to say that out there this will happen or that will” (p. 97)?
Stewart (1990) captures the uncertainty as to why mathematics is useful this
way: “Perhaps mathematics is effective because it represents the underlying language of the human brain. Perhaps the only patterns we can perceive are mathematical because mathematics is the
instrument of our perception. Perhaps mathematics is effective in organizing physical existence because it is inspired by physical existence. Perhaps its success is a cosmic delusion. Perhaps there
are no real patterns, only those that we feeble-mindedly impose” (p. 7). Stewart notes that these are questions for philosophers and contends that “the pragmatic reality is that mathematics is the
most effective and trustworthy method that we know for understanding what we see around us” (p. 7).
Utilitarian Interests

The belief that much of the most creative thinking in mathematics has been motivated by an interest in mathematics per se and not by any relationship it may bear to the
physical world is held by many mathematicians (Davis & Hersh, 1981; King, 1992). King argues, for example, that instances of new mathematics coming out of interest in real-world problems—of which the
invention of the calculus is a prototypical case— are exceptions to the rule. Much more common, he suggests, is the case in which an area of mathematics that was originally developed for purely
theoretical interest turns out to have unexpected practical applications. Even number theory, which is often considered the area of mathematics that is least motivated by possible applications, is
being found useful in unanticipated ways, as, for example, in encryption for secure communications. Moreover, work on number theory arguably had great impact on the development of both pure and
applied mathematics indirectly. Richards (1978) makes the point: A good theorem will almost always have a wide-ranging influence on later mathematics, simply by virtue of the fact that it is true.
Since it is true, it must be true for some reason; and if that reason lies deep, then the uncovering of it will usually require a deeper understanding of neighboring facts and principles. In this way
number theory, “the Queen of Mathematics,” has served as a touchstone against which many of the tools in other branches of mathematics have been tested. This, in fact, is the real way that number
theory influences pure and applied mathematics. (p. 63)
There is also the view, however, that many areas of mathematics have been developed by people who were keenly interested in questions about physical reality, and that even those subjects that are
usually considered pure mathematics were created, in many cases, in the study
of real physical problems (Bell, 1946/1991; Kline, 1980). At least 1,000 years before Pythagoras, the Babylonians were considering questions involving the time required for money to double if
invested at a specified annual rate of interest (Eves, 1964/1983). Trigonometry was created by the Alexandrians, notably the ancient Greek astronomers Hipparchus and Ptolemy, as a tool for enabling
more precise predictions of the movements of the planets and other heavenly bodies. Having been developed primarily to serve the needs of astronomy, this area of mathematics, as in many other cases,
proved to have applications in numerous other areas as well. Ekeland (1993), who sees the development of mathematics as part of the general development of science and technology, attributes the
growth of analysis to the interest of its developers in celestial mechanics and notes that the book in which Gauss established the foundations of geometry was also a treatise on geodesy. “Had
historical circumstances been different, had there been different needs to satisfy, wouldn’t mathematics have been different? If the Earth were the only planet around the Sun and if it had no
satellite, we wouldn’t have spent so many centuries accumulating observations and building systems to explain the strange movements of the planets among the stars, celestial mechanics wouldn’t exist,
and mathematics would be unrecognizable” (p. 55). Others have argued the importance of observation of real-world phenomena as a source of mathematical ideas. British mathematician James Joseph
Sylvester (1869/1956), for example, believed that most, if not all, of the great ideas of modern mathematics have had their origin in observation. Wilder (1952/1956) argues too that, although
theoretically axioms need have no correspondence to anything, they usually are statements about concepts with which those who make them are already familiar. Psychologically, it is hard to imagine
that it could be otherwise. As Wilder puts it, “We may say ‘Let us take as undefined terms aba and daba, and set down some axioms in these and universal logical terms.’ With no concept in mind, it is
difficult to think of anything to say! That is, unless we first give some meanings to ‘aba’ and ‘daba’—that is, introduce some concept to talk about—it is difficult to find anything to say at all”
(p. 1660). Of course, interest in physical reality need not be understood as necessarily quite the same thing as the desire to have a practical impact.
A Fuzzy Distinction

The line between pure and applied mathematics is a difficult one to draw in practice. This is especially so since the theorizing in some areas of science—quantum physics, for
example—has become increasingly abstract and
mathematical, and remote from the world of sense and perceptual experience. Bell (1946/1991) suggests that the line between “reputable and disreputable” (pure and applied) mathematics is drawn by the
Pythagorean (pure) mathematician today “somewhere above electrical engineering and below the theory of relativity.” He argues that despite the esteem in which pure mathematics has been held since the
time of Pythagoras, “even a rudimentary knowledge of the history of mathematics suffices to teach anyone capable of learning anything that much of the most beautiful and least useful pure mathematics
has developed directly from problems in applied mathematics” (p. 130). Probability theory, though not fitting Bell’s criterion of “least useful pure mathematics,” is an example of an important area
of mathematics that grew out of attempts to solve applied problems, notably the collaborative attempt of Fermat and Pascal to specify the appropriate way to divide the stakes in a prematurely
terminated game of chance. On the other hand, one finds numerous examples in the history of mathematics of developments that have come out of work that appears to have been motivated totally by
intellectual—what some might call idle—curiosity and that has then, surprisingly, turned out to be usefully applied to practical problems. A striking illustration of this fact is the centuries of
work on the conic sections that eventually found applications in mechanics, astronomy, and numerous other areas of science. One also finds many examples of the development of theoretical mathematics
getting a push from the desire of people to work on real-world problems for which the then-current mathematics did not provide adequate tools. As just noted, an interest in solving wagering problems
that began the development of probability theory is one case in point; the desire to work on problems of instantaneous change and continuity, which led to the development of the infinitesimal
calculus, is another. Because science has become so dependent on mathematics, the recognition of a need for mathematical tools that do not exist can serve as a powerful motivation for the development
of those tools. And the mathematical research that is done to fill the identified need may lead to developments that not only meet the need but have other unanticipated consequences as well. In
short, theoretical and applied interests appear to have coexisted throughout the history of mathematics. Work that has led to new developments has been motivated sometimes by the one and sometimes by
the other; moreover, as Hersh (1997) points out, “Not only did the same great mathematicians do both pure and applied mathematics, their pure and applied work often fertilized each other” (p. 26).
Whether either type of interest has been more important—more productive of new
knowledge—than the other, it is probably not possible to say. For the most part, theoretical and applied interests have advanced together in a mutually beneficial way.
The Usefulness of Mathematics in Science

The common theme that links Plato, Kepler, Einstein, the quantum theorists, and present-day string theorists is the belief that an understanding of the
basic stuff of the universe will be found using mathematics. (Devlin, 2002, p. 68)
It is usefulness for the description of natural phenomena that is perhaps mathematics’ most remarkable property. Mathematics has proved to be so useful in science that scientific progress has often
been made on the basis of mathematical work done somewhat independently of empirical investigations. Mathematics is useful in science not only by reducing the mental effort required to solve certain
types of problems, as Ernst Mach (1893/1960) once suggested that it should, but also by making it possible to answer many questions that otherwise could not be answered. Kline (1953a) points out that
the great scientific advances of the 16th and 17th centuries were in astronomy and in mechanics, and that, in both cases, they rested more on mathematical theorizing than on experimentation; Atkins
(1994) argues that the Copernican revolution was less concerned with whether the sun encircled the earth, or vice versa, than with whether mathematical models are descriptive of reality. The
application of geometry—Euclidean and non-Euclidean—to an understanding of space and time is beautifully described by Penrose (1978). An especially useful 17th-century achievement was the development
of the system of logarithms. Kasner and Newman (1940) hold that Napier’s Mirifici Logarithmorum Canonis Descriptio (1614) is second in significance only to Newton’s Principia in the history of
British science. In providing a method for replacing multiplication and division with addition and subtraction, and the raising to powers and the extraction of roots with multiplication and division,
logarithms made otherwise inordinately tedious calculations relatively simple and straightforward. The complementary and mutually reinforcing historical relationship between the study of nature and
the development of mathematics has been captured nicely by Atkins (1994). “It should not be forgotten that mathematics and observation jointly squirm towards the truth. The process of discovery of
the world is often a sequence of alternations between observations and mathematics in which the observations are
stretched like a skin on to a kind of mathematical template. We refine and bootstrap ourselves into a mapping of the physical world by squirming forward, constantly comparing our expectations based
on our current theory with observations they themselves suggest” (p. 105). Observation prompts the development of theory, and theory indicates what observations should be made in an endless spiral of
looking, explaining, predicting, and looking again. Smith (2003) argues that a mathematical model has two advantages over a verbal model. First, to write a mathematical model one has to be clear
about what one is assuming. And if, in explicating such a model, one makes any assumptions of which one is not aware, others, in assessing the model, may see the need for the unstated assumptions and
make them explicit. Expressing one’s ideas in mathematical form is an effective way of clarifying them in one’s own thinking. The second claimed advantage of mathematical models is that they make
predictions that can be tested. This is not to deny that verbal models may make such predictions as well, but other things being equal, the mathematical model is the more likely of the two types to
make predictions that are quantitatively precise, and thus easier to put to a rigorous test. The degree to which an area of science is “mathematized” is sometimes taken as an indication of its
maturity. As the Columbia Associates in Philosophy (1923) put it, “It does seem to be true that the more highly developed a science becomes, and the more knowledge we gain about the relations between
its objects, the more its beliefs tend to fall into mathematical form, and to admit of treatment by purely mathematical methods” (p. 99). Today the term mathematical would undoubtedly be given a
sufficiently broad connotation to include computational. Some would argue that scientific explanations, or even descriptions, at their deepest level, must be mathematical; language without
mathematics, in this view, is inadequate to the task. Peacocke (1993) makes this case: “There is a genuine limitation in the ability of our minds to depict the nature of matter at this fundamental
level; there is a mystery about what matter is ‘in itself,’ for at the deepest level to which human beings can penetrate they are faced with a mode of existence describable only in terms of abstract,
mathematical concepts that are products of human ratiocination” (p. 34). Again, “during the twentieth century we have been witnessing a process in which the previously absolute and distinct concepts
of space, time, matter and energy have come to be seen as closely and mutually interlocked with each other—so much so that even the modification of our thinking to being prepared to envisage ‘what is
there’ as consisting of matter-energy in space-time has to be superseded by more inclusive concepts of fields and other notions no longer picturable and expressible only mathematically” (p. 35).
It was noted in Chapter 4 that the history of mathematics has been characterized by increasing abstractness. Peacocke’s comments suggest that the same trend is seen in science. Clearly the trend in
the one domain is not independent of that in the other; the increasingly close coupling of mathematics and science ensures their correspondence in this regard.
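To return briefly to the earlier point about logarithms: because log xy = log x + log y, and a root corresponds to a division of a logarithm, Napier's tables let tedious multiplications and root extractions be replaced by additions and simple divisions. A minimal sketch, with arbitrary illustrative numbers:

```python
import math

x, y = 31415.9, 271.828

# Multiplication replaced by addition of logarithms.
print(x * y, math.exp(math.log(x) + math.log(y)))  # agree to floating-point error

# A square root replaced by dividing a logarithm by two.
print(math.sqrt(x), math.exp(math.log(x) / 2))
```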
Surprised by Simplicity

One of the major characteristics of Renaissance science that distinguished it from the natural philosophy of the early Greeks was its emphasis on measurement and
quantitative description. The classical Greek thinkers looked for explanations of physical phenomena, but the explanations they sought were of why things are the way they are, and they took no great
pains to quantify nature or even always to make the observations necessary to check out their assumptions. Some of Aristotle’s incorrect beliefs could easily have been corrected by observation. For
the first few centuries after the scientific revolution, mathematicians and scientists often were the same individuals. Their interest in discovering the nature of reality was intense. They looked
for invariant relationships among the phenomena they studied, and when they found them, they attempted to express them as mathematical laws. A surprising result of this effort was the discovery that
many aspects of the physical world and its behavior could be described by very simple mathematical equations. Consider, for example, Kepler’s discovery of his three laws of planetary motion. The
first of these laws states that the orbit of each planet around the sun is an ellipse with the sun at one of its foci. According to the second law, a straight line between the sun and a planet sweeps
out equal areas in equal times as the planet proceeds on its elliptical orbit. The third law states that the cube of the mean distance of each planet from the sun is proportional to the square of the
time taken by the planet to complete its orbit. The elegant simplicity of these laws, in their mathematical formulation, is remarkable. It is easy to think of other equally simple laws that are
descriptive of reality as we understand it. The various “inverse square” laws are cases in point. An inverse square law is a law that states that the strength (intensity, energy) at point B of some
property that originates at point A is inversely proportional to the square of the distance between A and B. It describes the dissipation of any form of energy (gravity, electromagnetic radiation,
electrostatic attraction, sound intensity) that propagates from a source equally in all directions, and follows
from the fact that the surface area of a sphere increases by a factor of n² when its radius increases by a factor of n. Barrow (1991) refers to the world being describable by mathematics as an enigma
and to the simplicity of the mathematics involved as “a mystery within an enigma” (p. 2). Farmelo (2003b) expresses a similar sentiment: “Armies of thinkers have been defeated by the enigma of why
most fundamental laws of nature can be written down so conveniently as equations. Why is it that so many laws can be expressed as an absolute imperative, that two apparently unrelated quantities (the
equation’s left and right sides) are exactly equal? Nor is it clear why fundamental laws exist at all” (p. xiii). Kepler’s discovery of the laws of planetary motion was a major achievement—some have
held that it was the most impressive achievement in the history of science. That nature can actually be described in such simple terms was considered by Kepler, and by many others, as powerful
evidence of design. He believed that in discovering these laws, he had been privileged to see the Creator’s mark on nature more directly than it had been seen by anyone before him. Kepler (1619/1975)
spoke of contemplating the beauty of his discovery “with incredible and ravishing delight” (p. 1009). He saw our ability to think quantitatively as a special endowment from the Creator, the purpose
of which was to permit us to understand the mathematical harmonies of creation: God, who founded everything in the world according to the norm of quantity, also has endowed man with a mind which can
comprehend these norms. For as the eye for color, the ear for musical sounds, so is the mind of man created for the perception not of any arbitrary entities, but rather of quantities; the mind
comprehends a thing the more correctly the closer the thing approaches toward pure quantity as its origin. (Quoted in Holton, 1973, p. 84)
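Kepler's third law, as stated above, can be checked with a few lines of arithmetic. The sketch below uses standard rounded values for the semi-major axes (in astronomical units) and orbital periods (in years) of several planets; in these units the constant of proportionality is very nearly 1. The figures are textbook approximations, not values taken from the sources cited in this chapter.

```python
# Semi-major axis a (astronomical units) and orbital period T (years);
# standard rounded values, not drawn from the text.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}

# Kepler's third law: a**3 is proportional to T**2, so a**3 / T**2 should be
# (very nearly) the same number for every planet.
for name, (a, T) in planets.items():
    print(f"{name:8s} a^3 / T^2 = {a**3 / T**2:.3f}")
```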
Kepler was not the last scientist or mathematician to have been awestruck by the ability of mathematics to describe nature—or, to put it the other way round, by nature’s apparent obeisance to
mathematics. Indian-American astrophysicist Subrahmanyan Chandrasekhar’s reaction to Roy Kerr’s finding of an exact solution to Einstein’s general relativity equations that described a rotating black
hole is another case in point: In my entire life, extending over forty-five years, the most shattering experience has been the realization that an exact solution of Einstein’s equations of general
relativity, discovered by the New Zealand mathematician Roy Kerr, provides the absolutely exact representation of untold numbers of massive black holes that populate the universe. This “shuddering
before the beautiful,” this incredible fact that a discovery motivated by a search
after the beautiful in mathematics should find its exact replica in Nature, persuades me to say that beauty is that to which the human mind responds at its deepest and most profound. (Quoted in
Pagels, 1991, p. 71)
Does the fact that many aspects of nature can be described so effectively by simple mathematics reflect the basic simplicity of reality generally? Or might it be that simple mathematics works so well
because it has been applied only to those aspects of nature that lend themselves to this kind of description, and that those aspects of nature constitute a tiny fraction of reality? If the latter is
the case, the simplicity with which we are impressed may be the simplicity of our representations of nature and not of nature itself. Many writers have commented on the role that the successful
application of analysis to problems of motion and time played in supporting the idea of a clockwork universe that was, in principle, completely describable in terms of differential equations (which
represent the rates of change of the state variables of a system). However, some have also pointed out that, in reality, only relatively simple situations proved to be mathematically tractable.
“Classical mathematics concentrated on linear equations for a sound pragmatic reason: it couldn’t solve anything else.… Linearity is a trap. The behaviour of linear equations— like that of
choirboys—is far from typical. But if you decide that only linear equations are worth thinking about, self-censorship sets in. Your textbooks fill with triumphs of linear analysis, its failures
buried so deep that the graves go unmarked and the existence of the graves goes unremarked” (Stewart, 1990, p. 83). The answer to the question of whether nature is fundamentally simple is not known,
and it is not clear whether it is knowable. But simplicities are found even in irregular or “chaotic” systems. The finding that the ratio of the meandering lengths of rivers to their linear
source-to-mouth distances tends, on average, to be approximately π is a case in point (Støllum, 1996). The constant δ ≅ 4.669 (Feigenbaum’s number, named for its discoverer, contemporary mathematical
physicist Mitchell Feigenbaum) is another. This constant is the factor by which one must decrease the flow of an output in order to double the period of a cascade (drips of a water faucet). “The
Feigenbaum number δ is a quantitative signature for any period-doubling cascade, no matter how it is produced or how it is realized experimentally. That very same number shows up in experiments on
liquid helium, water, electronic circuits, pendulums, magnets, and vibrating train wheels. It is a new universal pattern in nature, one that we can see through the eyes of chaos” (Stewart, 1995b, p.
122). Since the appearance of computers on the scene, a form of mathematical description that has been used increasingly is that of numerical
simulation. That it is possible to get very complicated structure and behavior from the iterative application of very simple mathematical rules has led to speculation that the complex structure and
behavior that is seen in natural systems, including biological systems, may also rest on similarly simple mathematical rules. Validation of numerical models of natural systems is a nontrivial matter,
however, perhaps impossible in some cases (Oreskes, Shrader-Frechette, & Belitz, 1994); and equivalence of results does not necessarily mean equivalence of method. Demonstration that process X yields
product Y reveals one way of getting Y but does not prove that to be the only possible one. Mathematical laws are not explanations of why the entities of interest behave as they do. That the
gravitational attraction between two bodies is proportional to the product of their masses and inversely proportional to the square of the distance between them is a handy bit of knowledge, but the
law of gravitation itself provides no clue as to why such a relationship should hold. It can be—has been—argued, however, that the kind of knowledge that is represented by the mathematical formulas
that relate natural variables has been more responsible for the achievements of modern science than have causal explanations of the phenomena involved (Kline, 1953).
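The point made above, that the iterative application of very simple rules can produce very complicated structure and behavior, including the period-doubling cascades in which Feigenbaum's number appears, can be illustrated with the logistic map. The following is a minimal sketch; the parameter values are standard illustrative choices, not drawn from the works cited.

```python
def logistic_orbit(r, x0=0.2, skip=500, keep=8):
    """Iterate the one-line rule x -> r * x * (1 - x), discard a transient,
    and return a few of the later values."""
    x = x0
    for _ in range(skip):
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1 - x)
        values.append(round(x, 4))
    return values

print(logistic_orbit(3.2))  # settles into a period-2 oscillation
print(logistic_orbit(3.5))  # period 4: one doubling further along the cascade
print(logistic_orbit(3.9))  # no repeating pattern: the chaotic regime
```

Successive doublings of this kind occur at parameter values whose spacing shrinks, in the limit, by the factor δ, which is what Feigenbaum's constant describes.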
Unanticipated Uses

Mathematicians study their problems on account of their intrinsic interest, and develop their theories on account of their beauty. History shows that some of these mathematical
theories which were developed without any chance of immediate use later on found very important applications. (Menger, 1937, p. 253)
When mathematical developments have grown out of work on practical problems, it is not surprising that what is developed is applicable to the problems of interest, inasmuch as they were developed
with application to those problems in mind. However, many mathematical discoveries have been found to have unanticipated uses long after they were made, or applications to problems quite different
from those that may have motivated their development. The connections between the discoveries and the applications often seem to be completely fortuitous. The conic sections (ellipses, parabolas, and
hyperbolas) had no application beyond the amusement of mathematicians for 2,000 years after their discovery, and then they suddenly proved to be invaluable
to the theory of projectile motion, the law of universal gravitation, and modern astronomy. The ellipse—a conic section formed by cutting through a cone at an angle—is a plane curve defined as the
set of all points such that the sum of the distances of each of the points from two fixed points—the foci—is constant. The Greeks contemplated this figure at least as early as the fourth century BC
and discovered many of its properties, none of which was put immediately to any practical use. It is not difficult to imagine Kepler’s delight when he found, 2,000 years later, that this simple
figure described the orbits of the planets precisely, but it left unanswered the question of why it should be so. British mathematician Arthur Cayley invented matrix algebra in the middle of the 19th
century. It found little practical use for decades. In 1925, however, German physicist Werner Heisenberg saw in it the tool he needed to do his work in quantum mechanics. Complex numbers had no
apparent practical applications when they were invented; now they are indispensable to electrical engineering and numerous other practical pursuits. Another example of a mathematical construct that
proved to have applications way beyond those that motivated its development is the Fourier series. Initially applied by French mathematician Joseph Fourier to the solution of problems of heat flow,
the series became widely applied to the analysis of complex wave functions that are descriptive of numerous phenomena in acoustics, optics, seismology, and other fields. In Chapter 8, in the context
of a discussion of paradoxes of infinity, we noted such strange mathematical constructs as Sierpinski gaskets and Menger sponges—species of Cantor sets. Such figures are interesting to think about,
but one may find it difficult to imagine much in the way of practical applications. In fact, it turns out that Cantor sets have proved to be descriptive of distributions in a variety of contexts.
They have been used to represent the distribution of mass in the universe, as well as of “cars on a crowded highway, cotton price fluctuations since the 19th century, and the rising and falling of
the River Nile over more than 2,000 years” (Peterson, 1988, p. 122). In recent years the circulatory system has attracted attention as an example of a fractal in the biological world. The
many-leveled branching structure accomplishes the remarkable feat of producing a system that takes up only about 5% of the body’s volume and yet ensures that for the most part, no cell is more than
three or four cells away from a blood vessel. Many biological structures are fractal-like in that they involve several levels of self-similar branchings or foldings, including the bile duct system,
the urinary collecting tubes of the kidney, the brain, the
lining of the bowel, neural networks, the placenta, and the heart (West & Goldberger, 1987). Mathematicians and scientists have often been as surprised as everyone else when an esoteric area of
mathematics turns out to be applicable to a problem area of practical importance. Group theory, which now has many applications both within mathematics and in areas of science such as
crystallography, particle physics, and cryptography, was once judged to be useless by American mathematician Oswald Veblen; British mathematician–physicist Sir James Jeans, along with other experts,
recommended in 1910 that it be removed from the mathematics curriculum at Princeton University for that reason (Davis & Hersh, 1981). Who could have anticipated that work on “Kepler’s conjecture”
regarding the optimal packing of constant-radius spheres would prove to be applicable to the current-day problem of devising error-detecting and error-correcting codes for communications systems? The
search for ever-larger prime numbers has been a diversion for some mathematicians for many years; every so often one sees an announcement in the popular press that someone has succeeded in finding a
prime number larger than the largest one known before. It was hard to imagine that such discoveries had any practical use, until recently. Many modern encryption schemes used for purposes of security
in computer networks are based on very large composite numbers being extremely difficult to factor. (Interestingly, it is very much easier to determine, with the help of a fast computer, whether a
large number—say of several hundred digits—is prime than it is to factor the number if it is composite.) If I give you a number that happens to be the product of two very large primes (I know it to
be because I produced it by multiplying the primes) you will have a very hard time finding the two prime factors. That I know the factors permits me to encode messages in a way that only someone who
also knows the factors will be able to decode. There is an intense competition ongoing these days between those who wish to develop evermore secure codes and those who wish to break them; the
challenge of the coders is to stay a jump ahead of the code breakers, and finding more efficient ways to factor ever-larger numbers that are the products of two primes is the basic goal of the
latter. Although the competition is motivated by the practical concerns of network security, this work has led to the development of some deep and difficult mathematics. I have noted already the
enormous impact that the creation of non-Euclidean geometries had on mathematical thinking, and that this development forced a recognition of the possibility of constructing mathematical systems
based on axioms that did not express self-evident properties of the physical world. Some of the axioms of the new geometries seemed to assert what obviously was not true of the perceived world.
It was surprising indeed, therefore, when even these renegade mathematical systems turned out to have profoundly significant applications, especially in Einstein’s reconceptualization of the shape of
space. Newman (1956b) contends that “the emancipation of geometry from purely physical considerations led to researches of the highest physical importance when Einstein formulated the theory of
relativity” and describes this as “one of the agreeable paradoxes of the history of science” (p. 646). As an example of the surprising applicability of an aspect of mathematics that was originally
developed for other than practical interests, however, it is only one among many.
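To make concrete the asymmetry noted earlier in this section between testing a number for primality and factoring it, here is a minimal sketch. The Miller–Rabin test below is a standard probabilistic primality test; the numbers are kept deliberately small so that the brute-force factoring step finishes at all, and the helper names are mine, not drawn from any of the works cited. For the several-hundred-digit products used in real cryptosystems, the primality tests remain fast while the factoring search becomes utterly infeasible.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test (standard algorithm)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def next_prime(n):
    while not is_probable_prime(n):
        n += 1
    return n

def factor_by_trial_division(n):
    """Recover the smaller prime factor of n by brute force."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

p, q = next_prime(10**7), next_prime(3 * 10**7)
n = p * q
print(is_probable_prime(n))                   # False, and known almost instantly
print(factor_by_trial_division(n) == (p, q))  # True, but only after ~10**7 trial divisions
```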
The Fading Distinction Between Science and Mathematics

As mathematics becomes increasingly abstract, one might expect its usefulness to science to decrease, because science is constrained by
physical reality in a way that mathematics is not. Barrow’s (1998) characterization of the difference between the study of formal systems (mathematics and logic) and physical science supports this
expectation. “In mathematics and logic, we start by defining a system of axioms and laws of deduction. We might then try to show that the system is complete or incomplete, and deduce as many theorems
as we can from the axioms. In science, we are not at liberty to pick any logical system of laws that we choose. We are trying to find the system of laws and axioms (assuming there is one—or more than
one perhaps) that will give rise to the outcomes that we see” (p. 227). The appearance of the decoupling of mathematics from science has been the basis of some concern. Von Neumann (1947/1956), for
example, worried about the danger “that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches,
and that the discipline [of mathematics] will become a disorganized mass of details and complexities” (p. 2063). But here is an amazing thing, pointed out by Boyer and Merzbach (1991): Despite that
most of the developments in mathematics since the middle of the 20th century have been motivated by problems in mathematics itself and have had little to do with the natural sciences, applications of
mathematics to science have multiplied exceedingly during the same period. It seems that as mathematics has become increasingly abstract, it also has found increasingly powerful applications to the
real world. Why is that? Is it because science itself is also becoming increasingly
abstract? Is it the case, as Devlin (2000) claims, that although physicists’ ultimate aim is to understand the physical world, they have been led into increasingly abstract mathematical universes?
Much of modern physics deals with aspects of the world that are beyond our powers of observation, even when aided by the most sophisticated instruments of technology. The primary tool for such work
is mathematics. The indispensability of mathematics for some areas of science is illustrated by quantum theory. The history of quantum physics is the story of a continuing and highly successful
search for simpler, deeper, more comprehensive, and more abstract unifying principles. These principles are not systematizations of experience. They are, as Einstein was fond of saying, ‘free
creations of the human mind.’ Yet they capture, in the language of abstract mathematics, regularities that lie hidden deep beneath appearances. Why do these regularities have a mathematical form? Why
are they accessible to human reason? These are the great mysteries at the heart of humankind’s most sustained and successful rational enterprise. (Layzer, 1990, p. 14)
The recent interest among particle physicists in replacing the familiar concept of particles with the concept of “strings” was motivated by the mathematical intractability, because of the occurrence
of infinities in them, of equations dealing with the force of gravity. These infinities occur when particles are treated as points, but not when they are treated as almost equally simple things,
one-dimensional lines. Surprisingly, in addition to the simplification gained by doing away with the infinities, adoption of the string concept brought some other advantages as well (Greene, 2004;
Gribbin & Rees, 1989). As of this writing, string theory lacks empirical support from the verification of predictions that it makes that simpler theories do not. Despite its esthetic appeal, it has
been criticized as lacking testable predictions and therefore being inherently unfalsifiable (Woit, 2006). Nevertheless, it remains an active area of theoretical physics. Science—at least physics—is
so dependent on mathematics today that much of what has been discovered about the world cannot be communicated effectively apart from mathematics and cannot be understood in a more than superficial
way by anyone who does not understand the mathematics in which it is represented. American physicist Richard Feynman (1965/1989), who had an unusual ability to make complicated ideas intelligible,
held that mathematics is the element that is common to all of science and that connects its various parts. “The apparent enormous complexities of nature, with all its funny laws and rules … are
really very closely interwoven. However, if you do not appreciate the mathematics, you cannot see, among the great variety of facts, that logic permits you to go from one to the other” (p. 41). He
took the position that there is a
limit to how much can be explained without recourse to mathematical representations: “It is impossible to explain honestly the beauties of the laws of nature in a way that people can feel, without
their having some deep understanding of mathematics. I am sorry, but this seems to be the case” (p. 40). Probably few observers would dispute these claims as they pertain to physics; whether they
hold for chemistry, biology, economics, psychology, and the social sciences is more debatable. The close coupling of physics and mathematics is seen very clearly in the work of Newton. Berlinski
(2000) puts it this way: “Time, space, distance, velocity, acceleration, force, and mass are physical concepts. They attach themselves to a world of particles in motion. But the relationships among
these concepts are mathematical. Some relationships involve little more than the elementary arithmetical operations; others, the machinery of the calculus. Without these specifically mathematical
relationships, Newton’s laws would remain unrevealing” (p. 186). Physics has even been defined as “the science devoted to discovering, developing and refining those aspects of reality that are
amenable to mathematical analysis” (Ziman, 1978, p. 28). The idea that mathematics is the language in which the book that we refer to as the universe is written goes back at least to Galileo:
“Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and to read
the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures without which it is humanly impossible to
understand a single word of it; without these, one wanders about in a dark labyrinth” (Galileo, quoted in Drake, 1957, p. 237). What has changed since the days of Galileo is that the language itself
has evolved considerably; it now contains characters, constructs, and a level of descriptive power of which Galileo could hardly have dreamed. But the changes have served only to strengthen his
point; the distinction between science and mathematics grows increasingly difficult to maintain as both areas become evermore abstract. Why should the book that is the universe be written in the
language of mathematics? No one who has tried to read it doubts that it is, but neither has anyone given an answer to why it should be so that all, or even most, scientists or mathematicians find
completely satisfactory. The foregoing discussion focuses on the usefulness of mathematics in the physical sciences, and for good reason. Mathematics and science are so intertwined that “math and
science,” or “science and math,” are often treated almost as a single word in the popular lexicon, especially
in the context of discussions of education. But mathematics has many applications outside the physical sciences. To the extent that one wants to make a distinction between science and engineering,
there can be no question of the importance of mathematics in the latter context as well as in the former. Projective geometry is used extensively in visual art. Trigonometry has, for centuries, been
essential to navigation and wayfinding. The application of arithmetic to business and trade is probably as old as mathematics itself. Politics has its uses of math (Meyerson, 2002). The application
of mathematics to ethics and governance was promoted by British utilitarian philosopher Jeremy Bentham (1789/1879) with his “hedonistic calculus”; variations on this theme have many proponents today.
Recent and current applications of mathematics to problems of choice and decision making in essentially every conceivable context are sufficiently numerous to have motivated the establishment of
several journals devoted solely to these topics. Mathematical modeling, so obviously effective in the physical sciences, is increasingly widely used for descriptive, predictive, and prescriptive
purposes in the softer sciences as well. In psychology, mathematics has been an essential ingredient since the time when controlled experimentation first became the method of choice for exploring
sensation, perception, motivation, learning, and the countless other aspects of behavior and cognition of interest to psychologists. It has served the purposes of discovering and representing
functional relationships in quantitative ways, of analyzing experimental data, and of constructing mathematical models intended to be either prescriptive for, or descriptive of, various aspects of
behavior and mentation. There are journals devoted exclusively to the publication of mathematically oriented psychological work, notable examples of which are Psychometrika and the Journal of
Mathematical Psychology. The early history of psychophysics has been told many times (e.g., Boring, 1957; Osgood, 1953; Woodworth & Schlosberg, 1954). Examples of classic work in mathematical
psychology are provided in volumes edited by Luce (1960) and Luce, Bush, and Galanter (1963a, 1963b). Other notable examples of the application of mathematics in psychology include the theory of
signal detection (Green & Swets, 1966; Swets, 1964), statistical decision theory (Arkes & Hammond, 1986; Edwards, Lindman, & Savage, 1963; Raiffa & Schlaifer, 1961; Von Winterfeldt & Edwards, 1986),
and game theory (Luce & Raiffa, 1957; Rapoport, 1960). Laming (1973) gives an extensive and in-depth overview of the entire field as of the mid-1970s. Relatively recent overviews of mathematical
psychology as a field are provided by Batchelder (2000) and Chechile (2005, 2006).
Why Is Mathematics So Useful?

Why mathematics should turn out to be so immensely useful to our understanding of the world is seen by many to be a great puzzle. If mathematics has to do only with
analytic truths—with tautological statements—how is it that it turns out to be so usefully applied to efforts to describe physical reality? There seems no obvious a priori reason why the world
should be constructed so that its properties are describable in mathematical terms. Why, after all, should the gravitational attraction between two bodies vary inversely with the square of the
distance between them? Why should a straight line between the sun and a planet sweep out equal areas in equal times as the planet orbits the sun? And, as if the ability of mathematics to describe
aspects of the world that we can directly observe were not surprising enough, it appears to be descriptive also of phenomena governed by relativity and the quantum, which are outside our direct
experience. The puzzle that is represented by the usefulness of mathematics for the description of the natural world and the discovery of physical relationships has been noted by many mathematicians
and scientists. Atkins (1994) refers to “the success of mathematics as a language for describing and discovering features of physical reality” as “one of the deepest problems of nature” (p. 99).
Uspensky (1937) makes essentially the same point: “To be perfectly honest, we must admit that there is an unavoidable obscurity in the principles of all the sciences in which mathematical analysis is
applied to reality” (p. 5). “Why,” as Barrow (1992) puts the question, “does the world dance to a mathematical tune? Why do things keep following the path mapped out by a sequence of numbers that
issue from an equation on a piece of paper? Is there some secret connection between them; is it just a coincidence; or is there just no other way that things could be?” (p. 4). “There is,” he argues,
“no explanation as to why the world of forms is stocked up with mathematical things rather than any other sort” (p. 271). Even more puzzling is the observation by Bell (1946/1991) that “scanning each
of several advanced treatises on the various divisions of classical physics—mechanics, heat, sound, light, electricity and magnetism—we note that two or more of them contain at least one pair of
equations identically the same except possibly for the letters in which they are written” (p. 150). Why is it that an equation that was written to describe a specific phenomenon should serve equally
well to describe another phenomenon of a qualitatively different type? What does a plane through a cone have to do with the trajectory of a projectile or the orbit of a planetary body? Why should the
circles of Apollonius of Perga, discovered in the third
century BC, turn out to describe so precisely the equipotentials of two parallel cylindrical electrical conductors (Beckman, 1971, p. 116)? And so on. These seem curious coincidences indeed. Rucker
(1982) takes “that a priori mathematical considerations can lead to empirically determined physical truths” as evidence that “the structure of the physical universe is deeply related to the structure
of the mathematical universe” (p. 55). Polkinghorne (1998) goes a step farther and connects the structures of mathematics, the physical universe, and the human mind. He argues that the “use of
abstract mathematics as a technique of physical discovery points to a very deep fact about the nature of the universe that we inhabit, and to the remarkable conformity of our human minds to its
patterning” (p. 2). In his view, our ability to comprehend the microworld of quarks and gluons as well as the macroworld of big bang cosmology is also a mystery, which cannot reasonably be attributed
to effects of the pressures for survival on the evolution of our intellectual capacity: “It beggars belief that this is simply a fortunate by-product of the struggle for life” (p. 3). “There is no a
priori reason why beautiful equations should prove to be the clue to understanding nature; why fundamental physics should be possible; why our minds should have such ready access to the deep
structure of the universe” (p. 4). Moreover, Polkinghorne (1991) argues, an explanation of the mathematical intelligibility of the physical world is not to be found in science, “for it is part of
science’s founding faith that this is so” (p. 76). Wigner (1960/1980) sees the usefulness of mathematics—the “unreasonable effectiveness of mathematics in the natural sciences”—as an enigmatic
blessing. “The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it. … The
miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve” (pp. 2, 14). Daston (1988) speaks
in similar terms: “For modern mathematicians, the very existence of a discipline of applied mathematics is a continuous miracle—a kind of prearranged harmony between the ‘free creations of the mind’
which constitute pure mathematics and the external world” (p. 4). King (1992), who believes that the development of mathematics has been driven more by esthetic than by practical interests, contends
that the miracle is really a “miracle of second order of magnitude.” We are talking, he says, about the “paradox of the utility of beauty” (p. 121). Kline (1980) speaks of “a twofold mystery. Why
does mathematics work even where, although the physical phenomena are understood in physical terms, hundreds of deductions from axioms prove to be as applicable as the axioms themselves? And why does
it work in domains where we have only mere conjectures about the
physical phenomena but depend almost entirely upon mathematics to describe these phenomena?” (p. 340). This is a modern-day puzzle. To most mathematicians of a few centuries ago this question would
not have arisen; the prevailing assumption was that the world was describable in mathematical terms because its Maker made it that way. Kline (1980) points out that the work of 16th-, 17th-, and most
of the 18th-century mathematicians was a religious quest, a search for God’s mathematical design of nature. “The search for the mathematical laws of nature was an act of devotion which would reveal
the glory and grandeur of His handiwork” (p. 34). Kafatos and Nadeau (1990) put it this way: “This article of faith, that mathematical and geometrical ideas mirror precisely the essences of physical
reality, was the basis for the first scientific revolution.… For Newton the language of physics and the language of biblical literature were equally valid sources of communion with the eternal and
immutable truths existing in the mind of the one God.… The point is that during the first scientific revolution the marriage between mathematical ideal and physical reality, or between mind and
nature via mathematical theory, was viewed as a sacred union” (p. 104). Kline (1989), who does not believe, like King (1980), that mathematicians have been motivated more by esthetics than by
practical interests, does not deny that mathematics is often applied in unanticipated ways, but contends that the development of the best mathematics has always been motivated by the quest to
understand and describe physical reality, and he argues that this is why mathematics has proved to be so useful. Ziman (1978) makes a similar argument and likens the physicist who uses mathematics
descriptively to a fisherman who concludes from his net catching only fish larger than the size of its mesh that as a “law of nature” all fish are larger than that size. Browder and Lane (1978)
similarly reject Wigner’s notion that mathematics is “unreasonably” effective in the physical sciences: “Because of its origins and its nature, mathematics is not unreasonably effective in the
physical sciences, simply reasonably effective” (p. 345). But even if it were the case that mathematicians invariably attempted to develop mathematics that would be useful in describing the physical
world, or that scientists focused only on those problems for which mathematics is useful, this would not explain why such a quest should prove to be successful. That the world is mathematically
describable remains a mystery. It is a mystery that does not interfere with the further development and application of mathematics, but a mystery nonetheless. While reflecting on the usefulness of
mathematics, it is well to bear in mind that of all the work in mathematics that has been done to date, only a relatively small fraction has proven to be useful yet. Going a step
further, Casti (1996) argues that scientists’ insistence on making mathematics the universal language of science actually impedes progress on certain types of questions about the natural world. There
is the danger, he contends, of finding answers to mathematical representations of questions about the world that are not answers to the questions themselves. He illustrates the point with reference
to three well-known problems: the question of the stability of the solar system (the n-body problem of physics), the determination of how a string of amino acids comprising a protein will fold, and
the question of financial market efficiency. The mathematical approach to such problems generally involves the construction of mathematical models. To be confident that one has obtained valid
answers, one must either be sure the mathematical models used are faithful representations of the phenomena of interest—often they are gross oversimplifications—or abandon mathematics altogether. The
point is amply illustrated also by the inability of economic models to predict the national and international economic turmoil of 2008 or to prescribe a clear path to resolution. Models of climate
change, which also require dealing with many interacting variables, are proving to be very difficult to verify. Lindley (1993) acknowledges both the attractiveness of neat mathematical descriptions
of aspects of reality and the need to put them to empirical test. “The lure of mathematics is hard to resist. When by dint of great effort and ingenuity, a previously vague, ill-formed idea is
encapsulated in a neat mathematical formulation, it is impossible to suppress the feeling that some profound truth has been discovered. Perhaps it has, but if science is to work properly the idea
must be tested, and thrown away if it fails” (p. 13). Determining that a mathematical representation of a complex problem is veridical in all important respects generally is not an easy task. In
practice what it boils down to is constructing arguments that most of the people who are presumed to be qualified to judge find compelling. Bunch (1982) reminds us that the correspondence between
mathematics and the real world is often not as precise as we may sometimes assume. Zeno, he of the paradoxes, Bunch argues, recognized that the mathematical and scientific ways of looking at the
world could be contradictory. “As mathematics has grown independently (to some degree) over the centuries, it has been necessary again and again to change the rules slightly so that mathematical
paradoxes become mere fallacies. But from the beginning, from Zeno’s time, it was clear that mathematics does not correspond exactly to the real world” (p. 210). This is not to contend, Bunch notes,
that mathematics is not useful in describing the real world. But there is no guarantee that what is discovered will fit with the mathematics in hand. “If you assume that an arrow behaves like a
collection of mathematical points, you can use mathematics to describe its motion.
If you concentrate on the arrow being a finite collection of small packets of energy, none of which can be located at a particular mathematical point, then you are up the creek” (p. 210). Dehaene
(1997) argues that mathematics rarely agrees exactly with physical reality, and realization of this should make the mystery of its “unreasonable effectiveness” somewhat less mysterious. He suggests
too that perhaps the effectiveness of the mathematics that has been applied to a description of the physical world is the result of a selection process that has ensured the development of mathematics
in directions that are effectively applicable to real-world applications. “If today’s mathematics is efficient, it is perhaps because yesterday’s inefficient mathematics has been ruthlessly
eliminated and replaced” (p. 251). The stated reservations notwithstanding, the applicability of mathematics—including simple mathematics—to the description of the physical world is a remarkable fact.

Reality as a Consistency Check

We have already noted that Gödel (1930, 1931) demonstrated the unprovability of the consistency of any mathematical system sufficiently comprehensive to include all
of arithmetic. In view of the desire of some mathematicians to treat mathematics as the epitome of pure abstract reasoning, unsullied by contact with the real world, it is ironic that the best, if
not the only, test for the consistency of a set of axioms, postulates, or assumptions of a mathematical system that many mathematicians recognize is the test of a concrete representation or
interpretation. “In general, a set of assumptions is said to be consistent if a single concrete representation of the assumptions can be given” (Veblen & Young, 1910/1956, p. 1698). “Representation”
as used by Veblen and Young here is synonymous, I believe, with “interpretation” as used by Wilder (1952/1956): “If ∑ is an axiom system, then an interpretation of ∑ is the assignment of meanings to
the undefined technical terms of ∑ in such a way that the axioms become true statements for all values of the variables” (p. 1662). Remember that in the view of many modern mathematicians, the terms
of axioms of a mathematical system have no meaning; they are just abstract symbols. They may be given meanings when a mathematical system is applied to real-world problems, but those meanings are not
deemed to be intrinsic to the mathematics. As an example of the assignment of an interpretation to a mathematical construct, Wilder points out that the expression x² – y² = (x – y)(x + y) is
meaningless—it cannot be said to be true or false—until some interpretation, such as “x and y
are integers,” is given to the variables. In Wilder’s terminology, a system is said to be satisfiable if there exists an interpretation of it. And satisfiability, it is claimed, implies consistency.
This is because, as Weyl (1940/1956) puts it, “inconsistency would a priori preclude the possibility of our ever coming across a fitting interpretation” (p. 1847). The logic, in other words, is: If a
system is inconsistent, then an interpretation is impossible; therefore, if an interpretation is found, the system must be consistent. There are two things to notice about the representation or
interpretation test. It works only in one direction. If a concrete representation or interpretation of a system can be found, the system is said to be consistent. If a representation or
interpretation has not been found, this is not compelling evidence that a system is inconsistent. This is not a serious limitation, because inconsistency is inherently easier to demonstrate than
consistency in general. The difference is analogous to the difference between proving a true universal statement to be true and proving a false one to be false. The second thing to notice is that the
representation or interpretation test shifts the problem of demonstrating consistency from the domain of mathematics to that of the interpretation. If we are not sure that a system of axioms is
consistent, why should we be confident that a concrete interpretation that is put on that system is consistent? The test involves the assumption that nature is consistent, or, as Court (1935/1961)
puts it, “the fundamental belief that logical consistency is identical with natural consistency” (p. 27). This may be an assumption that most of us have little difficulty making, but it is an
assumption nevertheless and should be recognized as such, and, again as Court points out, “it makes the consistency of nature to be one of the foundations, one of the cornerstones of the mathematical
edifice” (p. 27).
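The logic of the representation test can be put schematically (the compact notation below is my own gloss on the argument of this section, not Veblen and Young’s or Weyl’s):

\[
\big(\neg\,\mathrm{Con}(\Sigma) \Rightarrow \text{no interpretation of } \Sigma \text{ exists}\big)
\quad\text{has as its contrapositive}\quad
\big(\text{an interpretation of } \Sigma \text{ exists} \Rightarrow \mathrm{Con}(\Sigma)\big).
\]

Wilder’s expression illustrates what “finding an interpretation” amounts to: under the assignment “x and y are integers,” the formula x² – y² = (x – y)(x + y) becomes a true statement for every choice of values, for example 7² – 4² = 33 = (7 – 4)(7 + 4).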
Chapter 13
Foundations and the “Stuff” of Mathematics
In spite, or because, of our deepened critical insight we are today less sure than at any previous time of the ultimate foundations on which mathematics rests. (Weyl, 1940/1956, p. 1849)
The “Euclidean Ideal”

As already noted, for many centuries the prevailing view among mathematicians appears to have been consistent with what Hersh (1997) calls “the Euclidean ideal,” according to
which one starts with self-evident axioms and proceeds with infallible deductions. Plato, Aristotle, and other Greek philosophers of their era considered the axioms of mathematics to be self-evident
truths. It was the beyond-doubt intuitive obviousness of certain assertions—two points determine a unique line; three points determine a unique plane—that qualified them to be used as axioms from
which less intuitively apparent truths could then be deduced. Geometry (literally “earth measurement”) was rooted in the properties of three-dimensional space. Lakoff and Núñez (2000) associate the
Euclidean view with a widely held folk “theory of essences,” which they liken to Aristotle’s classical theory of categories, according to which all members of a category
were members by virtue of a shared essence. The essence of a member of category X was the set of properties that were necessary and sufficient to satisfy the criteria for membership. They contend
that Euclid brought the folk theory of essences into mathematics by virtue of claiming that a few postulates characterized the essence of plane geometry. “He believed that from this essence all other
geometric truths could be derived by deduction—by reason alone! From this came the idea that every subject matter in mathematics could be characterized in terms of an essence—a short list of axioms,
taken as truths, from which all other truths about the subject matter could be deduced” (p. 109). In short, “the axiomatic method is the manifestation in Western mathematics of the folk theory of
essences inherited from the Greeks” (p. 110). In Lakoff and Núñez’s view, the folk theory of essences, manifest in the axiomatic method of Western mathematics, is at the heart of much contemporary
scientific practice, but not all. They argue that it serves the purposes of physics—which seeks the essential properties of physical things that make them the kind of things they are and provides a
basis for predicting their behavior—but it is not useful in biology inasmuch as a species cannot be defined by necessary and sufficient conditions. They consider at least some areas of mathematics to
be well served by the folk theory of essences. Algebra, the study of abstract form or structure, they claim, is about essence. “It makes use of the same metaphor for essence that Plato did—namely,
Essence is Form” (p. 110). Lakoff and Núñez note that the axiomatic approach to mathematics was only one of many approaches that were taken (they point to the Mayan, Babylonian, and Indian approaches
as examples of different ones), but it is the one that arose in Greek mathematics, became dominant in Europe beginning with Euclid, and shaped the subsequent development of mathematics in the West.
They see the folk theory of essences, which was central to Greek philosophy, as key to this history. The close coupling of mathematics with “obvious” truths about the physical world—as represented by
the Euclidean ideal—prevailed for two millennia. The adequacy of this view began to be challenged with the appearance on the scene of such strange entities as imaginary or complex numbers and
infinitesimals. As such concepts proved to be useful, even if difficult to take as representing anything real, mathematicians gradually became more accepting of entities on the basis of their utility
even when the origin was their own imaginations unhampered by the need to identify real-world referents. Kline (1980) credits the creation of strange geometries and algebras during the early 19th
century with forcing mathematicians to realize that neither mathematics proper nor the mathematical laws of nature are truths. This realization, he argues, was only the first of the calamities
to befall mathematics, and the soul searching it prompted led to another awakening, this one to the fact that mathematics did not have an adequate logical foundation. “In fact mathematics had
developed illogically. Its illogical development contained not only false proofs, slips in reasoning, and inadvertent mistakes which with more care could have been avoided…. The illogical development
also involved inadequate understanding of concepts, a failure to recognize all the principles of logic required, and an inadequate rigor of proof” (p. 5). Kline refers to the perception of
mathematics as a universally accepted, infallible body of reasoning, a view that was possible in 1800, as a grand illusion. He points out that even the rules of arithmetic are not immutable.
Different arithmetics can be defined to serve different purposes. “The sad conclusion which mathematicians were obliged to draw is that there is no truth in mathematics, that is, truth in the sense
of laws about the real world. The axioms of the basic structures of arithmetic and geometry are suggested by experience, and the structures as a consequence have a limited applicability. Just where
they are applicable can be determined only by experience. The Greeks’ attempt to guarantee the truth of mathematics by starting with self-evident truths and by using only deductive proof proved
futile” (p. 95). But even the view of mathematics as an infallible body of reasoning, which Kline considers possible in 1800, was based more on the practical utility of math than on the rigorous
derivation of mathematical truths from indisputable first principles. Wallace (2003) describes what happened throughout much of the 1700s as something like a stock market bubble. Advance after
advance yielded a situation, to use another of Wallace’s similes, like a tree lush with branches but with no real roots. Davis (1978) argues that, despite the fact that people have been computing for
centuries, until the work of Alan Turing in the 1930s there was no satisfactory answer to the question of what constitutes a computation.
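Turing’s answer was a precise, mechanical model of computation: a finite table of rules driving a read/write head back and forth along a tape. The sketch below is a minimal illustration of that idea in Python; the particular machine and its unary encoding are my own illustrative choices, not an example taken from Davis or from Turing.

# A minimal Turing-style machine: a finite transition table, a tape, and a
# head that reads one symbol, writes one symbol, and moves one cell per step.
def run_turing_machine(table, tape, state="start", blank="_", max_steps=1000):
    """Run the transition table until the machine reaches the 'halt' state."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Transition table: (state, symbol read) -> (symbol to write, move, next state).
# This machine appends a '1' to a block of 1s, i.e., it computes the successor
# of a number written in unary notation.
successor = {
    ("start", "1"): ("1", "R", "start"),   # scan right across the existing 1s
    ("start", "_"): ("1", "R", "halt"),    # write one more 1, then halt
}

print(run_turing_machine(successor, "111"))   # prints '1111' (unary 3 + 1 = 4)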
Doubts and Emerging Perspectives Regarding Foundations

Every philosophy of mathematics arises out of the sense that mathematics touches something that is profound yet difficult to make explicit.
(Byers, 2007, p. 349)
The early part of the 20th century found mathematics, from a theoretical perspective, in an unhappy state. Rather than the rock-solid
representation of indisputable eternal truth mathematics once was perceived to be, it had become a fractionated discipline with each of several groups of practitioners fully capable of pointing out
the shortcomings of the views of each of the others. Kline summarizes the state of affairs as a consequence of the disagreement over foundations, and compares it with the perception of things at the
beginning of the 19th century, this way: “The science which in 1800, despite the failings in its logical development, was hailed as the perfect science, the science which establishes its conclusions
by infallible, unquestionable reasoning, the science whose conclusions are not only infallible but truths about our universe and, as some would maintain, truths in any possible universe, had not only
lost its claim to truth but was now besmirched by the conflict of foundational schools and assertions about correct principles of reasoning. The pride of human reason was on the rack” (p. 257).
Attempts to deal with the question of the foundations of mathematics had produced several identifiable schools of thought by the early part of the 20th century. Dantzig (1930/2005) distinguishes two
such schools, the proponents of which he refers to as intuitionists and formalists. What separates these schools, in Dantzig’s view, are their perspectives regarding what constitutes a mathematical
proof. “What is the nature of reasoning generally and mathematical reasoning in particular? What is meant by mathematical existence?” (p. 67). Several other writers partition the major mathematical
perspectives regarding foundations into more than two categories. Gellert, Küstner, Hellwich, and Kästner (1977, pp. 718–719) count three of them, represented by the views of logicists, formalists,
and intuitionists. As the leaders of these schools, they identify, respectively, German mathematician-logician Gottlob Frege, German mathematician David Hilbert, and Dutch mathematician Luitzen
Brouwer. Browder and Lane (1978) note the same three schools and the same leaders. To these three, Kline (1980) adds a fourth—set theorists, led by Ernst Zermelo. Casti (2001) also recognizes four
main schools of thought, very similar, but not quite identical, to those listed by Kline. Casti’s list contains: formalism (Hilbert), logicism (Bertrand Russell), intuitionism (Brouwer), and
Platonism (Kurt Gödel, René Thom, and Roger Penrose). Hersh (1997) recognizes five schools: logicism, formalism, intuitionism, empiricism, and conventionalism. He also distinguishes what he refers to
as three main philosophies: constructivism, Platonism, and formalism. It is not clear to me how schools and philosophies differ in this context, but inasmuch as Hersh identifies five schools and
three philosophies, it appears that he sees some philosophical overlap among the schools. Barrow (1995) distinguishes formalism, inventionism, Platonism, constructivism, and intuitionism. What
follows is a somewhat eclectic partitioning that appears to me to recognize the more important kernel ideas promoted by one or another of the various schools, or philosophies, of mathematical thought. It will be clear that these ideas are not all independent of each other. The schools, kernel ideas, and notable proponents are listed in Table 13.1.

Table 13.1. Major Schools of Mathematics, Their Kernel Ideas, and Notable Proponents

Logicism. Kernel idea: math is reducible to logic. Notable proponents: Frege, Russell, Whitehead.
Intuitionism. Kernel idea: math is derived from human intuition; rejection of actual infinity. Notable proponents: Brouwer, Poincaré, Kant.
Constructivism. Kernel idea: close to intuitionism; limits math to objects that can be constructed; the building blocks of math are the natural numbers. Notable proponents: Kronecker, Lorenzen, Bishop.
Set theory. Kernel idea: takes the set (collection) as the foundational concept on which to build math. Notable proponents: Zermelo, Fraenkel.
Platonism. Kernel idea: sees math truths as real, abstract, and eternal. Notable proponents: Gödel, Erdös, Penrose, Thom.
Formalism. Kernel idea: math is rule-based symbol manipulation. Notable proponents: Hilbert, Carnap, Tarski.

Logicism is the view that all mathematics is derivable from logic. Leibniz held this view before logicism was identified as a school. Notable
among the founders of the school was Gottlob Frege, who, according to Moore (2001), was “by common consent, the greatest logician of all time” (p. 114). Frege’s vision was that of being able to
derive all mathematical theorems from a few foundational logical principles. But logicism got its strongest endorsement from the Herculean attempt by British
philosopher-mathematicians Alfred North Whitehead and Bertrand Russell to establish the logical foundation of mathematics in their frequently cited but seldom read, by some accounts, Principia
Mathematica. Russell’s identification with logicism is ironic inasmuch as it was his famous paradox involving the set of all sets that are not members of themselves that showed Frege’s vision to be
unattainable. (Is such a set a member of itself? If it is, it is not, and if it is not, it is.) Russell extricated himself from his paradox by defining a set as a different type of
thing from its members, thus making the question of whether it could be a member of itself meaningless. The idea that mathematics could be reduced to logic has a twin in the idea that logic can be
reduced to, or is a form of, mathematics. Intuitionism, the founder of which is usually considered to be Dutch mathematician Luitzen Brouwer, sees the human mind as the ultimate sanction of any rules
of thought. The foundation of mathematics is not logic, as the logicists contend, but psychology. As suggested by the subtitle of his book—How the Mind Creates Mathematics—Dehaene (1997) sees
intuitionism as the most plausible of the various theories of the nature of mathematics that have been proposed, and argues that this view is supported by recent discoveries about the natural number
sense that were not known to proponents of this theory, like Poincaré and Kant. “The foundations of any mathematical construction are grounded on fundamental intuitions such as notions of set,
number, space, time, or logic. These are almost never questioned, so deeply do they belong to the irreducible representations concocted by our brain. Mathematics can be characterized as the
progressive formalization of these intuitions. Its purpose is to make them more coherent, mutually compatible, and better adapted to our experience of the external world” (p. 246). Constructivism, a
close cousin to intuitionism, is described by Barrow (1995) as the mathematical version of operationalism and as a response to the problems created for formalism by the logical paradoxes of Russell
and others. In the views of constructivists, mathematics is what can be constructed from certain undefined but intuitively compelling primitives. The natural numbers, according to this perspective,
are the fundamental building blocks of mathematics; the concept of number neither requires nor can be reduced to a more basic notion, and it is that from which all meaningful mathematics must be
constructed. Proponents include German mathematician-logician Leopold Kronecker, German philosopher-mathematician Paul Lorenzen, and American mathematician Errett Bishop. Although, as already noted,
Dehaene (1997) espouses intuitionism, he sees constructivism as an extreme form of intuitionism, as defended by Brouwer, and he rejects that. What Dehaene objects to is Brouwer’s rejection of
“certain logical principles that were frequently used in
mathematical demonstrations but that he felt did not conform to any simple intuition” (p. 245). An example is the application of the principle of the law of the excluded middle to infinite sets.
Constructivists are opposed to proofs by contradiction, which lead to such intuitive incongruities as the idea that most real numbers are transcendental despite the fact that relatively few examples
of transcendental numbers have been identified (Byers, 2007). Byers distinguishes within constructivism a radical movement according to which it should not be claimed that any mathematical entities
exist independently of what people do, which is to say that the only mathematical entities that exist are those that people have constructed. Set theory attempts to build mathematics on the
axiomatization of the fundamental concept of a set and of set relationships in order to avoid the categorical and self-referential paradoxes that Russell and others had discovered to be permitted by
traditional (Aristotelian) logic: This statement is false; true or false? Every rule has an exception; including this one? Barbers in this town shave those and only those who do not shave themselves;
do they shave themselves? Does the set of all sets that do not belong to themselves belong to itself? Russell’s own approach to solving such paradoxes was his invention of the theory of types, which
disallows the sorts of propositions that constituted the paradoxes, thereby—to use Wallace’s (2003) term—effectively legislating them out of existence. Notable contributors to the axiomatization of
set theory were the German mathematician Ernst Zermelo and the German-born Israeli mathematician Abraham Fraenkel, whose joint work, including refinements from others as well, is referred to as the
Zermelo-Fraenkel (or simply ZF) set theory. Platonism traces its roots to Plato’s idealism, according to which real-world entities are mere shadows of ideals. (The term absolutism is also sometimes
used to represent much the same view.) Platonists see mathematical concepts and relationships as having an existence outside of space and time that is more real than the tangibles of everyday
experience. “The Platonist regards mathematical objects as already existing, once and for all, in some ideal and timeless (or tenseless) sense. We don’t create, we discover what’s already there,
including infinites of a complexity yet to be conceived by mind of mathematician” (Hersh, 1997, p. 63). Platonists, to whom mathematics is a reflection of reality and mathematical truths are
discoveries, are not surprised by the usefulness of mathematics in describing real-world relationships and solving real-world problems. Proponents include Kurt Gödel, Paul Erdös, Roger Penrose, and
René Thom. Formalism, championed first by German mathematician David Hilbert and later by German philosopher Rudolf Carnap and Polish logician-mathematician Alfred Tarski, sees any system of
mathematics as
a creation of the human mind. The doing of mathematics is the manipulation, according to arbitrary rules, of abstract symbols devoid of meaning and independent of physical reality. The way to play
the game is to define terms, state axioms (givens), set operational rules for manipulating symbols—transforming statements from one form to another (making inferences)—and see where it all leads. One
accepts as part of the system any statement that can be derived from the original statements and other statements already derived from the original ones, in accordance with the symbol manipulation
rules. The only requirement is internal consistency; everything else is arbitrary. One need not worry about what the system’s elements “really are,” and whether the axioms are true empirically is
irrelevant. Nor is it necessary to attach some physical meaning to the theorems that are derived from them. Nothing need correspond to anything that is believed to be true of the physical world.
Correspondence is not prohibited, just irrelevant. If the axioms happen to be empirically true, and the proofs of the theorems are valid, the theorems can be assumed to be empirically true as well,
but this would be an extra benefit. And mathematics does not provide the wherewithal to determine whether the axioms are true. It should be clear that this conception allows the existence of many
mathematical systems—it is not required that one system be consistent with another. Formalism is the most abstract of the various schools. Brainerd (1979) also describes formalism as the view that
mathematics is a game played on sheets of paper with meaningless symbols. Mathematical formulas may be applied to real-world problems, and when they are, they acquire meaning and can be said to be
true or false. “But the truth or falsity refers only to the physical interpretation. As a mathematical formula apart from any interpretation, it has no meaning and can be neither true nor false” (p.
139). Bell (1946/1991) similarly describes the formalist’s conception of mathematics as that of “a meaningless game played with meaningless marks or counters according to humanly prescribed rules—the
humanly invented rules of deductive logic” (p. 339). Byers (2007) describes formalism as an ambitious—but failed— attempt to remove subjectivity from mathematics. The attempt failed, he contends,
because mathematics is infinitely richer and more interesting than is apparent in the picture presented by formalism. Moreover, “the attempt to access objective truth and certainty by means of logic
is fundamentally flawed, and not only because of the implications of Gödel-like theorems. Logic does not provide an escape from subjectivity. After all, what is logic, in what domain does it reside?
Surely logic represents a certain way of using the human mind. Logic is not embedded in the natural world; it is essentially a subjective phenomenon” (p. 354). It should
be apparent, Byers argues, that formalism misses most of mathematics. “Where do the axioms come from that form the foundations of a formal system?” Clearly they come from human thought. And the
axioms that people propose are not arbitrary, although there is nothing in the perspective of formalism to prevent them from being so. Most axioms that might conceivably be proposed would not be
considered by most mathematicians to be worth exploring, and decisions about which possibilities are worth exploring and which are not must come from outside the tenets of formalism and are
subjective through and through. The foregoing is a rough partitioning of the major schools of thought regarding the foundations of mathematics. I make no claim to it being the best partitioning that
can be done. Missing from it are several other perspectives that are mentioned in the literature, including empiricism (mathematics is discovered by empirical research), conventionalism (the “truths”
of mathematics are true only in the sense of being agreed to by society), fictionalism (mathematical “truths” are fictional, though useful), and inventionism (mathematics is what mathematicians do,
period). For present purposes, the main point is that mathematics is not seen by all mathematicians through the same lens, and I doubt the possibility of classifying the various perspectives in a way
with which all mathematicians will agree. How the prevailing view of the nature of mathematics has changed over the centuries is described by Maor (1987) this way: Ever since the time of Thales and
Pythagoras, mathematics has been hailed as the science of absolute and unfailing truth; its dictums were revered as the model of authority, its results trusted with absolute confidence. “In
mathematics, an answer must be either true or false” is an age-old saying, and it reflects the high esteem which layman and professional alike have had for this discipline. The 19th century has put
an end to this myth. As Gauss, Lobachevsky, and Bolyai have shown, there exist several different geometries, each of which is equally “true” from a logical standpoint. Which of these geometries we
accept is a matter of choice, and depends solely on the premises (axioms) we agree upon. In our own century Gödel and Cohen showed that the same is true of set theory. Since most mathematicians agree
that set theory is the foundation upon which the entire structure of mathematics must be erected, the new discoveries amount to the realization that there is not just one, but several different
mathematics, perhaps justifying the plural “s” with which the word has been used for centuries. (p. 258)
Hersh (1997) contends that, despite the fact that formalism had become the predominant position in textbooks and other official writing on mathematics by the mid-20th century, nearly all mathematicians were,
and are,
closet Platonists. “Platonism is dominant, but it’s hard to talk about in public. Formalism feels more respectable philosophically, but it’s almost impossible for a working mathematician to really
believe it” (p. 7). Or as Davis and Hersh (1981) put it, “Most writers on the subject seem to agree that the typical working mathematician is a Platonist on weekdays and a formalist on Sundays” (p.
321). Formalists and Platonists can coexist harmoniously because they do not differ on the matter of how to go about proving theorems; they differ only on the philosophical question of whether
mathematics exists independently of mathematicians, and this is a difference that need not get in the way of doing mathematics. In contrast, the position of the constructivists, of whom there were
relatively few by mid-century, illegitimizes any areas of mathematics (e.g., those involving infinities) that cannot be constructed from the natural numbers. Dehaene (1997) criticizes formalism on
the grounds that it does not provide an adequate explanation of the origins of mathematics. And clearly its content is anything but arbitrary. “If mathematics is nothing more than a formal game, how
is it that it focuses on specific and universal categories of the human mind such as numbers, sets, and continuous quantities? Why do mathematicians judge the laws of arithmetic to be more
fundamental than the rules of chess? … And, above all, why does mathematics apply so tightly to the modeling of the physical world?” (p. 243). Barrow (1995) also argues that formalists, like
inventionists, have a difficult time accounting for the practical usefulness of mathematics.
How Serious Is the Problem of Foundations?

Concern about foundations should come, if at all, after one has a firm intuitive grasp of the subject. (Kac, 1985, p. 111)
The inadequacy of mathematics’ logical foundation, when it first came to light, was perceived by many as a fixable problem, and it was the various efforts to fix it that eventuated in the several
different schools of mathematical thought, distinguished by what the fixers believed the foundation should be. Undoubtedly the best known, and probably the most ambitious, of these efforts was
Whitehead and Russell’s (1910–1913) Principia Mathematica. The laboriousness of this task and the lack of impact of the result on practicing mathematicians are captured in a comment by Hersh (1997):
“I’m told that finally on page 180 or so they prove 1 is different from 0” (p. 28).
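The contrast with later axiomatizations is worth making concrete. Whitehead and Russell had first to construct the numbers from purely logical materials before such a fact could even be stated; in a Peano-style presentation of arithmetic, where 1 is defined as s(0) and one axiom states that 0 is not the successor of any number, the same fact falls out in a single step (the notation below is a schematic gloss of mine, not Whitehead and Russell’s):

\[
\forall x\,\big(s(x) \neq 0\big) \;\;\vdash\;\; s(0) \neq 0, \qquad \text{that is,}\quad 1 \neq 0 .
\]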
The debate on the issue of what would constitute an adequate foundation was ongoing when Kurt Gödel, in his 1931 paper “On Formally Undecidable Propositions of Principia Mathematica and Related
Systems,” demonstrated that none of the collections of principles adopted by any of the schools was adequate to prove the consistency of any mathematical system sufficiently complicated to include
arithmetic. Gödel himself did not subscribe to the formalist’s idea that mathematics has no meaning; generally considered a Platonist, he believed that the axioms of set theory “force themselves upon
us as being true” and held that mathematical intuition is no less reliable than sense perception (Gödel, 1947/1964, p. 272). Dyson (1995) credits Gödel’s theorem with showing that pure reductionism
does not work in mathematics: “To decide whether a mathematical statement is true, it is not sufficient to reduce the statement to marks on paper and to study the behavior of the marks. Except in
trivial cases, you can decide the truth of a statement only by studying its meaning and its context in the larger world of mathematical ideas” (p. 6). Moore (2001) describes the consequences of
Gödel’s work for the foundations of mathematics this way: “What Gödel showed was that no such system [set theory] would ever be strong enough to enable us to prove every truth about sets—unless it
was inconsistent, in which case it would enable us to ‘prove’ anything whatsoever, true or false” (p. 130). Rucker (1982) likens Gödel’s demonstration of the incompleteness of mathematics to the
Pythagoreans’ discovery of the irrationality of √2. What the incompleteness theorem makes clear, he argues, is the distinction between truth and provability. “If we have correct axioms, the provable
statements will all be true, but not all the true statements will be provable” (p. 207). But this does not mean that provability is a useless concept; we cannot do away with it in favor of truth,
“because we have no finite definition of what ‘truth’ means” (p. 207). So we see that there are many schools of thought regarding the foundations of mathematics. Precisely how many there are is a
matter of opinion. I have noted seven kernel ideas, but I am not contending that each of them represents a unique school of thought. However the main schools are defined, boundaries between them are
likely to be fuzzy. As we have seen, Barrow (1995) identifies five schools, including constructivism and intuitionism, but he equates these two schools by noting that, because of its appeal to
intuition as the justification of the basic building blocks, constructivism became known as intuitionism. I have noted too that Dehaene (1997) sees constructivism as an extreme form of intuitionism,
but does not equate the terms, because he accepts intuitionism as the most plausible of the various schools, while rejecting what he considers to be its most extreme form. There are many other
instances of imprecise
boundaries between, or overlap among, the schools as conceptualized by different writers. The situation has not improved in recent years. Each of the major schools, however one identifies them,
developed various factions within them. The foundations remain as much a matter of dispute as ever. “The claim therefore to impeccable reasoning must be abandoned…. No school has the right to claim
that it represents mathematics. And unfortunately, as Arend Heyting remarked in 1960, since 1930 the spirit of friendly cooperation has been replaced by a spirit of implacable contention” (Kline,
1980, p. 276). The spirit of contention notwithstanding, Hersh (1997) claims that one view—logico-set theoreticism—dominates the philosophy of mathematics today, but that it is a tenuous domination,
“not clear and indubitable like elementary logic, but unclear and dubitable” (p. 149). Even what constitutes a set is a matter of some dispute. The concept and its ramifications for mathematics
appear to be continuing to evolve. “The universe of set theory is infinitely elusive; if there is one thing we can be sure of, it is that the set theories of the future will be vastly more inclusive
than anything we have ever dreamed of” (Rucker, 1982). The issue of foundations is far from settled. In view of the lack of agreement among mathematicians regarding the foundations of the discipline
and of the many shortcomings that have been identified in the original developments of geometry, arithmetic, algebra, the calculus, and other areas of mathematics, it is remarkable that mathematics
remains the robust and immensely useful undertaking that it is. Why, especially given its “independence” from physical truth, should it be so useful in describing, and in facilitating control of, the
physical world? This question was the focus of the preceding chapter. Here I wish to point out four facts that illustrate the intuitive basis of mathematics and of reasoning more generally. First,
concerted efforts were made during the 19th century by George Boole, Ernst Schröder, Charles Peirce, Gottlob Frege, Giuseppe Peano, and others to “rigorize” both logic and mathematics. Kline (1980),
who describes these efforts, notes that they revealed something about the development of mathematics, because the guarantee of soundness they were intended to provide turned out to be largely
gratuitous. “Not a theorem of arithmetic, algebra, or Euclidean geometry was changed as a consequence, and the theorems of analysis had only to be more carefully formulated…. In fact, all that the
new axiomatic structures and rigor did was to substantiate what mathematicians knew had to be the case. Indeed, the axioms had to yield the existing theorems rather than different ones because the
theorems were on the whole correct. All of which means that mathematics rests not on logic but on sound intuitions” (p. 194). I would only add that logic too must rest on sound
intuitions, inasmuch as there is nothing else from which it can get its sanction. Second, mathematicians, logicians, and philosophers argue about the logical foundations of mathematics or the
mathematical foundations of logic, and they disagree among themselves about many basic points relating to rules of reasoning. In view of the existence of several different schools of thought
regarding the assumptions on which mathematical and logical reasoning rests, it is remarkable, perhaps ironic, that arguments about these issues are not impeded by the unresolved question of the
nature of argumentation. People from different schools of thought seem to be able to engage in these arguments without reserve; their different perspectives about the basics do not seem to get in the
way at all. This is surprising. One might think that unless there could be agreement on the rules, there could not really be a dispute. Apparently, at least for practical purposes, there is a greater
degree of tacit agreement regarding the rules of argument than the disputes about foundations might lead one to believe. Third, disputes about foundations seem not to have interfered much, if at all,
with the application of mathematics to the solution of practical problems of a wide variety of types. For the most part, applied mathematicians have not only continued using mathematics to great
advantage despite the arguments about how the entire enterprise is to be justified, but they have essentially ignored the fact that the arguments were going on. In many cases, concepts that have
frustrated mathematicians in recent years were simply applied to problems by their first users without much concern for philosophical or metaphysical significance. The differential calculus had been
used to good effect for 200 years following its development by Newton and Leibniz before there existed a well-founded explanation of why it worked; as Stewart (1995b) points out, while mathematicians
were worrying about how to make the calculus sound and philosophers were declaring it nonsensical, physicists were using it successfully to understand nature and to predict its behavior. The enigma
of the usefulness of mathematics was deepened by the formalist’s view that mathematics is strictly symbol manipulation and bears no necessary relationship to the physical world. It was deepened again
by the demonstration that the people who best understood the symbol manipulation that constitutes mathematics could not agree on the nature or the adequacy of the foundation on which it rests. But
enigma or no, mathematics continues to work; eclipses are predicted, bridges are built, satellites are launched, and new applications are found continually. Fourth, suppose that mathematicians did
agree with respect to what constitute the foundations of mathematics; how would we know whether, in their aggregate wisdom, they were right? The history of
mathematics is full of examples of beliefs held by one generation of mathematicians that were considered untenable by subsequent generations. Again, the same observation can be made with respect to
the foundations of logic. The mathematicians and the logicians may or may not come to agree among themselves on these matters, but you and I have to decide whether to accept what they say, and to do
that, unless we are willing to act on blind faith, we have to appeal to our intuitions—to accept what seems intuitively to us to be right and to reject what does not.
Discoveries or Inventions?

How “real” are the objects of the mathematician’s world? From one point of view it seems that there can be nothing real about them at all. Mathematical objects are just
concepts; they are the mental idealizations that mathematicians make, often stimulated by the appearance and seeming order of aspects of the world about us, but mental idealizations nevertheless. Can
they be other than mere arbitrary constructions of the human mind? At the same time there often does appear to be some profound reality about these mathematical concepts, going quite beyond the
mental deliberations of any particular mathematician. It is as though human thought is, instead, being guided towards some eternal external truth—a truth which has a reality of its own, and which is
revealed only partially to any of us. (Penrose, 1989, p. 95)
Does mathematics exist independently of human minds; are its concepts and relationships there to be discovered, or are they entirely human inventions? Do the concepts with which mathematics deals
represent things that exist in the real world, outside the heads of mathematicians? Is there really such a thing as an infinitesimal? A circle? A number? Where does one look in the real world to find
an infinity? Are these constructs only computational conveniences, invented to facilitate the solving of certain types of problems, but having no real referents? Is it the case, as Carl Hempel (1945/
1956b) argues, that “the propositions of mathematics are devoid of all factual content” and “convey no information whatever on any empirical subject matter” (p. 1631)? These are not new questions;
they have been asked many times, and certainly not for the last time. Given the still unsettled question of the foundations of mathematics, and the existence of several schools of thought as to what
the essence of mathematics is, it should not be surprising that these questions are also unsettled. When mathematicians prove new theorems or develop new areas of mathematics are they engaged in a
process of discovery or one of
invention? Are they learning something about reality—mathematical reality—or are they creating symbolic systems that are completely arbitrary except for the requirement to be internally consistent as
judged by some prescribed logic. As we have seen, mathematicians themselves have not been of one mind on this question. Hardy (1940/1989) considered mathematical theorems to be discoveries, not
creations. Mathematical reality, in his view, exists outside us, and the mathematician’s function is to observe that reality. He considered the reality of mathematics to be more real, in some
fundamental sense, than the subject matter of physics. We cannot know what the subject matter of physics really is or really is like, but only what it seems to be. Mathematical objects in contrast
are what they seem to be: “317 is a prime, not because we think so, or because our minds are shaped in one way rather than another, but because it is so, because mathematical reality is built that
way” (p. 130). A similar view is expressed by Polkinghorne (2006). “It is difficult to believe that they [the truths of mathematics] come into being with the action of the human mind that first
thinks them. Rather their nature seems to be that of ever-existing realities which are discovered, but not constructed, by the explorations of the human mind” (p. 90). Penrose (1989) takes
essentially the same position. “The Mandelbrot set is not an invention of the human mind: it was a discovery. Like Mount Everest, the Mandelbrot set is just there! Likewise, the very system of
complex numbers has a profound and timeless reality which goes quite beyond the mental constructions of any particular mathematician” (p. 95). The history of complex numbers is instructive in this
regard, Penrose argues. Although this construct was introduced originally for the specific purpose of enabling the taking of square roots of negative numbers, its use brought additional unanticipated
benefits. But the properties that yielded these benefits were there before they were discovered. “There is something absolute and ‘God-given’ about mathematical truth,” Penrose (1989) contends. “Real
mathematical truth goes beyond man-made constructions” (p. 112). Penrose qualifies this view to the extent of allowing that sometimes mathematicians do invent constructs in order to achieve specific
goals, such as the proof of a recalcitrant theorem. What distinguishes true discoveries from such inventions, in his view, is that more comes out of the structures that result from discoveries than
is put into them in the first place, whereas this is not true in the case of inventions. Not surprisingly, Penrose holds mathematical discoveries in higher regard than mathematical inventions, as he
distinguishes them. Beyond perceiving mathematics to be real, independently of the discovery of its truths, some see mathematics as more basic than physical
reality in the sense of dictating what is possible in any world. “Far from being an arbitrary creation of the human mind, such mathematical facts [of logic, arithmetic, geometry, and probability]
have (in my view) universally held before the emergence of life, constraining what is possible in any world. Indeed, abstract mathematical constraints may have determined not only the form of the
universe and its physical laws (as some theoretical physicists now suggest) but also the forms of evolutionarily stable strategies, of sustainable social practices, and of the laws of individual
thought, whenever and wherever life emerged. Leibniz’s claim that this is the best of all possible worlds may have been correct only in that, at the level of abstract principles anyway, this is the
only possible world” (Shepard, 1995, p. 51). Kline (1953a) sees things differently: “Mathematics does appear to be the product of human, fallible minds rather than the everlasting substance of a
world independent of man. It is not a structure of steel resting on the bedrock of objective reality but gossamer floating with other speculations in the partially explored regions of the human mind”
(p. 430). In a more recent book, Kline (1980) makes the argument that mathematics, or at least the best mathematics, has been developed by people whose main interest in the subject was what it could
help reveal about the nature of the physical world; nevertheless, mathematics, per se, he sees as “a human activity and subject to all the foibles and frailties of humans. Any formal, logical account
is a pseudo-mathematics, a fiction, even a legend, despite the element of reason” (p. 331). Over half a century ago, Kasner and Newman (1940) dismissed Platonism as a view of the past: “We have
overcome the notion that mathematical truths have an existence independent and apart from our own minds. It is even strange to us that such a notion could ever have existed…. Today mathematics is
unbound; it has cast off its chains” (p. 359). More recently, Dehaene (1997) takes a similar position in contending that Platonism “leaves in the dark how a mathematician in the flesh could ever
explore the abstract realm of mathematical objects. If these objects are real but immaterial, in what extrasensory ways does a mathematician perceive them?” (p. 242). The feeling that Platonists have
that they are studying real objects that exist independently of the human mind is, in Dehaene’s view, an illusion. I think it fair to say, however, that Platonism, while perhaps not the prevailing
view, is far from dead in the 21st century, and its persistence proves the point that one person’s illusion is another’s vision. Lakoff and Núñez (2000) argue that if transcendental Platonic
mathematics exists, human beings can have no access to it. Human understanding of mathematics is limited by the affordances and constraints of the human brain and mind, and there is no way of knowing
Foundations and the “Stuff” of Mathematics
proved theorems have any objective truth external to human beings. Belief in Platonic mathematics, in their view, is a matter of faith, about which there can be no scientific evidence, for or
against. They reject the idea—which they see as part of a myth that they call “the romance of mathematics”—that “mathematics is an objective feature of the universe; mathematical objects are real;
mathematical truth is universal, absolute, and certain” (p. 339). Lakoff and Núñez (2000) make the telling point that “there is no way to know whether theorems proved by human mathematicians have any
objective truth, external to human beings or any other beings,” and argue that “all that is possible for human beings is an understanding of mathematics in terms of what the human brain and mind
afford” (p. 2). So much seems obvious, but of course this does not make mathematics unique; human understanding generally must be limited by the conceptual affordances and constraints of the human
brain and mind. Giving the perspective of a cultural anthropologist, White (1947/1956) defends the contention that the opposing propositions—(1) that mathematical truths exist independently of the
human mind and (2) that mathematical truths have no existence apart from the human mind—are both valid. His acceptance of such apparently contradictory assertions rests on the idea that “the human
mind” has different referents in the two assertions. In the first, it refers to the mind of an individual person; in the second, it refers to cultural tradition of mankind as a whole. “Mathematical
truths exist in the cultural tradition into which the individual is born, and so enter his mind from the outside. But apart from cultural tradition, mathematical concepts have neither existence nor
meaning, and of course, cultural tradition has no existence apart from the human species. Mathematical realities thus have an existence independent of the individual mind, but are wholly dependent
upon the mind of the species” (p. 2350). Are the truths of mathematics discovered, or are they man-made? They are both, White contends: “They are the product of the mind of the human species. But
they are encountered or discovered by each individual in the mathematical culture in which he grows up” (p. 2357). The numerous occurrences of simultaneous “discoveries,” White argues, demonstrate
the importance of the accumulated store of mathematical knowledge in producing them; the role of the individual human brain is only that of a “catalytic agent” in the cultural process. “In the
process of cultural growth, through invention or discovery, the individual is merely the neural medium in which the ‘culture’ of ideas grows” (p. 2358). The quality of the individual human brain has
not changed over recorded time, White argues, but recent mathematical discoveries or inventions could not have been made before the prerequisite mathematics had been developed; but “when the cultural
elements are present, the discovery or invention becomes so inevitable that it takes place independently in two or three nervous systems at once” (p. 2359). Byers (2007) also argues that whether
mathematics is considered invented or discovered is a matter of perspective and that both perspectives are legitimate, despite the fact that they conflict. “‘Discovery’ and ‘invention’ evoke equally
valid, consistent frames of reference that are clearly in conflict with one another” (p. 360). “Mathematics is that one unified activity that looks like discovery when you think of it from one point
of view and appears to be invention when regarded from another” (p. 361). Byers makes the perspective a matter of which side of some mathematical truth one is on. Before the establishment of some
relationship, the work looks like what is needed is some creativity; after the fact it looks like something has been discovered—“there is the sense that it was there all the time, waiting for us” (p.
362). Invention emphasizes the subjective aspect of mathematics; discovery emphasizes the objective nature of relationships that are found. These ideas are illustrative of Byers’s emphasis on the
role of ambiguity as a major source of mathematical creativity. It is clear from all this that there is a profound difference of opinion among mathematicians as to whether the business of mathematics
is discovery or invention. Bell (1946/1991), who discusses this difference at length, suggests that it is an irreconcilable one and one that has significant implications for other views that one may
hold. “Neither the necessity nor the universality [of mathematics] is taken for more than a temporary appearance by those who believe mathematics and logic to be of purely human origin. Others,
including many who believe that numbers were discovered rather than invented, find in mathematics irrefutable proof of the existence of a supreme and eternal intelligence pervading the universe. The
former regard mathematics as variable and subject to change without warning; the latter see mathematics as a revelation of permanence throughout eternity, marred only by such imperfections as are
contributed by the inadequacies of human understanding” (p. 60). The development of non-Euclidean geometries, which is usually taken to be the event that, above all others, destroyed the idea that
the axioms of geometry represent self-evident truths about the physical world, also prompted the distinction between mathematical and physical geometries, and the attendant assumption that only the
latter correspond necessarily to the characteristics of physical reality. With respect to the question of which, if any, of the various geometries that have been developed is true, Bell (1946/1991)
points out that each, including Euclid’s, “when obvious blemishes are removed,” is self-consistent
and therefore true in a mathematical sense. With respect to which is physically true, each can be usefully applied to physical world problems and therefore can be considered true in the physical
sense for the range of phenomena for which it is appropriate, but inasmuch as these geometries are mutually incompatible, no two are both factually true for the same range. The freedom that geometers
realized, following the work of Bolyai, Gauss, Lobachevski, and Riemann, to invent new geometries without thought of the “obvious truth” of their axioms was soon claimed also by algebraists who began
to invent algebras whose axioms needed no justification beyond internal consistency. However, the freedom referred to here is highly constrained. The mathematician who would invent a new area of
mathematics is free to select the axioms that define the area with no requirement that the system bear any relationship to the physical world, but beyond the “givens,” it is anything but arbitrary.
And determining what the givens imply is very much a matter of discovery. Jourdain (1956) distinguishes between Mathematics (capital M), “a collection of truths of which we know something,” and
mathematics (small m), “our knowledge of mathematics.” In his view, Mathematics (M) is eternal and unchanging; mathematics (m) changes over time. Mathematics (M) is truth; mathematics (m) represents
what has been discovered about that truth at any point in time. Jourdain’s distinction provides a framework in which it is possible to fit other views. Borrowing his terms, one would say that Hardy
and Polkinghorne had in mind Mathematics (M), whereas Kline was speaking of mathematics (m). They, of course, might not agree with having their views pigeonholed in this way. Perhaps on the question
of discovery versus invention, we can do no better than admit that, to a large extent, the answer is a matter of perspective, and that it can change from time to time. Mazur (2003) puts the
difficulty of getting an either–or answer to the question that will remain stable this way: “On the days when the world of mathematics seems unpermissive with its gem-hard exigencies, we all become
fervid Platonists (mathematical objects are ‘out there,’ waiting to be discovered—or not) and mathematics is all discovery. And on days when we see someone who, Viète-like, seemingly by will power
alone, extends the range of our mathematical intuition, the freeness and open permissiveness of mathematical invention dazzle us, and mathematics is all invention” (p. 70). Or, one might accept
Hersh’s (1997) contention that mathematics involves both discovery and invention, that the two processes distinguish two kinds of mathematical advance. “When several mathematicians solve a well-stated
problem, their answers are identical. They all discover that answer. But
when they create theories to fulfill some need, their theories aren’t identical. They create different theories…. Discovering seems to be completely determined. Inventing seems to come from an idea
that just wasn’t there before its inventor thought of it. But then, after you invent a new theory, you must discover its properties, by solving precisely formulated mathematical questions. So,
inventing leads to discovering” (p. 74). The disagreement among mathematicians on the question of whether their work has to do primarily with discovery or invention, or both, appears to be a deep and
philosophically grounded one. It is an old disagreement that persists, and it is not likely to be resolved to everyone’s satisfaction anytime soon, if ever. Mathematics itself is not going to provide
the resolution. How one thinks about this question will depend, to no small degree, on other beliefs that one holds about the nature of reality and how one accounts for structure and regularity in
the universe. One cannot think about the nature of mathematics at a very deep level without encountering philosophical questions that do not admit of mathematical answers. Throughout this book I have
referred to mathematical advances sometimes as inventions and sometimes as discoveries, using whichever term seemed more natural in the context, but I have not made a sharp distinction between the
two. I believe that in many cases, if not most, which term one uses is a matter of personal preference. If pressed to make a distinction, however, I would say that I find it natural to consider the
new realization of a mathematical relationship as a discovery and the construction of a proof of the relationship as an invention. Realizing that if the lengths of the two shorter sides of a right
triangle are x and y, the length of the remaining side is √(x^2 + y^2) seems to me to be appropriately considered a discovery, whereas the many proofs that have been offered of this relationship seem to
me to be better seen as inventions. More generally, theorems, in my view, are better considered to be discoveries, and proofs to be inventions (although I find it easier to consider the latter to be
discoveries than to consider the former to be inventions). Other examples of what seem to me to be appropriately considered inventions are numerals, symbols, notational conventions, axioms, and
algorithms. Discoveries include conjectures (which sometimes morph to proofs), relationships (that the area of a rectangle is the product of its length and width, that the circumference of a circle
is the product of π and its diameter), and applications (discoveries that certain types of mathematics can be usefully applied to specific practical purposes). In this view, the answer to the
question of whether mathematicians are in the business of discovering or inventing is that they are in the business of doing both.
More on Mathematics and Logic

Mathematics is the science of the logically possible. (Le Corbeiller, 1943/1956, p. 876)
Given the deductive nature of theorem proving, and much of mathematical reasoning more generally, it would be surprising if mathematics bore no relationship to logic. And indeed, no one, to my
knowledge, argues that mathematics and logic are independent. As we have seen, different opinions have been expressed regarding exactly what the nature of the relationship is. Some consider
mathematics to be founded on logic; others would have it the other way around; still others see them as equivalent. Carl Hempel is in the first category. “Mathematics is a branch of logic. It can be
derived from logic in the following sense: (a) All the concepts of mathematics … can be defined in terms of four concepts of pure logic. (b) All the theorems of mathematics can be deduced from those
definitions by means of the principles of logic” (1945/1956b, p. 1630). Others who share Hempel’s view that mathematics can be derived from logic include Lewis and Langford (1932/1956), Nagel (1936/
1956), and Morris (1987), the last of whom defines mathematics as “the science of finding logical consequences” (p. 179). American philosopher and polymath Charles Sanders Peirce (1839–1914) is in
the second category. “It does not seem to me that mathematics depends in any way upon logic. It reasons, of course. But if the mathematician ever hesitates or errs in his reasoning, logic cannot come
to his aid. He would be far more liable to commit similar as well as other errors there. On the contrary, I am persuaded that logic cannot possibly attain the solution of its problems without great
use of mathematics. Indeed all formal logic is merely mathematics applied to logic” (1902/1956a, p. 1773). Bertrand Russell (1901/1956a) represents the third position. Commenting on British
philosopher-mathematician George Boole’s Laws of Thought, he says it “was in fact concerned with formal logic, and this is the same thing as mathematics” (p. 1576). Others, however, have taken the
monumental work of Whitehead and Russell (1910–1913), as represented in the Principia Mathematica, as establishing the primacy of logic and the dependence of mathematics on it: Lewis and Langford
(1932/1956) point to the Principia as the warrant for their claim: “It [mathematics] follows logically from the truths of logic alone, and has whatever characters belong to logic itself” (p. 1875).
There is general agreement that the relationship between mathematics and logic is very close, but not on the question of the precise nature
of that relationship. As the cited comments show, opinions differ, in particular, regarding which should be considered basic and which derivative. Whichever way one jumps on this issue, one is
assured of landing on something less than infinitely firm ground. If mathematics underlies logic, its own foundations are very uncertain, and if logic is the more basic, where does it get its
warrant? As Austrian economist-philosopher Ludwig von Mises points out, “There is by no means an eternally valid agreement about the admissible methods of logical deduction” (1951/1956, p. 1733). The
distinction between mathematics and logic has been blurred considerably in recent years by the appearance of the digital computer on the scene. Especially is this true with respect to computation,
admittedly only one aspect of mathematics, but an important one. It is very difficult to tell, when one looks at the operation of a digital computer at the level of the most basic operations, what to
consider logic and what to consider mathematics. At one level of description, a computer is a device that operates on binary variables, changing their values from 0 to 1, or from 1 to 0, or leaving
them alone. Inasmuch as each binary variable can have either of two values, the values of n binary variables can be combined in 2^n ways. Table 13.2 shows all possible combinations of the values of
three binary variables.
Table 13.2. All Possible Combinations of Three Binary Variables

A B C
0 0 0
0 0 1
0 1 0
0 1 1
1 0 0
1 0 1
1 1 0
1 1 1
Table 13.3. Truth Table for the Propositional Connective and

A B A and B
0 0 0
0 1 0
1 0 0
1 1 1
A function of binary variables is a binary variable whose value depends on the value or values of one or more other binary variables. Thus, if A and B are binary variables, and the value of B is
invariably opposite that of A (if A = 0, B = 1, and if A = 1, B = 0), B may be said to be a function of A. Or if A, B, C, and D are all binary variables, and the value of D is 1 if and only if (iff)
the values of A, B, and C are 0, 0, and 1, respectively, we may say that D is a function of A, B, and C. A, B, and C are said to be independent variables, whereas D is referred to as a dependent
variable, reflecting that the value of D depends on the values of A, B, and C. Functional relationships among binary variables can be represented in a variety of ways, one of which is the truth
table. In the logic of propositions, truth tables are used to show the dependency of a compound statement on the truth or falsity of its components. For example, if we let A represent the statement
“Today is Friday” and B the statement “Today is the 13th,” then Table 13.3 shows the dependency of the truth (represented by 1) or falsity (0) of the compound statement “Today is Friday (and today
is) the 13th” on the truth (or falsity) of each of the statement’s components. As indicated in the table, given the connective and, the compound statement is true if and only if both of its component
statements are true. The truth table for a two-variable function (as shown in Table 13.3) has four rows (to accommodate the four possible combinations of the values of the two variables). A function
of the two variables (represented in the table by A and B) can have either 0 or 1 associated with each row of the table, so the total number of functions that can be defined on two binary variables
is 2^4, or 16; more generally, given n binary variables, we are able to define 2^(2^n) functions. Table 13.4 shows all 16 functions of two binary variables, and Figure 13.1 shows the same functions as
Venn diagrams.
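To make the counting concrete, the enumeration just described can be carried out mechanically. The following sketch, in Python (my illustration, not anything drawn from the sources cited; the ordering it prints is simply lexicographic and need not match the F1 through F16 labels of Table 13.4), lists every function of two binary variables as a column of outputs over the four input combinations, and writes out a few of the named functions directly for comparison.

# Minimal sketch: enumerate all Boolean functions of two binary variables.
from itertools import product

inputs = list(product((0, 1), repeat=2))         # the four combinations of A and B
print("number of functions:", 2 ** len(inputs))  # 2^(2^2) = 16

for outputs in product((0, 1), repeat=len(inputs)):
    table = dict(zip(inputs, outputs))           # one truth table per output column
    print("  ".join(f"{a}{b}->{table[(a, b)]}" for a, b in inputs))

# A few of the named functions, written directly:
AND  = lambda a, b: a & b          # and: true only when both inputs are 1
XOR  = lambda a, b: a ^ b          # exclusive or: A or B but not both
OR   = lambda a, b: a | b          # inclusive or: A or B or both
COND = lambda a, b: (1 - a) | b    # material conditional: not-A or B
print([COND(a, b) for a, b in inputs])  # false only for A = 1, B = 0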
Table 13.4. Sixteen Functions of Two Binary Variables

Function  Description                          A=0,B=0  A=0,B=1  A=1,B=0  A=1,B=1
F1        not-(A or not-A)                        0        0        0        0
F2        A and B                                 0        0        0        1
F3        A and not-B                             0        0        1        0
F4        not-A and B                             0        1        0        0
F5        not-(A or B)                            1        0        0        0
F6        A                                       0        0        1        1
F7        B                                       0        1        0        1
F8        (A and B) or (not-A and not-B)          1        0        0        1
F9        (A and not-B) or (not-A and B)          0        1        1        0
F10       not-B                                   1        0        1        0
F11       not-A                                   1        1        0        0
F12       A or B                                  0        1        1        1
F13       A or not-B                              1        0        1        1
F14       not-A or B                              1        1        0        1
F15       not-(A and B)                           1        1        1        0
F16       A or not-A                              1        1        1        1
If one looks carefully at the table, one sees that the rows representing functions can be divided into subsets on the basis of the number of 1s each contains. There is one row with no 1s, four with
one, six with two, four with three, and one with four, which is to say that there is one function (F1) that has the value 1 for none of the possible combinations of A and B, there are four functions
(F2 – F5) that have the value 1 for
Figure 13.1 Venn diagrams of the 16 functions of two binary variables. The filled portion of each diagram shows the conditions under which the function is true.
a single combination of A and B, there are six (F6 – F11) that have the value 1 for two combinations, and so on. The numbers 1, 4, 6, 4, 1 are the coefficients of the successive terms of the
expansion of a binomial raised to the fourth power, (a + b)^4. Any particular binomial coefficient, m!/(k!(m − k)!), represents the number of combinations that can be made from m things taken k at a
time. To generate all possible functions of two binary variables, we have combined the basic combinations (the four possible combinations of A and B) in all possible ways, taking zero at a time, one
at a time, two at a time, and so on. Given that the number of functions of n binary variables grows as 2^(2^n), the number of functions increases with n very rapidly. With three binary variables, the
number of functions is 256; with four, it is 65,536; and with five, it is over 4 billion. Some of the functions in Table 13.4 are familiar from common parlance. F2, as already noted, is and; F9 and
F12 are, respectively, exclusive or (A or B but not both), and inclusive or (A or B or both). F6 is A, F7 is B, F11 is not-A, and F10 is not-B. F14 deserves special attention. This function
represents the truth functional interpretation of If A then B, known generally in treatments of conditional reasoning as the material conditional. If A then B is considered
Figure 13.2 Logic gates for nor (top) and nand (bottom) functions.
to be false only in the case that A is true and B is false. The same function may be described also as not-A or B. (Generally when or is used without a qualifier, it is intended to represent
inclusive or, as it does here.) If … then … statements in everyday discourse frequently—perhaps more often than not—have an interpretation other than the material conditional. Two other functions in
Table 13.4 that deserve special mention are F5 and F15. The first, not-(A or B), is referred to as nor (short for neither … nor …), and the second, not-(A and B), is referred to as nand (short for
not and). The nor function is conventionally represented by a down-pointing arrow—A↓B—which is sometimes referred to as the Peirce arrow, after American logician-philosopher Charles Peirce. The nand
function is typically represented by a vertical line—A|B—which is generally referred to as the Sheffer stroke, after American logician Henry Sheffer. Nor and nand are shown as logic gates in
Figure 13.2, top and bottom, respectively. Each of the gates in Figure 13.2 shows the output of a gate for a particular combination of inputs. For what follows it will be convenient to show a string
of outputs for strings of inputs, as represented in Figure 13.3. What makes the nor and nand functions special is that each is functionally complete, which is to say that a device that is capable of
performing any logical or mathematical function can be constructed from (multiple copies of) either one of them alone, and without the use of any other functions.
Figure 13.3 Showing the string of outputs yielded by a nor (left) and a nand (right) gate, given the indicated strings of inputs.
Table 13.5. And, or, and not Functions Constructed From Only nor or Only nand Functions

Function   Using Only nor (↓)   Using Only nand (|)
A and B    (A↓A) ↓ (B↓B)        (A | B) | (A | B)
A or B     (A↓B) ↓ (A↓B)        (A | A) | (B | B)
Not A      (A↓A)                (A | A)
The equations in Table 13.5 and the logic diagrams in Figure 13.4 show how to build and, or, and not functions from nor or nand functions. The “functional completeness” of nor and
nand follows from the interesting fact that one can get truth from falsity (“It is false that it is false” is equivalent to “It is true”) but not falsity from truth (“It is true that it is true” is
not equivalent to “It is false”). Logic circuitry in electronic computers is typically built from components representing the functions and, (inclusive) or, and not. And the same functions are used
to build circuits that can perform addition and other basic mathematical operations.
Figure 13.4 And, or, and not logic circuits built from nor (left) and nand (right) gates.
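The identities in Table 13.5 lend themselves to a direct check. The sketch below, in Python (again my illustration; gates are modeled simply as functions on 0/1 values), builds not, and, and or first from nand alone and then from nor alone, and verifies the constructions against the ordinary operators over all inputs.

# Minimal sketch: functional completeness of nand and of nor.
from itertools import product

def nand(a, b): return 1 - (a & b)   # not-(A and B), the Sheffer stroke
def nor(a, b):  return 1 - (a | b)   # not-(A or B), the Peirce arrow

# Built from nand only
def not_nand(a):    return nand(a, a)                     # A | A
def and_nand(a, b): return nand(nand(a, b), nand(a, b))   # (A | B) | (A | B)
def or_nand(a, b):  return nand(nand(a, a), nand(b, b))   # (A | A) | (B | B)

# Built from nor only
def not_nor(a):    return nor(a, a)                       # A↓A
def and_nor(a, b): return nor(nor(a, a), nor(b, b))       # (A↓A) ↓ (B↓B)
def or_nor(a, b):  return nor(nor(a, b), nor(a, b))       # (A↓B) ↓ (A↓B)

for a, b in product((0, 1), repeat=2):
    assert and_nand(a, b) == and_nor(a, b) == (a & b)
    assert or_nand(a, b)  == or_nor(a, b)  == (a | b)
    assert not_nand(a)    == not_nor(a)    == 1 - a
print("nand-only and nor-only constructions agree with and, or, and not")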
Figure 13.5 Half adder built from logic gates for and, or, and not.
Figure 13.5 shows, for example, how a circuit to add binary numbers can be built from logic gates for the functions and, or, and not. This circuit is a half
adder, which means that, given two binary digits as input, it will produce the sum (1 if exactly one of the inputs is 1; 0 if they are both 0 or both 1) and a carry (1 if both inputs are 1; 0 otherwise). To
make a full adder, which will accept three inputs, one representing the carry from the preceding addition of two digits, one would combine two half adders in the appropriate way, still using only
and, or, and not components. Figure 13.6 shows how a full adder can be constructed from only nor gates. (The reader may wish to try to sketch one that uses only nand gates.) It has always struck me
as surprising and more than a little interesting that a single function (either nor or nand) is all that is needed to produce any desired logical or mathematical operation. Real computers are not
built using only a single type of logic gate, because it is more efficient to use several types of gates, but in principle they could be.
Figure 13.6 A full adder made from only nor gates.
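The adder construction described above can also be written out in a few lines. The sketch below, in Python (my illustration rather than the book’s circuit, and built from and, or, and not gates rather than only nor gates as in Figure 13.6), implements a half adder, combines two half adders and an or gate into a full adder, and checks that the carry and sum bits reproduce ordinary addition of the three input bits.

# Minimal sketch: half adder and full adder from and, or, and not.
from itertools import product

def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def half_adder(a, b):
    # sum is 1 when exactly one input is 1 (xor, expressed with and/or/not);
    # carry is 1 only when both inputs are 1
    s = OR(AND(a, NOT(b)), AND(NOT(a), b))
    return s, AND(a, b)

def full_adder(a, b, carry_in):
    # two half adders plus an or gate combine the three input bits
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin   # the pair (carry, sum) encodes the total
print("full adder reproduces binary addition for all eight input combinations")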
The nor and nand functions are a bit of a mystery from a psychological point of view. It is curious that, despite their versatility, they are not much used in common discourse, as compared, say, to
the (inclusive) or and and functions, and they have not been of much interest to either linguists or cognitive psychologists. Presumably the lack of focus on them by researchers is a consequence of
their very little use in everyday language and reasoning. But why we, as a species, appear not to have developed much use for these remarkable logical elements is an interesting question. The purpose
of this little excursion into binary logic is to note the possibility of effecting any computation with a device built of simple logic circuitry. Does this demonstrate that mathematics rests on
logic? Only if mathematics is taken to be synonymous with computation, which it clearly is not. However, computation is an important part of mathematics, and that it can be accomplished with simple
logic gates is significant. Moreover, as has been becoming increasingly clear, those same logic gates can, when organized in sufficiently complex ways, yield performance of other types (pattern
recognition, problem solving) that are also important aspects of mathematics. What the limits are of what can be done with these gates remains to be seen.
Chapter 14
Preschool Development of Numerical and Mathematical Skills
What are the numerical and mathematical capabilities of children and how do they change over the first few years of life? How can children’s understanding of mathematics and their competence to solve
mathematical problems—to do mathematics—be enhanced by formal education? Much research has been addressed to these and closely related questions, especially during the last few decades. This chapter
and the following two focus on the acquisition of numerical and mathematical competence. This chapter deals primarily with the preschool years. The next one focuses on the learning and teaching of
elementary mathematics during the early school years. The chapter following that one relates to the acquisition of skill in mathematical problem solving. While the distinction between the ability to
perform mathematical operations, such as those of basic arithmetic, and mathematical problem solving is a common one (Dye & Very, 1968; Geary, 1994), I make it here largely as a matter of
organizational convenience; the distinction becomes fuzzy when one considers that what constitutes a problem for one person may not be a problem for another whose mathematical competence is at a
different level. Efforts have been made to identify distinct capabilities that are essential to the development of mathematical competence. Spelke (2005), for example, suggests that research has
provided evidence of five cognitive systems that are at the core of mathematical thinking by adults and
that serve the following purposes: representation of small exact numbers (one, two, three) of objects; representation (approximate) of large numerical magnitudes; number words and counting;
environmental geometry; and landmarks. Each of these component systems, she suggests, is active relatively early in life. While the following discussion is not organized in terms of these
hypothesized systems, or any other taxonomy, it supports the idea that mathematical reasoning, even as seen in very young children, is multifaceted and complex.
Numerosity Perception

According to the National Research Council’s Mathematics Learning Study Committee, a set of concepts associated with number is at the heart of preschool, elementary school,
and middle school mathematics, and most of the debate about how and what mathematics should be taught at these levels revolves around number (Kilpatrick, Swafford, & Findell, 2001). The National
Mathematics Advisory Panel (2008) distinguishes between competencies that comprise the number sense that most children acquire before formal schooling (“an ability to immediately identify the
numerical value associated with small quantities (e.g., 3 pennies), a facility with basic counting skills, and a proficiency in approximating the magnitudes of small numbers of objects and simple
numerical operations”) and the “more advanced type of number sense” that schooling must provide (“a principled understanding of place value, of how whole numbers can be composed and decomposed, and
of the meaning of the basic arithmetic operations of addition, subtraction, multiplication, and division” as well as an ability to deal with numbers “written in fraction, decimal, percent, and
exponential forms”) (p. 27). A rudimentary sensitivity to numerosity appears to be present very early in children, even, according to some observers, during the first few weeks, if not days, of life
(Antell & Keating, 1983; Strauss & Curtis, 1981; van Loosbroek & Smitsman, 1990). Evidence has been reported of the beginnings of counting-like behavior, such as the ability to distinguish sets with
different (small) numbers of members, before the end of the first year (Durkin, Shire, Riem, Crowther, & Rutter, 1986; Feigenson & Carey, 2003; Haith & Benson, 1998; Mix, Huttenlocher, & Levine,
2002a, 2002b; Starkey & Cooper, 1980; Wynn, 1995; Xu, 2003; Xu & Spelke, 2000). Studies suggest that infants less than one year of age are likely to notice differences between 2 and 3 and, in some
instances, perhaps between 3 and 4, but not between 4 and 5 (Starkey & Cooper, 1980; Strauss & Curtis, 1981). Six-month-olds can also distinguish between
sets of relatively large numbers of members provided the numerical differences are sufficiently large (Brannon, 2002; Lipton & Spelke, 2003). The behavioral evidence of this type of discrimination in
most studies of infants’ grasp of numerosity is the time spent looking at a display; an assumption underlying interpretation of their behavior is that infants tend to look longer at novel scenes than
at scenes with which they have become familiar—to which they have become habituated. When, following habituation with a display of two objects, an infant looks longer at a display of three objects
than at one of two objects, this is taken as evidence of recognition of the difference between two and three objects. In a different scenario, infants watch a number of objects being placed in a box
into which they can reach but cannot see. If they watch n objects being placed in the box and then reach in n (and only n) times to retrieve the objects one at a time, this is taken as evidence that
they are aware of the number of objects that were placed in the box. Use of this technique has shown that infants of approximately one year of age can keep track of up to three objects, but rarely up
to four (Feigenson & Carey, 2003). The beginnings of a sensitivity to changes in number are also seen during the first year of life. Infants give evidence of being surprised when adding an item to
another item appears to result in one rather than two items, when adding one item to two items appears to result in two items rather than three, or when removing one of two items appears to leave two
items rather than one (Koechlin, Dehaene, & Mehler, 1997; Simon, Hespos, & Rochat, 1995; Wynn, 1992a). The ability to distinguish numerosity is not limited to the distinction of numbers of objects.
Wynn (1996) showed that six-month-old infants can distinguish between a puppet jumping three times and the same puppet jumping twice. Nor is the ability limited to visual stimuli: Six-month-old
infants are also able to distinguish between a sound that occurs twice and one that occurs three times (Starkey, Spelke, & Gelman, 1990). The extent to which such distinctions are made on the basis
of numerosity per se, rather than correlates of numerosity (e.g., surface area covered by visual objects or the density of objects in a fixed space; the duration of an auditory sequence as distinct
from the number of sounds in it), has been a matter of debate, as has the question of whether numerosity information is stored in short-term memory in analog form (as magnitudes) or as discrete
representations of the perceived objects (Carey, 2001, 2004; Dehaene, 1997; Feigenson, Carey, & Hauser, 2002; Feigenson, Carey, & Spelke, 2002; Gallistel & Gelman, 1992; Pylyshyn, 2001; Wynn, 1992b;
Xu, 2003). That numerosity per se can be discriminated gets strong support from the finding that when shown side-by-side
visual displays, one with two common objects and one with three, six- to eight-month-old children tend to fixate longer on the display on which the number of objects matches the number of sounds they
hear from a loudspeaker positioned between the displays (Starkey, Spelke, & Gelman, 1983, 1990). Conceivably infants discriminate numerosity sometimes independently of other factors that typically
correlate with it, and sometimes on the basis of the correlates. Some investigators of quantity perception in children contend that children (perhaps animals as well) have two systems for
distinguishing quantities, one of which works primarily with very small quantities (one to three or four) and the other of which deals with anything larger (Brannon & Roitman, 2003; Carey, 2004; Mix
et al., 2002a). The first of these systems represents small quantities discretely; the second one represents larger quantities in an analog fashion. Carey (2004) argues that before they acquire
language, infants develop several different types of representations of numerical quantities, at least two of which are developed also by other species. One such representation codes numerical
quantities as analog magnitudes, the magnitude symbol (e.g., line length) increasing with the number of items in the set being represented. This type of representation serves to make quantitative
comparisons (which set has the larger number of members) and perhaps some elementary arithmetic operations, such as addition and subtraction. The second type of representation is discrete. In it
there is a one-to-one correspondence between the items being represented and the tokens in the representation; thus, a set of three items might be represented by a mental image of a set of three
boxes. Neither of these two systems includes symbols for numbers per se. By 30 months or so, many children appear to realize that a set of four items is larger than a set of three items (knowledge of
ordinality) even if they are unable to label correctly which set contains four and which contains three (knowledge of cardinality) (Bullock & Gelman, 1977). Some investigators contend that even at 18
months some children show a rudimentary awareness of ordinality—for example, realization, in some sense, that three items are more than, and not just different from, two (Cooper, 1984; Strauss &
Curtis, 1984).
Number Naming

Many children acquire the ability before attending school to say the number words, one, two, three, …, in the correct order at least to ten, and to recognize the numerals on sight
(Fuson, 1991; Gelman & Gallistel, 1978;
Sarnecka & Gelman, 2004; Siegler & Robinson, 1982). The ability to say the number words in sequence up to about 100 is acquired gradually by children from the time they are two years old until about
the age of six; however, numerous factors contribute to individual differences in this regard (Fuson & Hall, 1983). The ability to say which of two single-digit numbers is the larger is not likely to
be acquired before the age of four or five (Schaeffer, Eggleston, & Scott, 1974; Sekuler & Mierkiewicz, 1977), at which age children may also be able to judge whether a collection has more or fewer
objects than a specified (relatively small) number (Baroody & Gatzke, 1991) and which of two collections is closer to a specified number (Sowder, 1989). It appears that generally, though not
invariably, ordinal concepts are learned before cardinal concepts (Brainerd, 1979); however, less research has been addressed to the question of how children learn to use ordinal number words and
acquire an understanding of ordinal relationships than has been done on the acquisition of competence with cardinal numbers (Fuson & Hall, 1983). The verbal names of numbers are typically learned
before their visual representations, just as spoken words are learned before their visual representations. That brain injuries can result in loss of the ability to recognize spoken numbers while
leaving the ability to recognize printed numbers intact, or conversely, suggests that the two abilities are independently encoded neurologically (Anderson, Damasio, & Damasio, 1990; Cipolotti, 1995;
Cohen & Dehaene, 2000). Just as children generally can recognize letters and words in print before they can write them, they generally can recognize one- and two-digit numerals before they can write
them (Baroody, Gannon, Berent, & Ginsburg, 1983). Differences in the principles that guide number naming in different languages appear to have implications for the ease with which the names of
numbers greater than 10 are learned (Fuson, Richards, & Briars, 1982; Miller & Paredes, 1996). Asian children tend to master the number names for numbers above 10 more quickly than do English
speakers, possibly because of the greater regularity of number names in Asian languages (Miller, Smith, Zhu, & Zhang, 1995; Miller & Stigler, 1987; Miura, Okamoto, Kim, Steere, & Fayol, 1993; Miura
et al., 1994). That in English the names for 10, 11, and 12 are ten, eleven, and twelve, instead of, say, ten, ten-one, and ten-two, obscures the base 10 structure of the system and complicates the
English-speaking child’s learning that thirty is followed by thirty-one and thirty-two rather than by twenty-eleven and twenty-twelve. This feature is shared by most European languages. In contrast,
number names in Asian languages, such as Chinese and Japanese, typically make the base 10 nature of the system explicit in the use of names that are equivalent, in the specific languages, to ten-one,
ten-two, and so on.
Explicit representation of base 10 structure in number names is found also in parts of Africa (Posner, 1982). (The obscuration of the base 10 structure in 11 and 12 may be the result of linguistic
contraction: the German elf and zwölf may be contractions of ein-lif and zwo-lif—one-ten and two-ten in old German [Dantzig, 1930/2005, p. 12].) A similar distinction holds for ordinal as well as for
cardinal numbers. In Chinese, for example, ordinals are formed by adding an ordinal prefix to cardinal number names, which makes the correspondence between the first few ordinals and cardinals
apparent, while there is little in first, second, and third to remind one of one, two, and three (Miller, Kelly, & Zhou, 2005). An additional confusing infelicity in English number names is that,
unlike the printed numbers in which the 10s digit always precedes the 1s digit, in verbal number names this principle is followed with 20 and larger, but with the teens the 10s digit is represented
after the 1s digit. Munn (1998) argues that learning number symbols (visual representations of numerals) is more problematic than learning other aspects of number and that, for this reason, some
children fail to see the correspondence between quantities and their symbols. She speculates that children may create three different cognitive models regarding numbers: “one … around the verbal
number system, another around objects used in counting and calculating, and yet another around number symbols” (p. 57). She contends that unless “adults deliberately foster a cognitive model that
links objects with symbol systems, children will find it hard to see how the logical structure of concrete objects maps onto that of numeric symbol systems” (p. 57). Miller and Paredes (1996) note
that the variety of ways in which systems for representing numbers differ affects the course of children’s mathematical development. Differences in how systems reflect the base 10 principle, for
example, can affect the time it takes children to learn a system and the types of difficulties they encounter in doing so. Fuson and colleagues have studied the effects of differences in number names
across languages, especially differences between Asian and European languages, not only on the ease with which children acquire the ability to do basic arithmetic but also on how readily they come to
understand the principles of base 10 notation (Fuson & Kwon, 1992a, 1992b; Fuson, Stigler, & Bartsch, 1988). Miura et al. (1993) have done similar studies with French and Swedish. Geary (1994) sees
the differences between Asian and English or European number names to be responsible, to some degree, for certain specific difficulties (e.g., with the concept of borrowing in doing subtraction with
multidigit numbers) that are commonly seen with American children (VanLehn, 1990; Young & O’Shea, 1981) but less so with East Asian (Korean) children (Fuson &
Kwon, 1992b). More generally, poor understanding of the base 10 system is seen as the basis for problems some students have in acquiring skill in multidigit computation (Cauley, 1988; Hiebert &
Wearne, 1996).
Naming Quantities

That competence in the production and use of numerals in counting or recording the results of counting typically increases gradually during preschool years is well documented and
not surprising (Bialystok & Codd, 1996; Hughes, 1986, 1991; Sinclair, 1991; Sinclair & Sinclair, 1984). Still, many children enter school with a shaky grasp of the process of counting and the use of
numerals. Once learned, the process of counting seems quite straightforward, but many studies of how children learn to count and how their skill in this regard increases during the first few years of
life have revealed that acquisition of this ability is a more complex process than is apparent to the casual observer. The abilities to say number words in sequence and to recognize numerals on sight
are not, by themselves, compelling evidence of an ability to count in the sense of determining the number of items in a set (Hughes, 1986, 1991; Sinclair, 1991; Sinclair, Siegrest, & Sinclair, 1982).
Among the principles that must be learned before a child can be said to have a good grasp of what it means to count the items in a set is that of one-to-one correspondence between the numbers used
and the items in the set to be counted; otherwise, there is nothing to prohibit counting the same item more than once (Gelman & Gallistel, 1978; Montague-Smith, 1997; Starkey, 1992). One must come to
understand, too, that when putting number names into one-to-one correspondence with the items in the set, the last number named is the number of items in the set. It is possible to be able to count
in some sense without realizing that the purpose of counting is to determine the number of items in a set, and this realization may not come until about the fourth year (Wynn, 1990, 1992b). One must
also appreciate what Gelman and Gallistel (1978) call the order irrelevance principle, according to which one gets the same number independently of the order in which the items are counted. Briars
and Siegler (1984) found that four- to five-year-old preschoolers typically recognized the principle of word-to-object correspondence as essential to counting before they knew that certain other
aspects of observed counting episodes (counting adjacent objects sequentially, counting objects left to right) are not essential. Steffe and colleagues (Steffe, Thompson, & Richards, 1982; Steffe,
von Glasersfeld, Richards, & Cobb, 1984) argue that counting skills emerge as a progression, starting with the ability
to count only tangible objects and moving through several levels of increasing abstraction to the eventual ability to count imagined entities. Sophian (1988, 1995, 1998) presents evidence from
several studies that suggests that although children’s early counting conforms to principles such as those identified by Gelman and Gallistel, their understanding of the significance of those
principles is acquired only gradually over several years. Carey (2004) argues that children first become one-knowers (learn to distinguish one from many), then two-knowers (distinguishing one and two
from more than two), then three-knowers (able to count to three), and eventually come to understand how counting more generally works, each advance in the process taking several months. Going from the
ability to distinguish between one and more than one to being able to count beyond three may take a couple of years. Children commonly point to objects in the process of counting them. This involves
the use of pairing in three ways: between a number word and a pointing action, between a pointing action and an item in the set being counted, and between the number word and the item (Fuson & Hall,
1983). Children find it easier to count (a small number of) objects if they can physically move the objects as they do the count (say the number names) than if the objects cannot be moved (as spots
on a piece of paper), and they find it easier to count all the items in a set than to “count out” a subset of a specified number of the items (Resnick & Ford, 1981; Wang, 1973; Wang, Resnick, &
Boozer, 1971). Citing numerous studies (Baroody, 1992; Frydman & Bryant, 1988; Frye, Braisby, Lowe, Maroudas, & Nicholls, 1989; Fuson, 1988, 1992a; Sophian, 1992; Wynn, 1990, 1992b), Bialystok and
Codd (1996) contend that the evidence is compelling that below the age of five or six, children generally do not comprehend very well the principle of cardinality—the relation between the numbers in
the counting sequence and the abstract notion of quantity—and their ability to produce numerical notation is no guarantee that they understand what the symbols represent. They argue that the ability
to count, coupled with the ability to identify numeric notation, is not compelling evidence of an understanding that the two processes are linked. In a seven-month study of two- and three-year-olds,
Wynn (1992b) showed that children come to realize that each of the counting words refers to a specific numerosity before they realize to which numerosity it refers. Thompson (1994b) contends that
while concepts of quantity and concepts of number are closely related, they are not the same—one evidence of the difference being that number concepts readily become confounded with matters of
notation and language, whereas quantity concepts tend not to do so. In brief, learning to count involves the acquisition of the ability to make several kinds of distinctions, including, but not
limited to, those
Figure 14.1 Some of the distinctions involved in learning to count.
identified in Figure 14.1: recognizing that two collections are different in number, recognizing which of the two contains the larger number, putting a numerical label on a collection, and
recognizing which of two numerals represents the larger quantity. An important aspect of the understanding of number is the realization that a set of objects can be thought of as composed of various
combinations of subsets: that a set of six items can be composed of subsets of four and two items, of three and three items, or of one, two, and three items. Subset-set relationships of this sort are
sometimes referred to as part-part-whole relationships (Fischer, 1990), and an understanding of the principle has been called a major breakthrough in children’s mathematical development (Van de Walle
& Watkins, 1993).
Grouping and Unitizing

When there are more than a few objects to be counted, errors are easily made. One of the tricks that children appear to learn more or less spontaneously is that of grouping
objects into clusters of modest size, counting the number in each cluster, and adding up the results (Beckwith & Restle, 1966; Carpenter & Moser, 1983; Steffe & Cobb, 1988). If each cluster is made
to contain the same number of items, adding up the amounts
in the clusters would be the equivalent of multiplying the number of items per cluster by the number of clusters. The process of constructing a reference unit—sometimes referred to as unitizing—is
seen as instrumental to the development of skill in multiplication, division, and related operations (Lamon, 1994). Unitizing provides a unit, other than 1, in terms of which situations can be
conceptualized. This process is also referred to as norming (Freudenthal, 1983; Lamon, 1994). Once established, a unit can be the basis of a hierarchy of clusters—a unit, a unit of units, a unit of
units of units, and so on—a principle that is seen in a place notation number system such as the Hindu-Arabic system, which organizes quantities in terms of units of 10, units of units of 10
(hundreds), units of units of units of 10 (thousands), and so on. Many students have difficulty with early arithmetic because they do not have a good grasp of the principles of place value notation
(Payne & Huinker, 1993); even many 13-year-olds lack a good understanding of place notation (Baroody, 1990; Kamii, 1986; Kouba & Wearne, 2000). Conversely, a good understanding of the principles of
place notation facilitates the acquisition of computational skills (Fuson & Briars, 1990; Hiebert & Wearne, 1996; Resnick & Omanson, 1987). The learning of this representational system has been
likened to the learning of a foreign language (Kilpatrick et al., 2001). Behr, Harel, Post, and Lesh (1994) describe an instructional approach that is built on the notion of units of quantity that
they expect will help children extend knowledge about addition and subtraction to the domain of multiplication and division.
Nature and Nurture

Differences in arithmetic ability are evident well before formal schooling (Geary, Bow-Thomas, Fan, & Siegler, 1993; Geary, Bow-Thomas, Liu, & Siegler, 1996), and the magnitude
of the differences increases during the first few years of school (Stevenson, Chen, & Lee, 1993; Stevenson, Lee, & Stigler, 1986). What are the origins of these differences? What determines
mathematical potential and the ease with which that potential can be developed? How much of a role does genetics play? Are people with different ethnic heritages likely to differ in their potential
to acquire mathematical skill? Do males and females differ in this respect? To what extent does the culture in which one is raised influence the development of whatever potential one has? No one
seems to doubt that both innate and learned factors are involved; the question is one of relative importance. Studies of
homozygotic and heterozygotic twins have produced results that have been interpreted as indicating that nature and nurture are roughly equally important (Vandenberg, 1966), but this interpretation
has not been universally accepted. There is a lively debate about the question and the matter is far from settled (Geary, 1994). Specific theoretical positions have been defended by Gelman and
Gallistel (1978), Briars and Siegler (1984), Fuson (1988), Siegler and Jenkins (1989), and Sophian (1998), among others. Whatever the facts of the matter, beliefs about the relative importance of
nature and nurture appear to differ across cultures: According to Stevenson and Stigler (1992), for example, Japanese parents are inclined to attach more importance to effort and quality of teaching
as determinants of children’s acquisition of mathematical competence, whereas American parents tend to attribute differences in performance to differences in innate talent. Hess, Chang, and McDevitt
(1987) report a similar difference between the beliefs of Chinese and American mothers. The idea that the capacity to develop numerical ability is innate and universal gets support from the claim
that the numerical and informal mathematical skills that children acquire by the age of about seven are very similar across many cultural and social groups (Ginsburg, 1982; Ginsburg, Posner, &
Russell, 1981a, 1981b; Ginsburg & Russell, 1981; Klein & Starkey, 1988; Petitto & Ginsburg, 1982). Ginsburg and Baron (1993) summarize evidence on the point from cross-cultural research this way:
“The general finding is that children from various cultures, literate and preliterate, rich and poor, of various racial backgrounds, all display a similar development of informal addition (and other
aspects of informal mathematics, like systems for counting and enumeration)” (p. 8). (Barrow, 1992, in a footnote on p. 34, refers to Australian friends who claim that children from aboriginal groups
that do not use counting have no unusual difficulty learning math when placed in modern educational situations.) Ginsburg and Baron (1993) caution that although the research shows similarities across
cultures, races, and classes in the general course of development of informal mathematical abilities, it does not show that children from the various groups are identical in their mathematical
thinking. Moreover, no one denies that there are cultural and social influences on the ways in which the skills are acquired (Geary, Fan, & Bow-Thomas, 1992; Saxe, 1991; Saxe, Guberman, & Gearhart,
1987). There is a school of thought that holds that knowledge of arithmetic should grow out of social contexts and that the practice of teaching arithmetic divorced from social contexts and only
later making practical applications through the teaching of problem solving is not the way it should be done (Behr et al., 1994).
Geary (1996a) contends that there exists “a biologically primary numerical domain which consists of at least four primary numerical abilities: numerosity (or subitizing), ordinality, counting, and
simple arithmetic” (p. 152). He argues, however, that primary abilities are likely to develop into mathematical competence only with the help of appropriate instruction. “Most of children’s knowledge
of complex arithmetic and complex mathematics emerges in formal school settings (Ginsburg et al., 1981) and only as a result of teaching practices that are explicitly designed to impart this
knowledge” (p. 155). Much of the work aimed at identifying mathematical potential has been done from a componential perspective that assumes mathematical ability is the result of some combination of
underlying component abilities and that the goal is to determine what those underlying abilities are. Not surprisingly, for many studies with this perspective, beginning with the work of Spearman
(1904, 1923) and Thurstone (1938; Thurstone & Thurstone, 1941), factor analysis has been a technique of choice (Dye & Very, 1968; Ekstrom, French, & Harman, 1979; Goodman, 1943; Meyer, 1980; Osborne
& Lindsey, 1967). Most of the factor analytic studies that have been done have not been motivated by the desire to identify components of mathematical ability in particular—as distinct from other
types of cognitive ability—but they have often resulted in the identification of one or more factors that would be considered important for mathematical competence. Two that have been identified
relatively consistently are numerical facility and mathematical reasoning; a few others have been identified in some studies, but much less consistently (Geary, 1994). Geary refers to numerical
facility as “among the clearest and most stable factors identified across decades of psychometric research” (p. 139) and sees the stability of this factor as strongly supportive of the conclusion
that “arithmetic involves a fundamental domain of human ability” (p. 140). He also points out a lack of consensus regarding whether, in the absence of experience in solving mathematical problems,
mathematical reasoning ability really differs substantively from reasoning ability more generally. Undoubtedly among the best known and most influential studies of children’s acquisition of numerical
concepts are those conducted by Piaget and his associates (Inhelder & Piaget, 1958, 1964; Piaget, 1942, 1952; Piaget & Inhelder, 1969). For present purposes it suffices to note that Piaget’s theory
of stages of logical development is highly controversial, as is the question of whether it illuminates the acquisition of number concepts. Brainerd (1979) summarizes a review of several studies
inspired by Piaget’s theory by saying that while “there may be a positive statistical relationship between children’s arithmetic competence and their grasp of ordination, cardination, or both … we
still are completely in the dark about whether there is a developmental relationship between
these variables” (p. 120). More generally, Piaget’s work raised a host of questions about the acquisition of numerical and other mathematical concepts and inspired a great deal of experimentation,
but did not settle many of the questions it raised.
Beginning Mathematics

One reason for studying mathematical abilities of preschool children is the belief that for instruction to be maximally effective, it must take into account what children do
and do not already know when they arrive at school (Fuson et al., 2000; Ginsburg, Klein, & Starkey, 1998; Seo & Ginsburg, 2003). As we have noted, long before they encounter efforts by adults to
teach them elementary mathematics, children give evidence of innate appreciation of some of the rudiments of arithmetic. They appear to notice, for example, when the number of items in a set that
they have been observing has changed as a consequence of simple arithmetic transformations (addition or subtraction of an item or items) (Gelman, 1972; Gelman & Gallistel, 1978; Huttenlocher, Jordan,
& Levine, 1994; Levine, Jordan, & Huttenlocher, 1992; Sophian & Adams, 1987; Wynn, 1992a). Not everyone agrees on how such findings are best interpreted (Bisanz, Sherman, Rasmussen, & Ho, 2005).
Citing numerous sources, among them Booth (1981), Erlwanger (1973), and Ginsburg (1977), Steffe (1994) argues that when children are faced with their first arithmetical problems, they attempt to
solve them using whatever mathematical schemes they already have, and they persist in using those schemes—in preference to others they are being taught—so long as they prove to produce answers that
teachers accept as correct. He contends further that teachers often remain unaware that students are using their own methods. Rather than discouraging children from using their own methods, as much
conventional teaching practice does, Steffe argues that teachers should try to understand those methods and build on them, and that ignoring child-generated algorithms in teaching basic arithmetic is
a serious mistake. Much of the research on preschool mathematics is aimed at identifying methods and algorithms that children discover on their own. Not surprisingly, many of the techniques that
preschool children invent for accomplishing elementary mathematical operations are based on counting in one way or another (Carpenter, 1985; Carpenter & Moser, 1982, 1983; Fuson, 1992b; Ginsburg,
1989; Groen & Resnick, 1977; Kamii, 1985; Kaye, 1986; Maxim, 1989; Reed & Lave, 1979; Resnick, 1983). It is not unusual for three-year-olds to be able to do some simple addition and
subtraction, or at least to understand (not yet having a concept of negative numbers) that addition increases the numerosity of a set and subtraction decreases it (Starkey, 1992; Starkey & Gelman,
1982). (Of course, later they have to unlearn this principle when they begin adding and subtracting negative numbers.) The fuzziness of the line between counting and computing is seen in the common
use of counting, often with the aid of fingers, by preschool children to do simple addition and subtraction, which most of them can do by the age of five or six (Geary et al., 1992; Siegler &
Shrager, 1984; Starkey & Gelman, 1982). Many children are also capable of inventing procedures for carrying out multidigit operations (Carpenter, Franke, Jacobs, Fennema, & Empson, 1998; Hiebert &
Wearne, 1996) and operations with fractions (Huinker, 1998; Lappan, Fey, Fitzgerald, Friel, & Phillips, 1996; Mix, Levine, & Huttenlocher, 1999). More will be said about the methods and strategies
that children use when beginning to do arithmetic. This topic has motivated a great deal of research. Often it is not clear whether the methods and strategies that have been observed were developed
during preschool years or only after the beginning of formal training. Preschool is a somewhat imprecise status inasmuch as many children whose mathematical performance has been studied have been
exposed to some formal instruction by virtue of participation in prekindergarten programs that might be considered schooling by some definitions.
Preschool Facilitation of Mathematical Development

Considerable attention is being given to the question of what should be done to facilitate the development of mathematical and premathematical
abilities in preschool children, to enhance and sharpen the informal mathematical skills they may have acquired naturally well before they begin their formal schooling (Baroody, 1992, 2000; Ginsburg
& Baron, 1993; Payne, 1990). Does it make sense to provide mathematics instruction to preschoolers? Baroody (2000) asks this question and answers it in the affirmative. He argues that preschoolers
have impressive informal mathematical strengths as well as a natural inclination for numerical reasoning, and that being so, it makes sense to involve them in “engaging, appropriate, and challenging
mathematical activities” (p. 64). Regarding the question of how preschoolers should be taught mathematics, he advocates engaging them in “purposeful, meaningful, and inquiry-based instruction” (p.
66). That there is much to be gained by
exposing preschool children to aspects of mathematics, in both preschool programs (Clements, 2001) and the home (Starkey & Klein, 2000), seems likely, although there are differences of opinion
regarding precisely what that exposure should entail. Standards proposed by the National Council of Teachers of Mathematics (1989, 2000), and adopted or adapted by most of the states (Blank, Manise,
& Braithwaite, 2000), emphasize the role of play in developing mathematical concepts in young children. Games and activities designed to facilitate the development of mathematical capabilities in
preschoolers have been addressed to a variety of types of skills, including classification and set creation, matching (one-to-one correspondence), ordering and seriation, quantitative comparing (more
or less), and counting (Kaplan, Yamamoto, & Ginsburg, 1989; Moomaw & Hieronymus, 1995; Van De Walle, 1990). Teachers and caretakers of preschoolers may be aware of the importance of play in
developing mathematical skills without knowing how to use it effectively for this purpose. In the absence of formal assessment techniques, they may find it difficult to assess the ability of young
children to use mathematical concepts in their play. Kirova and Bhargava (2002), who make this point, provide checklists intended for use in assessing children’s ability to do matching (one-to-one
correspondence), classification, and seriation. Brainerd (1979) notes the trickiness of determining whether a child really uses one-to-one correspondence in determining whether two sets are of equal
number, arguing that one generally has at least one other cue (relative lengths of linear sequences, relative density of two-dimensional groups) in addition to correspondence on which the judgment of
relative manyness could be made. He contends that when trying to determine whether a child understands cardination (the logical connection between correspondence and manyness), judgments of classes
containing the same number of items are more revealing than judgments of classes containing an unequal number of items. “When a child correctly judges that two classes contain unequally many terms,
we cannot be certain that correspondence was the basis for the judgment. In the case of correct judgments of equal manyness, however, correspondence is the only basis for such judgments of which we
are aware” (p. 132). Brainerd presents data from studies of his own (and a replication by Gonchar, 1975) that he interprets as strong evidence in favor of the hypothesis that development of an
understanding of ordination generally precedes that of cardination, and that understanding of ordination is more important than understanding of cardination for success in beginning arithmetic. “In
the absence of either contradictory data or logically compelling counterarguments, one can therefore provisionally conclude that the human number concept, as indexed by arithmetic competence,
initially evolves
from a prior understanding of ordinal number and not from prior understanding of cardinal number” (p. 205). As already noted, children generally learn to count—by some definitions—long before
entering school. Many books have been written for use with children who are just beginning to learn to read that feature numerals or number names. Flegg (1983) expresses concern about children’s
books that associate numerals with pictures in the same way as they associate letters or words with pictures. He contends that the presentation of the symbols 1, 2, 3, …, as if they were equivalent
to letters of the alphabet, overlooks a fundamental difference between letters and numerals. “A picture of a cat, together with the letter ‘c’ or the word ‘cat’ is not conveying the same sort of
information as a picture of two cats together with the numeral ‘2’ or the word ‘two’. The letter ‘c’ stands for a sound—the sound with which the spoken name of the animal pictured begins. The word
‘cat’ is the written form of that name. The numeral ‘2’, on the other hand, is not the name of the animals pictured, spoken or written, nor is it related directly to that name. It is a property in
some way possessed by the two cats by virtue of their being two of them” (p. 281). Flegg questions whether numerals should be introduced before they are needed to facilitate written arithmetic
operations and whether it might not be better to concentrate initially on number words. Noting that the concept of numbers in the abstract emerged relatively late in the history of mathematics, Flegg
argues that it probably should be deferred pedagogically as well. “The concrete should always precede the abstract—abstract concepts are very difficult to assimilate unless there have been plenty of
concrete examples with which the pupil has become familiar” (p. 282). He speculates that irreparable damage may be done by having children do calculations before they have had adequate time to
explore number concepts at the level of number words and symbols. Flegg’s speculations cannot be said to rest on solid empirical evidence, but they raise some questions that deserve investigation.
Relatively little is known about the effects on eventual mathematical competence of the ways in which young children are introduced to number concepts in their preschool years. A point that Flegg
makes that I find especially thought-provoking is the importance, from the earliest encounters with mathematics, of keeping an impetus on excitement and discovery. “It all begins with numbers—if
children come to fear them or to be bored with them, they will eventually join the ranks of the present majority for whom the word ‘mathematics’ is guaranteed to bring social conversation to an
immediate halt. If, on the other hand, numbers are made a genuine source of adventure and exploration from the beginning, there is a good chance
that the level of numeracy in society can be raised significantly” (p. 290). It would be good to know if that is the case. There is a need too for research on the assessment of mathematical
potential. Allardice and Ginsburg (1983) note that as of the time of their writing almost no effort had been made to assess children’s learning potential, and they put a focus on this topic at the
top of their short list of needs for future research. The standard IQ score, especially the g component, which is sometimes equated with fluid intelligence, is a reasonably good indicant of
mathematical potential and predictor of how well students who are given special mathematical training are likely to do (Carroll, 1996; Lubinski & Benbow, 1994; Lubinski & Humphreys, 1990; Stanley,
1974), but assessing the potential to become proficient at higher mathematics—as distinct from doing well on cognitively demanding tasks generally—remains a challenge.
Chapter 15
Mathematics in School
In most developed countries, during the earliest years of formal schooling much emphasis is put on the explicit teaching of mathematical skills and skills that are assumed to be prerequisite to the
acquisition of mathematical competence. It is generally recognized that in the absence of such schooling, children are unlikely to acquire mathematical knowledge beyond a rudimentary level. As one of
the “three Rs,” arithmetic has been a staple of elementary education since the beginning of universal public education in the United States and was prominent in schooling long before then.
The Current Situation
The foregoing chapters have dealt, for the most part, with mathematical reasoning independently of where it occurs. For the immediately following comments, I ask the reader’s
indulgence to focus on the current status of mathematical education in the United States. Concern in the United States about elementary and secondary mathematics education has been fueled by the
results of numerous studies showing that U.S. school children tend to do poorly on international tests of mathematical ability (Byrne, 1989; Carpenter, Corbitt, Kepner, Lindquist, & Reys, 1980;
Crosswhite, Dossey, Swafford, McKnight, & Cooney, 1985; Dossey, Mullis, Lindquist, & Chambers, 1988; Husén, 1967; Lapointe, Mead, & Askew, 1992;
McKnight et al., 1987; Mullis, Dossey, Owen, & Phillips, 1991; Schmidt, McKnight, Cogan, Jakwerth, & Houang, 1999). The performance of U.S. students in mathematics compares unfavorably to that of
children of the same age or school grade in several other countries, especially in East Asia, including Japan, South Korea, Taiwan, Hong Kong, Singapore, and mainland China (Geary, 1996; Peak, 1996,
1997; Song & Ginsburg, 1987; Stevenson, Chen, & Lee, 1993; Stevenson & Lee, 1998; Stevenson, Lee, & Stigler, 1986; Stevenson et al., 1990; Stevenson & Sigler, 1992; Towse & Saxton, 1998). U.S.
students also score below the average of students in the 30 member countries (industrialized democracies) of the Organization for Economic Cooperation and Development (Program for International
Student Assessment, 2006). Although interpretation of cross-national comparisons is tricky (Brown, 1996; Reynolds & Farrell, 1996), that U.S. children—as well as those in much of Europe—tend to do
poorly on tests designed for such comparisons, or at least that they have indeed done poorly on several assessments in recent years, is not in question. The National Mathematics Advisory Panel (2008)
was able to note some positive trends in the scores of fourth and eighth graders in recent national assessments, but it reported broad agreement among its diverse membership on the point that “the
delivery system in mathematics education [in the United States] is broken and must be fixed” (p. xiii). In addition to test results, there are other indications that elementary mathematics education
is in trouble in the United States. The difficulty level of the types of problems that comprise basic arithmetic can be from one to three years behind that used in comparable grades in Asian
countries (Fuson, Stigler, & Bartsch, 1988; Stigler, Fuson, Ham, & Kim, 1986). U.S. textbooks for elementary mathematics may cover a broader range of topics than do those of several other countries,
but the coverage is claimed to be less substantive (Mayer, Sims, & Tajika, 1995; Schmidt et al., 1999; Schmidt, McKnight, & Raizen, 1997). Geary (1994) draws from some of these and similar findings
the “very clear” conclusion that among industrialized countries, American children are among the more poorly educated in mathematics. By international standards, he contends, the mathematics
curriculum in the United States is developmentally delayed. On the basis of their review of work on conceptual and procedural knowledge acquisition, Rittle-Johnson and Siegler (1998) conclude that
the results, in the aggregate, suggest that conceptual and procedural knowledge are related, inasmuch as Asian children have both and American children have neither. A similarly negative view of
mathematical education in American schools has been expressed by Dreyfus and Eisenberg (1996). “Lists of
‘simple’ problems that no one seems to be able to do are becoming so long that it is embarrassing to continue the practice of making them…. So many students are deficient in so many simple skills
that we are in the midst of an epidemic of ignorance running wild” (p. 256). In its Project 2061 report, Science for All Americans, the American Association for the Advancement of Science (1989)
characterized the problem in equally dire terms: “A cascade of recent studies has made it abundantly clear that by both national standards and world norms, U.S. education is failing to educate too
many students—and hence failing the nation. By all accounts, America has no more urgent priority than the reform of education in science, mathematics, and technology” (p. 3). Concern about the
teaching of mathematics is not limited to elementary and secondary schools. Here is one assessment of the situation at the college level. “Mathematics education at the college level is in a sorry
state. Students are turning away from mathematics, and those who do stay do not seem to learn very much. Our students do very poorly on national and international assessments. Our school teachers
seem almost to be afraid of the subject. Industry complains and does its own teaching to make up for employees’ mathematical deficiencies” (Dubinsky, 1994a). Selden, Selden, and Mason (1994) contend
that even good calculus students cannot solve nonroutine problems. Determining the basis of this poor showing is understandably deemed to be very important by U.S. educators, because what they should
do if they wish to close the achievement gap depends on whether the difference proves to be the result of genetics (Does the genetic makeup of East Asians give them an edge in mathematics?), cultural
differences (Is the development of mathematical competence more highly valued by East Asian families than by American families?), attitudes toward education generally (Do Asian parents have greater
expectations and ambitions for educational achievement by their children?), beliefs (Are Asians more likely to believe that academic success depends largely on effort, while Americans are more likely
to attribute academic success or failure to possession or lack of native ability?), instructional approaches (Has the East Asian educational system developed more effective ways of teaching
mathematics?), opportunity to learn (Do East Asian classrooms simply devote more time to the teaching of mathematics?), or some combination of these and perhaps other variables as well. All of these
possibilities have advocates, and it is not clear that any of them has been conclusively ruled out. However, following a review of the results of several studies that focused on the question of why
East Asian students consistently outperform U.S. students on tests of mathematical ability, Geary (1996) concludes that the performance difference is largely
attributable to differences in schooling. Asian students, he notes, get more classroom exposure to mathematics and do more homework than their U.S. cohorts, and mathematical competence appears to
have a higher value within East Asian cultures than in the United States. “The bottom line is that U.S. children lag behind their international peers in the development of secondary mathematical
abilities because U.S. culture does not value mathematical achievement. East Asian children, in contrast, are among the best in the world in secondary mathematical domains, because Asian culture
values and rewards mathematical achievement” (p. 166). Citing Song and Ginsburg (1987) and Stevenson and Lee (1990), Ginsburg and Baron (1993) contend that American and Asian children perform at
approximately the same level during preschool and kindergarten years and that the differences emerge only after a year or two of schooling. These differences, they argue, are primarily the result of
differences in motivation and teaching. “Asian children are taught that by being diligent and working hard they will be able to master even those areas of learning that they find very difficult.
Success is attributed to hard work more than to innate ability. All children are expected to work hard and all children are expected to succeed. And generally they do” (p. 18). The generalization
that American and Asian children perform at about the same level before they begin formal schooling has been challenged by data obtained by Siegler and Mu (2008) comparing the performance of Chinese
and American preschoolers on number-line and addition tasks. The number-line task required the children to locate each of 26 numbers (between 3 and 96) on a number line; the addition task required
performance of 70 addition problems with sums between 2 and 10. The Chinese children did significantly better than their American peers on both tasks. That Asian immigrants to the United
States—whether they received their elementary mathematics training in the United States or in Asia— tend to do better at mathematics than do American-born students has been attributed to
higher-ability Asian parents being more likely than lower-ability Asian parents to immigrate to the United States and to the relatively high value that Asian parents place on academic achievement,
which translates into relatively more time spent on homework (Caplan, Choy, & Whitmore, 1992; Tsang, 1984, 1988). The possibility of attributing the poor showing of American students in mathematics
to overcrowded classrooms or inadequate physical infrastructure appears to be ruled out by the fact that classrooms in Asia often include 40 to 50 students, whereas in the United States the average
tends to be between 20 and 30, and the physical infrastructure of American schools is among the best anywhere (Lapointe et al., 1992).
An extensive review of the effects of class size on student achievement in the United States showed the benefit of smaller classes to be greater for lower grades (1–3) than for higher ones, but
relatively small in all cases (Ehrenberg, Brewer, Gamoran, & Willms, 2001). For an example of a study finding a relatively large effect, see Kruger (1999). Attributing the poor mathematical
performance of American students to inadequate funding of U.S. education is challenged by the fact that as of the time of the first IEA assessment (Husén, 1967) the United States was spending roughly
6.5 times as much per student on education as was Japan, and despite that the total U.S. expenditure on education grew from 4.5% of the gross national product to 7.5% between the late 1960s and the
early 1990s, SAT scores were lower in 1991 than in 1964 (Geary, 1994). Particularly disheartening is the finding that the expressed degree to which U.S. students like mathematics decreases
substantially over the middle school through high school years (grades 6 through 12) (Brush, 1985). It is, of course, one thing to note the disheartening state of mathematics education in the United
States and quite another to articulate a workable route to significant improvement. I have no illusions about being able to do the latter, but, clearly, improving the current situation is a challenge
of great national concern. My hope is that this book may contribute in a small way to a better understanding of the problem and perhaps prompt some useful thoughts about possible approaches to
specific aspects of it.
Goals of Teaching Mathematics
Concern has been expressed not only about the need for more effective methods for teaching mathematics but also about the need for a clearer articulation of what the
goals of mathematical instruction should be. “Perhaps the most serious impediment to the design of more effective learning environments or instructional materials is the fact that, in general, we
lack principled descriptions of the forms of understanding we seek to develop in students” (Wenger, 1994, p. 245). In 1991 the U.S. Department of Education set as one of six goals to be realized by
2000: “U.S. students will be first in the world in science and mathematics achievement” (U.S. Department of Education, 1991, p. 3). In retrospect, this may have been an unrealistic goal, given how
poorly U.S. students were doing on international assessments at the time it was set. In any case, the goal did not come close to being met.
Here I wish to make two distinctions relating to the question of what the goals of teaching mathematics in public school systems should be. The first contrasts two possibilities, each of which is
sometimes stated or assumed to be in effect:
• Increasing the proportion of students of a given age who meet specified minimum standards
• Giving all students a better opportunity to realize more of their potential
The first of these possible goals focuses on the lower end of the performance distribution. The main objective is to bring the poorer performers up to the standards that have been
set. Much has been written about children differing greatly with respect to the mathematical knowledge and skills that they acquire before school. Some of the disparity has been associated with
socioeconomic status. As the National Mathematics Advisory Panel (2008) puts it: "Most children from low-income backgrounds enter school with far less knowledge than peers from middle-income
backgrounds, and the achievement gap in mathematical knowledge progressively widens throughout their PreK-12 years” (p. xviii). Adey and Shayer (1994, p. 171) argue that a reasonable goal of an
effort to enhance thinking generally—assuming an effective combining of programs to develop thinking capabilities with improvements in instruction itself—is to bring the mental development of all
children to the range that currently encompasses the top 30% of students. According to this view, the goal should be to decrease the spread of the ability range by moving those students who are
currently at the bottom of it closer to the top. One way to accomplish the alternative goal of giving all students a better opportunity of reaching their full potential is that of using teaching
techniques that are adaptive to the aptitudes of individual students. If successful, this approach would increase the average performance level across the board, and it should have a salutary effect
of increasing the population of people who are qualified to fill jobs that require high levels of mathematical competence. There is also the possibility that it would increase the spread of the
performance continuum— amplify individual differences—by helping the more capable students more than the less capable ones. This possibility was noted decades ago by Carroll (1967). Recently,
Gottfredson (2005) has made the same point and surmised that the result would not be welcome universally. “Targeting instruction to students’ individual cognitive needs would likely improve
achievement among all, but it would not cause the slow
learners to catch up with the fast. The fast learners would improve more than the slow ones, further widening the learning gap between them and seeming to make the ‘rich richer.’ This is currently
politically unacceptable” (Gottfredson, 2005, p. 168). Notably missing from the recommendations of the National Mathematics Advisory Panel (2008) is much attention to the question of how to give
mathematically gifted students the best opportunities to realize their potential. The single reference to gifted students in the 45 recommendations in the executive summary notes that “with
sufficient motivation [they] appear to be able to learn mathematics much faster than students proceeding through the curriculum at a normal pace, with no harm to their learning, and should be allowed
to do so” (p. xxiv). This is an extraordinarily important issue. It may well be the case generally that when powerful tools—including effective teaching and learning techniques—become available to
everyone, those people who are more capable (interested, motivated) tend to use them to better advantage than those who are less so. It may be possible to define approaches to the teaching of
mathematics that will help less capable students to learn faster and better than they otherwise would, but the idea that it would be good to minimize, or even reduce the range of, individual
differences as they are expressed in mathematics is neither realistic nor desirable, in my view. The second distinction relating to goals that I want to note is made by Papert (1972). This is the
distinction between the goal of teaching children about mathematics and that of teaching them to be mathematicians. Some feeling for the goal of teaching students to be mathematicians is captured in
Schoenfeld’s (1987b) description of his experience of teaching a college course in mathematical problem solving for many years. “With hindsight, I realize that what I succeeded in doing in the most
recent versions of my problem-solving course was to create a microcosm of mathematical culture. Mathematics was the medium of exchange. We talked about mathematics, explained it to each other, shared
the false starts, enjoyed the interaction of personalities. In short, we became mathematical people. It was fun, but it was also natural and felt right. By virtue of this cultural immersion, the
students experienced mathematics in a way that made sense, in a way similar to the way mathematicians live it. For that reason, the course has a much greater chance of having a lasting effect” (p.
213). Schoenfeld’s classroom approach is described in his 1985 Mathematical Problem Solving and in numerous other publications. One surmises that the effectiveness of this approach depends, to no
small degree, on an extraordinarily knowledgeable and committed teacher.
Drill and Practice Learning
Visions of draconian teachers demanding insane memorization of meaningless mumbo-jumbo prevent a large number of people from reacting normally to the opportunities
offered by contemporary mathematics. (Steen, 1978, p. 2)
Interest among psychologists in the psychology of mathematics and its implications for the teaching of arithmetic to children goes back at least to American psychologist Edward L. Thorndike. His The
Psychology of Arithmetic, which appeared in 1922, gives a prescription for teaching arithmetic to children based on his “law of effect” (Thorndike, 1913), and the idea that mathematical skill is
composed of a stable of stimulus– response bonds that are best acquired and strengthened by rote drill. Thorndike’s emphasis on drill and practice has had many critics. Notably among the earlier ones
was William A. Brownell (1928), who contended that such an emphasis would not yield an integrated comprehension of arithmetic and that what was needed was an approach that stressed a conceptual grasp
of the principles on which arithmetic operations are based. Resnick and Ford (1981), who contrast the views of Thorndike and Brownell, characterize the difference between them as being in their
definitions of what should be learned: “To Thorndike, mathematical learning consisted of a collection of bonds; to Brownell, it was an integrated set of principles and patterns” (p. 19). Despite what
appears to be the prevailing view that drill and practice methods are not likely to produce understanding of the material that is to be learned, it is claimed that the most common form of teaching in
the U.S. schools today is based on recitation, which is a very close cousin to drill and practice (Tharp & Gallimore, 1988). Kilpatrick, Swafford, and Findell (2001) describe the method as follows:
The teacher leads the class of students through the lesson material by asking questions that can be answered with brief responses, often one word. The teacher acknowledges and evaluates each
response, usually as right or wrong, and asks the next question. The cycle of question, response, and acknowledgement continues, often at a quick pace, until the material for the day has been
reviewed. New material is presented by the teacher through telling or demonstrating. After the recitation part of the lesson, the students often are asked to work independently on the day’s
assignment, practicing skills that were demonstrated or reviewed earlier. U.S. readers will recognize this pattern from their own school experience because it has been popular in all parts of the
country, for teaching all school subjects. (p. 48)
While this approach will undoubtedly provide students with the ability to perform many mathematical operations successfully, it is likely to fall short of giving them an understanding of why the
rules they learn yield correct results. It is one thing to learn by rote that the product of two negative numbers is positive and quite another to understand why this is the case. From the teacher’s
perspective, the recitation method has the advantage that applying it does not require that one be able to explain the rationale for such rules.
Constructivism and Discovery Learning
The mathematics curriculum in U.S. schools has changed considerably during the 20th century. Schoenfeld (1987a) describes the curriculum as being relatively
stable over the first half of the century, and then experiencing a series of swings, each of which lasted about a decade, during the latter half. These swings involved, in order, the introduction of
“new math” in the 1960s, largely in response to spectacular Soviet achievements in space technology; the “back to basics” movement in the 1970s; and the turn to an emphasis on mathematical problem
solving in the 1980s. A similar characterization of major shifts in approaches to mathematics education during the 20th century is given in the 2001 report of the Mathematics Learning Study Committee
of the National Research Council (Kilpatrick et al., 2001, p. 115). This report constitutes an extensive review of the current state of the teaching and learning of mathematics in U.S. schools
through the eighth grade and concludes with numerous recommendations for improvement. Another significant change that occurred during the 20th century, in part due to the influence of theoretical
work of Swiss philosopher-psychologist Jean Piaget (1928, 1952; Inhelder & Piaget, 1958, 1964) and Russian psychologist Lev Vygotsky (1962, 1978, 1986), was a shift from a nearly exclusive dependence
on rote instruction at the beginning of the century to an increasing emphasis on participatory learning, in which the student is seen as an active participant in the construction of his or her own
knowledge. During the latter part of the 20th century, constructivism, broadly defined, became widely adopted by educational researchers (Anderson, 1981, 1982; Schoenfeld, 1987a; Wenger, 1987). That
people are more likely to remember and use what they discover than to remember and use what they are told is generally acknowledged to be a fact. From the constructivist’s perspective, learning is
most effective—perhaps occurs only—when learners construct their knowledge (Belmont, 1989; Steffe & Wood, 1990). Some argue
that drill and practice methods, especially if introduced too early, are likely to kill the natural interest that children have in numbers and mathematics. The role of the teacher, in this view, is
to facilitate this knowledge–construction process—to structure environments and situations that make it easy and natural for students to discover, invent, and construct (Cobb, Yackel, & Wood, 1992).
This does not mean that the teacher is indifferent to what knowledge gets constructed. “We must make explicit the nature of the knowledge that we hope is constructed and make a case that the chosen
activities will promote its construction” (P. W. Thompson, 1985, p. 192). Steffe (1994) also emphasizes the importance of guidance of discovery. That children naturally learn some things by
discovering them has been argued by Baroody and Gannon (1984), who make the case for the principle of commutativity in addition. When first learning to add, children are very likely to notice,
Baroody and Gannon contend, that one gets the same sum independently of the order of the addends. Of course, an adequate understanding of the principle as applied to addition must include knowledge
that it does not hold for subtraction; misapplication in the latter case can account for some of the errors that have been found in beginners’ performance of subtraction tasks (e.g., always
subtracting the smaller number from the larger, independently of the order of the terms). Resnick (1983) notes the possibility that children naturally assume that all arithmetic operations are
commutative and have to learn that this is not so. Many of the computer-based microworlds that have been developed are intended to make it easy for children to explore physical processes and
mathematical relationships (Dugdale, 1982; Feurzeig, 1988, 2006; Resnick & Johnson, 1988). Such systems may provide environments well suited to facilitate discovery, and some children may make
substantive discoveries by interacting with such systems completely on their own. However, if such systems are to be maximally effective agents of learning for most children, there is probably a need
for some guidance in their use from a teacher who has specific learning goals in mind and who knows how to steer the exploration in directions that are likely to lead to the desired discoveries. In
the absence of much more powerful discovery learning tools than have yet been developed, what can be accomplished by discovery learning is bound to be limited and seems likely to fall short of
producing what is generally considered a good understanding of a significant chunk of mathematics. It is at once exciting and intimidating to realize that humankind took millennia to develop many of
the mathematical concepts and relationships that make up today’s elementary school curriculum. That children could make, even with help and guidance, all
the discoveries that define elementary mathematics—recapitulate the history of the development of the discipline, as it were—does not seem remotely possible. On the other hand, knowledge of the
history of mathematics, and especially of the conceptual struggles that have attended many of the key developments, such as the many extensions of the concept of number (see Chapter 3), provides some
insight into the struggles that a child making a similar conceptual journey over a few short years is likely to experience.
Need for a Synthesis
While the constructivist view is attractive to many, it is seen by others to be unrealistic in its assumption that all children will benefit from this approach and that there
is no need for rote, or “mechanical,” learning at any phase of a mathematical education (Geary, 1994; Sweller, Mawer, & Ward, 1983). Geary argues that the acquisition of an understanding of
mathematical concepts and skill in performing mathematical procedures may require different approaches to instruction, and that procedural skill, in contrast to conceptual understanding, may require
considerable drill and practice. Pointing out that there was a precipitous decline in mathematical competence of public school students following the introduction of the “new math” in the 1960s,
Brainerd (1979) takes issue with some of the assumptions underlying the discovery learning approach. “The assumption that answering leading questions [his characterization of the Socratic method] is
in some meaningful sense self-discovery and the assumption that self-discovery is the best way to learn mathematics are both open to serious doubt” (p. 207). Resnick and Ford (1981) give an account
of the rise of interest in new methods of teaching mathematics that put more stress on conceptual learning than on the rote teaching of computational procedures, and they note that as of the time of
their writing, most educators acknowledged the need for both of the types of learning experiences called for by the contrasting views—drill and meaningful instruction—but that it was not clear how
the two should be integrated. Resnick and Ford review in some detail innovative approaches, promoted by Dienes (1960, 1963, 1967) and others, that make heavy use of concrete materials (e.g.,
attribute blocks, Cuisenaire rods) in the teaching of elementary arithmetic with an emphasis on conceptual understanding. As to whether these approaches produce better learning and deeper
understanding of mathematics than do more traditional approaches that emphasize the acquisition of computational skills, they note that the available evidence
is largely indirect and not decisive. Their conclusion is that instructional planning almost certainly should include opportunities for the learning of both concepts and computational skills. The
relationship between computational skill and mathematical understanding is one of the oldest concerns in the psychology of mathematics. It is also one that has consistently eluded successful
formulation as a research question. Over the years, the issue has been posed in a manner that made it unlikely that fruitful research could be carried out. Instead of focusing on the interaction
between computation and understanding, between practice and insight, psychologists and mathematics educators have been busy trying to demonstrate the superiority of one over the other.… What is
needed, and what now seems a possible research agenda, is to focus on how understanding influences the acquisition of computational routines and, conversely, on how, with extended practice in a
computational skill, an individual’s mathematical understanding may be modified. (Resnick & Ford, 1981, p. 246)
The argument that computational skills should be taught—not left to be discovered—is not, of course, a claim that they must be taught strictly by rote. Several investigators have provided evidence
that learning to calculate can be facilitated by experiences designed to promote understanding of the procedures used (Bezuk & Cramer, 1989; Hiebert & Wearne, 1996; Mack, 1990, 1995). Conversely,
rule-based instruction that is not accompanied by efforts to ensure that students gain a conceptual understanding of the procedures that are being taught seems unlikely to provide a solid basis for
the acquisition of higher mathematical knowledge and skill. I think it is safe to say that few, if any, researchers or educators would argue against conceptual understanding as a primary goal of the
teaching of mathematics at all levels. The question of precisely how best to develop that understanding is still wanting an answer. One point on which there appears to be general agreement is that
children should not be treated as tabulae rasae at the outset of their introduction to formal mathematical education. They come to school with a considerable body of concepts and beliefs relating to
counting and arithmetic, and failure to recognize this and to build on what children already know, or believe, more or less ensures confusion and impedance of the development of mathematical
competence. As Dehaene (1997) puts it, “The child’s brain, far from being a sponge, is a structured organ that acquires facts only insofar as they can be integrated into its previous knowledge….
Thus, bombarding the juvenile brain with abstract axioms is probably useless. A more reasonable strategy for teaching mathematics would appear to go through a progressive enrichment of children’s
intuitions, leaning heavily on their precocious understanding of quantitative manipulations and of counting” (p. 241).
Order of Instruction
The idea of hierarchical structure has guided the teaching of mathematics from elementary school on. Number concepts are taught first, then arithmetic, then algebra, then
calculus, and so on. However, this progression is somewhat misleading. Some understanding of number concepts is certainly a prerequisite for learning to do arithmetic, but what one learns about
numbers before starting to learn arithmetic is but a tiny bit of what there is to know about numbers; number theory is a very active area of mathematical research. Some mathematicians spend their
lives studying number theory and generally raise more questions than they answer as a consequence. Even within a relatively well-defined area of mathematics and at a well-specified level of
complexity—like elementary arithmetic— the question of the order in which concepts should be introduced has been a focus of interest to researchers. Arithmetic during the primary grades has meant a
focus mainly on addition and subtraction (Baroody & Standifer, 1993; Coburn, 1989). Geometry, in contrast, has received relatively little emphasis (Clements 2004; Clements & Battista, 1992; Clements,
Swaminathan, Hannibal, & Sarama, 1999; Fuys & Liebov, 1993; Porter, 1989). The potential for making connections between number concepts and geometry—by making use of the number line, for
instance—appears not to have been much exploited (Kilpatrick et al., 2001). The results of some attempts to teach children to locate fractions on the number line suggest that this is a difficult task
(Behr, Lesh, Post, & Silver, 1983; Gelman, 1991; Novillis-Larson, 1980). The order generally followed in teaching arithmetic in the United States is addition, subtraction, multiplication, and
division, the latter two beginning only in the third grade (National Council of Teachers of Mathematics, 2000), but there are many views as to what the best order is (Dienes & Golding, 1971; Gagné,
1968; Gagné, Mayor, Garstens, & Paradise, 1962; Resnick, Wang, & Kaplan, 1973). Some researchers see the presentation of multiplication as based on counting or repeated addition as problematic
(Confrey, 1994; Steffe, 1994). Timing has also been an issue. Some investigators hold that the introduction of addition and subtraction should be delayed until children have developed a firm
foundation of number concepts (Van de Walle & Watkins, 1993). However, what constitutes a firm foundation of number
concepts in this context is open to question. The rational number concept, which many would argue is fundamental to basic mathematics, can be problematic even for secondary school students. The
difficulty appears to be due, at least in part, as Behr et al. (1983) note, to rational numbers being interpretable in at least six ways: “a part-to-whole comparison, a decimal, a ratio, an indicated
division (quotient), an operator, and a measure of continuous or discrete quantities” (p. 93). A complete understanding of the rational number concept, it has been claimed, requires an understanding
of all six of these interpretations and their interrelations (Kieren, 1976). When equations or algorithms should be introduced—how much work with less formal concepts should precede their
introduction—is also a matter of debate (Thompson & Van de Walle, 1980; Thornton, 1990). Familiarity with algebra has generally been considered a prerequisite for the learning of calculus, but a case
has been made for introducing some of the ideas that are fundamental to calculus before a student has acquired any competence in algebra (Confrey & Smith, 1994; Kaput, 1994). Given the key role that
the concept of proof plays in mathematics, the question of when it should be introduced in mathematics education is a particularly interesting one. My sense is that relatively little emphasis is
given to this concept in the teaching of elementary mathematics. An exception is the work of Maher and Martino (1996) and Davis and Maher (1997). These investigators have studied the ability of
beginning students to deal with the concept and to use it effectively. Determining the most effective order of introducing mathematical concepts to students is complicated by the finding that the
ability or inability of students to use specific concepts linguistically is not always a good indication of whether they understand the concepts in a more than superficial way (Brainerd, 1973c,
1973d). In some cases children may be able to use a concept appropriately in context and yet not be able to show a semantic comprehension of it by answering questions about it correctly. For many
practical purposes, this perhaps does not matter—the important thing is to get a correct solution to the mathematical problem—but to the extent that understanding is a goal of education, it matters.
Children’s Strategies in Learning Elementary Arithmetic
"Strategies" is likely to bring to mind sophisticated approaches to complex problems. However, the term is sometimes used to refer to
approaches that children adopt in trying to cope with tasks they are given even in
learning basic arithmetic. Interest in identifying strategies that children use when beginning to learn to add and subtract goes back at least to Brownell (1928). Other relatively early investigators
of the subject include Ilg and Ames (1951) and Gibb (1956). The interest has increased considerably in more recent years, and there now exists a very large literature on this topic. Children use a
variety of strategies to do addition or subtraction (Baroody, 1987; Fuson, 1992a, 1992b; Hamann & Ashcraft, 1985; Siegler, 1987, 1989; Siegler & Shrager, 1984). Among those they use to add are some
based on counting. To add 4 and 3, they may start with four and count up three additional numbers, five, six, seven, and take the number arrived at as the answer (Fuson, 1982; Groen & Parkman, 1972;
Groen & Resnick, 1977; Steffe, Thompson, & Richards, 1982; Suppes & Groen, 1967). A counting-up procedure may also be used when the task is to identify the number that must be added to a specified
number to yield a specified total, for example, 4 + ? = 7 (Case, 1978). A counting-up strategy for doing addition works whether one starts the count from the larger or the smaller addend, but it is
more efficient to start with the larger one, and the more so the greater the difference of the addends in size. Several studies have shown that many children spontaneously discover and adopt this
strategy, which has been called the min strategy, because it requires the minimum count (Geary, 1990; Groen & Resnick, 1977; Siegler & Jenkins, 1989; Svenson & Broquist, 1975). Counting up requires
the ability to start a count with a number other than 1, which is a skill that may take some time to acquire after a child has learned to count from 1 (Fuson, Richards, & Briars, 1982). Carpenter and
Moser (1982, 1984; Carpenter, 1985) distinguish three strategies that can be seen in children’s performance of addition tasks involving two addends: direct modeling (in which the child represents
each of the two sets with physical objects or fingers and counts their union), counting (just described, in which the child starts with one of the numbers and counts from there the number of units
represented by the second number), and number facts (in which the sum of the numbers is retrieved from memory). The three represent a progression; the use of number facts depends on having already
committed sums of specific pairs of addends to memory. Steinberg (1985) proposes a similar, but not identical, progression of phases in dealing with addition and subtraction problems: counting,
reasoning, and recall. The reasoning phase, in this model, involves the discovery of ways to apply what one knows to figure out answers, by means other than counting. Countin | {"url":"https://epdf.pub/mathematical-reasoning-patterns-problems-conjectures-and-proofs.html","timestamp":"2024-11-11T10:30:19Z","content_type":"text/html","content_length":"1049632","record_id":"<urn:uuid:a9a6a114-a84d-4d6c-9336-084649c08e96>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00660.warc.gz"} |
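For illustration only (this sketch is not from the book), the three strategies just described for a problem like 4 + 3 (direct modeling, counting on from the larger addend, and retrieval of a memorized number fact) can be written out as small procedures:

    # Illustrative sketch of children's addition strategies; not from the source text.

    def count_all(a, b):
        # Direct modeling: represent both addends (e.g., with fingers or objects)
        # and count every item in the combined set.
        total = 0
        for _ in range(a + b):
            total += 1
        return total

    def count_on_min(a, b):
        # The "min" strategy: start from the larger addend and count up by the
        # smaller one, which requires the minimum number of counting steps.
        larger, smaller = max(a, b), min(a, b)
        total = larger
        for _ in range(smaller):
            total += 1
        return total

    KNOWN_FACTS = {(4, 3): 7}  # hypothetical memorized number facts

    def retrieve_fact(a, b):
        # Number-fact retrieval: look the sum up in memory instead of counting.
        return KNOWN_FACTS.get((a, b)) or KNOWN_FACTS.get((b, a))

    print(count_all(4, 3), count_on_min(4, 3), retrieve_fact(4, 3))  # 7 7 7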
Area of a Circle | Stage 3 Maths | HK Secondary S1-S3
We already know that area is the space inside a 2D shape. We can find the area of a circle, but we will need a special rule.
The following investigation will demonstrate what happens when we unravel segments of a circle.
Interesting, isn't it, that when we realign the segments we end up with a parallelogram shape? This is great, because it means we know how to find the area based on our knowledge that the area of a
parallelogram has formula $A=bh$. In a circle, the base is half the circumference and the height is the radius. | {"url":"https://mathspace.co/textbooks/syllabuses/Syllabus-98/topics/Topic-1483/subtopics/Subtopic-17565/?activeTab=theory","timestamp":"2024-11-10T06:15:28Z","content_type":"text/html","content_length":"443521","record_id":"<urn:uuid:e15e09ee-1104-4352-abd3-6059598241c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00817.warc.gz"} |
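Putting those two facts together gives the standard result (a short derivation added here for completeness; it is not part of the original page): the base is half the circumference, $\frac{1}{2}\times 2\pi r=\pi r$, and the height is the radius $r$, so $A=b\times h=\pi r\times r=\pi r^2$.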
Mathematics has played a significant role in the development of human civilization. Indian mathematics emerged in the Indian subcontinent around 1200 BCE, and Indian mathematicians have contributed a great many theories and methods to the subject.
Aryabhata calculated the value of pi, and his work also gave us the place-value system, an estimate of the circumference of the Earth accurate to 99.8%, and a calculation of the length of the sidereal year.
Brahmagupta first stated the rules governing the use of zero, while Bhaskara II wrote "Bijaganita" on algebra and reached a remarkable understanding of calculus, number systems and the solving of equations.
Srinivasa Ramanujan, "The Man Who Knew Infinity", made substantial contributions to mathematical analysis, number theory, infinite series and continued fractions. Prasanta Chandra Mahalanobis is known as "The Father of Statistics", D. R. Kaprekar discovered the "Kaprekar constant", C. R. Rao is famous for his theory of estimation, and there are many more.
These mathematicians have shown us the importance and application of mathematics in real life, and I always make sure that my students appreciate that importance in their own lives. By introducing my students to these great mathematicians, I help them see the reason behind learning mathematics in school, since many students nowadays ask, "Why are we studying algebra, equations, and so on?"
I believe that if we learn something, we must know the reason behind it. Instead of spoon-feeding formulae and equations, I help my students understand their importance in practical life, so that they can stand out in their careers.
Many students hate mathematics, probably because no one could make them understand its value. Being a mentor, I try my level best to make my students fall in love with the beauty of Mathematics. | {"url":"http://kaushalacademy.org/home/welcome","timestamp":"2024-11-14T05:05:48Z","content_type":"text/html","content_length":"32124","record_id":"<urn:uuid:de1cde5f-7ed7-43ed-96c9-6df66977f12b>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00173.warc.gz"} |
{pageurl} not working?
Hi, I'm trying to use a magic macro inside my generic HTML banner.
Macros like {timestamp} or {random} work but {pageurl} which I need the most does not. It results in an empty string.
What can be wrong?
It does not work with or without an iframe. Even preview in the admin panel does not show any string. Bug?
Yeah, except it's not :P ;)
{pageurl} in your case should be openx.xenium.pl/... and what I see on your screen is the referer (forum.revive..).
Can you check on IE? On IE it does not work at all and it's simply blank. | {"url":"https://forum.revive-adserver.com/topic/3423-pageurl-not-working/","timestamp":"2024-11-12T06:19:16Z","content_type":"text/html","content_length":"160312","record_id":"<urn:uuid:82a9214a-9e5c-454e-b4fe-89f3d0120ae1>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00133.warc.gz"} |
Understanding Matrices and Matrix Notation
Professor Dave Explains
26 Sept 2018 (05:26)
TLDR: The video explains how to represent systems of linear equations using matrices. It starts by showing how the coefficients and constants of each equation can be organized into rows and columns of
a matrix. Adding an extra column for the constants creates an augmented matrix containing all info from the system. The matrix size depends on the number of equations and variables. To construct a
matrix, equations must have variables in the same order and missing variables are added with a 0 coefficient. An example shows the process of constructing a 5x5 augmented matrix from a system of 5
equations with 4 variables.
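As a concrete illustration (this example is not from the video, and the numbers are invented), the two-equation system x + 2y = 5 and 3x - y = 4 can be turned into a coefficient matrix and an augmented matrix, for instance with NumPy:

    import numpy as np

    # Invented system for illustration:
    #   1x + 2y = 5
    #   3x - 1y = 4
    A = np.array([[1, 2],    # row 1: coefficients of x and y in the first equation
                  [3, -1]])  # row 2: coefficients in the second equation
    b = np.array([5, 4])     # constants from the right-hand sides

    # The augmented matrix simply appends the constants as one extra column.
    augmented = np.column_stack((A, b))
    print(augmented)         # [[ 1  2  5]
                             #  [ 3 -1  4]]
    print(augmented.shape)   # (2, 3): 2 equations, 2 variables + 1 column of constants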
• A matrix is an array of numbers contained in brackets, with rows and columns.
• Augmented matrices contain all the information from a system of linear equations.
• The coefficients from each equation make up the rows of the coefficient matrix.
• The last column of an augmented matrix contains the constants.
• Missing variables are added with a coefficient of 0 to complete the matrix.
• Matrices allow linear systems to be expressed without variables.
• Equations must be in the same format for matrix creation to work.
• Matrix rows match equations and columns match variables.
• Matrix dimensions are # equations x # variables (+1 for augmented).
• Practicing creating matrices helps comprehension of the process.
Q & A
• What is the primary goal of linear algebra discussed in the video?
-One of the primary goals of linear algebra is solving systems of linear equations.
• How can we express systems of linear equations using matrix notation?
-We can express systems of linear equations using matrices by creating a coefficient matrix with the coefficients of the variables as rows, and then augmenting it with a column containing the
constant terms.
• What is an augmented matrix and how is it constructed?
-An augmented matrix contains all the information from a system of linear equations. It is constructed by creating a coefficient matrix and adding a column containing the constant terms from the
right side of the equations.
• What are the dimensions of an augmented matrix representing a system with M equations and N variables?
-The augmented matrix will have dimensions M x N+1, with M rows for the equations and N+1 columns for the variables plus the constants.
• Why do we need the variables to be in the same order when creating the matrix?
-The variables need to be in the same order so each one lines up with the proper column in the matrix. If they are mixed up, the matrix will not correctly represent the system.
• What do we do if a variable is missing from an equation when creating the matrix?
-If a variable is missing, we add it back into the equation with a coefficient of 0. This maintains the matrix structure without changing the equation.
• What is represented by the rows and columns of a coefficient matrix?
-The rows of the coefficient matrix represent the equations and the columns represent the variables.
• Why don't we need to include the variable names in the matrix?
-The variable names are not needed because the matrix structure implies which variable is which based on the column. Only the coefficients matter.
• What format do the equations need to be in before creating the matrix?
-The equations need to have all variables on one side and the constant terms on the other side, with the variables in the same order.
• What is done to create an augmented matrix from a system of equations?
-First a coefficient matrix is created from the variable coefficients. Then a column of the constant terms is appended to form the augmented matrix.
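To make the last two answers concrete, here is a small invented three-variable system in which some equations omit a variable; writing the missing variables back in with a coefficient of 0 keeps every row aligned with its column, and the augmented matrix ends up with M rows and N+1 columns:

    import numpy as np

    # Invented system in x, y, z (for illustration only):
    #   2x + 3y -  z = 7
    #    x      + 4z = 1   (y is missing, so its coefficient is written as 0)
    #        5y - 2z = 3   (x is missing, coefficient 0)
    coefficients = np.array([[2, 3, -1],
                             [1, 0,  4],
                             [0, 5, -2]])
    constants = np.array([7, 1, 3])

    augmented = np.column_stack((coefficients, constants))
    print(augmented.shape)   # (3, 4): M = 3 equations, N = 3 variables, plus the constants column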
Constructing Matrices to Represent Systems of Linear Equations
This paragraph explains how to construct matrices to represent systems of linear equations. It discusses abbreviating the coefficients into a matrix with rows and columns corresponding to the
equations and variables. An augmented matrix contains the coefficient matrix plus a column for the constants, allowing all information from the system to be represented.
Example of Constructing an Augmented Matrix
This paragraph provides a step-by-step example of constructing an augmented matrix from a system of linear equations. It emphasizes bringing the equations into the same format and adding in missing
variables with a 0 coefficient before creating the matrix.
Matrix Dimensions for Linear Systems
This paragraph notes that in general, a system with M equations and N variables will result in an M by N coefficient matrix. Adding the column of constants creates an augmented matrix with dimensions
M by N+1.
matrix
A matrix is an array or grid of numbers arranged in rows and columns. It is a useful way to represent systems of linear equations compactly by capturing just the coefficients of the variables. The
script shows how to take a system of equations and convert it into a matrix representation, which is important for solving systems of equations using linear algebra techniques.
coefficients
The coefficients are the numerical factors multiplied with each variable in a linear equation. When representing systems of equations as matrices, we only need to pay attention to the coefficients
rather than the variable names. The coefficients make up the entries in the matrix.
augmented matrix
An augmented matrix is a matrix representation of a system of equations that includes an extra column for the constants or numbers on the right hand side of each equation. Along with the coefficient
matrix, it contains all the information from the original system of equations in compact matrix form.
linear system
A linear system is a set of linear equations involving the same set of variables. Solving linear systems is a major application of matrices and linear algebra. The video shows how to convert a linear
system into a matrix representation.
variables
The variables are the unknown quantities in a system of linear equations, typically denoted by letters like x, y, z. When converting a system to a matrix, we only need the coefficients of each
variable, not the variable names.
π ‘rows
In a matrix, each row corresponds to one equation in the linear system. The coefficients from each equation make up the entries in the corresponding row of the matrix.
π ‘columns
In a matrix, each column corresponds to one variable in the system of equations. The coefficients of each variable across the equations make up the column entries.
π ‘dimensions
The dimensions of a matrix refer to the number of rows and columns it has. The script explains how the dimensions relate to the number of equations and variables in the linear system.
π ‘format
For a linear system to be correctly represented as a matrix, all the equations must be in the same format - each variable isolated on the left side and constants on the right. The video demonstrates
formatting equations this way before constructing the matrix.
π ‘information
A key benefit of matrices is that they compactly encode all the important information (coefficients and constants) from a bigger system of equations into a small two-dimensional array. The matrix
contains the full information needed to solve the system.
The study found a significant increase in student engagement when using immersive virtual reality technology in the classroom.
Virtual reality allowed students to visit historical sites and interact with 3D models of artifacts, providing a deeper learning experience.
Teachers reported that virtual field trips were more memorable and impactful for students than traditional lessons.
The ability to manipulate 3D models and interact with environments was cited as particularly beneficial for visual and kinesthetic learners.
Students showed increased motivation and participation when using virtual reality compared to traditional classroom instruction.
Virtual reality technology allowed students with disabilities or mobility limitations to engage with environments they could not easily experience.
Teachers felt virtual reality allowed for greater creativity and flexibility in designing interactive, multi-sensory educational experiences.
The study recommends expanded implementation of virtual reality in classrooms based on benefits seen in student engagement and knowledge retention.
Future studies should explore long-term impacts of virtual reality usage on student learning outcomes over an entire academic year.
More research is needed on best practices for integrating virtual reality into school curricula and teacher training programs.
The researchers called for continued innovation in virtual reality educational software and content tailored for classroom needs.
Limitations of the study include a small sample size from one geographic area and lack of a control group for comparison.
Additional factors like the novelty effect of new technology may have influenced the students' initial engagement and motivation levels.
More data is needed on the impact of prolonged virtual reality usage on child development, vision, and motion sickness susceptibility.
The researchers emphasized the importance of appropriate content, time limits, and supervision when implementing virtual reality in schools.
5.0 / 5 (0 votes) | {"url":"https://learning.box/video-388-Understanding-Matrices-and-Matrix-Notation","timestamp":"2024-11-02T11:48:54Z","content_type":"text/html","content_length":"111010","record_id":"<urn:uuid:63d0c24c-d9a9-4152-8cae-f9df5786e41b>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00890.warc.gz"} |
Magic Square (pointers)
Hello everyone,
I have been trying to get this code to work, but so far all it does is tell me :
Input the order of the Matrix(Keep it Odd):3
Press any key to continue . . .
I am supposed to create a magic square for any odd number order that anyone puts in. Put in a 3 and get a 3x3 magic square. And it MUST be done using pointers. This is what I have so far:
#include <iostream>
using namespace std;
void main(){
int n,i,j;
int m=1;
cout<<"Input the order of the Matrix(Keep it Odd):";
double **ms;
//allocated the space for the rows and the inputs for each row
ms=new double*[n];
ms[i]=new double[n];
//trying to populate the magic square
if (ms[(i+n-1)%n][(j+n+1)%n])
i = (i+n+1)%n ;
i = (i+n-1)%n , j = (j+n+1)%n ;
//print the magic square
cout<<ms[i][j]<<" "<<endl;
//deallocate the space
delete [] ms[i];
delete []ms;
Could someone please tell me why it wont let me populate it to create the magic square with 1 in the middle of the first row and so on?
i forgot to mention the way that i'm trying to populate the magic square. Lets say n=3, we would get a 3x3 matrix.
we would start with a 1 in the middle of row 0.
all the 0's are the spaces we haven't populated yet.
basically we start with 1 and keep counting until we have all the spaces filled in (1-9). This is accomplished by moving up 1 row and 1 space to the right. Since there's no row above row 0, we move
onto the last row. Then again up 1 row and over 1 space. There are no spaces past column 3, so we move onto column 1 and put 3. We can't move the normal way from 3, up 1 row and over 1 space, since
it's occupied by 1, so we move 1 space down from 3, putting 4.
Then we move again up 1 row and 1 space over until it looks like this:
it should work like this for any odd number magic squares.
Ok, comments.
Indenting is your friend, feel free to use him consistently.
First, why use doubles for the elements of the square. They should all be integer values, right?
I don't think I'd use a for(i ... for (j ... to fill the box. I would pre-initialize i and j (to be the center top like you propose) and then use a for (m = 1; m <= n * n; m++) as the loop.
Did you intentionally not put any braces on your for statements that starts on lines 15 and 16?
If you want to insist on using the for loops as written, consider removing the i++ and j++ from the ones on lines 15 and 16. You went to a lot of work on lines 20 and 22 to set them where you wanted
them, why would you want to increment them?
Since we always know this is a square, you only need to allocate once.
Make sure you use the trys and catches, too.
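For what it's worth, the fill rule described above (start at the middle of the top row, move up one row and right one column with wrap-around, and drop down one row instead whenever the target cell is already occupied) can be written as a single m = 1..n*n loop. The sketch below is Python rather than C++ and ignores the pointer requirement; it is only meant to show the traversal order, not to be the assignment's solution:

def magic_square(n):
    # n is assumed to be odd
    ms = [[0] * n for _ in range(n)]
    i, j = 0, n // 2                        # start in the middle of the top row
    for m in range(1, n * n + 1):
        ms[i][j] = m
        ni, nj = (i - 1) % n, (j + 1) % n   # up one row, right one column, wrapping around
        if ms[ni][nj]:                      # already occupied: move down one row instead
            ni, nj = (i + 1) % n, j
        i, j = ni, nj
    return ms

for row in magic_square(3):
    print(row)   # [8, 1, 6] / [3, 5, 7] / [4, 9, 2]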
Reply to this Topic | {"url":"https://www.daniweb.com/programming/software-development/threads/170173/magic-square-pointers","timestamp":"2024-11-07T00:49:27Z","content_type":"text/html","content_length":"71429","record_id":"<urn:uuid:e863b46a-0745-4ac4-b298-fc54f53c0098>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00841.warc.gz"} |
ICHEP 2022
Ettore Budassi (Università di Pavia, INFN Pavia)
The anomalous magnetic moment of the muon $a_\mu = (g-2)_\mu/2$ has been measured at the Brookhaven National Laboratory in 2001 and recently at the Fermilab Muon $g - 2$ Experiment. The results
deviate by 4.2 $\sigma$ from the Standard Model predictions, where the most dominant source of theoretical error comes from the Hadronic Leading Order (HLO) contribution $a_\mu^{\mathrm{HLO}}$. MUonE
is a proposed experiment at CERN whose purpose is to provide a new and independent determination of $a_\mu^{\mathrm{HLO}}$ via elastic muon-electron scattering at low momentum transfer. To achieve a
precision that is comparable to the standard timelike estimation of $a_\mu^{\mathrm{HLO}}$, the experiment must reach an accuracy of about 10 parts per million on the differential cross section. This
requires a similar level of accuracy also from the theoretical point of view: a precise calculation of the muon-electron scattering cross section with all the relevant radiative corrections as well
as quantitative estimates of all possible background processes are needed. In this talk the theoretical formulation for the NNLO photonic corrections as well as NNLO real and virtual lepton pair
contributions are described and numerical results obtained with a Monte Carlo event generator are presented. These contributions are crucial to reach the precision aim of MUonE.
In-person participation Yes | {"url":"https://agenda.infn.it/event/28874/contributions/169933/","timestamp":"2024-11-10T09:04:27Z","content_type":"text/html","content_length":"106747","record_id":"<urn:uuid:95b9b354-389d-4af8-85d2-1492207a2b68>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00132.warc.gz"} |
How to Load A Partial Model With Saved Weights In Pytorch?
To load a partial model with saved weights in PyTorch, you first need to define the architecture of the model with the same layers as the saved model. Then, you can load the saved weights using the
torch.load() function and specify the path to the saved weights file. After loading the saved weights, you can transfer the weights to the corresponding layers in the partial model using the
load_state_dict() method. Make sure to load the weights for the layers that are present in both models to avoid any errors. Finally, you can use the partial model with the loaded weights for
inference or further training.
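A minimal sketch of that workflow (the model, layer names, and file path below are only placeholders for your own):

import torch
import torch.nn as nn

class PartialModel(nn.Module):
    # Illustrative model that shares only 'layer1' with the saved network.
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(10, 5)    # also present in the saved model
        self.new_head = nn.Linear(5, 2)   # new layer, not in the saved weights

    def forward(self, x):
        return self.new_head(self.layer1(x))

model = PartialModel()
saved_state = torch.load('saved_weights.pth')   # placeholder path

# Keep only the entries whose names and shapes match the partial model.
model_state = model.state_dict()
filtered = {k: v for k, v in saved_state.items()
            if k in model_state and v.shape == model_state[k].shape}
model_state.update(filtered)
model.load_state_dict(model_state)

# Shortcut when you simply want mismatching keys to be ignored:
# model.load_state_dict(saved_state, strict=False)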
How to fine-tune a model by loading saved weights in Pytorch?
To fine-tune a model by loading saved weights in Pytorch, you can follow these steps:
1. Define your model architecture and load the saved weights:
import torch
import torch.nn as nn
from model import YourModelClass

# Create an instance of your model class
model = YourModelClass()

# Load saved weights
model.load_state_dict(torch.load('saved_weights.pth'))
2. Define your loss function and optimizer:
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
3. Set your model to train mode:
model.train()
4. Iterate over your training dataset and fine-tune the model:
for epoch in range(num_epochs):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

    # Optionally, evaluate your model on a validation set after each epoch
5. Save the fine-tuned weights if needed:
torch.save(model.state_dict(), 'fine_tuned_weights.pth')
By following these steps, you can fine-tune your model by loading saved weights in Pytorch.
How to retrieve trained weights for specific layers in Pytorch?
To retrieve the trained weights for specific layers in PyTorch, you can use the state_dict() method of your model.
Here is an example code snippet on how to retrieve the trained weights for a specific layer (called layer1 in this example) in your model:
import torch

# Define your model
class MyModel(torch.nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.layer1 = torch.nn.Linear(10, 5)
        self.layer2 = torch.nn.Linear(5, 1)

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        return x

model = MyModel()

# Load the trained weights
model.load_state_dict(torch.load('path_to_model_weights.pth'))

# Retrieve the trained weights for a specific layer
layer1_weights = model.layer1.weight.data

print(layer1_weights)
In this example, we defined a model with two linear layers, layer1 and layer2. We loaded the trained weights using load_state_dict() and then accessed the weights of a specific layer through its .weight attribute (here layer1).
Make sure to replace 'path_to_model_weights.pth' with the actual path to the saved weights file.
What is the benefit of loading a partial model with saved weights in Pytorch?
One benefit of loading a partial model with saved weights in Pytorch is that it allows for faster training and fine-tuning of the model. By initializing the model with saved weights from a previously
trained model, you can start training from a point where the model has already learned useful features and patterns, instead of training from scratch. This can help to speed up the training process
and improve the overall performance of the model. Additionally, loading a partial model with saved weights can help to save computational resources and memory, as you do not have to train the entire
model from the beginning.
How to troubleshoot issues when loading partial models with saved weights in Pytorch?
When encountering issues when loading partial models with saved weights in Pytorch, you can follow these troubleshooting steps:
1. Ensure that the model architecture matches when loading the saved weights. If the architecture of the current model is different from the one used to save the weights, you may encounter errors.
Make sure to define the model architecture the same way it was when the weights were saved.
2. Check that the keys of the state_dict from the saved weights match the keys of the model's state_dict. The state_dict is a dictionary object that maps each layer of the model to its parameter
tensor. If the keys do not match, you may encounter errors when loading the weights.
3. Verify that the layers you want to load weights into are correctly defined. If you want to load weights into specific layers of the model, make sure that those layers are correctly defined in the
model architecture and match the keys in the saved weights.
4. Check for any modifications to the model after loading the weights. If you make any changes to the model after loading the weights, such as adding new layers or changing the architecture, it may
cause issues with the saved weights.
5. Use torch.save() and torch.load() to save and load the model weights. Make sure to use these functions correctly when saving and loading the model weights to avoid any issues.
6. Use the model.eval() method before loading the saved weights. This will set the model to evaluation mode and ensure that the model is ready to load the saved weights.
By following these troubleshooting steps, you should be able to successfully load partial models with saved weights in Pytorch without encountering any issues.
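A quick way to see exactly which keys differ (the path is a placeholder, and model stands for your current model instance):

import torch

saved_state = torch.load('saved_weights.pth')
model_state = model.state_dict()

missing    = set(model_state) - set(saved_state)   # keys your model needs but the checkpoint lacks
unexpected = set(saved_state) - set(model_state)   # keys the checkpoint has but your model does not
print("missing:", sorted(missing))
print("unexpected:", sorted(unexpected))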
What is the error message when weights do not match the model structure in Pytorch?
When weights do not match the model structure in Pytorch, the error message would typically be something like:
"RuntimeError: Error(s) in loading state_dict for Model: Missing key(s) in state_dict: "layer.weight", "layer.bias". Unexpected key(s) in state_dict: "fc.weight", "fc.bias". Incompatible keys size
between weights and model structure. Model has unexpected key(s) size, please double check the architecture." | {"url":"https://freelanceshack.com/blog/how-to-load-a-partial-model-with-saved-weights-in","timestamp":"2024-11-14T00:38:53Z","content_type":"text/html","content_length":"419086","record_id":"<urn:uuid:cc4205dd-4c3a-47a2-b173-edfd09782ae5>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00624.warc.gz"} |
Difference in SUM
Okay I have a spreadsheet with calculations that I am recreating in Smartsheet. Any idea why there would be a difference in the total SUM of the columns when the numbers are the same with the
calculations? I am dealing with a % but it is the exact same percentage. Does Smartsheet do something that Excel doesn't behind the scenes? Do I need to account for something in my formula? Has anyone
else experienced this?
Best Answer
• Ok. To exclude zeroes, you could use a SUMIFS instead. Something along the lines of:
=SUMIFS(CHILDREN(), CHILDREN(), @cell <> 0)
• Hi,
Smartsheet handles percent differently: 1 = 100% and 0.1 is 10%.
Can you describe your process in more detail and maybe share the sheet(s)/copies of the sheet(s) or some screenshots? (Delete/replace any confidential/sensitive information before sharing) That
would make it easier to help. (share too, andree@getdone.se)
I hope that helps!
Have a fantastic week!
Andrée Starå
Workflow Consultant / CEO @ WORK BOLD
SMARTSHEET EXPERT CONSULTANT & PARTNER
Andrée Starå | Workflow Consultant / CEO @ WORK BOLD
W: www.workbold.com | E:andree@workbold.com | P: +46 (0) - 72 - 510 99 35
Feel free to contact me for help with Smartsheet, integrations, general workflow advice, or anything else.
• I agree with @Andrée Starå. Screenshots with sensitive/confidential data blocked, removed, or replaced with "dummy data" as needed would be very beneficial. It would also help if you could copy/
paste your exact formula(s) here as well.
• The numbers in each column in CAF DUE and VARIANCE match the numbers in the Excel spreadsheet line for line exactly. The CLIN_BILLED_AMT is a provided amount from another system, not a calculated
amount. Below are my calculations. The {27398 and 32310 CLIN BILLED AMT ENTITY} is a Pivot chart that Smartsheet created with the totals.
CAF DUE calculation:
=MAX(IF([I/C/A CLIN]@row = "C", [TOTAL_FUNDING]@row, ""), IF([I/C/A CLIN]@row = "C", INDEX({27398 AND 32310 CLIN BILLED AMT}, MATCH([Entity Lookup (HELPER)]@row, {27398 and 32310 CLIN BILLED AMT
ENTITY}, 0)) * [I/C/A FEE%]@row / 100))
VARIANCE calculation:
=IF([I/C/A CLIN]@row <> "C", "", [CAF DUE]@row - [CLIN_BILLED_AMOUNT]@row)
Top line calculations for both CAF DUE AND VARIANCE are both:
SMARTSHEET Totals for CAF Due and Variance:
Excel Totals for CAF Due and Variance:
CAF DUE: $7,362,282.15
VARIANCE: $3,191,673.07
• Have you manually calculated to see which one was actually correct?
Are there any values that could have more than 2 decimal places (even though only 2 are shown there could be more stored on the back-end which could change the rounding)?
What kind of data is in the {27398 AND 32310 CLIN BILLED AMT} range? I see you are multiplying by [I/C/A FEE%]@row and then dividing by 100. Is it possible that is adjusting how many numbers go
beyond 2 decimal places (even though only two are shown)?
• The data that is in the {27398 AND 32310 CLIN BILLED AMT} is the SUM of CLIN_BILLED_AMT. It's really weird how it has to be calculated. Since you have to calculate the % on the sum of the Total
Funded grouped by the CLIN/BATCH # and not include the "C" total funding. Then that total sum is multiplied by the I/C/A Fee %.
I just asked for an unprotected copy of the Excel spreadsheet, and reviewed it based on your recommendation -- and lo and behold, even though the Excel spreadsheet is displaying 2 decimal places
($0.93), when you click on the cell it's actually storing 0.925400000000081. I think I now have discovered where the extra "cents" are coming from. (I think.....)
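As a generic illustration of that effect (plain Python, nothing Smartsheet- or Excel-specific), summing the full-precision stored values and summing the rounded values a sheet displays can drift apart by cents:

stored    = [0.333, 0.333, 0.334]            # what is actually kept in the cells
displayed = [round(v, 2) for v in stored]    # what a 2-decimal format shows: 0.33, 0.33, 0.33

print(sum(stored))      # about 1.00  (sum of full-precision values)
print(sum(displayed))   # about 0.99  (sum of the displayed values), a one-cent difference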
• Lets hope that's it. I'll keep my fingers crossed for you! 👍️
Feel free to come back and let us know if that was it.
• Okay, it's something to do with the 0.00 amounts on the lines. From what I can tell, this is heavily driven by the line items with zero (0) variance. I tested both spreadsheets side by side. I
used a range as an example (Smartsheet) and the sum total it calculates is equal to $11,002.95, vs. when I calculate the sum for the same range on my spreadsheet I get $11,002.94. All line items
with zero variance are counted on the Smartsheet as part of the sum total as unrounded positive values, whereas Excel doesn't appear to do that. Any thoughts on how to fix that?
• "............All line items with zero variance are counted on the Smartsheet as part of the sum total .................."
Are you referring to the SM(CHILDREN()) portion?
• Ok. To exclude zeroes, you could use a SUMIFS instead. Something along the lines of:
=SUMIFS(CHILDREN(), CHILDREN(), @cell <> 0)
• I saw that Paul answered already!
Let me know if I can help with anything else!
SMARTSHEET EXPERT CONSULTANT & PARTNER
Andrée Starå | Workflow Consultant / CEO @ WORK BOLD
W: www.workbold.com | E:andree@workbold.com | P: +46 (0) - 72 - 510 99 35
Feel free to contact me for help with Smartsheet, integrations, general workflow advice, or anything else.
• Thank you Paul and Andrée, but unfortunately I still have a slight discrepancy. It's only a few cents but I have a very detailed accounting department. LOL So, I am just trying to find out the
why. Right now I have a ticket in with Smartsheet to see if it's something that I have done or if there is something else. I will post here to let everyone know.
• Smartsheet support assisted me. We changed the ) and it fixed the few pennies difference. Thank you so much!
• Excellent!
Glad you got it working!
✅Please help the Community by marking your post with the accepted answer/helpful. It will make it easier for others to find a solution or help to answer!
SMARTSHEET EXPERT CONSULTANT & PARTNER
Andrée Starå | Workflow Consultant / CEO @ WORK BOLD
W: www.workbold.com | E:andree@workbold.com | P: +46 (0) - 72 - 510 99 35
Feel free to contact me for help with Smartsheet, integrations, general workflow advice, or anything else.
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/65917/difference-in-sum","timestamp":"2024-11-03T18:44:58Z","content_type":"text/html","content_length":"487666","record_id":"<urn:uuid:4c96b76e-4343-46f6-935c-5b522118093b>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00798.warc.gz"} |
where the coefficients are those from the last Hartree-Fock iteration and the matrix elements are all anti-symmetrized. You can extend your Hartree-Fock program to write out these matrix elements
after the last Hartree-Fock iteration. Make sure that your matrix elements are structured according to conserved quantum numbers, thereby avoiding writing out many zeros.
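For reference, the transformation referred to above is presumably the standard basis change of the antisymmetrized two-body matrix elements,
$$
\langle pq|\hat{v}|rs\rangle_{\mathrm{HF}} = \sum_{\alpha\beta\gamma\delta} C^{*}_{p\alpha}C^{*}_{q\beta}C_{r\gamma}C_{s\delta}\,\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{\mathrm{AS}},
$$
where the coefficients $C$ are taken from the last Hartree-Fock iteration and the Greek indices run over the original single-particle basis.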
To test that your matrix elements are set up correctly, when you read in these matrix elements in the CCD code, make sure that the reference energy from your Hartree-Fock calculations is reproduced.
Project 2 b):
Set up a code which solves the CCD equation by encoding the equations as they stand, that is follow the mathematical expressions and perform the sums over all single-particle states. Compute the
energy of the two-electron systems using all single-particle states that were needed in order to obtain the Hartree-Fock limit. Compare these with Taut's results for $\omega=1$ a.u. Since you do not
include singles you will not get the exact result. If you wish to include singles, you will be able to obtain the exact results in a basis with at least ten major oscillator shells.
Perform also calculations with $N=6$, $N=12$ and $N=20$ electrons and compare with reference [2] of Pedersen et al below.
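As a small illustration of where these amplitudes enter (the array names and shapes below are placeholders, not part of the project material), the CCD correlation energy is the contraction $E_{\mathrm{corr}}=\frac{1}{4}\sum_{ijab}\langle ij||ab\rangle t^{ab}_{ij}$, which in NumPy could be evaluated as

import numpy as np

nh, npart = 2, 10                          # number of hole and particle states (placeholders)
v_hhpp = np.zeros((nh, nh, npart, npart))  # antisymmetrized <ij||ab> in the HF basis
t2     = np.zeros((npart, npart, nh, nh))  # converged doubles amplitudes t^{ab}_{ij}

e_corr = 0.25 * np.einsum('ijab,abij->', v_hhpp, t2)
print(e_corr)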
Project 2 c):
The next step consists in rewriting the equations in terms of matrix-matrix multiplications and subdividing the matrix elements and operations in terms of two-particle configurations that conserve
total spin projection and projection of the orbital momentum. Rewrite also the equations in terms of so-called intermediates, as detailed in section 8.7 of Lietz et al. This section gives a detailed
description on how to build a coupled cluster code and is highly recommended.
Rerun your calculations for $N=2$, $N=6$, $N=12$ and $N=20$ electrons using your optimal Hartree-Fock basis. Make sure your results from 2b) stay the same.
Calculate as well ground state energies for $\omega=0.5$ and $\omega=0.1$. Try to compare with eventual variational Monte Carlo results from other students, if possible.
Project 2 d):
The final step is to parallelize your CCD code using either OpenMP or MPI and do a performance analysis. Use the $N=6$ case. Make a performance analysis by timing your serial code with and without
vectorization. Perform several runs and compute an average timing analysis with and without vectorization. Comment your results.
Compare thereafter your serial code(s) with the speedup you get by parallelizing your code, running either OpenMP or MPI or both. Do you get a near $100\%$ speedup with the parallel version? Comment
again your results and perform timing benchmarks several times in order to extract an average performance time.
1. M. Taut, Phys. Rev. A 48, 3561 - 3566 (1993).
2. M. L. Pedersen, G. Hagen, M. Hjorth-Jensen, S. Kvaal, and F. Pederiva, Phys. Rev. B 84, 115302 (2011)
Introduction to numerical projects
Here follows a brief recipe and recommendation on how to write a report for each project.
• Give a short description of the nature of the problem and the eventual numerical methods you have used.
• Describe the algorithm you have used and/or developed. Here you may find it convenient to use pseudocoding. In many cases you can describe the algorithm in the program itself.
• Include the source code of your program. Comment your program properly.
• If possible, try to find analytic solutions, or known limits in order to test your program when developing the code.
• Include your results either in figure form or in a table. Remember to label your results. All tables and figures should have relevant captions and labels on the axes.
• Try to evaluate the reliability and numerical stability/precision of your results. If possible, include a qualitative and/or quantitative discussion of the numerical stability, eventual loss of
precision etc.
• Try to give an interpretation of you results in your answers to the problems.
• Critique: if possible include your comments and reflections about the exercise, whether you felt you learnt something, ideas for improvements and other thoughts you've made when solving the
exercise. We wish to keep this course at the interactive level and your comments can help us improve it.
• Try to establish a practice where you log your work at the computerlab. You may find such a logbook very handy at later stages in your work, especially when you don't properly remember what a
previous test version of your program did. Here you could also record the time spent on solving the exercise, various algorithms you may have tested or other topics which you feel worthy of mentioning.
Format for electronic delivery of report and programs
The preferred format for the report is a PDF file. You can also use DOC or postscript formats or as an ipython notebook file. As programming language we prefer that you choose between C/C++,
Fortran2008 or Python. The following prescription should be followed when preparing the report:
• Use Devilry to hand in your projects, log in at http://devilry.ifi.uio.no with your normal UiO username and password.
• Upload only the report file! For the source code file(s) you have developed please provide us with your link to your github domain. The report file should include all of your discussions and a
list of the codes you have developed. The full version of the codes should be in your github repository.
• In your github repository, please include a folder which contains selected results. These can be in the form of output from your code for a selected set of runs and input parameters.
• Still in your github make a folder where you place your codes.
• In this and all later projects, you should include tests (for example unit tests) of your code(s).
• Comments from us on your projects, approval or not, corrections to be made etc can be found under your Devilry domain and are only visible to you and the teachers of the course.
Finally, we encourage you to work two and two together. Optimal working groups consist of 2-3 students. You can then hand in a common report. | {"url":"https://notebook.community/CompPhysics/ComputationalPhysics2/doc/Projects/2019/Project2CC/ipynb/Project2CC","timestamp":"2024-11-08T15:41:34Z","content_type":"text/html","content_length":"49217","record_id":"<urn:uuid:29ddcfc1-4996-41a3-96fc-e607210a3cdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00591.warc.gz"} |
Solving Tile Sliding Puzzles With Graph Searching Algorithms
Puzzles are a good way to kill some time when you're bored. I was at a doctor's office the other morning, and in the waiting room they had a sliding tile puzzle that I played around with for a little bit
while waiting to be called in. Personally, I find writing algorithms to solve the puzzles for me to be an even better way to kill some time. A number of small tabletop puzzle games, like peg
solitaire and various tile sliding games can be modeled algorithmically to find solutions to them. So when I got home, that's exactly what I did.
A puzzle similar to the version I was playing at the doctors office
This can be generalized by viewing a puzzle's configuration as its state, with each possible new configuration resulting from moving a piece from the current configuration representing a state
change. If we have an initial state, and our goal is to get to a finishing state, we can enumerate the possible state changes to create a tree structure that if possible, will guide us from a
starting state to our goal state by traversing the tree. I've covered one approach to this technique, called backtracking, when discussing the N Queens problem. In this post I will be discussing the
8-sliding tile puzzle and various algorithmic techniques used for solving them.
First, lets cover some basics of sliding tile puzzles.
Understanding the Problem Space
Sliding tile problems are based on a matrix of NxN spaces, with one blank space and the rest numbered 1 through N^2 - 1. Common values for N are 3, resulting in the 8-puzzle, and 4, which results in the 15
puzzle. The goal of the puzzle is to get from a starting configuration to a goal configuration by rearranging the tiles using only allowable moves.
Starting and goal configurations of an 8 puzzle board
Before we can even think about solving these puzzles, it is important to know if a given starting configuration is even possible to solve - not all of them are. Thankfully, there is an easy way of
determining this.
Determining If A Puzzle Can Be Solved
In the tabletop version of slider puzzles, the goal state is reached when the tiles are arranged numerically in row wise order. It should be noted that this technique is only applicable to puzzles
whose final state is as pictured.
To determine if a puzzle can be solved depends on the size of the puzzle being solved. We begin by counting the inversions in the starting configuration. An inversion is when two numbers, 'A' and 'B'
appear in the matrix, with the value of 'A' being greater than 'B' but appearing before it in the total ordering. In an NxN matrix, we count the number of inversions, again in row wise order.
If N is an odd number, then only puzzles with an even number of inversions can be solved.
If N is an even number, the puzzle can be solved only when the number of inversions + the row index of the blank space is even.
There are a number of ways to obtain the count of inversions, but all of them will involve flattening the matrix to a 1d Array. The following is the simplest approach, but keep in mind that it is n^2
bool checkBoardIsSolvable(Board b) {
    vector<int> ordered;
    //flatten the 2d vector.
    for (auto row : b)
        for (auto col : row)
            ordered.push_back(col);
    //count inversions
    int inversions = 0;
    for (int i = 0; i < 8; i++) {
        for (int j = i+1; j < 9; j++) {
            //The blank space is not used in the inversion count
            if (ordered[i] && ordered[j] && ordered[i] > ordered[j])
                inversions++;
        }
    }
    //odd size puzzle, must have an even number of inversions.
    if (b[0].size() % 2 != 0 && inversions % 2 == 0)
        return true;
    //even size puzzle, add the row index of the blank spot
    if (b[0].size() % 2 == 0 && (inversions + (find(b, 0).y+1)) % 2 == 0)
        return true;
    return false;
}
Another method of counting inversions is simply to merge sort the flattened array and track the inversions it encounters during the sort. If your board is already stored as a 1d array, remember to
sort a copy of the board, not the original. This reduces the complexity to O(n log n):
int countInversions(vector<int>& a, vector<int>& aux, int l, int r, int inv) {
    if (r - l <= 1)
        return inv;
    int m = (l+r) / 2;
    inv = countInversions(a, aux, l, m, inv);
    inv = countInversions(a, aux, m, r, inv);
    for (int k = l; k < r; k++) aux[k] = a[k];
    int i = l, j = m, k = l;
    while (i < m && j < r) {
        if (aux[i] < aux[j]) {
            a[k++] = aux[i++];
        } else {
            a[k++] = aux[j++];
            //skip pairs involving the blank (0); otherwise count the left-half elements still pending
            if (aux[i] && aux[j-1])
                inv += m - i;
        }
    }
    while (i < m) a[k++] = aux[i++];
    while (j < r) a[k++] = aux[j++];
    return inv;
}

bool checkBoardIsSolvable(Board b) {
    vector<int> ordered;
    for (auto row : b)
        for (auto col : row)
            ordered.push_back(col);
    vector<int> aux(ordered.size());   // scratch space for the merge
    int inversions = countInversions(ordered, aux, 0, ordered.size(), 0);
    if (b[0].size() % 2 != 0 && inversions % 2 == 0)
        return true;
    if (b[0].size() % 2 == 0 && (inversions + (find(b, 0).y+1)) % 2 == 0)
        return true;
    return false;
}
To be honest, these puzzles are generally small enough that the brute force method of counting inversions is perfectly sufficient. I've included the merge sort based method for the sake of completeness.
Proceeding When A Puzzle is Solvable
Once we've determined that a puzzle can be solved, we proceed by trying different board configurations. The board can be reconfigured by swapping the blank space with any of its neighbors
immediately above, below, to the left, or to the right of it. Spaces cannot be swapped diagonally, however. Seeing as each board can potentially have up to 4 possible next states, the search space
can be quite large due to combinatorial explosion. Knowing this, we can assume that any technique that reduces the size of the search space can aid in speeding up the search.
A portion of the 8puzzle search space
There are two basic techniques for approaching searching problems. Informed search is applicable when we can exploit some known properties of the puzzle to aid us in making a better decision when
selecting which state to transition to next. An example of this being A* search.
The other approach is what's called "uninformed search". That's a fancy way of saying brute force, where we treat any next state as equally likely and traverse them all, albeit in different orders
depending on the method chosen. Depth First Search and Breadth First Search are two such examples of this approach - and being the simplest types of search, they make a good place to start.
Laying the Foundation
Despite there being a number of different ways to proceed all of them require a few things in common:
• Need a practical way of representing the board in memory
• Must have a procedure to generate the possible next states of a given board
• Some way to represent the relationship between states
• And a method to compare a given board to the goal state.
With the board being a simple matrix, it is only logical for it to be represented by a 2d array of integers. I used the C++ vector class because it allows us to avoid some of the boilerplate involved
in constructing the matrix.
For representing the relationship between different board states, we are going to use a special kind of tree structure called a state space tree. Each possible next state makes up the children nodes
of the current board state in our tree. I chose to use a Left Child/Right Sibling tree with an additional parent pointer (to aid in retracing solution paths) as the data structure. Each node of the
tree has a copy of the board, the x/y coordinates of the blank spot in the matrix, a pointer to its first child node, as well as a pointer it's sibling nodes.
typedef vector<vector<int>> Board;
struct point {
    int x, y;
    point(int X = 0, int Y = 0) {
        x = X; y = Y;
    }
};

struct node {
    Board board;
    point blank;
    node* parent;
    node* next;
    node* children;
    node(Board info, point blank, node* parent, node* next) {
        this->blank.x = blank.x;
        this->blank.y = blank.y;
        this->board = info;
        this->next = next;
        this->parent = parent;
        this->children = nullptr;
    }
};
typedef node* link;
To generate the child nodes of a board, we pass the current node to a function that uses an array of coordinate offsets that will be applied to the coordinates where the blank tile is located to aid in
generating the next state. After applying the offsets to the blank spot's coordinates, we check if the new coordinates are a valid board position, and if so swap those positions and create a child
node for that state.
vector<point> dirs = {{1, 0}, {-1, 0}, {0,1},{0,-1}};
int tryCount = 0;
bool isSafe(int x, int y) {
    return x >= 0 && x < 3 && y >= 0 && y < 3;
}

link generateChildren(link h) {
    for (point d : dirs) {
        point next = {h->blank.x+d.x, h->blank.y+d.y};
        if (isSafe(next.x, next.y)) {
            Board board = h->board;
            swap(board[next.x][next.y], board[h->blank.x][h->blank.y]);
            h->children = new node(board, next, h, h->children);
        }
    }
    return h->children;
}
The child state nodes are linked together in a linked list, with each child node also pointing back to its parent. To know if we've found a solution we need an equality check for our boards, as well
as way to display the boards.
bool compareBoards(Board a, Board b) {
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (a[i][j] != b[i][j])
                return false;
    return true;
}

void printBoard(Board& board) {
    for (vector<int> row : board) {
        for (int col : row) {
            cout<<col<<" ";
        }
        cout<<"\n";
    }
}
And of course, we're going to need a driver program to test our algorithms, the following is what I used:
int main() {
Board p1start = {
Board p1goal = {
Board p2start = {
{1, 2, 3},
{5, 6, 0},
{7, 8, 4}
Board p2goal = {
{1, 2, 3},
{5, 8, 6},
{0, 7, 4}
SlidePuzzle ep;
ep.solve(p1start, p1goal);
ep.solve(p2start, p2goal);
return 0;
With these data structures and utility functions in place we are ready to start implementing a strategy to solve the puzzle, as mentioned above we will start with uninformed searches.
Exploring the Search Space
Our search space is organized as a tree, and with trees being a special type of directed acyclic graphs the logical place to start is with a graph searching algorithm. In general tree search
algorithms work like this:
procedure search(start, goal):
add start node to stack
while (stack is not empty) {
pop top item from stack and make current
if (current is goal) {
show path from start to goal and exit.
} else {
foreach (current.children)
add child to stack;
Some search techniques use recursion, and others are iterative. Some use a stack to store the nodes, and some use a queue while still others utilize a priority queue. Regardless of the exact strategy
chosen, all tree searches work fairly similarly with some doing additional work to aid in the search.
Uninformed Searches
The simplest place to start is a recursive depth first search, so that is exactly what I did. I very quickly confirmed my suspicions that when you combine the size of each node with the combinatorial
explosion of the search space, a depth first search will cause a stack overflow or run out of memory long before it finds a solution.
This happens because of the very nature of how depth first search works. Even if the goal state is located very close to the starting state in the tree but along a different branch then the one being
traversed by DFS, it may not be reached. This means that even if we didn't smash the call stack with the depth of recursion or run out of memory from generating such a large tree, it could still take
a VERY long time to arrive at the goal state - if it ever does at all.
One possible way to make depth first search avoid this type of behavior is by imposing a limit on how far down a path we want to let it go before calling it quits and trying a different branch. This is
called Depth Limited Search, and it requires only one small change to DFS, the depth limit:
void DLDFS(link curr, Board goal, int depth) {
    if (depth >= 0) {
        int cost = getBoardCost(curr->board, goal);
        if (cost == 0) {
            cout<<"Solution Found!\n";
            found = true;
        }
        if (curr->children == nullptr)
            curr->children = generateChildren(curr);
        for (link t = curr->children; t != nullptr; t = t->next) {
            if (!found)
                DLDFS(t, goal, depth-1);
        }
    }
}
Allright, lets see if we can find a solution within 8 moves:
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./iddfs
Trying with Depth Limit: 6
Solution Found!
Puzzle completed in 6 moves after trying 1800 board configurations.
Ok, now we're getting somewhere. But it's not very convenient having to try and guess how many moves it should take - guess too high, and our search may not complete; guess too low and we may not find
the solution. One possible remedy for this could be to use a variant of depth limited search called Iterative Deepening Depth First Search (IDDFS). IDDFS works just like depth limited search by
monitoring how far down its current path the algorithm has progressed, and imposing a limit on it. When that limit is reached, the algorithm is forced to back track and try a different branch.
Iterative deepening enhances this by repeatedly re-trying the search with increasing depth limits.
bool found;
void DLDFS(link curr, Board goal, int depth) {
    if (depth >= 0) {
        if (compareBoards(curr->board, goal)) {
            cout<<"Solution Found!\n";
            found = true;
        }
        if (curr->children == nullptr)
            curr->children = generateChildren(curr);
        for (link t = curr->children; t != nullptr; t = t->next) {
            if (!found)
                DLDFS(t, goal, depth-1);
        }
    }
}
void iddfs(Board first, Board goal) {
    found = false;
    tryCount = 0;
    int last = 0;
    link start = new node(first, find(first, 0), nullptr, nullptr);
    for (int i = 2; i < 20; i++) {
        if (!found) {
            cout<<"Trying with Depth Limit: "<<i<<"\n";
            DLDFS(start, goal, i);
            cout<<"Search expanded "<<(tryCount-last)<<" nodes with no solutions found.\n";
            last = tryCount;
        } else {
            break;
        }
    }
}
Ok, lets see where we get with IDDFS taking the guess work out of the depth limit for us:
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./iddfs
Trying with Depth Limit: 2
Tried: 35 board configurations with no solution found.
Trying with Depth Limit: 3
Tried: 347 board configurations with no solution found.
Trying with Depth Limit: 4
Tried: 3499 board configurations with no solution found.
Trying with Depth Limit: 5
Solution Found!
Puzzle completed in 6 moves after trying 683 board configurations.
All right! Success. Now, our search finds a solution after expanding 683 nodes. I would say cutting the number of configurations needed to roughly a third of what plain depth limited search expanded
is definitely an improvement. IDDFS succeeds where regular DFS failed, because the depth limiting forces it to search a wider selection of branches than it normally would; in a way, this is similar
to how a breadth first search would perform. And just like Breadth First Search, IDDFS also has the desirable trait of finding a solution in the shortest number of moves possible.
Breadth First Search utilizes a First In First Out queue to explore each of a nodes children in turn before then searching their children, as opposed to depth first search which continues expanding
the first child node it encounters due to its utilizing a Last In First Out ordered stack.
void BFS(link start, Board goal) {
    queue<link> fq;
    fq.push(start);
    while (!fq.empty()) {
        link h = fq.front();
        fq.pop();
        if (compareBoards(h->board, goal)) {
            cout<<"Solution Found!\n";
            return;
        }
        h->children = generateChildren(h);
        for (link t = h->children; t != nullptr; t = t->next)
            fq.push(t);
    }
}
Breadth first search is not only simple to implement but it is also considered complete search wise, meaning that it is guaranteed to find the solution if one exists. So let's see how it does:
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./8puzzle
Solution Found!
Puzzle completed in 6 moves after trying 683 board configurations.
Hmm. Interesting result. It would seem that BFS expands the same number of nodes as IDDFS, but this is actually not *quite* the case. Every node that BFS encounters it only ever explores once. Because
of how IDDFS works, it repeatedly explores nodes that occur higher up in the tree. Looking at our example IDDFS found a solution at depth limit five, that means that every node in the first 4 levels
of the tree were visited 5 times where BFS only visited them once. Even though we can thankfully skip the step of generating a nodes children after the first encounter, that's still a lot less work
because of the drastic reduction in pointer surfing required, not to mention that BFS does not require recursion.
When is 'Good' good enough?
Lets take a moment to recap what we have accomplished up to this point. We've already gone from expanding so many nodes that our program would crash when using plain depth first search, to finding a
solution after searching 683 nodes by employing a slight tweak to the algorithm. Now finding a solution after 683 tries with even less work and all it really took was a change in data structure. We
could end here, having arrived at working solution for the 8-puzzle problem in the shortest amount of moves. But what if like for IDDFS & Depth First Search, we could see a huge improvement in
performance for only a little more effort?
Looking back at our progress we can surmise that practically all of our improvements came about after making a fundamental change to the order in which the nodes were expanded. There is nothing
similar to the DFS -> IDDFS trick that can be applied to Breadth First Search, but what we can do is swap out the FIFO queue for something a bit more powerful.
It is well known from graph algorithms that using a priority queue can lead to faster graph searching algorithms than BFS or DFS can provide, and our state space tree is really just a directed
acyclic graph. But in order to use a priority queue we need to gather more information about the board state than we have been up to now, pushing us into the category of informed searches.
Informed Search
As I mentioned, for a priority queue to actually be useful, we need a way of assigning a distinct value to each state configuration. To assign this value, we will use the approximate distance from
our current state to the goal state. We do this by summing the manhattan distance of each tile position in the goal state from where that value occurs in the current state.
If this sounds like it requires a lot more work, it's because it does. We are making a trade here. We are accepting a higher computational complexity cost for a (hopefully) drastic reduction in
memory usage. In state space tree searching algorithms, memory usage is directly proportional to how many nodes of the tree need to be expanded in order to reach the goal.
In other words, while comparing the two board states will be a more computationally expensive operation than it would be for the previous algorithms, we will be doing it significantly fewer times
because of how many fewer nodes we need to expand to find the solution.
point find(Board board, int x) {
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (x == board[i][j])
                return point(i, j);
        }
    }
    return point(-1,-1);
}

int getBoardCost(Board a, Board b) {
    int cost = 0;
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++) {
            if (a[i][j] != b[i][j]) {
                point p = find(b, a[i][j]);
                cost += std::abs(i - p.x) + std::abs(j - p.y);
            }
        }
    }
    return cost;
}
The getBoardCost() function replaces our previous implementation of compareBoards() that returned a bool. getBoardCost() will return 0 if the boards are the same, rendering compareBoards() redundant.
We will also use the value it returns as the cost for our priority queue. The lower the board cost, the closer to the goal state we are. By using a min-heap priority queue, we can keep selecting the
closest next state instead of just trying them all as we did with BFS. This "greedy" method should significantly trim the search space and thus speed up our search.
typedef pair<int,link> pNode;
void priorityFirstSearch(Board first, Board goal) {
    link start = new node(first, find(first, 0), nullptr, nullptr);
    priority_queue<pNode, vector<pNode>, greater<pNode>> pq;
    pq.push(make_pair(0, start));
    while (!pq.empty()) {
        link curr = pq.top().second;
        pq.pop();
        if (getBoardCost(curr->board, goal) == 0) {
            cout<<"Solution Found!\n";
            return;
        }
        curr->children = generateChildren(curr);
        for (link t = curr->children; t != nullptr; t = t->next)
            pq.push(make_pair(getBoardCost(t->board, goal), t));
    }
}
As you can see, the algorithms are actually very similar with the main difference being the choice of data structure and the other changes being in the helper utilities to make use of the different
data structure.
Lets run this version on the same puzzle we used for BFS and see if we gained any increase in performance:
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./TilePuzzle
Solution Found!
Puzzle completed in 6 moves after trying 19 board configurations.
Wow! THAT is progress. We went from having to generate 683 different board configurations down to only 19 before we found our solution! Sure - each try cost more than it did for BFS, but the amount
we chop from the search space more than makes up for that additional cost.
Additional Optimizations
During the course of this post I've focused primarily on the order nodes were visited, mainly by tweaking the data structure in an effort to complete the search while expanding as few nodes
as possible.
Another optimization comes about due to the way our boards are generated. We run the risk of arriving back at a board that we have already visited - this is what is called a cycle in graph theory.
And the above implementations do nothing to check for this case. In the worst case, a cycle could cause us to get stuck in an endless loop, with the BEST case being that we process more nodes than
need be due to some being processed more than once. Thankfully we don't have to check each new configuration against every node already in the tree: we only need to compare it to the nodes along the
path from the root to the current node, having maintained parent pointers makes this a very easy thing to do:
bool hasSeen(link h) {
    if (h == nullptr || h->parent == nullptr)
        return false;
    node* x = h->parent;
    while (x != nullptr) {
        if (compareBoards(x->board, h->board)) {
            return true;
        }
        x = x->parent;
    }
    return false;
}
And now we just need to check hasSeen(child) before deciding to expand the node, if it returns true we can skip expanding that node. Let's see how we do adding it to our PFS:
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./pfs
Solution Found!
Puzzle completed in 6 moves after trying 11 board configurations.
Well golly, it would appear we've chopped the search space nearly in half yet again! It is this checking of previously visited states that transforms our priority first search into the venerable A* search.
I think for now, this is as far as I'm going to take things, seeing as in the course of this article, we've gone from polynomial to linear complexity on the 8-puzzle. Additionally, this optimization is
also what tipped the scales and allowed it to solve instances of the 15 puzzle, something it previously struggled to do (with some light tweaks to the heuristic).
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./astar
Solution Found!
<removed for brevity>
Puzzle completed in 26 moves after trying 75 board configurations.
One simple optimization that led to a much better search was to track what depth in the tree the current child is and store that in the node. When pushing a node to the priority queue, add the node's
depth to the priority.
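Put another way, the priority becomes f(n) = g(n) + h(n): the depth g(n) counts the moves made so far, while the Manhattan-distance heuristic h(n) estimates the moves remaining, which is exactly the evaluation function A* uses.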
That being said, some puzzles take ALOT more searching than others:
max@MaxGorenLaptop:/mnt/c/Users/mgoren$ ./astar
Solution Found!
<removed for brevity>
Puzzle completed in 55 moves after trying 23997166 board configurations.
Still, Not bad for a couple hours of hacking.
Other Algorithms
There are other algorithms that can speed up the search even more, such as the many A* search variants. For puzzles with larger search spaces such as the 15-puzzle variant which uses a 4x4 matrix and
the 24-puzzle which uses a 5x5 matrix, anything less than A* search isn't going to cut it. Iterative Deepening A* is considered the gold standard for tile sliding puzzles - however, the algorithm is
complicated enough that it really deserves its own article.
Like A* search, the priority first search that was implemented above uses the cost function in a similar way to how A* uses heuristics. I only explored one heuristic in this post - there are many.
Other cost functions/heuristics, so long as they are admissible, have the potential to eke out even more performance from both algorithms. A* is a tricky beast, however; it is only as good as its
heuristic. It may, for example, return a 53-move solution when there is a valid 48-move solution. BFS and IDDFS won't have this problem, but they also aren't up to the task of finding ANY
solution for many 15-puzzles.
An improvement upon breadth first search is a technique called Branch and Bound. Branch and bound works like the BFS counterpart to Backtracking DFS in that it utilizes a promising function to
determine if a path is worth following. I encountered several websites through the course of my research that claim to be using branch and bound to solve sliding tile puzzles but are actually using
best first search upon examination of the code.
Of perhaps a bit more interest, is that because we have a known goal state from the start, this problem is a feasible candidate for a bi-directional breadth first search, maybe even a parallel
version with different threads searching from different directions - though I think we can all agree that might be just a touch overkill for a problem such as this.
There are also algorithms for solving these puzzles that require no searching at all - though they do not necessarily find the solution with the least number of moves, as the algorithms explained above do.
A Few Words About the Data Structures Used
The node structure that I implemented is admittedly overkill for the problem at hand. The reason I decided to construct a full LCRS representation of the state space tree is simply that I had never
had occasion to ever use an LCRS tree before! Up to now they've just been something I've read about in DSA text books. My first implementation of the BFS algorithm shown above did not utilize child
or parent pointers in the node structure. The tree was generated implicitly, with the children being a simple linked list, the solution path stored in a hash table. I find the implementation shown
above to be far more elegant.
I also have an implementation that utilizes a quad tree available on my github page as linked below.
That's all I've got for you today, so until next time, Happy Hacking!
Further Reading
"Algorithms in a nutshell - a desktop quick reference" By Heineman, Pollice, & Selkow from O'reilly publishers.
"Data Structures Using C" By Tenenbaum, Langsam, & Augenstein | {"url":"http://maxgcoding.com/solving-puzzles-through-searching","timestamp":"2024-11-03T07:11:08Z","content_type":"text/html","content_length":"42504","record_id":"<urn:uuid:5a9bc3f8-dec1-4550-9102-a5dfc618c0aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00247.warc.gz"} |
Math, Grade 6, Rational Numbers, Peer Review and Revise
Make Sense and Persevere
Work Time
Make Sense and Persevere
Watch the video to see how Carlos and Jan make sense of a problem and then persevere in solving it.
• How did Carlos and Jan make sense of the problem?
• What did they do that showed they were persevering in solving the problem?
• Did you encounter anything like what Carlos and Jan encountered when they were trying to solve the problem?
• What kinds of things help you make sense of a problem and persevere in solving it?
VIDEO: Mathematical Practice 1 | {"url":"https://openspace.infohio.org/courseware/lesson/2073/student-old/?task=4","timestamp":"2024-11-06T01:26:07Z","content_type":"text/html","content_length":"19220","record_id":"<urn:uuid:3570e909-54c5-4324-96b0-bf436e068865>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00411.warc.gz"} |
[Solved] Two approximations to Planck's Law are us | SolutionInn
Two approximations to Planck's Law are useful in the extreme low and high limits of (lambda T).
Two approximations to Planck's Law are useful in the extreme low and high limits of \(\lambda T\).
a. Show that in the limit where \(\left(C_{1} / \lambda T\right) \gg 1\) that Planck's spectral distribution reduces to the following form:
\[E_{b \lambda}(\lambda, T) \approx \frac{C_{o}}{\lambda^{5}} \exp \left(-\frac{C_{1}}{\lambda T}\right) \quad \text { Wien's Law }\]
Compare this result to Planck's distribution and determine when the error between the two is less than \(1 \%\).
b. Show that in the limit \(\left(C_{1} / \lambda T\right) \ll 1\) Planck's distribution law reduces to:
\[E_{b \lambda}(\lambda, T) \approx C_{2} \frac{T}{\lambda^{4}} \quad \text { Rayleigh-Jeans Law }\]
Compare with Planck's distribution and determine when the two are in error by less than \(1 \%\).
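A sketch of the expected approach (not the posted solution), writing Planck's distribution in the problem's notation as
\[E_{b \lambda}(\lambda, T)=\frac{C_{o}}{\lambda^{5}\left[\exp \left(C_{1} / \lambda T\right)-1\right]}:\]
a. For \(C_{1} / \lambda T \gg 1\), \(\exp \left(C_{1} / \lambda T\right)-1 \approx \exp \left(C_{1} / \lambda T\right)\), which gives Wien's form at once. The relative error of Wien's law with respect to Planck's is \(\exp \left(-C_{1} / \lambda T\right)\), which falls below \(1 \%\) once \(C_{1} / \lambda T>\ln 100 \approx 4.6\).
b. For \(C_{1} / \lambda T \ll 1\), \(\exp \left(C_{1} / \lambda T\right) \approx 1+C_{1} / \lambda T\), so \(E_{b \lambda} \approx C_{o} T /\left(C_{1} \lambda^{4}\right)\), i.e. the Rayleigh-Jeans form with \(C_{2}=C_{o} / C_{1}\). Writing \(x=C_{1} / \lambda T\), the leading relative error is about \(x / 2\), which stays below \(1 \%\) for \(x \lesssim 0.02\).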
Fantastic news! We've Found the answer you've been seeking! | {"url":"https://www.solutioninn.com/study-help/fundamentals-of-chemical-engineering-thermodynamics/two-approximations-to-plancks-law-are-useful-in-the-extreme-1382942","timestamp":"2024-11-14T17:11:18Z","content_type":"text/html","content_length":"79826","record_id":"<urn:uuid:11f6e95f-1a33-4a1f-a9a9-c13b9ef81006>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00167.warc.gz"} |
How can I debug SSE value during run Neural Net model of each loop in Rapid Miner
edited November 2018 in Help
I'm a student interested in using RapidMiner to run a Neural Network model and come up with a formula to predict defects in an HDD factory. I've run the NN and obtained both the prediction model and its accuracy using X-Validation. But I would also like to get the SSE (sum of squared errors) of each loop for more debugging information. Is there any way to get the SSE of each training cycle? In this case, I set a maximum of 500 training cycles. Need your help; it is very critical for me.
Here are my setting pictures.
Overall connection.
Inside validation
Best Answer
land (RapidMiner Certified Analyst, RapidMiner Certified Expert):
you cannot get that directly, but if you use the Performance (Regression) operator inside the validation you will get many more choices for how to describe the goodness of a model. One of them is squared error, which will give you the average squared error. From that (the micro average; don't use the macro average) you can simply get to your sum by multiplying it by the number of rows you have in your data set.
Alternatively, you can always build a process that uses Generate Attributes to calculate the squared error for each row and then uses Aggregate to sum that up. With Data to Performance you can then use this result as the performance vector in the test process of the cross validation.
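Outside RapidMiner, the relationship described above is plain arithmetic. A small pandas sketch (column names and numbers are made up) showing that the sum of squared errors is the averaged squared error times the number of rows:

import pandas as pd

# Hypothetical scored data: 'label' is the true value, 'prediction' is the
# neural net output (both column names are placeholders).
df = pd.DataFrame({
    "label":      [3.0, 5.0, 2.5, 7.0],
    "prediction": [2.8, 5.4, 2.0, 6.5],
})

squared_error = (df["label"] - df["prediction"]) ** 2

sse = squared_error.sum()                  # sum of squared errors
mse = squared_error.mean()                 # the averaged squared error
assert abs(sse - mse * len(df)) < 1e-12    # SSE = average squared error * number of rows
print(sse, mse)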
First of all, thank you very much for the valuable suggestion. I've re-tried with the setting below.
1. Run Neural Network with 500 training cycles. The settings are the same as in my original post.
2. Use Performance (Regression) and select squared error, with input data of 100 records.
With the result below, here are my questions.
1. Is the squared_error in the result calculated based on the training cycle count of 500?
2. To get the sum of squared_error, I have to multiply by 100. Is that correct? The value should be 25.3, right? | {"url":"https://community.rapidminer.com/discussion/31420/how-can-i-debug-sse-value-during-run-neural-net-model-of-each-loop-in-rapid-miner","timestamp":"2024-11-02T12:23:17Z","content_type":"text/html","content_length":"295161","record_id":"<urn:uuid:2983cc63-e222-4b5f-b1fd-bede071c5c7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00527.warc.gz"}
Machine Learning
Because intuition fails in high dimensions, we resort to a combined package of representation, evaluation, and optimization a.k.a machine learning (ML). ML offers a powerful toolkit to build complex
systems faster. A short paper showing the cautionary side of ML makes for an interesting read. A few more things I found useful related to the field:
• You may not need ML.
• Business Logic is superior to ML. In cases where you have insufficient prior knowledge, ML offers good estimates.
• Understand what your model means.
• Keep data dependencies simple and crisp.
• Reduce code volume.
• ML processes should be designed based on the information flow paradigm.
• Training is a low effort exercise.
• Most of the functions placed at low levels of ML systems are redundant or of little value when compared with the cost of providing them at that low level.
• Drawing modular boundaries when designing ML systems is a bad idea. In other words, there should be no features for an ML system -- only functions at the highest application level.
• Choose how to represent your model first: K-NN, SVM, Bayes, Regression, Decision Trees, Rule-Based, Neural Networks, CRFs, or Bayesian Networks. Next, choose your evaluation method: Error Rate,
Recall and Precision, Squared Error, Likelihood, KL-Divergence, Utility or Margin based. And finally, choose your optimization approach: Greedy, Branch-and-Bound, Beam Search, Linear Programming,
Quadratic Programming, Gradient Descent, Conjugate Gradient or Quasi-Newton.
• A dumb algorithm with lots of data beats a decent algorithm with modest amounts of data.
• There is no such thing as minima.
• Not every representable function can be learned.
• Any ML system operates only in a specific observational mode.
• If hyperparameter optimization is your only worry, you got it all wrong.
• Training your ML model to convergence is impractical.
• Remember that modifying the model can have significant effects on memory layout.
• Reward function design is tricky and oftentimes, reinforcement learning isn't practical or accurate.
• Real models diverge.
• It is the behavior of your optimization algorithm that counts, not its 'zero' loss. Is it learning bad correlations or good ones first?
• Overparameterising neural networks is a simple way to get acceptable results.
• Your neural network of a million parameters has an equivalent thousand-variable polynomial regression equation.
• There is no best learner. If your algorithm is better at solving one problem, it is worse at another.
• ML development is not a monolithic pursuit. | {"url":"https://densebit.com/posts/20.html","timestamp":"2024-11-12T05:08:57Z","content_type":"text/html","content_length":"4954","record_id":"<urn:uuid:f6e534f2-ad1a-46f7-a6d7-a6d7e9c68f94>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00793.warc.gz"} |
dgelss.f (3) - Linux Manuals
subroutine dgelss (M, N, NRHS, A, LDA, B, LDB, S, RCOND, RANK, WORK, LWORK, INFO)
DGELSS solves overdetermined or underdetermined systems for GE matrices
Function/Subroutine Documentation
subroutine dgelss (integerM, integerN, integerNRHS, double precision, dimension( lda, * )A, integerLDA, double precision, dimension( ldb, * )B, integerLDB, double precision, dimension( * )S, double
precisionRCOND, integerRANK, double precision, dimension( * )WORK, integerLWORK, integerINFO)
DGELSS solves overdetermined or underdetermined systems for GE matrices
DGELSS computes the minimum norm solution to a real linear least
squares problem:
Minimize 2-norm(| b - A*x |).
using the singular value decomposition (SVD) of A. A is an M-by-N
matrix which may be rank-deficient.
Several right hand side vectors b and solution vectors x can be
handled in a single call; they are stored as the columns of the
M-by-NRHS right hand side matrix B and the N-by-NRHS solution matrix X.
The effective rank of A is determined by treating as zero those
singular values which are less than RCOND times the largest singular value.
M is INTEGER
The number of rows of the matrix A. M >= 0.
N is INTEGER
The number of columns of the matrix A. N >= 0.
NRHS is INTEGER
The number of right hand sides, i.e., the number of columns
of the matrices B and X. NRHS >= 0.
A is DOUBLE PRECISION array, dimension (LDA,N)
On entry, the M-by-N matrix A.
On exit, the first min(m,n) rows of A are overwritten with
its right singular vectors, stored rowwise.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,M).
B is DOUBLE PRECISION array, dimension (LDB,NRHS)
On entry, the M-by-NRHS right hand side matrix B.
On exit, B is overwritten by the N-by-NRHS solution
matrix X. If m >= n and RANK = n, the residual
sum-of-squares for the solution in the i-th column is given
by the sum of squares of elements n+1:m in that column.
LDB is INTEGER
The leading dimension of the array B. LDB >= max(1,max(M,N)).
S is DOUBLE PRECISION array, dimension (min(M,N))
The singular values of A in decreasing order.
The condition number of A in the 2-norm = S(1)/S(min(m,n)).
RCOND is DOUBLE PRECISION
RCOND is used to determine the effective rank of A.
Singular values S(i) <= RCOND*S(1) are treated as zero.
If RCOND < 0, machine precision is used instead.
RANK is INTEGER
The effective rank of A, i.e., the number of singular values
which are greater than RCOND*S(1).
WORK is DOUBLE PRECISION array, dimension (MAX(1,LWORK))
On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
LWORK is INTEGER
The dimension of the array WORK. LWORK >= 1, and also:
LWORK >= 3*min(M,N) + max( 2*min(M,N), max(M,N), NRHS )
For good performance, LWORK should generally be larger.
If LWORK = -1, then a workspace query is assumed; the routine
only calculates the optimal size of the WORK array, returns
this value as the first entry of the WORK array, and no error
message related to LWORK is issued by XERBLA.
INFO is INTEGER
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value.
> 0: the algorithm for computing the SVD failed to converge;
if INFO = i, i off-diagonal elements of an intermediate
bidiagonal form did not converge to zero.
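As an illustration of what the routine computes (not a call into LAPACK itself), here is a NumPy sketch of the same SVD-based minimum-norm solve with the RCOND truncation rule described above; the example data is made up.

import numpy as np

def gelss_like(A, B, rcond=-1.0):
    """Minimum-norm least-squares solve via the SVD, mirroring DGELSS semantics.
    Singular values s[i] <= rcond * s[0] are treated as zero; a negative rcond
    means machine precision is used instead, as in the Fortran routine."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    if B.ndim == 1:
        B = B[:, None]
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if rcond < 0:
        rcond = np.finfo(float).eps
    rank = int(np.sum(s > rcond * s[0]))
    UtB = U.T @ B
    UtB[:rank] /= s[:rank, None]    # apply 1/s on the kept singular values
    UtB[rank:] = 0.0                # drop directions treated as zero
    X = Vt.T @ UtB
    return X, s, rank

# made-up overdetermined 4x2 example with one right-hand side
A = [[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]]
b = [6.0, 5.0, 7.0, 10.0]
X, s, rank = gelss_like(A, b)
print(X.ravel(), s, rank)   # least-squares fit, singular values, effective rank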
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
November 2011
Definition at line 172 of file dgelss.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://www.systutorials.com/docs/linux/man/3-dgelss.f/","timestamp":"2024-11-10T04:50:36Z","content_type":"text/html","content_length":"11272","record_id":"<urn:uuid:b42d246b-69f2-49a2-9ed7-c627a319902a>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00084.warc.gz"} |
Rhombus - math word problem (975)
The rhombus with area 137 has one diagonal that is longer by 5 than the second one. Calculate the length of the diagonals and rhombus sides.
Correct answer:
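One way to set up the computation: with the diagonals satisfying \(d_1 = d_2 + 5\) and the area formula \(\tfrac{1}{2} d_1 d_2 = 137\),

\[\tfrac{1}{2}(d_2 + 5)\,d_2 = 137 \;\Rightarrow\; d_2^2 + 5 d_2 - 274 = 0 \;\Rightarrow\; d_2 = \frac{-5 + \sqrt{1121}}{2} \approx 14.24, \qquad d_1 \approx 19.24.\]

The diagonals of a rhombus bisect each other at right angles, so each side is

\[a = \sqrt{\left(\tfrac{d_1}{2}\right)^2 + \left(\tfrac{d_2}{2}\right)^2} \approx \sqrt{9.62^2 + 7.12^2} \approx 11.97.\]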
| {"url":"https://www.hackmath.net/en/math-problem/975","timestamp":"2024-11-13T21:24:36Z","content_type":"text/html","content_length":"85406","record_id":"<urn:uuid:67ba3a10-f393-4746-b174-cb73b72fa875>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00320.warc.gz"}
ssdv-fec: an erasure FEC for SSDV implemented in Rust
Back in May I proposed an erasure FEC scheme for SSDV. The SSDV protocol is used in amateur radio to transmit JPEG files split in packets, in such a way that losing some packets only causes the loss
of pieces of the image, instead of a completely corrupted file. My erasure FEC augments the usual SSDV packets with additional FEC packets. Any set of \(k\) received packets is sufficient to recover
the full image, where \(k\) is the number of packets in the original image. An almost limitless amount of distinct FEC packets can be generated on the fly as required.
I have now written a Rust implementation of this erasure FEC scheme, which I have called ssdv-fec. This implementation has small microcontrollers in mind. It is no_std (it doesn’t use the Rust
standard library nor libc), does not perform any dynamic memory allocations, and works in-place as much as possible to reduce the memory footprint. As an example use case of this implementation, it
is bundled as a static library with a C-like API for ARM Cortex-M4 microcontrollers. This might be used in the AMSAT-DL ERMINAZ PocketQube mission, and it is suitable for other small satellites.
There is also a simple CLI application to perform encoding and decoding on a PC.
I have updated the Jupyter notebook that I made in the original post. The notebook had a demo of the FEC scheme written in Python. Now I have added encoding and decoding using the CLI application to
this notebook. Using the ssdv-fec CLI application is quite simple (see the README), so it can be used in combination with the ssdv CLI application for encoding and decoding from and to a JPEG file.
Note that the Python prototype and this new Rust implementation are not interoperable, for reasons described below.
For details about how the FEC scheme works you can refer to the original post. It is basically a Reed-Solomon code over \(GF(2^{16})\) used as an erasure FEC. To make the implementation suitable for
microcontrollers, I have made some decisions about which algorithms to use for the mathematical calculations. I will describe these here.
The finite field \(GF(2^{16})\) is realized as an extension of degree two over \(GF(2^8)\). The field \(GF(2^8)\) is implemented as usual, with lookup tables for the exponential and logarithm
functions. In this way, elements of \(GF(2^{16})\) are formed by pairs of elements of \(GF(2^8)\), and the multiplication and division can be written using relatively simple formulas with operations
on \(GF(2^8)\).
More in detail, \(GF(2^8)\) is realized as the quotient\[GF(2)[x]/(x^8 + x^4 + x^3 + x^2 + 1).\]This choice of a primitive polynomial of degree 8 over \(GF(2)\) is very common. The element \(x\) is
primitive, so the exponential function \(j \mapsto x^j\) and the logarithm \(x^j \mapsto j\) can be tabulated. These two tables occupy 256 bytes each. Multiplication is performed by using these
tables to calculate multiplication as addition of exponents. The case where \(x = 0\) is treated separately, since the logarithm of zero is not defined. An element of \(GF(2^8)\) are encoded in a
byte by writing it as a polynomial of degree at most 7 in \(x\) and storing the leading term of this polynomial in the MSB and the independent term in the LSB. This is all pretty standard, and it is
for example how Phil Karn’s implementation of the CCSDS Reed-Solomon (255, 223) code works.
The field \(GF(2^{16})\) is realized as the quotient\[GF(2^8)[y]/(y^2 + x^3y + 1).\]Here \(x\) still denotes the same primitive element of \(GF(2^8)\) as above. I have selected the polynomial \(y^2 +
x^3y + 1\) because \(k = 3\) is the smallest \(k \geq 0\) for which the polynomial \(y^2 + x^ky + 1\) is irreducible over \(GF(2^8)\). The fact that only one term of this polynomial is different from
one simplifies the multiplication and division formulas. Each element of \(GF(2^{16})\) is a degree one polynomial \(ay + b\), where \(a, b \in GF(2^8)\). Each of the elements \(a\) and \(b\) is
stored in its own byte. Addition of elements of \(GF(2^{16})\) is performed by adding the coefficients of their corresponding degree one polynomials. Since this addition is addition on \(GF(2^8)\),
it amounts to the XOR of two bytes, which is very fast.
To compute the formula for multiplication, note that\[(ay + b)(cy + d) = acy^2 + (ad + bc)y + bd \equiv (ad+bc+x^3ac) y + bd + ac\]modulo \(y^2 + x^3y + 1\). Therefore, multiplication only needs 5
products in \(GF(2^8)\) and some additions. In my implementation, for simplicity this is written exactly as such. The field \(GF(2^8)\) is implemented as a Rust type for which a multiplication is
defined. This has the disadvantage that each multiplication is performing a logarithm evaluation and some of these are repeated. A more optimized implementation calculates the logarithms of \(a, b,
c, d\) only once, but it needs to handle separately the cases when some of these are zero. Perhaps the Rust compiler is smart enough to remove the repeated logarithm evaluations when the
straightforward formula is used, and to figure out that the logarithm of \(x^3\) is simply 3. I haven’t checked how much of this it is able to optimize out.
Division is slightly more tricky to calculate. The formula follows from solving\[(cy + d)(ey + f) \equiv ay + b \mod y^2 + x^3 y + 1 \]for the unknowns \(e, f \in GF(2^8)\). Expanding the product as
above, this gives a 2×2 linear system, which can be solved with Cramer’s rule. This gives\[\begin{split}e &= \frac{ad + bc}{\Delta},\\ f &= \frac{b(d + x^3c) + ac}{\Delta},\end{split}\]where\[\Delta
= c^2 + x^3cd + d^2.\]Note that the irreducibility of \(y^2+x^3y+1\) over \(GF(2^8)\) implies that \(\Delta \neq 0\) unless \(c = d = 0\). This division formula requires the evaluation of 10
multiplications/divisions over \(GF(2^8)\), and some additions.
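To make these formulas concrete, here is a small Python sketch of the arithmetic just described: exp/log tables over \(GF(2^8)\) for the polynomial above (with \(x\) encoded as 0x02 and the MSB holding the degree-7 term), and the multiplication and division formulas over \(GF(2^{16})\), with an element \(ay + b\) stored as the pair (a, b). This is my own illustration, not the Rust code in ssdv-fec.

PRIM = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1

EXP = [0] * 512
LOG = [0] * 256
v = 1
for i in range(255):
    EXP[i] = v
    LOG[v] = i
    v <<= 1
    if v & 0x100:
        v ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]          # wrap around so sums of logs stay in range

def gf256_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gf256_inv(a):
    return EXP[255 - LOG[a]]       # a must be nonzero

X3 = 0x08  # the element x^3 of GF(2^8)

def gf2_16_mul(u, w):
    """(a*y + b) * (c*y + d) modulo y^2 + x^3*y + 1."""
    a, b = u
    c, d = w
    ac = gf256_mul(a, c)
    hi = gf256_mul(a, d) ^ gf256_mul(b, c) ^ gf256_mul(X3, ac)
    lo = gf256_mul(b, d) ^ ac
    return (hi, lo)

def gf2_16_div(u, w):
    """Solve (c*y + d) * (e*y + f) = a*y + b for (e, f) via Cramer's rule."""
    a, b = u
    c, d = w
    delta = gf256_mul(c, c) ^ gf256_mul(X3, gf256_mul(c, d)) ^ gf256_mul(d, d)
    inv = gf256_inv(delta)
    e = gf256_mul(gf256_mul(a, d) ^ gf256_mul(b, c), inv)
    f = gf256_mul(gf256_mul(b, d ^ gf256_mul(X3, c)) ^ gf256_mul(a, c), inv)
    return (e, f)

# sanity check: (u / w) * w == u for some nonzero values
u, w = (0x12, 0x34), (0x56, 0x78)
assert gf2_16_mul(gf2_16_div(u, w), w) == u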
This implementation of \(GF(2^{16})\) is good for memory constrained systems, because it only requires 512 bytes of tables, but it is still reasonably fast. An implementation that uses tables of
exponentials and logarithms in \(GF(2^{16})\) is faster, but it requires 128 KiB of memory for each of the two tables. Even using Zech logarithms, which only requires one 128 KiB table, is
prohibitive in systems with low memory. In fact, the implementation of \(GF(2^{16})\) as an extension of degree two over \(GF(2^8)\) can also be interesting for large computers with a lot of memory,
because the L1 cache on many CPUs has only 32 KiB, so the cache misses caused by using 128 KiB tables can make the implementation using exponentials and logarithms in \(GF(2^{16})\) slower than the
implementation described here.
This implementation of the arithmetic in \(GF(2^{16})\) deviates from the one I proposed in my original post, which was the usual quotient construction as \(GF(2)[z]/(p(z))\) for \(p(z)\) an
irreducible polynomial of degree degree 16 over \(GF(2)\). Since the construction of \(GF(2^{16})\) as a degree two extension is better for memory constrained systems, I have chosen it for the
“production quality” Rust code, but since there is not a fast and simple way to convert between the two representations of field elements, this means that the Rust implementation and my earlier
Python prototype using the galois library are not interoperable. My recommendation is that other implementations of this FEC scheme follow the construction used in the Rust implementation, so that
they can be interoperable.
The implementations of the fields \(GF(2^8)\) and \(GF(2^{16})\) as described here are exposed in the public API of ssdv-fec, so these can also be used in other Rust applications and libraries.
Another clever idea in the Rust implementation is how the linear system for polynomial interpolation is solved. This needs to be done both for encoding and decoding. The problem can be stated
generically as, given a polynomial \(p \in GF(2^{16})[z]\) of degree at most \(m\) and its values at \(m + 1\) distinct points \(z_0, \ldots, z_m \in GF(2^{16})\), compute the values \(p(z)\) at
other points \(z \in GF(2^{16})\). The way I presented this in the original post was conceptually simple. The linear system for solving the coefficients of \(p\) in terms of \(p(z_0), \ldots, p(z_m)
\) has a single solution, and in fact the matrix of this system is an invertible Vandermonde matrix. Therefore, we can compute the coefficients of \(p\) and then evaluate it at the required points \(z\).
A naïve implementation of this idea solves the system by Gauss reduction. This has the disadvantage that the Vandermonde matrix needs to be stored somewhere in order to perform the row operations
that convert it to the identity matrix. If we want the implementation to have a minimal memory footprint, we would rather not store this matrix, which only plays an auxiliary role.
Luckily there is an alternative way to approach this problem that does not require storing a matrix. The polynomial \(p\) is the Lagrange polynomial, and there are some explicit formulas for it. If \
(z\) is not equal to any of \(z_0,\ldots,z_m\), we have\[p(z) = l(z) \sum_{j=0}^m \frac{w_j p(z_j)}{z - z_j},\]where\[l(z) = \prod_{j=0}^m (z - z_j),\]and\[w_j = \prod_{0 \leq k \leq m,\ k \neq j} (z_j-z_k)^{-1}.\] This formula gives a way of calculating \(p(z)\) without using any other memory besides that used to store the terms \(p(z_0),\ldots,p(z_m)\) and their corresponding points \(z_0,\ldots,z_m\).
Since during encoding or decoding the formula above is to be evaluated for many different values of \(z\), the input data \(p(z_0),\ldots,p(z_m)\) is modified in-place, substituting each \(p(z_j)\)
by \(w_j p(z_j)\), calculating the product \(w_j\) to do this. This makes the evaluation of \(p(z)\) using the formula faster.
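A self-contained sketch of the barycentric evaluation, over the rationals rather than \(GF(2^{16})\) just to show the algebra (the polynomial and sample points are made up):

from fractions import Fraction

def lagrange_eval(zs, ps, z):
    # First-form barycentric formula: p(z) = l(z) * sum_j w_j * p(z_j) / (z - z_j)
    if z in zs:
        return ps[zs.index(z)]
    w = []
    for j, zj in enumerate(zs):
        prod = Fraction(1)
        for k, zk in enumerate(zs):
            if k != j:
                prod *= (zj - zk)
        w.append(Fraction(1) / prod)
    l = Fraction(1)
    for zj in zs:
        l *= (z - zj)
    return l * sum(wj * pj / (z - zj) for wj, pj, zj in zip(w, ps, zs))

def p(t):                                   # made-up cubic to interpolate
    return 3*t**3 - 2*t + 7

zs = [Fraction(n) for n in (0, 1, 4, 6)]    # m + 1 = 4 distinct points
ps = [p(z) for z in zs]
assert lagrange_eval(zs, ps, Fraction(10)) == p(Fraction(10))

Over \(GF(2^{16})\) the loop structure is the same, with the field operations standing in for the rational arithmetic (and with subtraction equal to addition, since the field has characteristic 2).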
| {"url":"https://destevez.net/2023/11/ssdv-fec-an-erasure-fec-for-ssdv-implemented-in-rust/","timestamp":"2024-11-07T09:40:15Z","content_type":"text/html","content_length":"57441","record_id":"<urn:uuid:c48f3aa8-9cb1-4241-9eb0-4093f8940435>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00669.warc.gz"}
SuiteSparse Matrix Collection
Group VDOL
Optimal control problems, Vehicle Dynamics & Optimization Lab, UF
Anil Rao and Begum Senses, University of Florida
Each optimal control problem is described below. Each of these
problems gives rise to a sequence of matrices of different sizes
when they are being solved inside GPOPS, an optimal control
solver created by Anil Rao, Begum Senses, and others at in VDOL
lab at the University of Florida. The matrices are all
symmetric indefinite.
Rao, Senses, and Davis have created a graph coarsening strategy
that matches pairs of nodes. The mapping is given for each matrix,
where map(i)=k means that node i in the original graph is mapped to
node k in the smaller graph. map(i)=map(j)=k means that both nodes
i and j are mapped to the same node k, and thus nodes i and j have
been merged.
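As an illustration of how such a map can be applied (my own sketch, assuming 0-based indices and summing the entries of merged nodes; the collection's MATLAB files would use 1-based indices):

import numpy as np
from scipy import sparse

def coarsen(A, node_map):
    # A_coarse = P^T A P, where P[i, node_map[i]] = 1, so nodes that share the
    # same image under the map are merged into a single coarse node.
    n = A.shape[0]
    m = int(node_map.max()) + 1
    P = sparse.csr_matrix((np.ones(n), (np.arange(n), node_map)), shape=(n, m))
    return (P.T @ A @ P).tocsr()

# toy example: merge nodes 0 and 1 of a 3x3 symmetric matrix into coarse node 0
A = sparse.csr_matrix(np.array([[2.0, 1.0, 0.0],
                                [1.0, 2.0, 1.0],
                                [0.0, 1.0, 2.0]]))
Ac = coarsen(A, np.array([0, 0, 1]))   # 2x2 coarse matrix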
Each matrix consists of a set of nodes (rows/columns) and the
names of these rows/cols are given for each matrix.
Anil Rao, Begum Senses, and Tim Davis, 2015.
Dynamic soaring optimal control problem is taken from
Ref.~\cite{zhao2004optimal} where the dynamics of a glider is
derived using a point mass model under the assumption of a flat
Earth and stationary winds. The goal of the dynamic soaring
problem is to determine the state and the control that minimize
the average wind gradient slope that can sustain a powerless
dynamic soaring flight. The state of the system is defined by the
air speed, heading angle, air-realtive flight path angle,
altitude, and the position of the glider and the control of the
system is the lift coefficient. The specified accuracy tolerance
of $10^{-7}$ were satisfied after eight mesh iterations. As the
mesh refinement proceeds, the size of the KKT matrices increases
from 647 to 3543.
title={Optimal Patterns of Glider Dynamic Soaring},
author={Zhao, Yiyuan J},
journal={Optimal Control applications and methods},
publisher={Wiley Online Library}
Free flying robot optimal control problem is taken from
Ref.~\cite{sakawa1999trajectory}. Free flying robot technology is
expected to play an important role in unmanned space missions.
Although NASA currently has free flying robots, called spheres,
inside the International Space Station (ISS), these free flying
robots have neither the technology nor the hardware to complete
inside and outside inspection and maintanance. NASA's new plan is to
send new free flying robots to ISS that are capable of completing
housekeeping of ISS during off hours and working in extreme
environments for the external maintanance of ISS. As a result, the
crew in ISS can have more time for science experiments. The current
free flying robots in ISS works are equipped with a propulsion
system. The goal of the free flying robot optimal control problem is
to determine the state and the control that minimize the magnitude of
thrust during a mission. The state of the system is defined by the
inertial coordinates of the center of gravity, the corresponding
velocity, thrust direction, and the anglular velocity and the control
is the thrust from two engines. The specified accuracy tolerance of
$10^{-6}$ were satisfied after eight mesh iterations. As the mesh
refinement proceeds, the size of the KKT matrices increases from 798
to 6078.
Goddard rocket maximum ascent optimal control problem is taken from
Ref.~\cite{goddard1920method}. The goal of the Goddard rocket maximum
ascent problem is to determine the state and the control that
maximize the final altitude of an ascending rocket. The state of the
system is defined by the altitude, velocity, and the mass of the
rocket and the control of the system is the thrust. The Goddard
rocket problem contains a singular arc where the continuous-time
optimality conditions are indeterminate, thereby the nonlinear
programming problem solver will have difficulty determining the
optimal control during the singular arc. In order to prevent this
difficulty and obtain more accurate solutions the Goddard rocket
problem is posed as a three-phase optimal control problem. Phase one
and phase three contains the same dynamics and the path constraints
as the original problem, while phase two contains an additional path
constraint and an event constraint. The specified accuracy tolerance
of $10^{-8}$ were satisfied after two mesh iterations. As the mesh
refinement proceeds, the size of the KKT matrices increases from 831
to 867.
title={A Method of Reaching Extreme Altitudes.},
author={Goddard, Robert H},
Range maximization of a hang glider optimal control problem is taken
from Ref.~\cite{bulirsch1993combining}. The goal of the optimal
control problem is to determine the state and the control that
maximize the range of the hang glider in the presence of a thermal
updraft. The state of the system is defined by horizontal distance,
altitude, horizontal velocity, and the vertical velocity and the
control is the lift coefficient. The specified accuracy tolerance of
$10^{-8}$ were satisfied after five mesh iterations. As the mesh
refinement proceeds, the size of the KKT matrices increases from 360
to 16011. This problem is sensitive to accuracy of the mesh and it
requires excessive number of collocation points to be able to satisfy
the accuracy tolerance. Thus, the size of the KKT matrices changes
title={Combining Direct and Indirect Methods in Optimal Control:
Range Maximization of a Hang Glider},
author={Bulirsch, Roland and Nerz, Edda and Pesch, Hans Josef and
von Stryk, Oskar},
Low-thrust orbit transfer optimal control problem is taken from
Ref.~\cite{betts2010practical}. The goal of the low-thrust orbit
transfer problem is to determine the state and the control that
minimize the fuel consumption during the orbit transfer of a
spacecraft that starts from a low-earth orbit and terminates at the
geostationary orbit via low-thrust propulsion systems. The highly
nonlinear dynamics of the low-thrust orbit transfer problem is given
in modified equinoctial elements (state of the system) and the thrust
direction (control of the system). Furthermore, the low-thrust
optimal control problem is a badly scaled problem because of the
small thrust-to-initial-mass ratio, that is typically on the order of
$O(10^{-4})$, and the long orbit transfer duration. Badly scaling of
the problem leads to a lot of delayed pivots. The specified accuracy
tolerance of $10^{-8}$ were satisfied after thirteen mesh iterations.
As the mesh refinement proceeds, the size of the KKT matrices
increases from 584 to 18476.
title={Practical Methods for Optimal Control and Estimation Using
Nonlinear Programming},
author={Betts, John T},
publisher={SIAM Press},
address = {Philadelphia, Pennsylvania},
Orbit raising problem that is taken from
Ref.~\cite{bryson1975applied}. The goal of the optimal control
problem is to determine the state and the control that maximize the
radius of an orbit transfer in a given time. The state of the system
is defined by radial distance of the spacecraft from the attracting
center (e.g Earth, Mars, etc.) and velocity of the spacecraft and the
control is the thrust direction. The specified accuracy tolerance of
$10^{-8}$ were satisfied after four mesh iterations. As the mesh
refinement proceeds, the size of the KKT matrices increases from 442
to 915.
title={Applied Optimal Control: Optimization, Estimation, and
author={Bryson, Arthur Earl},
publisher={CRC Press}
Minimum-time reorientation of an asymmetric rigid body optimal
control problem is taken from Ref.~\cite{betts2010practical}. The
goal of the problem is to determine the state and the control that
minimize the time that is required to reorient a rigid body. The
state of the system is defined by quaternians that gives the
orientation of the spacecraft and the angular velocity of the
spacecraft and the control of the system is torque. The vehicle data
that is used to model the dynamics are taken from NASA X-ray Timing
Explorer spacecraft. The specified accuracy tolerance of $10^{-8}$
were satisfied after eight mesh iterations. As the mesh refinement
proceeds, the size of the KKT matrices increases from 677 to 3108.
title={Practical Methods for Optimal Control and Estimation
Using Nonlinear Programming},
author={Betts, John T},
publisher={SIAM Press},
address = {Philadelphia, Pennsylvania},
Space shuttle launch vehicle reentry optimal control problem is taken
from Ref.~\cite{betts2010practical}. The goal of the optimal control
problem is to determine the state and the control that maximize the
cross range (maximize the final latitude) during the atmospheric
entry of a reusable launch vehicle. State of the system is defined by
the position, velocity, and the orientation of the space shuttle and
the control of the system is the angle of attack and the bank angle
of the space shuttle. The specified accuracy tolerance of $10^{-8}$
were satisfied after two mesh iterations. As the mesh refinement
proceeds, the size of the KKT matrices increases from 560 to 2450.
Space station attitude optimal control problem is taken from
Ref.~\cite{betts2010practical}. The goal of the space station
attitude control problem is to determine the state and the control
that minimize the magnitude of the final momentum while the space
station reaches an orientation at the final time that can be
maintained without utilizing additional control torque. The state of
the system is defined by the angular velocity of the spacecraft with
respect to an inertial reference frame, Euler-Rodriguez parameters
used to defined the vehicle attitude, and the angular momentum of the
control moment gyroscope and the control of the system is the torque.
The specified accuracy tolerance of $10^{-7}$ were satisfied after
thirteen mesh iterations. As the mesh refinement proceeds, the size
of the KKT matrices increases from 99 to 1640.
Tumor anti-angiogenesis optimal control problem is taken from
Ref.~\cite{ledzewicz2008analysis}. A tumor first uses the blood
vessels of its host but as the tumor grows oxygen that is carried by
the blood vessels of its host cannot defuse very far into the tumor.
Therefore, the tumor grows its own blood vessels by producing
vascular endothelial growth factor (VEGF). This process is called
angiogenesis. But blood vessels have a defense mechanism, called
endostatin, that tries to impede the development of new blood cells
by targeting VEGF. In addition, new pharmacological therapies that is
developed for tumor-type cancers also targets VEGF. The goal of the
tumor anti-angiogenesis problem is to determine the state and control
that minimizing the size of the tumor at the final time. The state of
the system is defined by the tumor volume, carrying capacity of a
vessel, and the total anti-angiogenic treatment administered and the
control of the system is the angiogenic dose rate. The specified
accuracy tolerance of $10^{-7}$ were satisfied after eight mesh
iterations. As the mesh refinement proceeds, the size of the KKT
matrices increases from 205 to 490.
title={Analysis of Optimal Controls for a Mathematical Model of
Tumour Anti-Angiogenesis},
author={Ledzewicz, Urszula and Sch{\"a}ttler, Heinz},
journal={Optimal Control Applications and Methods},
publisher={Wiley Online Library}
Displaying collection matrices 1 - 20 of 91 in total
Id Name Group Rows Cols Nonzeros Kind Date Download File
2665 dynamicSoaringProblem_1 VDOL 647 647 5,367 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2666 dynamicSoaringProblem_2 VDOL 1,591 1,591 15,588 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2667 dynamicSoaringProblem_3 VDOL 2,871 2,871 32,022 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2668 dynamicSoaringProblem_4 VDOL 3,191 3,191 36,516 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2669 dynamicSoaringProblem_5 VDOL 3,271 3,271 36,789 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2670 dynamicSoaringProblem_6 VDOL 3,431 3,431 36,741 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2671 dynamicSoaringProblem_7 VDOL 3,511 3,511 37,680 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2672 dynamicSoaringProblem_8 VDOL 3,543 3,543 38,136 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2673 freeFlyingRobot_1 VDOL 798 798 5,246 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2674 freeFlyingRobot_2 VDOL 1,338 1,338 11,600 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2675 freeFlyingRobot_3 VDOL 1,718 1,718 12,922 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2676 freeFlyingRobot_4 VDOL 2,358 2,358 18,218 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2677 freeFlyingRobot_5 VDOL 2,878 2,878 24,582 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2678 freeFlyingRobot_6 VDOL 3,358 3,358 27,030 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2679 freeFlyingRobot_7 VDOL 3,918 3,918 31,046 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2680 freeFlyingRobot_8 VDOL 4,398 4,398 34,958 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2681 freeFlyingRobot_9 VDOL 4,778 4,778 39,964 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2682 freeFlyingRobot_10 VDOL 5,218 5,218 40,080 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2683 freeFlyingRobot_11 VDOL 5,438 5,438 40,054 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market
2684 freeFlyingRobot_12 VDOL 5,578 5,578 41,940 Optimal Control Problem 2015 MATLAB Rutherford Boeing Matrix Market | {"url":"https://sparse.tamu.edu/VDOL","timestamp":"2024-11-06T20:52:56Z","content_type":"text/html","content_length":"46915","record_id":"<urn:uuid:71879214-a1da-4eb2-92d5-0244bd18fdbf>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00147.warc.gz"} |
20 Challenging Riddles to Test Your Critical Thinking and Logic Skills (2024)
Who says mind-bending logic puzzles are just for kids? We’ve come up with 20 brand-new riddles for adults to test your critical thinking, mathematics, and logic skills. With difficulties ranging from
easy, to moderate, to hard, there’s something here for everyone.
So go grab a pencil and a piece of scratch paper and prepare to rip your hair out (and we really do mean that in the best way possible). When you think you’ve got the right answer, click the link at
the bottom of each riddle to find the solution. Got it wrong? No worries, you have 19 other riddles to test out.
Navigate Through Our Riddles:
Puzzmo / The King’s Orders / How Many Eggs? / The Gold Chain / Pickleball / Circuit Breaker / Two Trains, Two Grandmas / Ant Math / Peppermint Patty / Great American Rail Trail / A Cruel SAT Problem
/ Movie Stars Cross a River / Tribute to a Math Genius / One Belt, One Earth / Elbow Tapping / Whiskey Problem / Doodle Problem / Stumping Scientists / What’s On Her Forehead? / Keanu for President /
Who Opened the Lockers?
Riddle No. 1: The King’s Orders Make for One Hell of a Brain Teaser
Difficulty: Easy
King Nupe of the kingdom Catan dotes on his two daughters so much that he decides the kingdom would be better off with more girls than boys, and he makes the following decree: All child-bearing
couples must continue to bear children until they have a daughter!
But to avoid overpopulation, he makes an additional decree: All child-bearing couples will stop having children once they have a daughter! His subjects immediately begin following his orders.
After many years, what’s the expected ratio of girls to boys in Catan?
The likelihood of each baby born being a girl is, of course, 50 percent.
Ready for the solution? Click here to see if you’re right.
Riddle No. 2: How Many Eggs Does This Hen Lay?
Difficulty: Easy
This problem is in honor of my dad, Harold Feiveson. It’s due to him that I love math puzzles, and this is one of the first problems (of many) that he gave me when I was growing up.
A hen and a half lays an egg and a half in a day and a half. How many eggs does one hen lay in one day?
Ready for the solution? Click here to see if you’re right.
Riddle No. 3: The Gold Chain Math Problem Is Deceptively Simple
Difficulty: Moderate
You’re rummaging around your great grandmother’s attic when you find five short chains each made of four gold links. It occurs to you that if you combined them all into one big loop of 20 links,
you’d have an incredible necklace. So you bring it into a jeweler, who tells you the cost of making the necklace will be $10 for each gold link that she has to break and then reseal.
How much will it cost?
Ready for the solution? Click here to see if you’re right.
Riddle No. 4: Try to Solve This Pickleball Puzzle
Difficulty: 🚨HARD🚨
Kenny, Abby, and Ned got together for a round-robin pickleball tournament, where, as usual, the winner stays on after each game to play the person who sat out that game. At the end of their
pickleball afternoon, Abby is exhausted, having played the last seven straight games. Kenny, who is less winded, tallies up the games played:
Kenny played eight games
Abby played 12 games
Ned played 14 games
Who won the fourth game against whom?
How many total games were played?
Ready for the solution? Click here to see if you’re right.
Riddle No. 5: Our Circuit Breaker Riddle Is Pure Evil. Sorry.
Difficulty: 🚨HARD🚨
The circuit breaker box in your new house is in an inconvenient corner of your basement. To your chagrin, you discover none of the 100 circuit breakers is labeled, and you face the daunting prospect
of matching each circuit breaker to its respective light. (Suppose each circuit breaker maps to only one light.)
To start with, you switch all 100 lights in the house to “on,” and then you head down to your basement to begin the onerous mapping process. On every trip to your basement, you can switch any number
of circuit breakers on or off. You can then roam the hallways of your house to discover which lights are on and which are off.
What is the minimum number of trips you need to make to the basement to map every circuit breaker to every light?
The solution does not involve either switching on or off the light switches in your house or feeling how hot the lightbulbs are. You might want to try solving for the case of 10 unlabeled circuit
breakers first.
Ready for the solution? Click here to see if you’re right.
Riddle No. 6: Two Trains. Two Grandmas. Can You Solve This Tricky Math Riddle?
Difficulty: Moderate
Jesse’s two grandmothers want to see him every weekend, but they live on opposite sides of town. As a compromise, he tells them that every Sunday, he’ll head to the subway station nearest to his
apartment at a random time of the day and will hop on the next train that arrives.
If it happens to be the train traveling north, he’ll visit his Grandma Erica uptown, and if it happens to be the train traveling south, he’ll visit his Grandma Cara downtown. Both of his grandmothers
are okay with this plan, since they know both the northbound and southbound trains run every 20 minutes.
But after a few months of doing this, Grandma Cara complains that she sees him only one out of five Sundays. Jesse promises he’s indeed heading to the station at a random time each day. How can this
The trains always arrive at their scheduled times.
Ready for the solution? Click here to see if you’re right.
Riddle No. 7: Here’s a Really F@*#ing Hard Math Problem About Ants
Difficulty: 🚨HARD🚨
Max and Rose are ant siblings. They love to race each other, but always tie, since they actually crawl at the exact same speed. So they decide to create a race where one of them (hopefully) will win.
For this race, each of them will start at the bottom corner of a cuboid, and then crawl as fast as they can to reach a crumb at the opposite corner. The measurements of their cuboids are as pictured:
If they both take the shortest possible route to reach their crumb, who will reach their crumb first? (Don’t forget they’re ants, so of course they can climb anywhere on the edges or surface of the
Remember: Think outside the box.
Ready for the solution? Click here to see if you’re right.
Riddle No. 8: This Peppermint Patty Riddle Is Practically Impossible
Difficulty: 🚨HARD🚨
You’re facing your friend, Caryn, in a “candy-off,” which works as follows: There’s a pile of 100 caramels and one peppermint patty. You and Caryn will go back and forth taking at least one and no
more than five caramels from the candy pile in each turn. The person who removes the last caramel will also get the peppermint patty. And you love peppermint patties.
Suppose Caryn lets you decide who goes first. Who should you choose in order to make sure you win the peppermint patty?
First, solve for a pile of 10 caramels.
Ready for the solution? Click here to see if you’re right.
Riddle No. 9: Can You Solve the Great American Rail-Trail Riddle?
Difficulty: Moderate
This problem was suggested by the physicist P. Jeffrey Ungar.
Finally, the Great American Rail-Trail across the whole country is complete! Go ahead, pat yourself on the back—you’ve just installed the longest handrail in the history of the world, with 4,000
miles from beginning to end. But just after the opening ceremony, your assistant reminds you that the metal you used for the handrail expands slightly in summer, so that its length will increase by
one inch in total.
“Ha!” you say, “One inch in a 4,000 mile handrail? That’s nothing!” But … are you right?
Let’s suppose when the handrail expands, it buckles upward at its weakest point, which is in the center. How much higher will pedestrians in the middle of the country have to reach in summer to grab
the handrail? That is, in the figure below, what is h? (For the purposes of this question, ignore the curvature of the Earth and assume the trail is a straight line.)
Pythagoras is a fascinating historical figure.
Ready for the solution? Click here to see if you’re right.
Riddle No. 10: This Riddle Is Like an Especially Cruel SAT Problem. Can You Find the Answer?
Difficulty: Moderate
Amanda lives with her teenage son, Matt, in the countryside—a car ride away from Matt’s school. Every afternoon, Amanda leaves the house at the same time, drives to the school at a constant speed,
picks Matt up exactly when his chess club ends at 5 p.m., and then they immediately return home together at the same constant speed. But one day, Matt isn’t feeling well, so he leaves chess practice
early and starts to head home on his portable scooter.
After Matt has been scooting for an hour, Amanda comes across him in her car (on her usual route to pick him up), and they return together, arriving home 40 minutes earlier than they usually do. How
much chess practice did Matt miss?
Consider the case where Amanda meets Matt exactly as she’s leaving their house.
Ready for the solution? Click here to see if you’re right.
Riddle No. 11: Can You Get These 3 Movie Stars Across the River?
Difficulty: Moderate
Three movie stars, Chloe, Lexa, and Jon, are filming a movie in the Amazon. They’re very famous and very high-maintenance, so their agents are always with them. One day, after filming a scene deep in
the rainforest, the three actors and their agents decide to head back to home base by foot. Suddenly, they come to a large river.
On the riverbank, they find a small rowboat, but it’s only big enough to hold two of them at one time. The catch? None of the agents are comfortable leaving their movie star with any other agents if
they’re not there as well. They don’t trust that the other agents won’t try to poach their star.
For example, Chloe’s agent is okay if Chloe and Lexa are alone in the boat or on one of the riverbanks, but definitely not okay if Lexa’s agent is also with them. So how can they all get across the
There isn’t just one way to solve this problem.
Ready for the solution? Click here to see if you’re right.
Riddle No. 12: This Ludicrously Hard Riddle Is Our Tribute to a Late Math Genius. Can You Figure It Out?
Difficulty: 🚨HARD🚨
On April 11, John Horton Conway, a brilliant mathematician who had an intense and playful love of puzzles and games, died of complications from COVID-19. Conway is the inventor of one of my favorite
legendary problems (not for the faint of heart) and, famously, the Game of Life. I created this problem in his honor.
Carol was creating a family tree, but had trouble tracking down her mother’s birthdate. The only clue she found was a letter written from her grandfather to her grandmother on the day her mother was
born. Unfortunately, some of the characters were smudged out, represented here with a “___”. (The length of the line does not reflect the number of smudged characters.)
“Dear Virginia,
Little did I know when I headed to work this Monday morning, that by evening we would have a beautiful baby girl. And on our wedding anniversary, no less! It makes me think back to that incredible
weekend day, J___ 27th, 19___, when we first shared our vow to create a family together, and, well, here we are! Happy eighth anniversary, my love.
Love, Edwin”
The question: When was Carol’s mother born?
This problem is inspired by Conway’s Doomsday Rule.
Ready for the solution? Click here to see if you’re right.
Riddle No. 13: To Solve This Twisty Math Riddle, You Just Need One Belt and One Earth
Difficulty: Moderate
Imagine you have a very long belt. Well, extremely long, really … in fact, it’s just long enough that it can wrap snugly around the circumference of our entire planet. (For the sake of simplicity,
let’s suppose Earth is perfectly round, with no mountains, oceans, or other barriers in the way of the belt.)
Naturally, you’re very proud of your belt. But then your brother, Peter, shows up—and to your disgruntlement, he produces a belt that’s just a bit longer than yours. He brags his belt is longer by
exactly his height: 6 feet.
If Peter were also to wrap his belt around the circumference of Earth, how far above the surface could he suspend the belt if he pulled it tautly and uniformly?
Earth’s circumference is about 25,000 miles, or 130 million feet … but you don’t need to know that to solve this problem.
Ready for the solution? Click here to see if you’re right.
Riddle No. 14: This Elbow Tapping Riddle Is Diabolical. Good Luck Solving It.
Difficulty: 🚨HARD🚨
In some future time, when the shelter-in-place bans are lifted, a married couple, Florian and Julia, head over to a bar to celebrate their newfound freedom.
They find four other couples there who had the same idea.
Eager for social contact, every person in the five couples enthusiastically taps elbows (the new handshake) with each person they haven’t yet met.
It actually turns out many of the people had known each other prior, so when Julia asks everyone how many elbows they each tapped, she remarkably gets nine different answers!
The question: How many elbows did Florian tap?
What nine answers did Julia hear?
Ready for the solution? Click here to see if you’re right.
Riddle No. 15: You’ll Need a Drink After Trying to Solve This Whisky Riddle
Difficulty: Easy
Alan and Claire live by the old Scottish saying, “Never have whisky without water, nor water without whisky!” So one day, when Alan has in front of him a glass of whisky, and Claire has in front of
her a same-sized glass of water, Alan takes a spoonful of his whisky and puts it in Claire’s water.
Claire stirs her whisky-tinted water, and then puts a spoonful of this mixture back into Alan’s whisky to make sure they have exactly the same amount to drink.
So: Is there more water in Alan’s whisky, or more whisky in Claire’s water? And does it matter how well Claire stirred?
The size of the spoon does not matter.
Ready for the solution? Click here to see if you’re right.
Riddle No. 16: The Doodle Problem Is a Lot Harder Than It Looks. Can You Solve It?
Difficulty: Moderate
This week’s riddle is relatively simple—but sinister all the same.
The question: Can you make 100 by interspersing any number of pluses and minuses within the string of digits 9 8 7 6 5 4 3 2 1? You can’t change the order of the digits! So what’s the least number of
pluses and minuses needed to make 100?
For instance, 98 - 7 - 6 + 54 - 32 shows one way of interspersing pluses and minuses, but since it equals 107, it’s not a solution.
I call this a “doodle problem”: one that’s best worked on during meetings where you might be doodling otherwise.
You might want to start looking for solutions that use a total of seven pluses and minuses (although there are ways to use fewer).
Ready for the solution? Click here to see if you’re right.
Riddle No. 17: This Math Puzzle Stumped Every Scientist but One. Think You Can Crack It?
Difficulty: HARD
In honor of Freeman Dyson, the renowned physicist who died last month, here’s a legendary tale demonstrating his quick wit and incredible brain power.
One day, in a gathering of top scientists, one of them wondered out loud whether there exists an integer that you could exactly double by moving its last digit to its front. For instance, 265 would
satisfy this if 526 were its exact double—which it isn’t.
After apparently just five seconds, Dyson responded, “Of course there is, but the smallest such number has 18 digits.”
This left some of the smartest scientists in the world puzzling over how he could have figured this out so quickly.
So given Dyson’s hint, what is the smallest such number?
My second grader has recently learned how to add a 3-digit number to itself using the classic vertical method:
18-digit numbers, of course, can be added in the same way.
Ready for the solution? Click here to see if you’re right.
Riddle No. 18: Figure Out What’s on Her Forehead
Difficulty: Moderate
Cecilia loves testing the logic of her very logical friends Jaya, Julian, and Levi, so she announces:
“I’ll write a positive number on each of your foreheads. None of the numbers are the same, and two of the numbers add up to the third.”
She scribbles the numbers on their heads, then turns to Jaya and asks her what her number is. Jaya sees Julian has 20 on his forehead, and Levi has 30 on his. She thinks for a moment and then says,
“I don’t know what my number is.” Julian pipes in, “I also don’t know my number,” and then Levi exclaims, “Me neither!” Cecilia gleefully says, “I’ve finally stumped you guys!”
“Not so fast!” Jaya says. “Now I know my number!”
What is Jaya’s number?
Jaya could be one of two numbers, but only one of those numbers would lead to Julian and Levi both not knowing their numbers. Why?
Ready for the solution? Click here to see if you’re right.
Riddle No. 19: Can You Get Keanu Reeves Elected As President?
Difficulty: Moderate
It’s 2024, and there are five candidates running in the democratic primary: Taylor Swift, Oprah Winfrey, Mark Cuban, Keanu Reeves, and Dwayne Johnson. (Hey, it could happen.) As usual, the first
primary is in Iowa.
In an effort to overcome its embarrassment after the 2020 caucus debacle, the Iowa Democratic Party has just announced a new, foolproof way of finding the best candidate: there will be four
consecutive elections.
First, candidate 1 will run against candidate 2. Next, the winner of that will run against candidate 3, then that winner will run against candidate 4, and finally the winner of that election will run
against the final candidate. By the transitive property, the winner of this last election must be the best candidate ... so says the Iowa Democratic Party.
Candidate Keanu has been feeling pretty low, as he knows he is ranked near the bottom by most voters, and at the top by none. In fact, he knows the Iowa population is divided into five equal groups,
and that their preferences are as follows:
Keanu is childhood friends with Bill S. Preston, Esq., the new head of the Iowa Democratic Party. Preston, confident that the order of the candidates doesn’t matter for the outcome, tells Keanu he
can choose the voting order of the candidates.
So what order should Keanu choose?
How would Keanu fare in one-to-one races against each candidate?
Ready for the solution? Click here to see if you’re right.
Riddle No. 20: Who Opened All These Damn Lockers?
Difficulty: Moderate
There are 100 lockers that line the main hallway of Chelm High School. Every night, the school principal makes sure all the lockers are closed so that there will be an orderly start to the next day.
One day, 100 mischievous students decide that they will play a prank.
The students all meet before school starts and line up. The first student then walks down the hallway, and opens every locker. The next student follows by closing every other locker (starting at the
second locker). Student 3 then goes to every third locker (starting with the third) and opens it if it’s closed, and closes it if it’s open. Student 4 follows by opening every fourth locker if it’s
closed and closing it if it’s open. This goes on and on until Student 100 finally goes to the hundredth locker. When the principal arrives later in the morning, which lockers does she find open?
Make sure you pay attention to all of the factors.
Ready for the solution? Click here to see if you’re right.
Laura Feiveson is an economist for the government, a storyteller, and a lifelong enthusiast of math puzzles. She lives in Washington, DC with her husband and two daughters. | {"url":"https://cucher.best/article/20-challenging-riddles-to-test-your-critical-thinking-and-logic-skills","timestamp":"2024-11-12T01:09:10Z","content_type":"text/html","content_length":"129729","record_id":"<urn:uuid:1f637af9-5975-47a9-844e-bcc5bb3cd772>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00263.warc.gz"} |
How to Calculate Directional Derivatives?
Directional derivatives – something new again?
Krystian Karczyński
Founder and General Manager of eTrapez.
Graduate of Mathematics at Poznan University of Technology. Mathematics tutor with many years of experience. Creator of the first eTrapez Courses, which have gained immense popularity among students
He lives in Szczecin, Poland. He enjoys walks in the woods, beaches and kayaking.
Place and Time of Action
Calculating directional derivatives as a topic for study (i.e., for credit) is actually situated right after partial derivatives of multivariable functions, which most students cover in the second
It’s a topic rarely tackled, so I didn’t include it in my Course on Partial Derivatives, but it’s common enough that I’ll throw it on the blog – for the benefit of those who need to learn directional
derivatives and those who are simply curious about what it’s all about. However, like in the courses, today I’ll focus almost exclusively on practice (“how do I do this?”), not on theory (“what am I
actually doing?”).
Directional Derivatives – How Do I Do This?
In the case of a directional derivative, we are dealing with the simultaneous increase of the x and y arguments, which of course corresponds to a certain increase in the value of the function.
For the task, we need three things:
1. The function from which we’ll calculate the directional derivative.
2. The point at which we’ll calculate the directional derivative.
3. The direction given in the form of a vector.
With the above, the task boils down to converting the vector into a directional vector (something from analytic geometry, I'll show how to do it in a moment), and then plugging it into the formula:

\[ \frac{\partial f}{\partial \vec{v}}(P) = \frac{\partial f}{\partial x}(P)\, v_x + \frac{\partial f}{\partial y}(P)\, v_y \]

In which:

\(\frac{\partial f}{\partial \vec{v}}(P)\) is the directional derivative at point \(P\) in the direction of vector \(\vec{v} = [v_x, v_y]\)

\(P\) is the point at which we calculate the directional derivative

\(v_x, v_y\) are the coordinates of the directional vector

\(\frac{\partial f}{\partial x}(P), \frac{\partial f}{\partial y}(P)\) are the partial derivatives of the function at point \(P\).
Example 1
Calculate the directional derivative of the function at point P(1,2) in the direction .
Everything is ready, we just need to turn the vector into a directional vector.
A directional vector is a vector with the same direction (who would’ve thought), same orientation, but with a length of 1.
It is calculated by the formula:

\[ \vec{v} = \frac{\vec{u}}{|\vec{u}|} \]

Simply put, divide its coordinates by its length.
So we calculate the length of the vector :
Then we get the directional vector:
For the formula of the directional derivative, we also need the partial derivatives of the function at point P(1,2):
Now we have everything needed for the formula:
Just substitute and we have the result: .
Example 2
Find the directional derivative of the function: at point P(3,1) in the direction from this point to point Q(6,5).
The task is a bit more difficult because the direction vector is not given directly, but no big deal.
We move from point P to point Q, so the shift vector is [3,4].
Now we find the directional vector by calculating the length of the vector [3,4]:
And we have the directional vector:
Now we calculate the partial derivatives at point (3,1):
Then we just substitute into the formula for the directional derivative:
Example 3
Find the directional derivative of the function at point (1,2) in the direction forming an angle with the positive x-axis.
The task seems more difficult, due to the lack of a direction vector in the data. Let’s draw the whole thing:
It’s about finding the coordinates of any vector in the specified direction.
We use the fact that and we can assume that our vector has coordinates , as in the drawing (it was enough to choose any vector in the direction of the line):
And now we proceed as usual.
We calculate the directional vector:
Then the partial derivatives at point (1,2):
Substitute into the formula and we have the result
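Since the specific angle was lost in this copy, here is the general pattern for the "direction given by an angle" case, sketched in Python with a stand-in angle:

import math

alpha = math.pi / 3                       # stand-in angle; the article's value was not preserved
u = (math.cos(alpha), math.sin(alpha))    # already a unit vector along that direction
# plug u into the directional-derivative formula exactly as in the earlier examples
print(u)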
Feel free to ask questions in the comments – as always 🙂
| {"url":"https://blog.etrapez.pl/en/derivatives/directional-derivatives-something-new-again/","timestamp":"2024-11-04T02:44:16Z","content_type":"text/html","content_length":"195586","record_id":"<urn:uuid:43775255-0024-4abc-9dcd-0b5e17facef2>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00168.warc.gz"}
CS184/284A: Lecture Slides
Lecture 4: Transforms (24)
From my understanding, the inverse matrix is equivalent to its transpose because (1) the columns are unit length and (2) the columns are at a right angle of each other (i.e. dot product = 0). This
means we have an orthonormal matrix and an orthonormal matrix's inverse is equivalent to its transpose.
Professor mentioned the inverse is just the transpose if the columns are orthogonal to each other. Do they also have to be normalized? This property seems to be true only for an orthonormal basis.
Also, is it possible to have an axis coordinate frame / basis that is not orthogonal (like spherical coordinates)?
edit: oops I see professor briefly mention about non-orthogonal basis later.
Orthonormal means orthogonal and each is unit length (normalized). Consider A A^T where A is orthonormal. You should be able to show that it equals the identity matrix.
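A quick numerical check of this (my own illustration, not from the course materials), using a rotation matrix whose columns are orthonormal:

import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # columns are unit length and mutually orthogonal

print(np.allclose(A @ A.T, np.eye(2)))        # True: A A^T is the identity
print(np.allclose(np.linalg.inv(A), A.T))     # True: the inverse equals the transpose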
| {"url":"https://cs184.eecs.berkeley.edu/sp23/lecture/4-24/transforms","timestamp":"2024-11-11T18:01:20Z","content_type":"text/html","content_length":"13831","record_id":"<urn:uuid:8406bf8d-b51a-4bca-ae7a-8d7ea4f42deb>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00048.warc.gz"}
Handy approximation for computing roots of fractions
Handy approximation for roots of fractions
This post will discuss a curious approximation with a curious history.
Let x be a number near 1, written as a fraction
x = p / q.
Then define s and d as the sum and difference of the numerator and denominator.
s = p + q
d = p − q
Since we are assuming x is near 1, s is large relative to d.
We have the following approximation for the nth root of x.
^n√x ≈ (ns + d) / (ns − d).
This comes from a paper written in 1897 [1]. At the time there was great interest in approximations that are easy to carry out by hand, and this formula would have been very convenient.
The approximation assumes x is near 1. If not, you can multiply by a number of known square root to make x near 1. There will be an example below.
Positive d
Let’s find the cube root of x = 112/97. We have n = 3, p = 112, q = 97, s = 209, and d = 15. The approximation tells says
^3√x ≈ 642/612 = 107/102 = 1.049019…
while the exact value is 1.049096… .
Negative d
The value of d might be negative, as when x = 31/32. If we want to find the fifth root, n = 5, p = 31, q = 32, s = 63, and d = −1.
^5√x ≈ 314/316 = 157/158 = 0.9936708…
while the exact value is 0.9936703… .
x not near 1
If x is not near 1, you can make it near 1. For example, suppose you wanted to compute the square root of 3. Since 17² = 289, 300/289 is near 1. You could find the square root of 300/289, then
multiply the result by 17/10 to get an approximation to √3.
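A quick numerical check of the formula, in Python (my own sketch, not part of the original post):

from fractions import Fraction

def approx_root(p, q, n):
    # Mercator-style approximation of (p/q)**(1/n), valid when p/q is near 1
    s, d = p + q, p - q
    return Fraction(n * s + d, n * s - d)

print(float(approx_root(112, 97, 3)))   # ~1.049020 vs exact 1.049096
print(float(approx_root(31, 32, 5)))    # ~0.993671 vs exact 0.993670

# sqrt(3): use 300/289 (near 1), then rescale by 17/10
print(float(Fraction(17, 10) * approx_root(300, 289, 2)))   # ~1.732048 vs exact 1.732051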
The author refers to this approximation as Mercator’s formula, presumably Gerardus Mercator (1512–1594) [2] of map projection fame. A brief search did not find this formula because Mercator’s
projection drowns out Mercator’s formula in search results.
The author says a proof is given in Hutton’s Tracts on Mathematics, Vol 1. I tracked down this reference, and the full title in all its 19th century charm is
BY CHARLES HUTTON, LL.D. AND F.R.S. &c.
Late Professor of Mathematics in the Royal Military Academy, Woolwich.
Hutton’s book looks interesting. You can find it on Archive.org. Besides bridges and gunpowder, the book has a lot to say about what we’d now call numerical analysis, such as ways to accelerate the
convergence of series. Hutton’s version of the formula above does not require that x be near 1.
[1] Ansel N. Kellogg. Empirical formulæ; for Approximate Computation. The American Mathematical Monthly. February 1897, Vol. 4 No. 2, pp. 39–49.
[2] Mercator’s projection is so familiar that we may not appreciate what a clever man he was. We can derive his projection now using calculus and logarithms, but Mercator developed it before Napier
developed logarithms or Newton developed calculus. More on that here.
2 thoughts on “Handy approximation for roots of fractions”
1. It looks to be due to the mathematician Nicholas Mercator. Hutton starts discussing Mercator’s Logarithmotechnia on p405 of vol 1 at https://archive.org/details/tractsonmathemat01hutt/page/404/
mode/2up?q=Mercator . He shows the approximation starting on p411, at https://archive.org/details/tractsonmathemat01hutt/page/410/mode/2up?q=%22approximate+multiplication%22 and points out that x
in that approximation must “not differ greatly from unity” on p412.
Hutton says this is Mercator’s “prop. 7”, and it matches Mercator’s Propoſito VII in the archive.org copy of Logarithmotechnia at https://archive.org/details/ita-bnc-mag-00000857-001/page/n28/
mode/2up .
2. For those curious about the formula, you can reach it by writing x as (s+d)/(s−d) and then doing a first-order Taylor approximation over d for both the numerator and the denominator of (s±d)^t with t = 1/n. The validity of the approximation hinges on d being close to 0, which is equivalent to x being close to 1.
Homotopy for rational riccati equations arising in stochastic optimal control
We consider the numerical solution of the rational algebraic Riccati equations in ℝ^n, arising from stochastic optimal control in continuous and discrete time. Applying the homotopy method, we
continue from the stabilizing solutions of the deterministic algebraic Riccati equations, which are readily available. The associated differential equations require the solutions of some generalized
Lyapunov or Stein equations, which can be solved by the generalized Smith methods, of O(n^3) computational complexity and O(n^2) memory requirement. For large-scale problems, the sparsity and
structures in the relevant matrices further improve the efficiency of our algorithms. In comparison, the alternative (modified) Newton's methods require a difficult initial stabilization step. Some
illustrative numerical examples are provided.
Dive into the research topics of "Homotopy for rational riccati equations arising in stochastic optimal control". Together they form a unique fingerprint. | {"url":"https://scholar.lib.ntnu.edu.tw/zh/publications/homotopy-for-rational-riccati-equations-arising-in-stochastic-opt-2","timestamp":"2024-11-05T04:39:11Z","content_type":"text/html","content_length":"57377","record_id":"<urn:uuid:cbabdab9-f768-4a1b-9a67-51fab32c75c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00540.warc.gz"}
GATE & ESE - Lecture 3: Conservation of Mass ( in Hindi) Offered by Unacademy
| {"url":"https://unacademy.com/lesson/lecture-3-conservation-of-mass-in-hindi/G7HTYCWZ","timestamp":"2024-11-02T14:51:05Z","content_type":"text/html","content_length":"224962","record_id":"<urn:uuid:2290d109-8a70-4ab2-af4d-fc01c2fa07bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00338.warc.gz"}
Everything You Need to Know about Splines
Recently, I’ve been experimenting with Onshape for a few different projects, including designing parts for a microscope and designing jewellery. I often find myself struggling with the limited
capabilities of native CAD tools on Linux, so the ability to do CAD work in a browser on any platform is fantastic. And Onshape is not a toy: it has some truly impressive capabilities that can
compete with high-end desktop-based CAD systems, and it gets better every few weeks! (Please note that I have no affiliation with the company. I’m just a happy user.)
One feature that I look for in any CAD system is a programming language or extension API. Most CAD tasks have some degree of structure, and I like the idea of writing code to simplify repetitive
tasks. For example, when designing jewellery, one frequently needs to design settings for jewels, and it would be a painstaking process to build these each time from scratch. Onshape’s FeatureScript
language is brilliant for this: once I’ve written my custom ‘features’ in FeatureScript, they behave as first-class citizens in the Onshape GUI, and I can simply insert a ‘setting’ or ‘prong’ with a
given set of parameters.
If you’ve done any programming before, FeatureScript is simple to use. The syntax is quite similar to other procedural languages, so it takes only a few days before it becomes second nature. The
documentation doesn’t always tell you everything you want or need to know, but to be fair, there is a wealth of FeatureScript examples you can reference, created by both the Onshape team and from
For my designs, I needed to draw a lot of curves (and then offset those curves). While I could do approximately what I wanted in the GUI, it took me a long time to figure out how to do it in
FeatureScript as part of my custom features. This article is an attempt to explain some of the missing links for those following in my footsteps. I’ll start with some basic background on splines,
then proceed to describe how to draw 2D splines (in a sketch), how to draw 3D splines and how to offset curves.
Basics of Splines
There’s a lot of mathematical jargon around splines and polynomials; I’ll try to use both the technically correct terms and plain English wherever possible.
A spline is a piecewise polynomial. The curve is made up of one or more pieces, where each piece is a polynomial. The polynomials are normally chosen such that they “match up” at the transitions and
you end up with something that looks like a single continuous curve. There can be various definitions of “matching up.” So to produce a visually smooth curve, at least the function values and the
first derivative need to match (C1 continuity), and usually the second derivative is also chosen to match (C2 continuity).
In the case of Onshape splines used in sketches, the pieces are normally cubic polynomials — polynomials of maximum degree 3 — in two dimensions, x and y. However, I should be more clear what I mean,
as there are at least three different things that could be meant by cubic polynomials in two dimensions:

1. The explicit form: y = f(x), where f is a cubic polynomial in x.
2. The implicit form: f(x, y) = 0, where f is a polynomial of total degree 3 in x and y.
3. The parametric form: x = x(t), y = y(t), where x(t) and y(t) are cubic polynomials in a parameter t.
The explicit form can produce, say, a parabola y=x2, but can’t produce a parabola rotated by 90 degrees. Therefore, it makes little sense for software such as Onshape, which must be able to produce
curves in any orientation.
The implicit form is the most powerful – in that it can express curves that can’t be expressed in the other two forms – but it is difficult to evaluate the set of points that are part of the curve.
Therefore, as might be expected, Onshape curves are of the parametric type: a curve in two dimensions is produced parametrically by evaluating functions x(t) and y(t) for t from 0 to 1. The functions
x(t) and y(t) are splines: there may, for example, be one polynomial piece from t=0 to t=0.5 and one from t=0.5 to t=1. Some of the FeatureScript functions related to curves, such as
evEdgeTangentLine() and evEdgeCurvature(), take this parameter t as an argument.
Drawing 2D Splines – Single Polynomial Piece
Splines can be created programmatically in an Onshape sketch using the skFitSpline() function. First, let’s start with a curve with only one polynomial piece between two points (0,0) and (100,100).
In FeatureScript, we can write:
Here is the result:
Note that I’ve specified a startDerivative and endDerivative that include the starting and ending direction. If I had only specified a start and end point, then the result would have just been a
straight line.
When I was first experimenting with this, my first question was: “Why do the start and end derivatives have length units (in this case, 150 meters)?” With a bit of experimentation, it’s clear that
these vectors should be scaled together with the curve, i.e. if we scale the curve up by a factor of two, we should also double the startDerivative and endDerivative. Also, a larger magnitude makes
the curve launch with more momentum in the given direction. For example, here is the curve with startDerivative increased to (500 meters,0):
But what does the magnitude of these vectors actually mean?
It all becomes clearer, however, when you consider the curves in parametric form as described in the previous section: x=f(t) and y=f(t). It turns out that the given derivative vectors are the
derivatives of (x(t), y(t)) with respect to the parameter t (i.e. (dx/dt, dy/dt)) at t=0 and t=1. If this is too abstract to visualize, there is also a simple mapping to Bézier control points that
I’ll explain below.
It turns out that the start point and end point, and the two derivative vectors – four (x,y) pairs in total – uniquely define the eight parameters of the parametric cubic polynomial. Thus, any
parametric cubic polynomial can be specified in this way.
Drawing Bézier Curves
If you’re familiar with Bézier curves, the above discussion will sound familiar. A Bézier curve is defined by four control points (call them P1, P2, P3, P4). The curve launches from P1 in the
direction of P2, and then approaches P4 from the direction of P3. If P2 is further away from P1, then the curve launches with more momentum in the given direction.
It turns out that these formulations are equivalent with a tiny amount of math, and if you have the Bézier control points, you can calculate the required startDerivative and endDerivative as:

startDerivative = 3 (P2 − P1) / Δt
endDerivative = 3 (P4 − P3) / Δt

where Δt will be 1 for the simple case that t varies from 0 to 1 across the curve segment, and the 3 arises from the degree of the polynomial (because d/dt(t³) = 3t²). Thus, for a single-piece spline, the startDerivative and endDerivative will be three times the distance to the corresponding Bézier control point.

Of course, you can also calculate the Bézier control points from the startDerivative and endDerivative by rearranging these equations for P2 and P3:

P2 = P1 + startDerivative · Δt / 3
P3 = P4 − endDerivative · Δt / 3
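As a quick sanity check, here is a small Python sketch of the conversion in both directions (my own illustration; the point values are arbitrary, not taken from the article):

def derivatives_from_bezier(P1, P2, P3, P4, dt=1.0):
    # startDerivative and endDerivative of the parametric cubic, per the formulas above
    start = tuple(3 * (b - a) / dt for a, b in zip(P1, P2))
    end = tuple(3 * (b - a) / dt for a, b in zip(P3, P4))
    return start, end

def bezier_from_derivatives(P1, P4, start, end, dt=1.0):
    # recover the two interior Bezier control points from the endpoint derivatives
    P2 = tuple(p + s * dt / 3 for p, s in zip(P1, start))
    P3 = tuple(p - e * dt / 3 for p, e in zip(P4, end))
    return P2, P3

start, end = derivatives_from_bezier((0, 0), (50, 0), (100, 50), (100, 100))
print(start, end)                                                # (150.0, 0.0) (0.0, 150.0)
print(bezier_from_derivatives((0, 0), (100, 100), start, end))   # ((50.0, 0.0), (100.0, 50.0))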
Evaluating Splines in Onshape
You can evaluate the spline at any point using the evEdgeTangentLine() function in FeatureScript, providing the parameter value t (from 0 to 1). The Line object that is returned has an origin that
provides the evaluated point and has a direction that is tangent to the spline.
This can be useful for performing further geometry calculations after drawing a spline. For completeness, I’ll note that there is also a evEdgeTangentLines() function — which is identical but allows
evaluating the curve at multiple points in one call — and an evEdgeCurvature() function — which returns not only a tangent but also normal and binormal vectors.
Drawing 2D Splines – Multiple Polynomial Pieces
Now let’s add another point to the spline:
Here is the result:
Now there are two polynomial pieces, one that goes from (0,0) at t=0 to (10,10) at t=0.25, and one that goes from (10,10) at t=0.25 to (100,100) at t=1. The breakpoint between the two – called a knot
– is at t=0.25.
Why is the knot at t=0.25? Well, this knot could actually be placed anywhere in ‘t space,’ for instance t=0.1 or t=0.5, but as the second piece of the curve is much longer, there is an argument for
assigning more ‘t space’ to the second piece. Onshape uses the square root of the chord length between points as the metric; the lengths of the two chords here are 14.1m and 127.3m so the knot
location is chosen as sqrt(14.1m)/(sqrt(14.1m)+sqrt(127.3m)) = 0.25.
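The same calculation in a few lines of Python (my own check of the numbers above):

import math

p0, p1, p2 = (0, 0), (10, 10), (100, 100)
w1 = math.sqrt(math.dist(p0, p1))    # square root of the first chord length
w2 = math.sqrt(math.dist(p1, p2))    # square root of the second chord length
print(w1 / (w1 + w2))                # ~0.25, the knot's position in t-space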
The polynomial pieces are chosen such that both the first and second derivative are continuous through the middle point, which uniquely defines the two polynomials. Note that if you only care about
first derivative continuity and not second derivative continuity, then you can actually get a larger variety of curves by drawing the two parts individually. Then you can choose any value for the
piece1 endDerivative and piece2 startDerivative as long as the direction matches.
Interested in Even More Spline Talk?
It’s time to take a breather, but I’m far from done talking about splines. In Part 2 of this blog, I will explore B-Splines (contrary to popular belief, there’s nothing voodoo about them), Offset
Curves and much more! Stay tuned... | {"url":"https://www.onshape.com/en/resource-center/tech-tips/everything-you-need-to-know-about-splines","timestamp":"2024-11-09T15:42:54Z","content_type":"text/html","content_length":"29833","record_id":"<urn:uuid:7c53d94f-b019-450f-a74d-5d5002939649>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00459.warc.gz"} |
C++ Programming: Longest substring repeated by a single character (LeetCode: 2213)
You are given a 0-indexed string s. You are also given a 0-indexed string queryCharacters of length k and a 0-indexed integer array queryIndices of length k, which together describe k queries.
The i-th query updates the character of s at index queryIndices[i] to the character queryCharacters[i].
Return an array lengths of length k, where lengths[i] is the length of the longest substring of s consisting of a single repeated character after the i-th query is performed.

Example 1:
Input: s = "babacc", queryCharacters = "bcb", queryIndices = [1,3,3]
Output: [3,3,4]
- After the 1st query, s = "bbbacc". The longest substring consisting of one repeated character is "bbb", with length 3.
- After the 2nd query, s = "bbbccc". The longest such substring is "bbb" or "ccc", with length 3.
- After the 3rd query, s = "bbbbcc". The longest such substring is "bbbb", with length 4.
Therefore, return [3,3,4].

Example 2:
Input: s = "abyzz", queryCharacters = "aa", queryIndices = [2,1]
Output: [2,3]
- After the 1st query, s = "abazz". The longest such substring is "zz", with length 2.
- After the 2nd query, s = "aaazz". The longest such substring is "aaa", with length 3.
Therefore, return [2,3].

Constraints:
1 <= s.length <= 10^5
s consists of lowercase English letters
k == queryCharacters.length == queryIndices.length
1 <= k <= 10^5
queryCharacters consists of lowercase English letters
0 <= queryIndices[i] < s.length
Approach: A classic segment tree solves the problem. Build an ordinary segment tree that supports single-point updates. Each node maintains, for its interval: the longest single-character prefix, the longest single-character suffix, the longest single-character run, the characters at the left and right endpoints, and the interval's left and right positions.
When merging two child intervals, if the last character of the left interval equals the first character of the right interval, their runs can be joined across the boundary:
1. If the left interval consists of a single repeated character, the merged prefix length is extended into the right child.
2. If the right interval consists of a single repeated character, the merged suffix length is extended into the left child.
3. The interval's maximum single-character run must also be updated with the left suffix joined to the right prefix.
The coding style mainly follows other published solutions.
// Assumes the usual LeetCode environment (<string>, <vector>, using namespace std).
class Solution {
    static const int N = 400010;   // 4e5 + 10, enough for a segment tree over 1e5 positions
    string s;

    // Each node stores, for its interval [l, r]:
    //   size       - interval length
    //   lc, rc     - characters at the left and right endpoints
    //   lmax, rmax - longest single-character prefix / suffix
    //   dmax       - longest single-character run anywhere in the interval
    struct TreeNode {
        int l, r, size;
        char lc, rc;
        int lmax, rmax, dmax;
    } tr[N];

    // Merge children l and r into parent f.
    void pushup(TreeNode &f, TreeNode &l, TreeNode &r) {
        f.lmax = l.lmax, f.rmax = r.rmax, f.dmax = max(l.dmax, r.dmax);
        f.lc = l.lc; f.rc = r.rc;
        f.size = l.size + r.size;
        if (l.rc == r.lc) {
            // Runs can be joined across the boundary between the two children.
            if (l.rmax == l.size) f.lmax += r.lmax;
            if (r.lmax == r.size) f.rmax += l.rmax;
            f.dmax = max(f.dmax, l.rmax + r.lmax);
        }
    }

    void build(int id, int l, int r) {
        if (l == r) {
            tr[id] = {l, l, 1, s[l - 1], s[l - 1], 1, 1, 1};
            return;
        }
        tr[id].l = l; tr[id].r = r;
        int mid = (l + r) / 2;
        build(id * 2, l, mid);
        build(id * 2 + 1, mid + 1, r);
        pushup(tr[id], tr[id * 2], tr[id * 2 + 1]);
    }

    TreeNode query(int id, int l, int r) {
        if (tr[id].l >= l && tr[id].r <= r) {
            return tr[id];
        }
        int mid = (tr[id].l + tr[id].r) / 2;
        if (r <= mid) return query(id * 2, l, r);
        else if (l > mid) return query(id * 2 + 1, l, r);
        else {
            TreeNode lt = query(id * 2, l, r);
            TreeNode rt = query(id * 2 + 1, l, r);
            TreeNode tmp;
            pushup(tmp, lt, rt);
            return tmp;
        }
    }

    // Point update: set position x (1-indexed) to character y.
    void modify(int id, int x, char y) {
        if (tr[id].l == x && tr[id].r == x) {
            tr[id] = {x, x, 1, y, y, 1, 1, 1};
            return;
        }
        int mid = (tr[id].l + tr[id].r) / 2;
        if (x <= mid) modify(id * 2, x, y);
        else if (x > mid) modify(id * 2 + 1, x, y);
        pushup(tr[id], tr[id * 2], tr[id * 2 + 1]);
    }

public:
    vector<int> longestRepeating(string s, string queryCharacters, vector<int>& queryIndices) {
        this->s = s;
        build(1, 1, s.size());
        vector<int> ans;
        for (int i = 0; i < queryIndices.size(); i++) {
            modify(1, queryIndices[i] + 1, queryCharacters[i]);
            ans.push_back(query(1, 1, s.size()).dmax);
        }
        return ans;
    }
};
3/14: A Commemoration of “Pi”
Original image by Christina Ossa
March 14th is a date that ironically celebrates the math notion Pi (or π), but some of you may ask: “Why exactly 3/14”? Well, as many of you know, π is an irrational number that begins with the three
numbers 3.14. So, since π’s first three numbers are 3.14, it only makes sense that the commemoration of the numerical symbol would be 3/14. While there won’t be much discussion about how the pies
themselves are made, the mathematical aspects of the pies will be reviewed, but if you’d like to recreate these pies, the recipes will be at the bottom. So, to celebrate this valued symbol, let’s
take a look at a few different pie recipes and their areas, circumferences, radii, and diameters! Also, let’s discuss the various formulas that go into calculating such a simple-looking symbol like π
because calculating it may be more complicated than expected.
What is Pi (π)?:
Put simply, pi (or π) is C (circumference) / D (diameter). Now, you may be wondering what exactly that implies, and well, the answer is more complicated than expected. Pi (π) has been studied across multiple different areas of the world, from the Middle East and Europe to China and the United States. Its studies reveal not only the true meaning of π but also a rich history of mathematics throughout time. A more shocking discovery in π’s research was the method of exhaustion, which explored the idea that if shapes with more and more sides (ex: pentagons, hexagons, and so on) were drawn inside of a circle, this would lead to an increasingly accurate calculation of π. But this obviously was not optimal since π’s digits continue forever, and the most sides any mathematician (Archimedes) managed was a 96-sided shape known as an enneacontakaihexagon. However, even this shape did not lead to an exact calculation of π. But the main takeaway from thousands of years of research on π is that the symbol is more deceiving than it seems since its digits continue indefinitely. Since a circle continues indefinitely, that means π will as well, which is why π constitutes an irrational number. While it may be common knowledge that circles have infinitely many sides, that’s not exactly the case, and that’s not the exact reason for π having infinitely many digits. A circle is essentially a type of polygon with infinitely many sides, but since you clearly can’t tell these sides apart from the circle itself, that means every point you take on a circle is a side. So, that means π will always be an irrational number that continues forever, containing infinite digits related to the “infinite” sides of a circle.
But, what is the relevance of π? Well, that can be explained using actual, real-life pie...
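To see the method of exhaustion in action, here is a tiny Python sketch (my own addition, not from the original article): start with a hexagon inscribed in a circle and keep doubling the number of sides, and the perimeter divided by the diameter creeps toward π.

import math

s, n = 1.0, 6                 # side length and side count of an inscribed hexagon (radius 1)
for _ in range(5):            # 6 -> 12 -> 24 -> 48 -> 96 -> 192 sides
    s = math.sqrt(2 - math.sqrt(4 - s * s))   # side length after doubling the number of sides
    n *= 2
    print(n, n * s / 2)       # perimeter / diameter; at 96 sides (Archimedes' limit) it is already ~3.1410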
Apple “Pi”:
This dessert is perfect for family or friendly get-togethers, it only takes a few elements to put together and make, but the more amusing part is how this pie, like the others, happens to be
circular. This means we’ll be able to see the dimensional aspects that make up this pie using π to calculate them!
Radius: 4.5 in        Surface Area: 63.617 in^2 or 20.25π in^2
Diameter: 9 in        Circumference: 28.274 in or 9π in
Original Photography by Christina Ossa
The “Pi” is not only sweet but contains rich dimensional elements! The radius of this “pi” is 4.5 inches, and the diameter is 9 inches, so from there, we can calculate the circumference and surface
area. If you don’t remember, the formula for calculating circumference is either 2πr or πd (d standing for diameter), and for the purposes of this “pi,” we’ll be using both. Utilizing both formulas,
we get that the circumference of this apple “pi” is about 28.274 inches or 9π inches. Now, we can use the radius of our “pi” here to calculate the area of it, which will let us know exactly how much
filling can fit into our “pi.” Using the formula πr^2 to calculate the area of a circle (our “pi”), we can infer that the surface area will be about 63.617 in^2 or 20.25π in^2. That means we would be able to fill this “pi” with about 63.617 square inches of filling, which could feed at least five or six people at roughly 12-13 square inches each!
appropriate for a pie dish. While many (including myself) would not calculate like this, it’s interesting to figure out step by step the dimensional, mathematical aspects of pie that include the use
of π.
Blueberry and Pumpkin “Pi”:
This is a pie I’ve talked about before, and you would think that this pie would be the same size as the last one, but surprisingly it’s not. Pie pans can vary by a slim margin, a fact that not many
bakers take into account since the essential part of a dish is mostly the ingredients. But that shouldn’t mean anyone should undermine the relevance of the pan! Also, since these pies were baked with
the same pan (albeit at different times), their dimensions should be roughly the same, and we should be able to draw similar conclusions from calculating solely one!
Radius: 5 in         Surface Area: 78.540 in^2 or 25π in^2
Diameter: 10 in      Circumference: 31.416 in or 10π in
Original Photography by Christina Ossa
The diameter of these “pi’s” turned out to be 10 inches instead of 9 inches. That would mean the radius would be 5 inches, and from here, we can use our circumference formulas! From the formulas, we
would get that this “pi” has a circumference of about 31.416 inches or 10π inches, which gives us the exact value of the outside/crimping of this pie. Now, we would want to figure out how much this
“pi” dish could be filled, so we’d switch to our surface area formula (πr^2). Using the formula, we would end up with an area of about 78.540 in^2 or 25π in^2, telling us that this pie dish would feed at least 6-7 people if each piece is about 11-12 square inches. This is helpful information since this delectable dessert could be served and shared amongst guests fairly, and we know the optimal dimensions for filling this pie. Also, this shows us how this pie dish is superior to the dish used for the apple “pi” since it could hold more filling and, as a result, feed more people. The deeper mathematical background of the “pi” helped us figure out the optimal dish to use so that we could feed the maximum number of people!
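For readers who would rather let the computer do the arithmetic, here is a tiny Python sketch of the same calculations (my own addition, using the dish sizes above and an assumed ~12 square inches per slice):

import math

def pie_numbers(radius_in):
    area = math.pi * radius_in ** 2          # filling area in square inches
    circumference = 2 * math.pi * radius_in  # length of the crimped edge in inches
    servings = area / 12                     # assuming roughly 12 square inches per slice
    return area, circumference, servings

print(pie_numbers(4.5))  # apple dish: ~63.6 in^2, ~28.3 in, ~5.3 servings
print(pie_numbers(5.0))  # blueberry/pumpkin dish: ~78.5 in^2, ~31.4 in, ~6.5 servings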
As you can see, the relevance of π in pies is more interesting and complex than it might appear on the surface. Even though circumference, surface area, diameter, radius, and π may not be the end all
be all to baking a simple pie, it could give you insight into the physics of how baking truly works and explain the mathematical aspects of baking that allow a pie to be constituted as a pie. So,
next time you think π is useless or has no real-world applications, think back to actual pies and how without π, there would be no pie!
Apple “Pi”:
Refer to either “Pi” article listed below!
- 5 lbs apples
- 1 cup brown sugar
- ½ tsp salt
- 2 tsp ground cinnamon
- ½ a lemon, squeezed
- 1 tbsp + ¾ tsp cornstarch
- 3 tbsp water
- 1 tbsp + ¾ tsp cornstarch (for the slurry)
- 3 ½ tbsp unsalted butter
• Peel and thinly slice apples into a large bowl
• Add brown sugar, salt, cinnamon, lemon, and the 1 tbsp and ¾ tsp cornstarch to the apples; make sure to combine the ingredients thoroughly into the apples (use your hands if you have to!)
• Add the apples to a colander and leave over the large bowl to drain the excess liquid for about 30-45 minutes
• Once drained, add the collected liquid to a medium-sized saucepan and add the apples back into the bowl
• Keep the saucepan over medium to medium-low heat until it begins thickening
• Once the mixture slightly thickens, add the remaining cornstarch to the water to make what’s called a slurry
• Once combined, add the slurry to the saucepan mixture and allow to thicken for 1-3 minutes; after thickened, add the butter and let melt
• After the “glaze” is done, allow it to cool for at least 15 minutes and begin prepping the crust or pre-made dough into a pie dish
• Add the glaze once cooled back into the apples, and once combined add this apple mixture evenly into the pie dish
• Place either a lattice or cover (slicing thin lines on top of the top piece of dough) on top of the apple mixture onto the pie dish (refer to the blueberry or pumpkin pie recipes for detailed instructions)
• Bake at 400°F for 40-45 minutes, let cool for at least 15-30 minutes, and enjoy!
Blueberry “Pi”:
Refer to this article!: https://www.vsnorthstar.com/articles/holiday-sweets
Pumpkin “Pi”:
Refer to this article!: https://www.vsnorthstar.com/articles/fresh-pumpkin-pie-vs.-canned-pumpkin-pie%3A-is-it-worth-the-time%3F-
| {"url":"https://www.vsnorthstar.com/articles/3%2F14%3A-a-commemoration-of-%E2%80%9Cpi%E2%80%9D---","timestamp":"2024-11-02T05:38:33Z","content_type":"text/html","content_length":"646590","record_id":"<urn:uuid:900f7142-8a6c-4913-9d2a-0d6ebcc019dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00710.warc.gz"}
Walnut Software
What is Walnut?
Walnut is free software, written in Java, for deciding first-order statements phrased in an extension of Presburger arithmetic, called Buchi arithmetic. It can be used to prove hundreds of results in
combinatorics on words, number theory, and other areas of discrete mathematics. It can handle a wide variety of problems. There are some recent additions to Walnut, written by Aseem Baranwal, Kai
Hsiang Yang, and Anatoly Zavyalov.
March 27 2023
Walnut 5.4 is now available! It contains several new commands, including the ability to do transductions of automatic sequences, and fixes one obscure bug introduced in Walnut 3: namely, regular
expressions with negative elements could under some rare circumstances be interpreted incorrectly. This version was modified by Anatoly Zavyalov. This version supersedes all previous versions.
It is available for free at https://github.com/firetto/Walnut, and the documentation is here.
December 2 2022
The Walnut book is now available from Cambridge University Press. Get your copy today!
August 15 2022
Walnut 4 is now available! It contains several new and useful commands, in particular for handling negative bases and quantification over Z instead of N.
It is available for download here. This version incorporates new original work by Kai Hsiang Yang.
September 6 2021
Walnut 3 is now available! It contains several new and useful commands. It is available for download here. This version represents additional work done by Laindon C. Burnett to the previous versions
of Walnut written by Hamoon Mousavi and updated by Aseem Raj Baranwal. For this version, you should type java Main.Prover to get started, insted of the older java Main.prover.
Installing Walnut
When you go to the github site above, click on the green "Code" button, then choose "Download Zip".
When you get the zip file downloaded, you might (depending exactly one what system you use) just click on it to extract it. Otherwise you might have to use a zip extractor.
Now you should have a directory called something like Walnut-main.
Go into that directory and there is a command called "build.sh".
Execute that command - on my Unix system you type

sh build.sh

to do that.
That should take a few seconds to compile.
Then go to the "bin" directory and type
java Main.Prover
that gets you into Walnut. To leave, type quit;
Depending on your system, you may need a "Java Development Kit" to install Walnut. If so, try the command
sudo apt install openjdk-11-jdk
from a terminal window. Once that is done, to compile Walnut, go to the Walnut directory and type
sh build.sh (Windows)
It might be that you can get by without the "sh" before the command, depending on your system.
./build.sh (Linux)
That should do it! Now you are ready to go to the "bin" directory and type
java Main.Prover
Let me know if you have any difficulties.
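Once Walnut is running, a tiny first command to try might look like this (a toy example of my own; see the manual linked below for the exact syntax):

eval bigger "Ai Ej j > i":

This asks Walnut to decide the statement "every natural number i has some j greater than it"; Walnut should report that the predicate is TRUE.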
Walnut Software
The old version of Walnut is available at github. For the new version, see above. After you download it and install it, go to the directory Walnut/bin and type java Main.prover to get started. A
manual of how to use it is available on the arxiv. There is also a text file with some examples of how to use Walnut.
Here is a video tutorial on how to use Walnut:
Here is a talk, given on October 18 2022, about how to use Walnut to prove properties of the sequences in the OEIS:
If you are using Walnut under the Eclipse environment, here are a few tips. Download the Walnut software. Start Eclipse up. Use the default workspace. Open "Project" from the File choices, and choose
"Walnut". Next, go to src/Main in the menu choices, right-click on prover.java or Prover.java and choose "Run As Java Application". You should now get a window where you can enter Walnut commands. To
see results, go to the Eclipse file menu, right-click on "Result" and choose "Refresh" and the results should be there. Thanks to Stepan Holub for this info.
Recently Walnut has been modified by Aseem Baranwal to handle the Pell number system, and more generally, the Ostrowski number system based on any quadratic irrational. To use this version of Walnut,
visit https://github.com/aseemrb/walnut. After you download and install it, go to the directory Ostrowski/bin and type java Main.Prover to get started. (Note: for the old Walnut, you use lowercase
"p" in prover; for the new version you use uppercase "P" in prover.) The command "ost name [0 1 2] [3 4]", for example, defines an Ostrowski number system for the continued fraction
[0,1,2,3,4,3,4,3,4,...]. It can then be used by prefacing a query with "?msd_name" or "?lsd_name". Aseem Baranwal has prepared a brief summary of his additions to Walnut here.
If you find Walnut useful in your research, please be sure to cite Hamoon Mousavi as the author of the software, and let me know what you achieved with it.
Walnut has been used in a variety of papers and books. A partial list is here (will be updated):
1. Jonathan Andrade and Lucas Mol, Avoiding abelian and additive powers in rich words, Arxiv preprint arXiv:2408.15390 [math.CO], August 27 2024.
2. Lucas Mol, Narad Rampersad, Jeffrey Shallit, Dyck words, pattern avoidance, and automatic sequences, Communications in Mathematics 33 (2025) (2), Paper no. 5.
3. Gabriele Fici, Jeffrey Shallit, Jamie Simpson, Some remarks on palindromic periodicities, ArXiv preprint arXiv:2407.10564 [math.CO], July 15 204. Available at https://arxiv.org/abs/2407.10564.
4. Michel Rigo, Manon Stipulanti, and Markus A. Whiteland, Characterizations of families of morphisms and words via binomial complexities, Europ. J. Combinatorics 118 (2024), 103932.
5. Jonathan Andrade, Avoiding additive powers in words, B. Sci. thesis, Department of Mathematics and Statistics, Thompson Rivers University, 2024. Available at https://arcabc.ca/islandora/object/
6. Nicolas Ollinger, Jeffrey Shallit, The repetition threshold for Rote sequences, Arxiv preprint arXiv:2406.17867 [math.CO], June 25 2024. Available at https://arxiv.org/abs/2406.17867. There is
some supplementary material, including
7. Olivier Carton, Jean-Michel Couvreur, Martin Delacourt, Nicolas Ollinger, Addition in Dumont-Thomas Numeration Systems in Theory and Practice, Arxiv preprint arXiv:2406.09868 [cs.FL], June 14
2024. Available at https://arxiv.org/abs/2406.09868.
8. Jean-Paul Allouche, John M. Campbell, Jeffrey Shallit, Manon Stipulanti, The reflection complexity of sequences over finite alphabets, Arxiv preprint arXiv:2406:09302 [math.CO], June 13 2024.
Available at https://arxiv.org/abs/2406.09302.
9. J.-P. Allouche, N. Rampersad, and J. Shallit, Repetition threshold for binary automatic sequences, Arxiv preprint arXiv:2406.06513 [math.CO], June 10 2024. Available at https://arxiv.org/abs/
2406.06513. To check the claims of the paper, you can use the following Walnut code.
10. Rob Burns, Synchronisation of running sums of automatic sequences, Arxiv preprint arXiv:2405.17536 [math.NT], May 27 2024. Available at https://arxiv.org/abs/2405.17536.
11. Aaron Barnoff, Curtis Bright, and Jeffrey Shallit, Using finite automata to compute the base-b representation of the golden ratio and other quadratic irrationals, Arxiv preprint arXiv:2405.02727
[cs.FL], May 4 2024. Available at https://arxiv.org/abs/2405.02727.
12. Marieh Jahannia and Manon Stipulanti, Exploring the Crochemore and Ziv-Lempel factorizations of some automatic sequences with the software Walnut, Arxiv preprint arXiv:2403.15215 [cs.DM], March
22 2024. Available at https://arxiv.org/abs/2403.15215.
13. Jason Bell, Chris Schulz, and Jeffrey Shallit, Consecutive Power Occurrences in Sturmian Words, Arxiv preprint arXiv:2402.09597 [math.CO], February 14 2024. Available at https://arxiv.org/abs/
14. Luke Schaeffer, Jeffrey Shallit, and Stefan Zorcic, Beatty Sequences for a Quadratic Irrational: Decidability and Applications, Arxiv preprint arXiv:2402.08331 [math.NT], February 13 2024.
Available at https://arxiv.org/abs/2402.08331.
15. Benoit Cloitre and Jeffrey Shallit, Some Fibonacci-Related Sequences, arXiv preprint arXiv:2312.11706 [math.CO], December 18 2023. Available at https://arxiv.org/abs/2312.11706.
16. J. Shallit and X. Xu, Repetition factorization of automatic sequences, Arxiv preprint arXiv:2311.14961 [cs.FL], November 25 2023. Available at https://arxiv.org/abs/2311.14961.
17. J.-P. Allouche and J. Shallit, Additive properties of the evil and odious numbers and similar sequences, Funct. Approx. Comment. Math. (2023), 1-15.
18. J. Shallit, A. M. Shur, and S. Zorcic, New constructions for 3-free and 3^+-free binary morphisms. Arxiv preprint arXiv:2310.15064 [math.CO], October 23 2023.
Walnut files for the paper:
□ X.txt, put in the Word Automata directory of Walnut
□ Y.txt, put in the Word Automata directory of Walnut
□ tar file of automata produced, unpack and put in the Automata Library directory of Walnut. These will be produced by the commands above if you have enough RAM on your machine and time to
devote (about two weeks of computation time on a machine with 400G of RAM). But if you do not have this time you can just work with the automata produced.
19. J. Shallit, Proof of Irvine's conjecture via mechanized guessing, Arxiv preprint arXiv:2310.14252 [math.CO], October 22 2023.
Files for the paper:
□ K.txt, put this in the "Word Automata Library" directory of Walnut.
□ files for numeration system, unpack and put both files in the "Custom Bases" directory of Walnut.
□ files for the sequences, unpack and put all these files in the "Automata Library" of Walnut.
20. J. Shallit, Proving Results About OEIS Sequences with Walnut, in C. Dubois and M. Kerber, eds., CICM 2023, LNAI Vol. 14101, Springer, 2023, pp. 270-282.
21. Narad Rampersad and Max Wiebe, Sums of products of binomial coefficients mod 2 and 2-regular sequences, Arxiv preprint arXiv:2309.04012 [math.NT], September 7 2023. Appeared in INTEGERS 24
(2024), Paper #A73; https://math.colgate.edu/~integers/y73/y73.pdf.
22. L. Schaeffer and J. Shallit, The first-order theory of binary overlap-free words is decidable, Canad. J. Math. (2023).
□ There is a small error in the statement of Theorem 5.5. It refers to all overlap-free words, but the theorem is actually about all overlap-free Restivo words.
□ The definition for `normalize' was omitted. You can download it here: normalize.txt.
□ The definition for `CODE' was omitted. You can download it here: CODE.txt.
23. J. Shallit, Proving properties of some greedily-defined integer recurrences via automata theory. Arxiv preprint arXiv:2308.06544 [cs.DM], August 12 2023, available at https://arxiv.org/abs/
24. M. Rigo, M. Stipulanti, and M. A. Whiteland, Automaticity and Parikh-collinear morphisms, in A. Frid and R. Mercas, eds., WORDS 2023, LNCS 13899, Springer, 2023, pp. 247-260.
25. J. Shallit, Proving Properties of ϕ-Representations with the Walnut Theorem-Prover, arxiv preprint arXiv:2305.02672 [math.NT], May 4 2023. Available at https://arxiv.org/abs/2305.02672.
26. J. Shallit and A. Zavyalov, Transduction of automatic sequences and applications, arxiv preprint arXiv:2303.15203 [cs.FL], March 27 2023. Available at https://arxiv.org/abs/2303.15203.
27. Jeffrey Shallit, Rarefied Thue-Morse Sums Via Automata Theory and Logic, Arxiv preprint ArXiv:2302.09436 [math.NT], February 18 2023. Final version in J. Number Theory 257 (2024) 98-111.
Available at https://doi.org/10.1016/j.jnt.2023.10.015.
28. Jeffrey Shallit, Prefixes of the Fibonacci word, Arxiv preprint arXiv:2302.04640 [cs.FL], February 9 2023.
29. Jeffrey Shallit, A Dombi counterexample with positive lower density, Arxiv preprint arXiv:2302.02138 [math.NT], February 4 2023. Final revised version appeared in INTEGERS 23 (2023), #A74, and
available here.
30. Narad Rampersad and Jeffrey Shallit, Rudin-Shapiro sums via automata theory and logic, Arxiv preprint arXiv:2302.00405 [math.NT], February 1 2023. An abbreviated version of this paper appeared in
A. Frid and R. Mercas, eds., WORDS 2023, LNCS 13899, Springer, 2023, pp. 233-246.
31. Jeffrey Shallit, Proof of a conjecture of Krawchuk and Rampersad, arxiv preprint arXiv:2301.11473 [math.CO], January 27 2023. Available at https://arxiv.org/abs/2301.11473.
32. Jeffrey Shallit, Counterexample to a Conjecture of Dombi in Additive Number Theory, Arxiv preprint arXiv:2212.12473 [math.NT], December 23 2022.
33. R. Fokkink, G. F. Ortega, and D. Rust, Corner the empress, Arxiv preprint arXiv:2204.11805 [math.CO], December 8 2022. Available at https://arxiv.org/abs/2204.11805.
34. Dominik Leon Jilg, Frobeniuszahl ausgewählter Zahlenfolgen: Analyse der Frobeniuszahl von synchronisierten Folgen und Folgen mit automatisierter charakteristischer Folge, Bachelor's thesis,
Lehrstuhl für Mathematik IV, Julius-Maximilians-Universität Würzburg, Germany, August 12 2022.
35. Jeffrey Shallit, The Logical Approach To Automatic Sequences: Exploring Combinatorics on Words with Walnut, London Math. Soc. Lecture Note Series, Vol. 482, Cambridge University Press, September
29 2022.
36. Rob Burns, The appearance function for paper-folding words, arxiv preprint arXiv:2210.14719 [math.NT], October 22 2022.
37. Jeffrey Shallit, Some Tribonacci conjectures, arxiv preprint arXiv:2210.03996 [math.CO], October 8 2022. Here are the files for the automata you will need in this paper. Put them in the
"Automaton Library" of Walnut.
38. Aseem Baranwal, James Currie, Lucas Mol, Pascal Ochem, Narad Rampersad, and Jeffrey Shallit, Antisquares and critical exponents, arxiv preprint arXiv:2209.09223 [math.CO], September 19 2022.
39. Luke Schaeffer and Jeffrey Shallit, The First-Order Theory of Binary Overlap-Free Words is Decidable, arxiv preprint arXiv:2209.03266 [cs.FL], September 7 2022.
40. Jeffrey Shallit, Sonja Linghui Shan, and Kai Hsiang Yang, Automatic Sequences in Negative Bases and Proofs of Some Conjectures of Shevelev, arxiv preprint arXiv:2208.06025 [cs.FL], August 11
2022, available at https://arxiv.org/abs/2208.06025. Appeared in J. Shallit, S. L. Shan and K. H. Yang, "Automatic sequences in negative bases and proofs of some conjectures of Shevelev",
RAIRO-Theor. Inf. Appl. 57 (2023), 4.
41. Joseph Meleshko, Pascal Ochem, Jeffrey Shallit, and Sonja Linghui Shan, Pseudoperiodic words and a question of Shevelev, Arxiv preprint, July 21 2022, http://arxiv.org/abs/2207.10171 . Final
version appeared in Pseudoperiodic words and a question of Shevelev, Discrete Math. Theoret. Comput. Sci. 25(2) (2023).
42. Shuo Li, A note on the Lie complexity and beyond, Arxiv preprint arXiv:2207.05859 [math.CO], July 12 2022, available at https://arxiv.org/abs/2207.05859.
43. Golnaz Badkobeh, Tero Harju, Pascal Ochem, and Matthieu Rosenfeld, Avoiding square-free words on free groups, Theoretical Computer Science 922 (2022) 206-217.
44. James Currie, Pascal Ochem, Narad Rampersad, and Jeffrey Shallit, Properties of a ternary infinite word. Arxiv preprint arXiv:2206.01776 [cs.DM], June 3 2022. Available at https://arxiv.org/abs/
2206.01776. To reproduce the calculations, first download the latest version of Walnut here. Second, put the two files in the Custom Bases directory. Third, put the file in the Word Automata
directory. Fourth, put the files in the Automata Library directory. Finally, all the commands from the paper are in the file
45. R. Burns, Factorials and Legendre's three-square theorem: II, arxiv preprint arXiv:2203.16469 [math.NT], March 30 2022. Available at https://arxiv.org/abs/2203.16469.
46. J. Shallit, Note on a Fibonacci parity sequence, arxiv preprint arXiv:2203.10504 [cs.FL], March 20 2022. Available at https://arxiv.org/abs/2203.10504. Appeared in J. Shallit, Cryptography and
Communications 15 (2023), 309-315.
47. N. Rampersad, The periodic complexity function of the Thue-Morse word, the Rudin-Shapiro word, and the period-doubling word, arxiv preprint arXiv:2112.04416, December 8 2021. Available at https:/
/arxiv.org/abs/2112.04416. Appeared in Australasian J. Combinatorics 85 (2023), 150-158.
48. N. Rampersad, Prefixes of the Fibonacci word that end with a cube, arxiv preprint arXiv:2111.09253, November 17 2021. Available at https://arxiv.org/abs/2111.09253. Appeared as N. Rampersad,
Prefixes of the Fibonacci word, C. R. Math. Acad. Sci. Paris 361 (2023) 323-330.
49. J. Shallit, Intertwining of Complementary Thue-Morse Factors, arxiv preprint arXiv:2203.02917 [cs.FL], March 6 2022. Available at https://arxiv.org/abs/2203.02917. Revised, published version in
Australasian J. Combinatorics 84 (2022), 419-430. Available here .
50. J. Shallit, Sumsets of Wythoff sequences, Fibonacci representation, and beyond, Periodica Mathematica Hungarica 84 (2022), 37--46. Available here.
51. Phakhinkon Napp Phunphayap, Prapanpong Pongsriiam, and Jeffrey Shallit, Sumsets associated with Beatty sequences, to appear, Discrete Mathematics.
52. John Machacek, Mechanical proving with Walnut for squares and cubes in partial words, arxiv preprint arXiv:2201.05954 [cs.FL], January 16 2022. Appeared in CPM 2022, Leibniz Int'l Proc.
Informatics, Vol. 223, 2022, pp.5:1-5:11.
53. J. Shallit, Additive Number Theory via Automata and Logic, arxiv preprint arXiv:2112.13627 [math.NT], December 27 2021.
54. G. Fici and J. Shallit, Properties of a Class of Toeplitz Words, arxiv preprint arXiv:2112.12125 [cs.FL], December 23 2021.
□ TP.txt file, place this in the Word Automata directory of Walnut
55. C. S. Kaplan and J. Shallit, A frameless 2-coloring of the plane lattice, Math. Magazine 94 (5) (2021), 353-360. A limited number of free-eprints are available here.
56. N. Rampersad and J. Shallit, Congruence properties of combinatorial sequences via Walnut and the Rowland-Yassawi-Zeilberger automaton, Arxiv preprint arXiv:2110.06244 [math.CO], October 12 2021.
Appeared in Elect. J. Combinatorics 29 (3) (2022), P3.36.
57. J. Shallit, Synchronized sequences, in T. Lecroq and S. Puzynina, eds., WORDS 2021, LNICS 12847, Springer, 2021, pp. 1-19.
58. Jarkko Peltomäki and Ville Salo, Automatic winning shifts, Arxiv preprint arXiv:2106.07249 [cs.FL], June 14 2021.
59. J. Shallit, Hilbert's spacefilling curve described by automatic, regular, and synchronized sequences, Arxiv preprint arXiv:2106.01062 [cs.FL], June 2 2021. Files for the Walnut proofs:
60. Jeffrey Shallit, Frobenius numbers and automatic sequences, Arxiv preprint arXiv:2103.10904 [math.NT], March 19 2021. Files for the paper:
□ shift.txt (put this in the "Automata Library" of Walnut)
□ fibinc.txt (put this in the "Automata Library" of Walnut)
61. Jason P. Bell and J. Shallit, Lie complexity of words, Arxiv preprint, Feb 7 2021, arXiv:2102.03821 [cs.FL]. Available here.
62. Aseem Baranwal, Luke Schaeffer, and Jeffrey Shallit, Ostrowski-automatic sequences: Theory and applications, Theor. Comput. Sci. 858 (2021) 122-142. Available here until March 11 2021.
63. Jeffrey Shallit, Robbins and Ardila meet Berstel, Info. Proc. Letters 167 (2021). Available at https://doi.org/10.1016/j.ipl.2020.106081.
64. Jeffrey Shallit, Abelian complexity and synchronization, arxiv preprint, November 1 2020. Appeared in Abelian complexity and synchronization, INTEGERS 21 (2021), #A36.
Here are the files associated with the paper:
65. Jeffrey Shallit, Subword complexity of the Fibonacci-Thue-Morse sequence: the proof of Dekking's conjecture, Indag. Math. 32 (2021), 729-735. Files for the paper:
66. Jeffrey Shallit, Robbins and Ardila meet Berstel, Arxiv preprint 2007.14930, July 29 2020.
67. Daniel Gabric and Jeffrey Shallit, The simplest binary word with only three squares, Arxiv preprint 2007.08188, July 17 2020. Data for the paper:
68. Marko Milosevic and Narad Rampersad, Squarefree words with interior disposable factors, Arxiv preprint, July 7 2020. Appeared in Theor. Comput. Sci. 863 (2021), 120-126.
69. Jarkko Peltomäki and Markus A. Whiteland, Avoiding abelian powers cyclically, Arxiv preprint, June 11 2020. Appeared in Advances in Applied Mathematics 121 (2020), 102095. DOI:10.1016/
70. Aseem Raj Baranwal, Decision algorithms for Ostrowski-automatic sequences, Master's thesis, University of Waterloo, 2020.
71. Aseem Raj Baranwal, Jeffrey Shallit, Repetitions in infinite palindrome-rich words, arxiv preprint, April 22 2019. In Mercas R., Reidenbach D. (eds.) Combinatorics on Words. WORDS 2019. Lecture
Notes in Computer Science, vol. 11682, Springer, 2019, pp. 93-105. Available here.
72. Tim Ng, Pascal Ochem, Narad Rampersad, Jeffrey Shallit, New results on pseudosquare avoidance, arxiv preprint, April 19 2019. In Mercas R., Reidenbach D. (eds.) Combinatorics on Words. WORDS
2019. Lecture Notes in Computer Science, vol. 11682, Springer, 2019, pp. 264-274. Available here.
73. T. Clokie, D. Gabric, and J. Shallit, Circularly squarefree words and unbordered conjugates: a new approach, arxiv preprint, April 17 2019. In Mercas R., Reidenbach D. (eds.) Combinatorics on
Words. WORDS 2019. Lecture Notes in Computer Science, vol. 11682, Springer, 2019, pp. 264-274. Available here.
74. Aseem R. Baranwal, Jeffrey Shallit, Critical exponent of infinite balanced words via the Pell number system, arxiv Preprint, February 1 2019. In Mercas R., Reidenbach D. (eds.) Combinatorics on
Words. WORDS 2019. Lecture Notes in Computer Science, vol. 11682, Springer, 2019, pp. 80-92. Available here.
75. James Currie, Narad Rampersad, Tero Harju, and Pascal Ochem, Some further results on squarefree arithmetic progressions in infinite words, arxiv preprint, January 18 2019. Appeared in Theoretical
Computer Science 799 (2019) 140-148.
76. James Currie and Narad Rampersad, On some problems of Harju concerning squarefree arithmetic progressions in infinite words, arxiv preprint, December 5 2018.
77. Lukasz Merta, Formal inverses of the generalized Thue-Morse sequences and variations of the Rudin-Shapiro sequence, arxiv preprint, October 8 2018. Appeared in Discrete Mathematics and
Theoretical Computer Science Vol. 22:1, 2020, #15.
78. Colin Krawchuk and Narad Rampersad, Cyclic Complexity of Some Infinite Words and Generalizations, Integers 18A (2018), #A12. Available at https://math.colgate.edu/~integers/sjs12/sjs12.pdf.
79. Jeffrey Shallit and Ramin Zarifi, Circular critical exponents for Thue–Morse factors, RAIRO Info. Theor., published online 17 January 2019. Available here.
80. Pierre Bonardo, Anna E. Frid, Jeffrey Shallit, "The number of valid factorizations of Fibonacci prefixes", Theor. Comput. Sci., 2019, to appear. Available online at https://doi.org/10.1016/
81. Jason Bell, Kathryn Hare and Jeffrey Shallit, When is an automatic set an additive basis? Proc. Amer. Math. Soc. Ser. B 5 (2018), 50-63. Available here.
82. Jason Bell, Thomas Finn Lidbetter, and Jeffrey Shallit, Additive Number Theory via Approximation by Regular Languages, arxiv preprint, April 23 2018. Appeared in M. Hoshi and S. Seki, eds., DLT
2018, LNCS Vol. 11088, Springer, 2018, pp. 121-132. Here are text files describing how you can verify the results yourself using Grail and Walnut.
83. Narad Rampersad, Jeffrey Shallit, Élise Vandomme, Critical exponents of infinite balanced words, arxiv preprint, January 16 2018. Accepted for Theoretical Computer Science; in press here.
84. James Currie, Lucas Mol, and Narad Rampersad, A family of formulas with reversal of high avoidability index, International Journal of Algebra and Computation 27 (2017) 477-493.
85. A. Rajasekaran, N. Rampersad, J. Shallit, Overpals, Underlaps, and Underpals, in WORDS 2017: Combinatorics on Words, 2017, pp. 17-29.
86. Chen Fei Du, Hamoon Mousavi, Eric Rowland, Luke Schaeffer, and Jeffrey Shallit, Decision Algorithms for Fibonacci-Automatic Words, II: Related Sequences and Avoidability, Theoret. Comput. Sci.
657 (2017), 146-162. Examples from the paper that can be run with Walnut (download Walnut below): You'll also need to put the following files in the "Word Automata" library:
87. Quentin Valembois, Propriétés décidables des suites automatiques, Master's thesis, University of Liège, Belgium, 2017. Available here.
88. Luke Schaeffer and Jeffrey Shallit, Closed, Palindromic, Rich, Privileged, Trapezoidal, and Balanced Words in Automatic Sequences, Elect. J. Combinatorics 23 (1) (2016), Paper #P1.25.
89. Chen Fei Du, Hamoon Mousavi, Luke Schaeffer, and Jeffrey Shallit, Decision Algorithms for Fibonacci-Automatic Words, III: Enumeration and Abelian Properties, Int. J. Found. Comput. Sci. 27 (8)
(2016), 943-963.
90. Hamoon Mousavi, Luke Schaeffer, and Jeffrey Shallit, Decision Algorithms for Fibonacci-Automatic Words, I: Basic Results, RAIRO Inform. Théorique 50 (2016), 39-66. E-version: here.
91. Floriane Magera, Automatic demonstration of mathematical conjectures, Master's thesis, University of Liège, 2015-6. Available at https://matheo.uliege.be/handle/2268.2/1679.
92. Adam Borchert and Narad Rampersad, Words with many palindrome pair factors, Arxiv preprint arXiv:1509.05396 [math.CO], September 17 2015. Published version in Electronic J. Combinatorics 22 (4)
(2015), Paper P4.23.
93. Daniel Goc, Hamoon Mousavi, Luke Schaeffer, Jeffrey Shallit, A New Approach to the Paperfolding Sequences, in Arnold Beckmann, Victor Mitrana, Mariya Soskova, eds., Evolving Computability: 11th
Conference on Computability in Europe, CiE 2015, Springer, LNICS, Vol. 9136, 2015, pp. 34-43. Available here.
94. H. Mousavi and J. Shallit, Mechanical Proofs of Properties of the Tribonacci Word, in F. Manea and D. Nowotka, eds., WORDS 2015, LNCS 9304, Springer, 2015, pp. 1-21. Here are the Walnut
predicates you can use to reproduce most of the results in this paper:
□ TR.txt, put in the "Word Automata" library
This was one of the first Walnut papers we wrote, and now (in 2024) we know there are better ways to do some of the things we did in this one. For example, to check whether TR[i..i+n-1] is a
palindrome, where TR denotes the Tribonacci word, one can simply write
def tribpal "?msd_trib Au,v (u>=i & u<i+n & u+v+1=2*i+n) => TR[u]=TR[v]":
and hence avoid the split into even and odd cases. It runs very quickly!
Similarly, to check that TR is mirror invariant, one can write
def trib_fac_reverse_equiv "?msd_trib Au,v (u>=i & u<i+n & u+v+1=i+j+n) => TR[u]=TR[v]":
def trib_mirror "?msd_trib Ai,n Ej $trib_fac_reverse_equiv(i,j,n)":
which also runs rather quickly.
95. Daniel Goc, Narad Rampersad, Michel Rigo, Pavel Salimov, On the number of Abelian Bordered Words (with an Example of Automatic Theorem-Proving) Int. J. Found. Comput. Sci. 25 (2014), 1097-1110. | {"url":"https://cs.uwaterloo.ca/~shallit/walnut.html","timestamp":"2024-11-05T01:13:12Z","content_type":"text/html","content_length":"42463","record_id":"<urn:uuid:e5996f0f-b880-48db-89d2-90a6fcf357d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00171.warc.gz"} |
How STARKs work if you don't care about FRI posted November 2023
Here's some notes on how STARK works, following my read of the ethSTARK Documentation (thanks Bobbin for the pointer!).
Warning: the following explanation should look surprisingly close to PlonK or SNARKs in general, to anyone familiar with these other schemes. If you know PlonK, maybe you can think of STARKs as
turboplonk without preprocessing and without copy constraints/permutation. Just turboplonk with a single custom gate that updates the next row; also, the commitment scheme makes everything more complicated.
The execution trace table
Imagine a table with $W$ columns representing registers, which can be used as temporary values in our program/circuit. The table has $N$ rows, which represent the temporary values of each of these
registers in each "step" of the program/circuit.
For example, a table of 4 registers and 3 steps:
r0 r1 r2 [example table of register values per step omitted]
The constraints
There are two types of constraints which we want to enforce on this execution trace table to simulate our program:
• boundary constraints: if I understand correctly this is for initializing the inputs of your program in the first rows of the table (e.g. the second register must be set to 1 initially) as well as
the outputs (e.g. the registers in the last two rows must contain $3$, $4$, and $5$).
• state transitions: these are constraints that apply to ALL contiguous pairs of rows (e.g. the first two registers added together in a row equal the value of the third register in the next row).
The particularity of STARKs (and what makes them "scalable" and fast in practice) is that the same constraint is applied repeatedly. This is also why people like to use STARKs to implement
zkVMs, as VMs do the same thing over and over.
This way of encoding a circuit as constraints is called AIR (for Algebraic Intermediate Representation).
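To make this concrete, here is a minimal Python sketch (not from the post; the register layout and the specific constraints are made up for illustration) that checks one boundary constraint and one state-transition constraint over every pair of contiguous rows:

# Toy execution trace: W = 3 registers, N = 3 rows (steps).
trace = [
    [1, 2, 3],   # row 0
    [4, 5, 3],   # row 1: r2 equals r0 + r1 of the previous row
    [2, 7, 9],   # row 2: r2 equals r0 + r1 of the previous row
]

def check_boundary(trace):
    # example boundary constraint: the second register must start at 2
    return trace[0][1] == 2

def check_transitions(trace):
    # the same transition constraint (r0 + r1 == next row's r2), applied to ALL contiguous pairs of rows
    return all(row[0] + row[1] == nxt[2] for row, nxt in zip(trace, trace[1:]))

assert check_boundary(trace) and check_transitions(trace)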
Straw man 1: Doing things in the clear coz YOLO
Let's see an example of how a STARK could work as a naive interactive protocol between a prover and verifier:
1. the prover constructs the execution trace table and sends it to the verifier
2. the verifier checks the constraints on the execution trace table by themselves
This protocol works if we don't care about zero-knowledge, but it is obviously not very efficient: the prover sends a huge table to the verifier, and the verifier has to check that the table makes
sense (vis-à-vis the constraints) by checking every row involved in the boundary constraints, and checking every contiguous pair of rows involved in the state transition constraints.
Straw man 2: Encoding things as polynomials for future profit
Let's try to improve on the previous protocol by using polynomials. This step will not immediately improve anything, but will set the stage for the step afterwards. Before we talk about the change to
the protocol let's see two different observations:
First, let's note that one can encode a list of values as a polynomial by applying a low-degree extension (LDE). That is, if your list of values looks like this: $(y_0, y_1, y_2, \cdots)$, then
interpolate these values into a polynomial $f$ such that $f(0) = y_0, f(1) = y_1, f(2) = y_2, \cdots$
Usually, as we're acting in a field, a subgroup of large-enough size is chosen in place of $0, 1, 2$ as domain. You can read why that is here. (This domain is called the "trace evaluation domain" by the ethSTARK documentation.)
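As a hedged illustration (toy modulus of my choosing, not the post's), a low-degree extension over the naive domain $(0, 1, 2, \cdots)$ is just Lagrange interpolation over a prime field; a real STARK would use a multiplicative subgroup and FFT-based interpolation instead:

P = 97  # toy prime modulus (assumption for illustration)

def lde_eval(ys, x):
    # evaluate at x the unique polynomial f of degree < len(ys) with f(i) = ys[i] (mod P)
    total = 0
    for i, y in enumerate(ys):
        num, den = 1, 1
        for j in range(len(ys)):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + y * num * pow(den, -1, P)) % P
    return total

column = [3, 1, 4, 1]                                      # one column of the trace
assert [lde_eval(column, i) for i in range(4)] == column   # agrees on the original domain
print(lde_eval(column, 10))                                # and extends it to new points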
Second, let's see how to represent a constraint like "the first two registers added together in a row equal the value of the third register in the next row" as a polynomial. If the three registers in
our examples are encoded as the polynomials $f_1, f_2, f_3$ then we need a way to encode "the next row". If our domain is simply $(0, 1, 2, \cdots)$ then the next row for a polynomial $f_1(x)$ is
simply $f_1(x + 1)$. Similarly, if we're using a subgroup generated by $g$ as domain, we can write the next row as $f_1(x \cdot g)$. So the previous example constraint can be written as the
constraint polynomial $c_0(x) = f_1(x) + f_2(x) - f_3(x \cdot g)$.
If a constraint polynomial $c_0(x)$ is correctly satisfied by a given execution trace, then it should be zero on the entire domain (for state transition constraints) or on some values of the domain
(for boundary constraints). This means we can write it as $c_0(x) = t(x) \cdot \prod_i (x-g^i)$ for some "quotient" polynomial $t$ and the evaluation points $g^i$ (that encode the rows) where the
constraint should apply. (In other words, you can factor $c_0$ using its roots $g^i$.)
Note: for STARKs to be efficient, you shouldn't have too many roots. Hence why boundary constraints should be limited to a few rows. But how does it work for state transition constraints that
need to be applied to all the rows? The answer is that since we are in a subgroup there's a very efficient way to compute $\prod_i (x - g^i)$. You can read more about that in Efficient computation
of the vanishing polynomial of the Mina book.
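A toy sanity check of that trick (the parameters are mine, not the post's): over the field of integers mod 97, the element $g = 64$ generates a subgroup of order 8, and the vanishing polynomial of that subgroup collapses to $x^8 - 1$:

P, g, N = 97, 64, 8                       # toy field and a generator of a subgroup of order 8
H = [pow(g, i, P) for i in range(N)]
assert len(set(H)) == N and pow(g, N, P) == 1

def Z_naive(x):                           # prod_i (x - g^i), the slow way
    out = 1
    for h in H:
        out = out * (x - h) % P
    return out

def Z_fast(x):                            # x^N - 1, the cheap form available on a subgroup
    return (pow(x, N, P) - 1) % P

assert all(Z_naive(x) == Z_fast(x) for x in range(P))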
At this point, you should understand that a prover that wants to convince you that a constraint $c_1$ applies to an execution trace table can do so by showing you that $t$ exists. The prover can do
so by sending the verifier the $t$ polynomial, and the verifier computes $c_1$ from the register polynomials and verifies that it is indeed equal to $t$ multiplied by $\prod_i (x-g^i)$. This is
what is done both in PlonK and in STARKs.
Note: if a constraint doesn't satisfy the execution trace, then you won't be able to factor it with $\prod_i (x - g^i)$ as not all of the $g^i$ will be roots. For this reason you'll get something
like $c_1(x) = t(x) \cdot \prod_i (x - g^i) + r(x)$ for $r$ some "rest" (remainder) polynomial. TODO: at this point can we still get a $t$ but it will have a high degree? If not then why do we have to do a
low-degree test later?
Now let's see our modification to the previous protocol:
1. Instead of sending the execution trace table, the prover encodes each column of the execution trace table (of height $N$) as polynomials, and sends the polynomials to the verifier.
2. The prover then creates the constraint polynomials $c_0, c_1, \cdots$ (as described above) for each constraint involved in the AIR.
3. The prover then computes the associated quotient polynomials $t_0, t_1, \cdots$ (as described above) and sends them to the verifier. Note that the ethSTARK paper calls these quotient polynomials
the constraint polynomials (sorry for the confusion).
4. The verifier now has to check two things:
□ low-degree check: that these quotient polynomials are indeed low-degree. This is easy as we're doing everything in the clear for now (TODO: why do we need to check that though?)
□ correctness check: that these quotient polynomials were correctly constructed. For example, the verifier can check that for $t_0$ by computing $c_0$ themselves using the execution trace
polynomials and then checking that it equals $t_0 \cdot (x - 1)$. That is, assuming that the first constraint $c_0$ only applies to the first row $g^0=1$.
Straw man 3: Let's make use of the polynomials with the Schwartz-Zippel optimization!
The verifier doesn't actually have to compute and compare large polynomials in the correctness check. Using the Schwartz-Zippel lemma one can check that two polynomials are equal by evaluating both
of them at a random value and checking that the evaluations match. This is because Schwartz-Zippel tells us that two polynomials that are equal will be equal on all their evaluations, but if they
differ they will differ on most of their evaluations.
So the previous protocol can be modified to:
1. The prover sends the columns of the execution trace as polynomials $f_0, f_1, \cdots$ to the verifier.
2. The prover produces the quotient polynomials $t_0, t_1, \cdots$ and sends them to the verifier.
3. The verifier produces a random evaluation point $z$.
4. The verifier checks that each quotient polynomial has been computed correctly. For example, for the first constraint, they evaluate $c_0$ at $z$, then evaluate $t_0(z) \cdot (z - 1)$, then check
that both evaluations match (a toy version of this check is sketched right below).
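Here is a hedged toy version of straw men 2 and 3 combined (my own parameters over the integers mod 97, not the post's): the prover divides a constraint polynomial by the vanishing polynomial to get the quotient $t$, and the verifier only checks the relation at a single random point $z$:

import random
P = 97

def pmul(a, b):                       # polynomial multiplication mod P, coefficients low-degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def pdiv(num, den):                   # long division mod P, returns (quotient, remainder)
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    inv_lead = pow(den[-1], -1, P)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1] * inv_lead % P
        for j, d in enumerate(den):
            num[i + j] = (num[i + j] - q[i] * d) % P
    return q, num[:len(den) - 1]

def peval(p, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(p)) % P

Z = [P - 1, 0, 0, 1]                  # x^3 - 1: vanishing polynomial of a size-3 subgroup
c = pmul(Z, [2, 1])                   # a constraint polynomial that does vanish on the subgroup
t, rem = pdiv(c, Z)                   # prover's quotient polynomial; remainder must be zero
assert rem == [0, 0, 0]
z = random.randrange(P)               # verifier's random evaluation point
assert peval(c, z) == peval(t, z) * peval(Z, z) % P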
Straw man 4: Using commitments to hide stuff and reduce proof size!
As the title indicates, we eventually want to use commitments in our scheme so that we can add zero-knowledge (by hiding the polynomials we're sending) and reduce the proof size (our commitments will
be much smaller than what they commit).
The commitments used in STARKs are merkle trees, where the leaves contain evaluations of a polynomial. Unlike the commitments used in SNARKs (like IPA and KZG), merkle trees don't have an algebraic
structure and thus are quite limited in what they allow us to do. Most of the complexity in STARKs comes from the commitments. In this section we will not open that pandora box, and assume that the
commitments we're using are normal polynomial commitment schemes which allow us to not only commit to polynomials, but also evaluate them and provide proofs that the evaluations are correct.
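For intuition only, here is a minimal sketch of the kind of commitment involved, a Merkle root over a list of evaluations; this is not the post's construction (a real STARK commits to evaluations over a larger shifted domain and needs FRI to make the evaluations trustworthy):

import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    layer = [H(str(x).encode()) for x in leaves]
    while len(layer) > 1:
        if len(layer) % 2:                 # duplicate the last node if the layer is odd
            layer.append(layer[-1])
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

evaluations = [3, 1, 4, 1, 5, 9, 2, 6]     # evaluations of one column polynomial (toy values)
commitment = merkle_root(evaluations)
print(commitment.hex())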
Now our protocol looks like this:
1. The prover commits to the execution trace columns polynomials, then sends the commitments to the verifier.
2. The prover commits to the quotient polynomials, then sends them to the verifier.
3. The verifier sends a random value $z$.
4. The prover evaluates the execution trace column polynomials at $z$ and $z \cdot g$ (remember the verifier might want to evaluate a constraint that looks like this $c_0(x) = f_1(x) + f_2(x) - f_3(x \cdot g)$ as it also uses the next row) and sends the evaluations to the verifier.
5. The prover evaluates the quotient polynomials at $z$ and sends the evaluations to the verifier (these evaluations are called "masks" in the paper).
6. For each evaluation, the prover also sends evaluation proofs.
7. The verifier verifies all evaluation proofs.
8. The verifier then checks that each constraint is satisfied, by checking the $c(z) = t(z) \cdot \prod_i (z - g^i)$ equation in the clear (using the evaluations provided by the prover).
Straw man 5: a random linear combination to reduce all the checks to a single check
If you've been reading STARK papers you're probably wondering where the heck is the composition polynomial. That final polynomial is simply a way to aggregate a number of checks in order to optimize
the protocol.
The idea is that instead of checking a property on a list of polynomial, you can check that property on a random linear combination. For example, instead of checking that $f_1(z) = 3$ and $f_2(z) =
4$, and $f_3(z) = 8$, you can check that for random values $r_1, r_2, r_3$ you have:
$$r_1 \cdot f_1(z) + r_2 \cdot f_2(z) + r_3 \cdot f_3(z) = 3 r_1 + 4 r_2 + 8 r_3$$
Often we avoid generating multiple random values and instead use powers of a single random value, which is a tiny bit less secure but much more practical for a number of reasons I won't touch here.
So things often look like this instead, with a random value $r$:
$$f_1(z) + r \cdot f_2(z) + r^2 \cdot f_3(z) = 3 + 4 r + 8 r^2$$
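A quick sketch of that aggregation trick (toy values over the integers mod 97, mine rather than the post's): all the claimed evaluations are folded into one check using powers of a single random $r$:

import random
P = 97
claimed  = [3, 4, 8]                        # prover's claimed evaluations f_1(z), f_2(z), f_3(z)
expected = [3, 4, 8]                        # what the verifier expects them to be
r = random.randrange(1, P)
lhs = sum(pow(r, i, P) * v for i, v in enumerate(claimed)) % P
rhs = sum(pow(r, i, P) * v for i, v in enumerate(expected)) % P
assert lhs == rhs                           # one random linear combination replaces three separate checks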
Now our protocol should look like this:
1. The prover commits to the execution trace columns polynomials, then sends the commitments to the verifier.
2. The verifier sends a random value $r$.
3. The prover produces a random linear combination of the constraint polynomials.
4. The prover produces the quotient polynomial for that random linear combination, which ethSTARK calls the composition polynomial.
5. The prover commits to the composition polynomial, then sends it to the verifier.
6. The protocol continues pretty much like the previous one
Note: in the ethSTARK paper they note that the composition polynomial is likely of higher degree than the polynomials encoding the execution trace columns. (The degree of the composition
polynomial is equal to the degree of the highest-degree constraint.) For this reason, there's some further optimization that split the composition polynomial into several polynomials, but we will
avoid talking about it here.
We now have a protocol that looks somewhat clean, which seems contradictory with the level of complexity introduced by the various papers. Let's fix that in the next blogpost on FRI...
leave a comment... | {"url":"https://cryptologie.net/article/601/how-starks-work-if-you-dont-care-about-fri/","timestamp":"2024-11-06T12:10:16Z","content_type":"text/html","content_length":"30793","record_id":"<urn:uuid:036b8290-c4fd-4ba1-a92e-6aa4c7424642>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00440.warc.gz"} |
The main aim of a shelter is to create
• a roof to let rain flow aside and keep the ground beneath dry
• walls to protect from wind and cold with it, e.g. to keep a fire burning
• additional thicker walls to protect from cold, e.g. to keep heat from a fire more lasting
as a result of one or more requirements, we end up circumscribing a given volume: we create a solid. Usually we take material found in the environment to build a shelter, e.g. tree branches with
leaves (see also Building Typology). One of the most simple constructions is the cone or tipi-like form.
In order to study various forms, let's look at how well such a space can be enclosed: the volume we assume as given; how large is the surface, and what part of it is exposed vertically as roof?
For now I look for the dome, yurt, tipi and cube, and list first the calculations:
A = surface area, A[wall] = surface area without ground area, A[roof] = surface area weather exposed, V = volume
Sphere: A = 4 π r^2, A[roof] = 4 π r^2 / 2 = 2 π r^2, V = 4/3 π r^3
Dome: A = 4 π r^2 / 2 + r^2 π = 3 π r^2, A[wall] = A[roof] = 4 π r^2 / 2 = 2 π r^2, V = 4/6 π r^3
Cube: A = 6 s^2, A[wall] = 5 s^2, A[roof] = s^2, V = s^3
Cone / Tipi: the tipi diameter equals the side length s of the cone, therefore 2r = d = s, and h = √((2r)^2 - r^2) = r √3
A = r^2 π + r 2r π = 3 r^2 π
A[wall] = A[roof] = 2 r^2 π
A[floor] = r^2 π
V = 1/3 r^2 π h = 1/3 r^2 π √((2r)^2 - r^2) = 1/3 r^3 π √3

Yurt: the following assumptions were made: 1/3 of the height is the roof, which has angle α, and 2/3 of the height is the vertical wall.
A = 2 π r (2 r tan(α)) + π r (r^2 + (r tan(α))^2)^(1/2) + r^2 π = r^2 π (4 tan(α) + (1 + tan(α)^2)^(1/2) + 1)
A[wall] = 2 π r (2 r tan(α)) + π r (r^2 + (r tan(α))^2)^(1/2) = r^2 π (4 tan(α) + (1 + tan(α)^2)^(1/2))
A[roof] = π r (r^2 + (r tan(α))^2)^(1/2) = r^2 π (1 + tan(α)^2)^(1/2)
V = r^2 π (2 r tan(α)) + r^2 π (r tan(α) / 3) = r^3 π 7/3 tan(α)
So, to have V set to 1 m^3 (and a 25° roof angle for the yurt), we calculate all surfaces A via r or s:
A[sphere] = 4 π ((1/(4/3 π))^(1/3))^2 = 4.835
A[dome] = 3 π ((1/(4/6 π))^(1/3))^2 = 5.757
A[dome-wall] = 2 π ((1/(4/6 π))^(1/3))^2 = 3.838
A[yurt] = π ((1/(π 7/3 tan(25°)))^(1/3))^2 (4 tan(25°) + (1+tan(25°)^2)^(1/2) + 1) = 5.494
A[yurt-wall] = π ((1/(π 7/3 tan(25°)))^(1/3))^2 (4 tan(25°) + (1+tan(25°)^2)^(1/2)) = 4.109
A[cube] = 6 (1^(1/3))^2 = 6
A[cube-wall] = 5 (1^(1/3))^2 = 5
A[tipi] = 3 π ((3/(π √3))^(1/3))^2 = 6.337
A[tipi-wall] = 2 π ((3/(π √3))^(1/3))^2 = 4.225
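These numbers can be reproduced with a short Python sketch (my own helper, simply following the formulas above with V = 1 m^3 and α = 25°):

from math import pi, tan, sqrt

V, alpha = 1.0, 25 * pi / 180                       # volume and yurt roof angle from the text

r_dome = (V / (4/6 * pi)) ** (1/3)
A_dome, A_dome_wall = 3 * pi * r_dome**2, 2 * pi * r_dome**2

r_yurt = (V / (7/3 * pi * tan(alpha))) ** (1/3)
A_yurt_wall = pi * r_yurt**2 * (4 * tan(alpha) + sqrt(1 + tan(alpha)**2))
A_yurt = A_yurt_wall + pi * r_yurt**2

r_tipi = (3 * V / (pi * sqrt(3))) ** (1/3)
A_tipi, A_tipi_wall = 3 * pi * r_tipi**2, 2 * pi * r_tipi**2

s_cube = V ** (1/3)
A_cube, A_cube_wall = 6 * s_cube**2, 5 * s_cube**2

print(round(A_dome, 3), round(A_yurt, 3), round(A_tipi, 3), round(A_cube, 3))   # 5.757 5.494 6.339 6.0
print(round(A_dome_wall / A_cube_wall, 3))   # 0.768: dome walls need about 23% less area than cube walls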
Now we can list the ratios:
A[sphere] : A[cube] = 4.835 : 6 = 100% : 124.1% (or 80.6% : 100%)
A[dome] : A[yurt] : A[tipi] : A[cube] = 5.757 : 5.494 : 6.337 : 6 = 100% : 95.4% : 110.1% : 104.2% (or 96.0% : 91.6% : 105.6% : 100%)
A[dome-wall] : A[yurt-wall] : A[tipi-wall] : A[cube-wall] = 3.838 : 4.109 : 4.225 : 5 = 100% : 107.1% : 110.1% : 130.3% (or 76.6% : 82.1% : 84.5% : 100%)
Volume vs Surface (without ground area)
The required heating depends mainly on the surface and only secondarily on the volume, and since we calculated everything for the same volume anyway, we focus on the surface, which requires thermal insulation in order to keep the heat energy within the room. So, the smaller the surface enclosing a given volume, the better: less insulation is required, and there is less surface through which the energy is able to be lost.
So, since I quickly laid out the ratios above, we can directly say that, considering just the walls (without floor area), the dome requires 25-30% less energy and insulation than a cube given the same volume, and still has more ground area:
Dome (V = 1 m^3): r = (1/(4/6 π))^(1/3) = 0.781, A[floor] = r^2 π = 1.919 (100%)
Yurt (V = 1 m^3): r = (1/(π 7/3 tan(25°)))^(1/3) = 0.664, A[floor] = r^2 π = 1.385 (72.2%)
Tipi (V = 1 m^3): r = (3/(π √3))^(1/3) = 0.820, A[floor] = r^2 π = 2.112 (110.1%)
Cube (V = 1 m^3): s = 1^(1/3) = 1, A[floor] = s^2 = 1 (52.1%)
In other words, given the same volume, the dome saves 25-30% of wall insulation and has almost twice the ground area. Since you get twice the ground area, when considering the ground area also as insulation area, the advantage shrinks to just 4%. The yurt is much like the dome, with 7% more surface than the dome (just the walls/roof), which is still very good. The tipi, due to its form extending toward the ground, gives the most floor area for a given volume.
I personally have a preference to consider a larger floor area the better, as it's one of my aims. Yet, one may argue, as said, that the larger floor requires more thermal insulation too.
Now, given the insights of the previous calculations, why aren't we building spherical? The answer will be given in this consideration:
(The first percentage is relative to the dome, the second relative to the cube.)
Dome (V = 1 m^3): r = (1/(4/6 π))^(1/3) = 0.781, A[roof] = 2 r^2 π = 3.838 (100.0% / 383.8%)
Yurt (V = 1 m^3): r = (1/(π 7/3 tan(25°)))^(1/3) = 0.664, A[roof] = r^2 π (1 + tan(α)^2)^(1/2) = 1.528 (39.8% / 152.8%)
Tipi (V = 1 m^3): r = (3/(π √3))^(1/3) = 0.820, A[roof] = 2 r^2 π = 4.224 (110.1% / 422.4%)
Cube (V = 1 m^3): s = 1^(1/3) = 1, A[roof] = s^2 = 1 (26.1% / 100.0%)
The dome and tipi require about 4x more material to act as roof and be sealed 100% against water/rain than the roof of a cube. The yurt has approx. 50% more roof surface than the cube, which is not that bad, given that the yurt roof is already tilted while the cube roof is not; the cube's roof surface would increase as well if we tilted it, e.g. by 25° too.
Lloyd Khan wrote in some of his critics about polyhedral domes (e.g. such as geodesic dome) that in case of domes the entire surface is a roof, unlike with an ordinary house which has a roof and
wall, and a wall requires less water resistance than a roof. This is one of the main building disadvantages, which I think is also responsible why so few domes are built compared to rectangular/cubic
houses with roofs; yet in case of temporary building the same argument applies of course.
The yurt can be considered a compromise between circle/spherical and rectangular/cubic building, the ground floor is circular, yet has wall (no vertical exposure) and roof (vertical exposure).
The Volume vs Surface comparison reveals that the spherical construction does best, and this influences the effectiveness of heating as it depends primarily on surface. But then the Volume vs Roof comparison reveals the major drawback of spherical building: the former advantage of using less material to enclose a given volume vanishes, because there is almost 4x more vertical exposure, a roof which needs to seal out rain completely.
Now, what to do with these two considerations? If you live in a region with less rainfall, and do not need to pay so close attention to have a 100% waterproof roof, spherical or dome-wise is your
choice - also if you live in a region where it's cold and you need to invest much into thermal insulation, but you have little to no rainfall. This leaves the rest, regions with significant rainfall,
where a roof is required which indeed provides 100% seal of rain - and here cubic building does best, with a tilted roof of course so the rain flows on the side(s). The yurt, like said, a mixture of
spherical and cubic building, provides a compromise - does well Volume vs Surface and Volume vs Roof.
Why I am calculating this all - well first of all for myself as I was really curious what's the advantage truly, in numbers, not just knowing the sphere is the optimum of surface and volume but to
see how a dome (half a sphere) relates to a cube or a yurt. Now we pretty much have a good overview.
I have been fascinated by geometrical forms, yet some of them were too abstract, and it was really only when I started to work with some of the forms in the real world that they got me, so to speak, especially when you build and live in them yourself. At such a point you immediately experience a form, and since we are so used to cubic or rectangular rooms it is a new experience for the senses and spirit to live in spherical or circular rooms.
We are dealing here with the archetypes of forms as such, and the circle, the square, the sphere, the cube and so forth, they are archetypes which connect us to the realm of abstract forms from where
we develop, plan and finally build not just temporary buildings, but everything which finally has a physical form here.
Next Page >> | {"url":"https://simplydifferently.org/Welcome?page=3","timestamp":"2024-11-02T04:39:42Z","content_type":"text/html","content_length":"34029","record_id":"<urn:uuid:36d420a6-dcba-485f-b28c-bd71922b51f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00711.warc.gz"} |
Influence of Ph on Copy-Paper Mechanical Properties Such as the Tensile Strength and Printability
Nine series of experiments were conducted on nine different copy paper samples to determine whether there is a correlation between their pH values, their tensile strengths, and printability with
statistical tests which confirmed the connection. Ιn this work, all statistical tests were performed via SPSS software. The regression analysis and statistics have confirmed that there was a
connection. In more detail the ANOVA with significant value 0.001 led us to reject the null hypothesis that there was no connection between the parameters of the pH value the tensile force MD, the
pick picking IGT MD and the ratio MD/ CD tensile strength. In coefficients the predictor variable pick-picking IGT was significant because its p-value was 0.000. The p-value for the ratio MD/CD
tensile strength was lower than 0.05, i.e., 0.011 which indicated that it was statistically significant too. The Kruskal-Wallis test confirmed also the connection between the mean values of tensile
force MD and pH value and pick-picking IGT and ratio MD/CD tensile strength. From the test of normality, the p=0.019 suggested an evidence of non-normality for the data set of pH and ratio MD/CD
tensile strength. Also, the ANOVA with significant value 0.000 led us to reject the null hypothesis that there was no connection between the parameters of the pH, the tensile force CD, the pick
picking IGT MD and the ratio MD/CD tensile strength. From the Kruskal-Wallis H test again a p-value of 0.118 confirmed the null hypothesis that the group medians were all equal when the pH value, the
CD tensile strength, the pick-picking IGT MD and ratio MD/CD tensile strength were concerned. A boxplot was produced from the descriptive statistics, for the latter parameters i.e., the pH value, the
CD tensile strength, the pick picking IGT MD and the ratio MD/CD tensile strength. The dataset had a negative kurtosis i.e., -0.916 and was considered to be a “light-tailed” dataset. From the
one-sample t-test we concluded in a level of importance α=0.01 that there was a difference between all the average pH values measured. Finally, the optimal pH values were identified during the
analysis of the nine samples of copy paper at the beginning of the year 2021, to be 10.7 when measuring the tensile strength MD, and 10.5 when measuring tensile strength CD, pick picking IGT and the
ratio MD/CD tensile strength.
Keywords: pH; Tensile strength; Pick-picking IGT; Ratio MD/CD tensile strength; SPSS; ANOVA; Kruskal-Wallis H test; Shapiro-Wilk test; Test on normality
The aim of this study was to investigate the effect of pH on the final copy paper product properties and thus on paper-fiber characteristics. The pH level of paper pulp affects the surface charge,
the fiber swelling, and finally the paper strength. Thus, there may be a correlation between the pH, the tensile strength and pick picking IGT of the copy paper samples tested in this work. It has
already been stated that the pH level certainly affects the fiber surface charge, and thus the fiber network structure of the copy paper sheets [1]. In this work tested paper properties were the pH,
the tensile strength, pick picking IGT, and the ratio of tensile values MD versus CD.
At high (alkaline) pH, the concentration of H^+ ions decreases [1]. The hydrolysis rate depends on the hydrogen ion concentration: at alkaline pH, the acid hydrolysis of the β(1→4) glycosidic bonds, which link the glucose monomers that make up cellulose, is lowered, and in turn the hydrogen bonding is lowered [2]. The swelling of the fibers is then increased, the cellulose crystalline structure is preserved, and tensile strength and printability attain high values. A glycosidic bond or glycosidic linkage is a type of covalent bond that joins a carbohydrate molecule to another group, which may be another carbohydrate [3]. A glycosidic bond may also be formed between the hemiacetal or hemiketal group of a saccharide and the hydroxyl group of some other compound such as an alcohol. On the other hand, the N-glycosidic bonds in nucleosides [4] are highly stable in neutral and alkaline media and prone to hydrolysis in the presence of mineral and organic acids. An increased pH value leads to a higher amount of surface charge, which leads to greater swelling of the cellulose fibers.
The tensile strength test was conducted to determine the strength of the copy paper samples as per standard ISO 1924-2:2008 [5] with a cross head speed of 20mm/min. The specimens of each copy paper
sample were tested 20 times in the MD dimension and 20 times in the CD dimension and their average values were calculated. The mechanical testing (tensile) was carried out on the Zwick Roel testing
machine [6]. We have also conducted nine series of experiments measuring the pH of the aqueous extracts of copy paper A4 and A3 samples, as per standard ISO 6588-1:2020 [7], since the beginning of
the year 2021. The pH values of the nine paper samples were measured at different temperatures between 20-25 °C and were corrected using the Nernst equation.
Finally, the pick picking IGT printability test was held as per ISO 3783:2006 [8], for all nine samples for the MD and CD direction, and the MD value was recorded in this work. Analyses and
statistics estimates were conducted for all nine values of the parameters of the pH and the corresponding values of the tensile strength MD and CD, and the pick picking IGT MD, as well as the tensile
strength ratio. The optimal pH value for better tensile strength value MD and CD and pick-picking IGT value MD was to be evaluated by analyzing the nine copy-paper samples.
The tensile testing machine Zwick Roell Z2.5 BT1-FR 2.5^th D14/2008, S.N.181435/2008, which extended the paper test pieces of dimensions 15mm x 210mm at a constant rate of elongation of 20mm/min and
measured the maximum tensile force, was used. It had a force capacity of 2.5 kN. The tensile testing machine measured the maximum tensile force to an accuracy of ±1% of the true force. The machine had
a long stroke extensometer placed directly on the paper test piece for the measurement of its elongation. The machine was connected with a computer LG. The tensile testing machine included two clamps
for holding the paper test pieces of 15mm width. The clamps grabbed the test pieces firmly along the straight line across the full width of the test pieces and adjusted the clamping force
pneumatically. The machine had pneumatic grips of 2.5kN, and a long stroke extensometer. A guillotine IDEAL 1043 GS DP 02050 made in Germany, for cutting the paper test pieces used in the tensile
strength test to dimensions of 15mm x 210mm, was also used. An accredited ruler for measuring the width of the paper test piece and also of measuring the rate of elongation was also used.
An IGT Global Standard Tester 1 with printing force 350N and printing velocity 4.00m/s, and a high-speed inking unit 4, software Version 3.01, consisting of 5 inking drums having contact with the
top-roller was used for the determination of the pick-picking IGT value MD. The IGT pick test oil was of middle viscosity with number 404.004.020 and consisted of poly-isobutene’s. The tester was
accompanied with an aluminum disc 10mm wide with smooth edges, and a diameter of 65.0mm. There was also a paper packing consisting of 6 layers of paper with a total thickness of 1.5mm. There was also
an ink pipette for applying an accurate quantity of pick-test oil to the inking device, having a volume of 2ml. The distributing time of the ink was 15s and the inking time of the aluminum disc was
10s. The speed during inking time was 0.4m/s and the temperature was kept at 23.2 °C. The ink layer thus formed was approximately 8μm thick. Also, a pick-start viewer was used to determine the
starting point of pick, which had a light source providing grazing incident light to the paper test piece and observed pick velocity. Petroleum ether 60°-80 °C of analytical reagent grade solvent was
used to wet the cleaning rag, lint-free, which served as cleaning aid. There was also a tank filled with water with a built-in thermometer which kept the ambient temperature near the printing device
at 23 °C with an accuracy of 0.1 °C. Finally, there was a ruler with length of 200mm which measured the distance from the print starting point to the pick starting point. The guillotine IDEAL 1043 GS
DP 02050 was used also for cutting the paper test pieces to dimensions of 55mmx340mm used in the testing of the pick picking IGT. The pH value of the nine paper samples was measured at different
temperatures between 20-25 °C and was corrected at 25 °C using Nernst equation, and using a model Metrohm 716 DMS Titrino pH-meter. All measurements were performed as per standard ISO 6588-1:2020 in
duplicate and the average values were reported here.
Result and Discussion
In Table 1 and 2 below all the measured values of pH, tensile strength, and printability of the paper were recorded. Also, the grammage of the nine copy paper samples was displayed and was determined
as per standard ISO 536/2012 [9]. Regression analysis generates an equation to describe the statistical relationship between one or more predictor variables and the response variable. The p-value for
each term tests the null hypothesis that the coefficient p-value was equal to zero which indicates no effect. When the p-value was low (<0.05), here p=0.001, the null hypothesis could be rejected. A
larger, insignificant p-value would suggest that the changes in the predictor were not associated with changes in the response. In the regression analysis conducted for the pH value, the dependent
variable parameter was the tensile force MD. The independent variables were the pick picking IGT MD value and the ratio tensile strength.
Table 1: The values of the parameters of the pH, and the corresponding values of the tensile strength in KN/m, and printability of the paper, for the nine (9) samples tested since the beginning of
the year 2021. For the grammage property the standard deviation of the measurements was also given. These standard deviations were the basis of defining standard uncertainty, which were called
uncertainties at standard deviation level.
Table 2: The values of the parameters of the pH and the corresponding values of the tensile strength in MPa, and pick picking IGT and grammage for the nine (9) samples tested since the beginning of
the year 2021.
The null hypothesis was rejected here because the significant value was 0.001 in the ANOVA in the above Figure 1. That meant that there was a connection between the pH value and the other parameters
considered i.e., tensile force MD, pick picking IGT value MD and ratio tensile strength. In the output below of coefficient in Figure 2 we could see that the predictor variable pick-picking IGT was
significant because its p-value was 0.000. The p-value for the ratio-tensile was greater but remained lower than the common alpha level of 0.05, i.e., 0.011, which indicated that it was statistically
significant too. Typically, we have used the coefficient p-value to determine which terms we should keep in our regression model. In the model above we had considered keeping both the pick picking
IGT and the ratio tensile strength.
Figure 1: Regression Analysis-linear. The parameter tested was pH value, the dependent variable was tensile force MD, and the independent variables were pick-picking IGT MD, and ratio tensile MD/CD.
Figure 2: Regression Analysis-Coefficients. The parameter tested was pH value, the dependent variable was tensile force MD and the independent variables were pick-picking IGT MD and ratio tensile MD/CD.
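The same kind of regression can be sketched outside SPSS; the snippet below uses Python's statsmodels with placeholder arrays (the paper's Table 1 values are not reproduced here), so the printed p-values will not match the reported 0.001, 0.000 and 0.011, but the procedure is the analogous one:

import numpy as np
import statsmodels.api as sm

# placeholder data standing in for the nine samples of Table 1 (not the measured values)
tensile_MD  = np.array([4.9, 4.7, 4.8, 5.0, 4.6, 4.9, 4.8, 4.7, 5.1])   # dependent variable
pick_IGT_MD = np.array([2.0, 1.8, 1.9, 2.1, 1.6, 2.0, 1.9, 1.7, 2.2])   # predictor
ratio_MD_CD = np.array([3.1, 2.6, 3.4, 2.9, 2.8, 3.3, 3.0, 2.7, 3.2])   # predictor

X = sm.add_constant(np.column_stack([pick_IGT_MD, ratio_MD_CD]))
fit = sm.OLS(tensile_MD, X).fit()
print(fit.f_pvalue)    # overall ANOVA significance of the regression
print(fit.pvalues)     # per-coefficient p-values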
From the Kruskal-Wallis H test [10,11] in Figure 3 it resulted that there was a connection between the median values of the tensile force recorded and the pH, the pick-picking IGT, and the ratio MD/CD tensile strength, because the significance value was 0.903. When testing several independent samples, the conditions of the ANOVA dispersion analysis may not apply, that is, one of the samples may not follow a normal distribution or the sample variances may not be equal; to examine whether the groups into which the values of a variable are divided are the same, we applied the Kruskal-Wallis test, which checks the null hypothesis H[o]: the samples come from the same distribution. The Kruskal-Wallis H test is a rank-based nonparametric test that can be used to determine if there are statistically significant differences between two or more groups of an independent variable on a continuous or ordinal dependent variable. It is considered the non-parametric alternative to the one-way ANOVA.
Figure 3:Non-parametric tests, the Kruskal-Wallis test.
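For readers without SPSS, the equivalent test is available in SciPy; the groups below are placeholders standing in for the four measured quantities, so the printed p-value will not reproduce the reported 0.903:

from scipy.stats import kruskal

# placeholder observations for the nine samples (not the paper's data)
pH_vals     = [10.1, 10.2, 10.3, 10.4, 10.4, 10.5, 10.5, 10.6, 10.7]
tensile_MD  = [4.6, 4.7, 4.7, 4.8, 4.8, 4.9, 4.9, 5.0, 5.1]
pick_IGT_MD = [1.6, 1.7, 1.8, 1.8, 1.9, 2.0, 2.0, 2.1, 2.2]
ratio_MD_CD = [2.7, 2.8, 2.9, 3.0, 3.0, 3.1, 3.1, 3.2, 3.3]

H_stat, p_value = kruskal(pH_vals, tensile_MD, pick_IGT_MD, ratio_MD_CD)
print(H_stat, p_value)   # compare the p-value against the 0.05 level, as in the paper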
There was a connection between the median values of the tensile force recorded and the pH and pick-picking IGT values because the significance value was 0.903 in the Kruskal-Wallis test. The null hypothesis of no effect or no connection between the variables was rejected because the significance value was 0.001 in the ANOVA; the null hypothesis is a hypothesis of no difference or no connection. The p-value is less than the chosen significance level and we reject the null hypothesis. The chosen significance level at which we reject H[0] is here 0.1%. This is statistically highly significant.
The data sets that had been obtained from this study were tested for normality, and a suitable test could be chosen for significance testing. The Shapiro-Wilk test [12] was a more powerful test than
the Kolmogorov-Smirnov test. A confidence of 95% was chosen when we reported the test to be non-normal. The null hypothesis for the Shapiro-Wilk test was that the variable considered was normally
distributed in the population. The null hypothesis was rejected when p<0.05. From the test Shapiro-Wilk (n<50) we had significant value =0.019<0.05 and we concluded that the variable ratio tensile
strength did not follow a normal distribution. The value of the Shapiro-Wilk test was below 0.05 and the data significantly deviated from the normal distribution. The Kolmogorov-Smirnov test was
modified to serve as a goodness of fit test. Because the sample size was nine (9) we have used the Shapiro-Wilk test. For the skewed data a p=0.019 (Figure 4) suggested strong evidence of
non-normality for the ratio tensile strength. For the normally distributed data tensile force p was 0.235 so that the null hypothesis was retained at the 0.05 level of significance. For the
approximately normally distributed data p=0.052 for the pick picking IGT parameter so that the null hypothesis was also retained at the 0.05 level of significance. Therefore, normality could be
assumed for this data also.
Figure 4:Tests of normality-Shapiro-wilk test.
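A hedged sketch of the same normality check with SciPy's Shapiro-Wilk test (suitable for n < 50, as noted above); the nine values below are placeholders, not the measured ratios:

from scipy.stats import shapiro

ratio_MD_CD = [2.7, 2.8, 2.9, 3.0, 3.0, 3.1, 3.1, 3.2, 4.1]   # placeholder data, n = 9
stat, p = shapiro(ratio_MD_CD)
print(stat, p)
if p < 0.05:
    print("normality rejected at the 0.05 level")   # the paper reports p = 0.019 for this variable
else:
    print("normality not rejected")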
The normal Q-Q plot in the above Figure 5 was the alternative graphical method of assessing normality to a histogram [13,14]. It was easy to use because we had a rather small sample size. For the data to be normally distributed, the scatter should lie as close to the line as possible, with no obvious pattern coming away from the line. The detrended normal Q-Q plot depicted the differences between the observed and the predicted values. When, in general, the distribution of the values of the dependent variable is normal, then the values of the difference between observed and predicted fall randomly about the zero line. This was not the case with the sample data, since there were groups of values far above and below the zero line. Then a one-sample test was performed (Figure 6).
Figure 5:Two plots are presented, one of the normal Q-Q plot of tensile force and the detrended normal Q-Q plot of tensile force.
Figure 6:One-sample t-test.
The null hypothesis for one-sample test H[o] assumed that the difference between the true mean and the comparison values was equal to zero. Here the significant value is sig.0.000<0.05 and we
rejected the null hypothesis. From the one-sample t-test, because the confidence interval between lower and upper does not contain zero we could conclude in a level of importance α=0.01 from these
sample values that there was a difference between the average values of pH measured during this test study. Since the beginning of the year 2021 from nine copy-paper samples tested an optimum pH
value was found to be pH 10.7 when the tensile force had a MD value of 4.9KN/m and a pick-picking IGT value of MD=2.0m/s. The tensile strength ratio was then 3.06.
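The one-sample t-test can be sketched in the same way with SciPy; the nine pH values below are placeholders, and the comparison value of 10.0 is an assumption chosen only for illustration:

from scipy.stats import ttest_1samp

pH_vals = [10.14, 10.24, 10.30, 10.40, 10.40, 10.50, 10.52, 10.60, 10.66]   # placeholder values
t_stat, p = ttest_1samp(pH_vals, popmean=10.0)
print(t_stat, p)   # p < 0.01 would indicate a difference at the alpha = 0.01 level used in the paper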
Then in the study, the regression analysis was performed again for the pH value. Then the dependent variable parameter was the tensile force CD. The independent variables were the pick picking IGT MD
value and the ratio tensile strength. The same analyses and statistics estimates were repeated for the values of the parameters of the pH and the corresponding values of the tensile strength CD and
the pick picking IGT MD, as well as the tensile strength ratio (Figure 7). In the regression analysis conducted for the pH value the dependent variable parameter was the tensile force CD. The
independent variables were the pick picking IGT MD value and the ratio tensile strength. The significant value was then reported to be 0.000. This showed that this was less than 0.001 which in turn
meant that it was less than the chosen significance level of 0.01. Thus, we can regard the null hypothesis as refuted and that there was really an association between pH value and tensile force in
the CD direction and pick picking IGT MD and ratio tensile (Figure 8). Because the p-value was 0.118 in the Kruskal-Wallis H we did not have enough evidence to reject the null hypothesis that the
group medians were all equal. From the descriptive statistics the following boxplot structure emerged.
Figure 7:Regression Analysis.
Figure 8:Non-parametric tests, the Kruskal-Wallis test Because the p-value was 0.118 in the Kruskal-Wallis H we did not have enough evidence to reject the null hypothesis that the group medians were
all equal.
The box and whisker plot in Figure 9 displayed the five-number summary of the set of the data tested [15]. These were the minimum 10.14, the first quartile 10.24, the median 10.4, third quartile
10.52 and the maximum 10.66. The box was drawn from the first quartile to the third quartile. The vertical line went through the box at the median pH10.4. The median was a common measure of the
center of the data. The interquartile range box represents the middle 50% of the data. The whiskers represented the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.
The univariate analysis in this study involved describing the distribution of a single variable i.e., pH including the range and the quartiles of the dataset. The shape of the distribution was
described here via the indices of skewness and kurtosis. The characteristics of the pH variable distribution was depicted in a graphical format as the stem and leaf display. Because the skewness was
0.150 and was between -0.5 and 0.5, the data were fairly symmetrical. The skewness was usually described as a measure of the dataset’s symmetry. The perfectly symmetrical data set would have a
skewness of 0. The kurtosis also of a normal distribution would be 0. Because here the dataset had a negative kurtosis, i.e., -0.916, it had less in the tails than the normal distribution. As the
kurtosis decreases the tails became lighter. Here the kurtosis of the dataset was -0.916. Since that value was less than 0 it was considered to be a “light-tailed” dataset. It had as much data in
each tail as it did in the peak (Figure 10).
Figure 9:Boxplot structure.
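The five-number summary, skewness and kurtosis behind the boxplot can be recomputed with NumPy/SciPy; the pH values below are placeholders chosen only to be roughly consistent with the reported minimum, quartiles, median and maximum:

import numpy as np
from scipy.stats import skew, kurtosis

pH_vals = np.array([10.14, 10.22, 10.26, 10.35, 10.40, 10.45, 10.50, 10.54, 10.66])  # placeholders
q1, med, q3 = np.percentile(pH_vals, [25, 50, 75])
print(pH_vals.min(), q1, med, q3, pH_vals.max())     # five-number summary for the boxplot
print(skew(pH_vals), kurtosis(pH_vals))              # kurtosis() returns excess kurtosis (0 for a normal distribution)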
Figure 10:One-sample t-test.
From the one-sample t-test, we could conclude in a level of importance α=0.01 from the above sample values that there was a difference between the average values of pH measured. This stems from the
fact that the confidence interval between the lower and upper does not contain zero. Since the beginning of the year 2021 from nine copy-paper samples tested an optimum pH value was 10.5 when the
tensile force had a CD value of 2.7KN/m and a pick-picking IGT value of MD=1.7m/s. The tensile strength ratio was then 2.1.
The aim of this study was to investigate the effect of pH on the final copy- paper product’s tensile strength and printability and thus on paper-fiber properties. It was known that the pH level
affected the surface charge of the cellulosic fibers, the fiber swelling, and finally the paper strength [16]. The pH level affected the fiber surface charge, and the network structure of the paper
sheets. The chemical composition of the pulp of the copy paper samples also greatly affected fiber strength [17]. The physical properties especially the tensile strength of the copy paper depended
primarily on the degree of bonding between the fibers. They increased with increasing fiber length, fiber strength and fiber joint strength. The fiber-fiber joint strength increased with increasing
surface charge of the fibers due to increased surface swelling. The paper was affected by changes in pH because of the surface charges of the fibers. As the pH became more alkaline than pH 3 the
surface charge of the fibers increased. At a pH greater than 3 there was a significantly higher surface fiber load and tensile value. A pH 9 showed denser fiber networks than a pH 3. Charged groups
separated and moved away to greater pH levels [16], increasing swelling of the fibers and consequently increasing the tensile strength of the final copy paper products.
It was known that the pH in general had an effect on the original strength of paper and a very great effect on the permanence of paper [18]. Data and analysis previously acquired, not published, were
included in this study for comparison purposes. From seventy-three (73) separate experiments conducted previously during the year 2019, tests had shown that when the pH was 9.6 there was a high value
of tensile force MD=5.8KN/m and also a high value of pick picking IGT of MD=1.8m/s. The pH 9.6 was then one optimum pH value. Also, then when we had a pH value of 9.97 then the pick picking IGT test
produced a high value of MD=2.2m/s. Then, in year 2020, forty-three (43) separate experiments were carried out and one optimum pH value of 9.77 was found and then the tensile force had a MD value of
5.6KN/m and a pick-picking IGT value of MD=2.2m/s.
Since the beginning of the year 2021 nine sets of experiments in different copy paper samples were conducted and an optimum pH value was 10.7 and then the tensile force had a MD value of 4.9KN/m and
a pick picking IGT value of MD=2.0m/s. The tensile strength ratio was 3.06. Also, an optimum pH value was pH 10.5 and then the tensile force had a CD value of 2.7KN/m and a pick picking IGT value of
MD 1.7m/s. The tensile strength ratio was then 2.1. The pH value was found as an important parameter in the production of enzymes [19]. As the pH became more alkaline the enzymatic hydrolysis, the
hydrolysis of β(1→4) glycosidic bonds diminished and we had a lower number of hydrogen bonds and a crystalline structure in cellulose which favored the high tensile strength and the high pick-picking
IGT value [20,21] as was concluded also from the present study.
From the above experiments we concluded that as the pH value increased the ratio of the tensile strength values also increased. As the pH value raised, the hydrolysis of the β(1→4) glycosidic bonds
may have been lowered, the hydrogen bonds were kept low and the crystalline structure of the cellulose was preserved. The alkaline pH value may have affected the network structure of the fibers of
the copy paper sheets. As the pH raised the network of fibers became denser. The paper was affected by the changes in pH probably because of the surface loads of the paper fibers. The above trials
indicated that a higher alkaline pH value may have led to superior copy paper properties, such as tensile strength and printability.
1. Thu Dieu Le, Luyen Thi Tran, Hue Thi Minh Dang, Thi Thu Huyen Tran, Hoang Vinh Tran (2021) Graphene oxide/polyvinyl alcohol/Fe[3]O[4] nanocomposite: An efficient adsorbent for Co(II) ion removal.
Journal of Analytical Methods in Chemistry, pp.1-10.
2. Yunpeng Shang, Hui Gao, Lei Li, Chaoqun Ma, Jiao Gu, et al. (2021) green synthesis of fluorescent ag nanoclusters for detecting Cu2+ ions and its “switch-on” sensing application for GSH. Journal
of Spectroscopy, pp.1-10.
3. Jonsson Hanna (2010) Exploring the structure of oligo- and polysaccharides. Synthesis and NMR spectroscopy, PhD thesis, Stockholm University
4. Devereaux ZJ, Zhu Y, Rodgers MT (2019) Relative glycosidic bond stabilities of naturally occurring methylguanosines: 7-methylation is intrinsically activating. European Journal of Mass
Spectroscopy 25(1): 16-29.
5. (The International Organization for Standardization) ISO 1924-2:2008 Paper and board-Determination of tensile properties-Part 2: Constant rate of elongation method (20mm/min).
6. Ashraf W, Ishak MR, Zuhri MYM, Yidris N, Yaacob AM (2021) experimental investigation on the mechanical properties of a sandwich structure made of flax/glass hybrid composite face sheet and
honeycomb core. International Journal of Polymer Science, pp. 1-10.
7. (The International Organization for Standardization) ISO 6588-1:2012-Paper, board and pulps-Determination of pH of aqueous extracts-Part 1: Cold extraction.
8. (The International Organization for Standardization) ISO 3783:2006 Paper and board-Determination of resistance to picking-Accelerated speed method using the IGT-type tester (electric model).
9. (The International Organization for Standardization) ISO 536:2012 Paper and board-Determination of grammage.
10. Cohen A, Sackrowitz HB (1991) Tests for independence in contingency tables with ordered categories. Journal of Multivariate Analysis 36: 56-67.
11. Taheri SM, Hesamian C (2011) Goodman-Kruskal measure of association for fuzzy-categorized variables. Kybernetika 47(1): 110-122.
12. Hanusz Z, Tarasinska J, Zielinski W (2016) Shapiro-Wilk test with known mean. Revstat-Statistical Journal 14(1): 89-100.
13. Keya Rani Das, Rahmatullah Imon AHM (2016) A brief review of tests for normality. American Journal of Theoretical and Applied Statistics 5(1): 5-12.
14. Zubir NSA, Abas MA, Ismail N, Rahiman MHF, Tajuddin SN, et al. (2017) Statistical analysis of agarwood oil compounds in discriminating the quality of agarwood oil. Journal of Fundamental and
Applied Sciences 9(45): 45-61.
15. Williamson DF, Parker RA, Kendrick JS (1989) The box plot: A simple visual method to interpret data. Annals of Internal Medicine 110(11): 916-921.
16. Jansson Jennie (2015) The influence of pH on fiber and paper properties. Master Thesis, Faculty of Health Science and Technology Department of Engineering and Chemical Science, Chemical
Engineering, Karlstand University, Sweden.
17. Hafizur Rahman, Mikael E Lindström, Peter Sandström, Lennart Salmén, Per Engstrand (2017) The effect of increased pulp yield using additives in the softwood kraft cook on the physical properties
of low-grammage hand sheets. Nordic Pulp & Paper Research Journal 32(3).
18. Casey JP (1980) Pulp and paper, chemistry and chemical technology. In: Casey JP (Ed.), 3rd(edn), Volume 2.
19. Yabefa JA, Ocholi Y, Odubo GF (2014) Effect of temperature and changes in medium pH on enzymatic hydrolysis of β(1-4) glycosidic bond in orange mesocarp. Asian Journal of Plant Science and
Research 4(4): 21-24.
20. Anna Palme, Hans Theliander, Harald Brelid (2016) Acid hydrolysis of cellulosic fibres: Comparison of bleached kraft pulp, dissolving pulps and cotton textile cellulose. Carbohydrate Polymers
136: 1281-1287.
21. Yanjun Xie, Andreas Krause, Holger Militz, Hrvoje Turkulin, Klaus Richter, et al. (2007) Effect of treatments with 1,3-dimethylol-4,5-dihydroxy-ethyleneurea (DMDHEU) on the tensile properties of
wood. Holzforschung 61(1): 43-50.
© 2021 Chryssou K. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and build upon your work | {"url":"https://crimsonpublishers.com/acsr/fulltext/ACSR.000543.php","timestamp":"2024-11-03T03:45:52Z","content_type":"text/html","content_length":"196493","record_id":"<urn:uuid:79491b2c-5dc8-4150-8fe2-18260d442f51>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00336.warc.gz"} |