American Mathematical Society
Coloring With a Limited Paintbox
Stephanie van Willigenburg
Communicated by Notices Associate Editor Steven Sam
On October 23, 1852 Francis Guthrie asked his brother Frederick, a student at University College London, England, to ask his professor, Augustus De Morgan, whether four colors sufficed to color the
countries of a map such that two countries sharing a border must be colored differently. De Morgan had no answer to what would become known as the four-color problem, but wrote that day to William
Rowan Hamilton at Trinity College, Dublin, Ireland, saying:
Query cannot a necessity for five or more be invented…. If you retort with some very simple case which makes me out a stupid animal, I think I must do as the Sphynx did.^1
The Sphinx killed herself after Oedipus solved her riddle.
Figure 1.
The letter from De Morgan to Hamilton [18].
Hamilton wrote back on 26 October 1852 (which MacTutor [12] wryly notes was “showing the efficiency of both himself and the postal service”) saying:
I am not likely to attempt your quaternion of colour very soon.
In fact this problem would only be solved in 1976 courtesy of Kenneth Appel, Wolfgang Haken, John Koch, and the first major computer-assisted proof. Along the way many tools were developed to resolve
it, including the chromatic polynomial [3], which eventually played no role in the solution but became its own avenue of research. Over the next few pages we will explore the chromatic polynomial, its
generalization called the chromatic symmetric function, and two problems that, much like the four-color problem, are straightforward to state but remain open after 25 years.
The chromatic polynomial: A limited paint palette with an unlimited amount of paint
As we might guess from the name, the chromatic polynomial is a polynomial arising from coloring, and rather than coloring maps we will be coloring graphs. A graph consists of a set of vertices and a
set of edges connecting the vertices. The number of edge ends meeting at a vertex is its degree, and two vertices connected by an edge are adjacent. The graphs of interest to us are simple, which means that:
the number of vertices is finite;
no pair of vertices has more than one edge connecting them (no multiple edges);
no vertex has an edge connecting it to itself (no loops).
We say a graph is connected if every pair of vertices has at least one sequence of edges connecting them, and disconnected otherwise. Three famous graph families that will be useful to us later are
the following, examples of which can be seen in Figure 2.
Complete graphs
The complete graph K_n has n vertices, each pair of which are adjacent.
Trees
A connected graph is a tree if every pair of vertices has exactly one sequence of edges connecting them.
Path graphs
The path graph P_n has n vertices and is the tree with 2 vertices of degree 1 and n − 2 vertices of degree 2.
Figure 2.
From left to right, K_4, P_4, and the tree K_{1,3}, also called the claw.
A proper coloring c, with k colors, of a graph G is a function
c : V(G) → {1, 2, …, k}
such that if v_1, v_2 are adjacent, then c(v_1) ≠ c(v_2).
With this in mind, the chromatic polynomial P(G, k) of a graph G is the number of proper colorings of the vertices of G with k colors.
For example, let us color the vertices of K_n with k colors. For our first vertex we have k colors to choose from. Then since all our vertices are adjacent, by definition, for the second vertex we only have k − 1 colors to choose from, for the third vertex we only have k − 2 colors to choose from, and so on until for the n-th vertex we only have k − n + 1 colors to choose from. Hence the total number of proper colorings of the vertices of K_n with k colors is
P(K_n, k) = k(k − 1)(k − 2) ⋯ (k − n + 1).
As a second example, let us color the vertices of any tree T that has n vertices with k colors. Choose a vertex, any vertex. For it we have k colors to choose from. For all the vertices adjacent to it we have k − 1 colors to choose from. For all the vertices adjacent to those we have k − 1 colors to choose from, and for all the vertices adjacent to those we have k − 1 colors to choose from, and so on. We note that since every pair of vertices has exactly one sequence of edges connecting them, by definition, we will only have one opportunity to color each vertex using our greedy algorithm, and when we do we will have k − 1 colors to choose from after we color the first vertex. Hence the total number of proper colorings of the vertices of T with k colors is
P(T, k) = k(k − 1)^{n−1}.
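Both counting arguments above can be checked by brute force over all colorings. A minimal sketch in Python (the small graphs below are illustrative examples):

```python
from itertools import product

def count_proper_colorings(n_vertices, edges, k):
    """Count maps c: {0..n-1} -> {0..k-1} with c(u) != c(v) on every edge."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(k), repeat=n_vertices)
    )

# K_3, the complete graph on 3 vertices: k(k-1)(k-2) proper colorings.
k3 = [(0, 1), (0, 2), (1, 2)]
print(count_proper_colorings(3, k3, 4))    # 4 * 3 * 2 = 24

# The claw K_{1,3}, a tree on n = 4 vertices: k(k-1)^(n-1) proper colorings.
claw = [(0, 1), (0, 2), (0, 3)]
print(count_proper_colorings(4, claw, 4))  # 4 * 3^3 = 108
```

Any other tree on 4 vertices, such as the path P_4, gives the same count, as the greedy argument predicts.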
As we can see from these examples, the chromatic polynomial is in fact a polynomial, even though this may not seem immediate from the definition. Lastly, returning to the motivation for its
definition, Birkhoff [3] had been hoping to solve the four-color problem by showing that P(G, 4) > 0 whenever G was a planar graph, namely a graph that can be drawn in the plane without edges crossing.
The chromatic symmetric function: An unlimited paint palette with a limited amount of paint
In order to describe our main protagonist we need to consider commuting variables x_1, x_2, … and extend our proper colorings from k colors to infinitely many colors, so that now a proper coloring of a graph G is a function
c : V(G) → {1, 2, 3, …}
such that if v_1, v_2 are adjacent, then c(v_1) ≠ c(v_2). However, we must not be misled by the infinite paint palette, because if we have a graph with n vertices, then there is as much information contained in a paintbox with n colors as there is in a paintbox with infinitely many colors. Instead the crucial enhancement is that while P(G, k) tells us the number of proper colorings with k colors for all k, the function X_G defined below will tell us the more refined information of how many times each color is used. Moreover, because we can package all this information into a single symmetric function, as we will see, we can bring to bear the deep, extensive, and well-developed theory of symmetric functions. Thus let us define the chromatic symmetric function, which was introduced in 1995 by Richard Stanley [16] as a generalization of the chromatic polynomial and has been a rich avenue of research ever since. A symmetric function is one where permuting the indices fixes the function, and so our chromatic symmetric function defined below is indeed a symmetric function because permuting the colors fixes the function.
Definition 1.
For a graph G with vertex set V(G) = {v_1, v_2, …, v_n} the chromatic symmetric function is
X_G = Σ_c x_{c(v_1)} x_{c(v_2)} ⋯ x_{c(v_n)},
where the sum is over all proper colorings c of G.
For example, let us color K_4 in Figure 2. Since every pair of vertices is adjacent, all four vertices will need different colors for a proper coloring, so each proper coloring contributes a monomial x_i x_j x_k x_l in four distinct indices
that can be permuted and hence, omitting details of the calculation, every such monomial appears in X_{K_4} with coefficient 24.
As a second example, let us color the two trees in Figure 2, P_4 and K_{1,3}. Note that we can always color the vertices four different colors, so again terms such as x_i x_j x_k x_l will arise in both cases. However, if we try to color the vertices in only two colors, then in P_4 we can color at most two vertices the same color, giving monomials of the form x_i^2 x_j^2
whose indices can be swapped and hence, omitting details of the calculation, such terms appear in X_{P_4}.
In contrast, in K_{1,3} we can color three vertices the same color, giving monomials of the form x_i^3 x_j
whose indices can be swapped and hence, omitting details of the calculation, such terms appear in X_{K_{1,3}}.
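The refined color-usage information can be made concrete by tallying proper colorings according to the sizes of their color classes; a brute-force sketch (vertices of the two trees of Figure 2 are labeled 0–3):

```python
from collections import Counter
from itertools import product

def colorings_by_class_sizes(n, edges, k):
    """Tally proper k-colorings by the sorted sizes of their color classes."""
    tally = Counter()
    for c in product(range(k), repeat=n):
        if all(c[u] != c[v] for u, v in edges):
            sizes = tuple(sorted(Counter(c).values(), reverse=True))
            tally[sizes] += 1
    return dict(tally)

path = [(0, 1), (1, 2), (2, 3)]   # P_4
claw = [(0, 1), (0, 2), (0, 3)]   # K_{1,3}

# With only 2 colors, P_4 splits its vertices 2 + 2 while the claw splits
# 3 + 1, mirroring the x_i^2 x_j^2 versus x_i^3 x_j monomials above.
print(colorings_by_class_sizes(4, path, 2))  # {(2, 2): 2}
print(colorings_by_class_sizes(4, claw, 2))  # {(3, 1): 2}
```

This tally is exactly the information that the chromatic polynomial discards and the chromatic symmetric function retains.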
In his paper, Stanley noted that if x_1 = x_2 = ⋯ = x_k = 1 and x_{k+1} = x_{k+2} = ⋯ = 0, then X_G = P(G, k), so the chromatic symmetric function naturally generalizes the chromatic polynomial. For example, it is well known that if we evaluate P(G, −1) then we obtain the number of acyclic orientations of G up to a sign [15], where an acyclic orientation is a placement of directions on the edges of G in such a way that following the directions never leads to us going round in a cycle. With this in mind, X_G, when written as a sum of elementary symmetric functions, which we will meet later, refines this result by determining the number of acyclic orientations of G with j sinks, where a sink is a vertex v whose incident edges are all directed towards v, and we look at the coefficients of those elementary symmetric functions whose indices have j parts. This result can in turn be further refined, but that is a story for another time [1]. However, there is one particular way in which P(G, k) and X_G seem to differ, which leads us to our first open problem.
Distinguishing trees
Given a hydrocarbon, C_nH_{2n+2}, how many isomers are there? Isomers have the same chemical formula but different configurations, so equivalently how many different configurations can C_nH_{2n+2} have? This problem remains open to this day.
Figure 3.
Butane, C_4H_{10}, has two isomers [19].
However, we note that if we omit the hydrogens in the butane isomers, then we have two trees. In fact, they are P_4 and K_{1,3} from earlier. This is true in general: if we omit the hydrogens, then we obtain a tree. So one way to distinguish hydrocarbons would be to know when two trees are the same, or isomorphic. Informally two graphs are isomorphic if we can redraw one so it is the other. More formally, two graphs G and H are isomorphic if and only if there exists a 1-1 correspondence
φ : V(G) → V(H)
such that if v_1, v_2 are adjacent in G, then φ(v_1), φ(v_2) are adjacent in H. In practice it is often easier to prove that two graphs are not isomorphic by finding a trait or invariant that differs, such as the number of vertices or edges. For example, P_4 and K_{1,3} are not isomorphic because P_4 has vertices of degree 2 whereas K_{1,3} does not. However, they both have the same chromatic polynomial, k(k − 1)^3, so the chromatic polynomial is an example of an invariant that cannot be used to prove that P_4 and K_{1,3} are not isomorphic. From earlier we know that every tree on n vertices has chromatic polynomial k(k − 1)^{n−1}, so it can never be used to distinguish non-isomorphic trees with the same number of vertices. However, when we calculated the chromatic symmetric functions of P_4 and K_{1,3} we saw that
X_{P_4} ≠ X_{K_{1,3}},
so the chromatic symmetric function could be used to distinguish non-isomorphic trees. This was proposed by Stanley [16], and is now known as Stanley’s tree problem, which we will formulate below as a
conjecture to prove.
Conjecture 2 (Stanley’s tree problem).
If T and T′ are non-isomorphic trees, then X_T ≠ X_{T′}.
This has been verified for all trees with up to 29 vertices [10], and has been proved for only a few families of trees:
Spiders [13]
A spider is a tree consisting of disjoint path graphs and a central vertex joined to a degree 1 vertex in each path graph.
Caterpillars [2, 11, 13]
A caterpillar is a tree consisting of a path graph with degree 1 vertices joined to (some of) the vertices in the path graph.
For example,
is both a spider and a caterpillar.
A positive outlook
For our second open problem we need to introduce the space that chromatic symmetric functions live in, the algebra of symmetric functions. The algebra of symmetric functions, Λ, is a subalgebra of ℚ[[x_1, x_2, …]] generated by the r-th elementary symmetric functions
e_r = Σ_{i_1 < i_2 < ⋯ < i_r} x_{i_1} x_{i_2} ⋯ x_{i_r} for r ≥ 1,
for instance e_2 = x_1x_2 + x_1x_3 + x_2x_3 + ⋯, and that Λ is freely generated by the e_r is known as the fundamental theorem of symmetric functions. A basis for Λ is given by all elementary symmetric functions
e_λ = e_{λ_1} e_{λ_2} ⋯ e_{λ_k},
where λ = (λ_1, λ_2, …, λ_k) is a list of weakly decreasing positive integers, for instance e_{(2,1)} = e_2 e_1, and we say that any element of Λ is e-positive if it can be written as a positive linear combination of elementary symmetric functions, and e-positivity is
of interest due to its connection to permutation representations. For example,
X_{K_{1,3}} is not e-positive, and K_{1,3} is the smallest graph whose chromatic symmetric function is not e-positive. However,
X_{P_4} is e-positive. More generally, we can see that
X_{K_n} = n! e_n,
because each vertex must be colored a different color, and hence each proper coloring gives a monomial x_{i_1} x_{i_2} ⋯ x_{i_n}
where all the indices are different. Also the colors of a given proper coloring can be permuted in n! different ways, and each gives a valid proper coloring. Hence X_{K_n} = n! e_n.
The complete graphs are a special case of a family of graphs called unit interval graphs. A unit interval graph G on n vertices is formed as follows. Choose a collection of n intervals, all of the same length, such that no interval contains another. Then G has vertices labeled 1, …, n and an edge between vertices i and j if the i-th and j-th intervals overlap. For example, let G be a unit interval graph on 4 vertices. Depending on the choice of intervals, G can be P_4, and it can be K_4. However, G ≠ K_{1,3} for any choice of intervals. This leads us
to our second open problem that was originally posed as a conjecture by Richard Stanley and John Stembridge in 1993 in terms of partially ordered sets not containing a chain of three elements and one
further element incomparable to the chain. While the conjecture’s name still bears this terminology, it is easiest for us to state it using unit interval graphs, a reduction that is due to Mathieu
Guay-Paquet [9].
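The interval construction can be sketched in a few lines; here the left endpoints of unit-length intervals [a_i, a_i + 1] are arbitrary illustrative values, and adjacency is taken to mean strict overlap:

```python
def unit_interval_graph(lefts):
    """Edges of the graph on unit intervals [a_i, a_i + 1].

    Vertices are labeled 0..n-1 in order of sorted left endpoints; two
    vertices are adjacent when their intervals overlap.
    """
    pts = sorted(lefts)
    n = len(pts)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if pts[j] < pts[i] + 1]

# Four intervals strung out in a line give the path P_4 ...
print(unit_interval_graph([0.0, 0.9, 1.8, 2.7]))  # [(0, 1), (1, 2), (2, 3)]
# ... while four mutually overlapping intervals give the complete graph K_4.
print(unit_interval_graph([0.0, 0.1, 0.2, 0.3]))  # all six pairs
```

No choice of left endpoints produces the claw: one vertex adjacent to three pairwise non-adjacent vertices is impossible for overlapping unit intervals.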
Conjecture 3 (The (3+1)-free conjecture).
If G is a unit interval graph, then X_G is e-positive.
Table 1.
Data on e-positivity and trees.
Number of vertices, n:                    1  2  3  4  5  6  7   8   9   10   11   12   13
Number of trees:                          1  1  1  2  3  6  11  23  47  106  235  551  1301
Number of trees whose X_T is e-positive:  1  1  1  1  2  1  3   1   2   2    5    1    4
Like Stanley’s tree problem, this conjecture has been proved for only a few families, for example considering graphs on vertices we have
Complete graphs, namely K_n;
Complements of triangle-free graphs [17];
Triangular ladders [5].
In particular, a number of these and other results have been proved using generalizations of the chromatic symmetric function to quasisymmetric functions [14] or to symmetric functions in noncommuting variables [8]. Furthermore, through the generalization to quasisymmetric functions, chromatic symmetric functions of unit interval graphs have been shown to encode the decomposition into irreducible representations of a symmetric group action on the cohomology of regular semisimple Hessenberg varieties, for example [4].
One thing we can observe from the definition of unit interval graphs is that they can be thought of as a collection of complete graphs that have been lined up in a row and then squashed together. Looking individually at each complete graph we notice that it does not contain K_{1,3} as an induced subgraph (a subset of vertices together with all edges connecting them). This led Stanley to widen his conjecture to propose that the chromatic symmetric functions of all graphs that do not contain K_{1,3} as a contraction (shrinking edges and identifying vertices) are e-positive. This conjecture was recently shown to be false by Samantha Dahlberg, Angèle Foley and the author [6] who proved there exist infinite families of graphs that satisfy every combination of whether or not
they contain K_{1,3} as an induced subgraph;
they contain K_{1,3} as a contraction;
their chromatic symmetric functions are e-positive.
Some of these can be seen in Figure 4.
Figure 4.
Graphs satisfying every combination.
Now that we have met these two famed open problems on chromatic symmetric functions, it is time to tie these two research threads together and end with one final open problem. From our discussion on the (3+1)-free conjecture we see that the chromatic symmetric function of the tree K_{1,3} is not e-positive, although that of the path graph P_4 is. So a natural question to ask is when is the chromatic symmetric function of a tree e-positive? The data computed so far by Samantha Dahlberg, Adrian She and the author [7] in Table 1 implies the answer is not often beyond the chromatic symmetric functions of the path graphs, and their conjecture is our final open problem, which has been confirmed by Kai Zheng [20] for vertices of degree at least six.
Conjecture 4.
If a tree contains a vertex of degree at least four, then its chromatic symmetric function is not e-positive.
The author would like to thank the referees, Farid Aliniaeifard, Niall Christie, Sheila Sundaram, and Victor Wang for helpful suggestions, and especially Richard Stanley whose suggestions, insights,
and stories made writing and researching this article all the more chromatic.
Opening image is courtesy of bgblue via Getty.
Figure 1 is courtesy of Wikimedia Commons.
Figure 3 is courtesy of OpenStax. Licensed under Creative Commons Attribution 4.0 International.
All other figures are courtesy of the author.
Photo of Stephanie van Willigenburg is by Niall Christie.
Learn Different Types of Probability Distributions for Machine Learning and Data Science | Python Code - MLK - Machine Learning Knowledge
If you are a beginner, then you should really be aware of the different types of probability distributions used in machine learning and data science projects. This is important because certain ML models assume a specific probability distribution of the underlying data to work efficiently, so this knowledge will help you to choose the right model. Also, knowledge of probability distributions will help you immensely to understand the characteristics of data during the exploratory data analysis (EDA) phase.
To help you out, in this article we will discuss different types of probability distribution you should know for machine learning or data science. We will also explain each of the probability
distributions in Python code for better understanding.
Different Types of Probability Distribution (with Python Code)
1. Bernoulli Distribution
A Bernoulli Distribution denotes the random probability of an event that only has two possible discrete outcomes like 0 and 1. The variable that follows Bernoulli probability distribution is known as
the Bernoulli variable and the event is called the Bernoulli event.
If X is a Bernoulli variable that can take only the two discrete values 0 and 1, with success probability p between 0 and 1, then the Probability Mass Function (PMF) of a Bernoulli distribution can be denoted mathematically as below –
P(X = x) = p^x (1 − p)^(1−x),  for x ∈ {0, 1}
Example of Bernoulli Distribution
The tossing of a coin is a Bernoulli event since it has only two possible discrete outcomes – Head (i.e. 1) and Tail (i.e. 0). Hence it follows the Bernoulli probability distribution.
Python Code for Bernoulli Distribution
Let us see how to visualize Bernoulli Distribution in Python code below –
from scipy.stats import bernoulli
import seaborn as sns
data_bern = bernoulli.rvs(size=10000,p=0.6)
ax = sns.distplot(data_bern,
                  hist_kws={"linewidth": 15, 'alpha': 1})
ax.set(xlabel='Bernoulli Distribution', ylabel='Frequency')
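As a quick sanity check of the PMF above, the formula P(X = x) = p^x (1 − p)^(1−x) can be compared against an empirical frequency. This sketch uses p = 0.6 to match the snippet, but with a plain-Python generator rather than scipy:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
p = 0.6

# Bernoulli PMF: P(X = 1) = p and P(X = 0) = 1 - p.
pmf = {x: p**x * (1 - p)**(1 - x) for x in (0, 1)}

# Empirical check: the frequency of 1s should approach p.
samples = [1 if random.random() < p else 0 for _ in range(100_000)]
freq = sum(samples) / len(samples)
print(pmf[1], round(freq, 2))  # 0.6, and roughly 0.6
```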
2. Binomial Distribution
The binomial distribution describes the sum of independent Bernoulli trials, each with just two outcomes like 0 or 1, repeated multiple times.
The mathematical representation of the binomial distribution, for n trials with success probability p, is given by:
P(X = k) = C(n, k) p^k (1 − p)^(n−k),  for k = 0, 1, …, n
Example of Binomial Distribution
In the previous discussion, we saw that the tossing of a coin is a Bernoulli event. Now if we keep on repeating the tossing of the coin it will assume the Binomial probability distribution.
Python Code for Binomial Distribution
Let us see how to visualize Binomial Distribution in Python code below –
from scipy.stats import binom
data_binom = binom.rvs(n=10,p=0.8,size=10000)
ax = sns.distplot(data_binom,
                  hist_kws={"linewidth": 15, 'alpha': 1})
ax.set(xlabel='Binomial Distribution', ylabel='Frequency')
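The PMF behind this simulation, P(X = k) = C(n, k) p^k (1 − p)^(n−k), can also be evaluated directly with the standard library; n = 10 and p = 0.8 match the snippet above:

```python
from math import comb

n, p = 10, 0.8
# Binomial PMF over all possible success counts k = 0..n.
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

print(round(sum(pmf), 10))                     # probabilities sum to 1
print(max(range(n + 1), key=pmf.__getitem__))  # 8, the most likely count
print(round(sum(k * w for k, w in enumerate(pmf)), 10))  # mean = n*p = 8
```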
3. Uniform Distribution
An event follows a Uniform Probability Distribution when it’s all possible outcomes have an equal probability of occurrence. It is also known as rectangular distribution.
Mathematically, a variable X is uniformly distributed if its density function is:
f(x) = 1 / (b − a)  for a ≤ x ≤ b, and 0 otherwise,
where a is the minimum value in the distribution and b is the maximum value.
Example of Uniform Distribution
A very intuitive example of uniform distribution is the rolling of a dice where each face has an equal probability of 1/6 of occurring.
Python Code for Uniform Distribution
Let us see how to visualize Uniform Distribution in Python code below –
from scipy.stats import uniform
n = 10000
start = 10
width = 20
data_uniform = uniform.rvs(size=n, loc = start, scale=width)
ax = sns.distplot(data_uniform,
                  hist_kws={"linewidth": 15, 'alpha': 1})
ax.set(xlabel='Uniform Distribution ', ylabel='Frequency')
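With loc = 10 and scale = 20 above, the density is flat at 1/(b − a) on the support [10, 30]. A small sketch of the density and an interval probability:

```python
a, width = 10, 20   # loc and scale from the snippet above
b = a + width       # so the support is [10, 30]

def uniform_pdf(x):
    """Density f(x) = 1/(b - a) on [a, b], zero elsewhere."""
    return 1 / width if a <= x <= b else 0.0

print(uniform_pdf(15))    # 0.05 inside the support
print(uniform_pdf(35))    # 0.0 outside the support
print((18 - 12) / width)  # P(12 <= X <= 18) = 0.3
```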
4. Normal Distribution
In a Normal Distribution, the probability of an outcome is more near the mean and the distribution follows a bell curve with lesser and lesser probability away from the mean. Most of the real-world
scenarios can be represented with Normal Probability Distribution.
The normal distribution can be mathematically represented as
f(x) = (1 / (σ√(2π))) exp(−(x − µ)² / (2σ²)),
where µ is the mean and σ is the standard deviation.
Example of Normal Distribution
According to numerous reports from the last few decades, the mean weight of an adult human is around 60 kg. But there are people who are extremely overweight or very underweight, and their occurrence becomes less frequent on either side of the mean value. Hence the weight of people follows a normal probability distribution.
Python Code for Normal Distribution
Let us see how to visualize Normal Distribution in Python code below –
from scipy.stats import norm
# generate random numbers from N(0,1)
data_normal = norm.rvs(size=10000,loc=0,scale=1)
ax = sns.distplot(data_normal,
                  bins=100,
                  kde=True,
                  color='skyblue',
                  hist_kws={"linewidth": 15, 'alpha': 1})
ax.set(xlabel='Normal Distribution', ylabel='Frequency')
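The bell-curve behavior can be quantified with the standard normal CDF, built here from the standard library's error function; the familiar 68–95–99.7 rule falls out directly:

```python
from math import erf, sqrt, pi, exp

def norm_pdf(x, mu=0.0, sigma=1.0):
    """Normal density with mean mu and standard deviation sigma."""
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

def norm_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) for a normal variable, via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Probability mass within k standard deviations of the mean:
for k in (1, 2, 3):
    print(k, round(norm_cdf(k) - norm_cdf(-k), 4))
# approximately 0.6827, 0.9545, 0.9973
```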
5. Poisson Distribution
The Poisson Distribution is used to determine the probability of a discrete event in a given period of time. These events have to be independent of each other and should occur at a constant rate in
that period.
The probability mass function of X that follows a Poisson distribution with rate λ is given by:
P(X = k) = λ^k e^(−λ) / k!,  for k = 0, 1, 2, …
Example of Poisson Distribution
A particular restaurant may see a footfall of 8 customers every hour on average. The customer footfall is independent of each other since one customer will not influence another customer’s
footfall. Also, the footfall can be as low as zero every hour to a very high number. This entire setup can be modeled using Poisson probability distribution to predict the customer footfall.
Python Code for Poisson Distribution
Let us see how to visualize Poisson Distribution in Python code below –
from scipy.stats import poisson
data_poisson = poisson.rvs(mu=3, size=10000)
ax = sns.distplot(data_poisson,
                  hist_kws={"linewidth": 15, 'alpha': 1})
ax.set(xlabel='Poisson Distribution', ylabel='Frequency')
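The PMF underlying the simulation, P(X = k) = λ^k e^(−λ) / k!, is easy to evaluate by hand. Here λ = 8 matches the restaurant example (the scipy snippet above used mu = 3 instead):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson variable with rate lam."""
    return lam**k * exp(-lam) / factorial(k)

lam = 8  # average customers per hour in the restaurant example
print(round(poisson_pmf(8, lam), 4))                         # P(exactly 8)
print(round(sum(poisson_pmf(k, lam) for k in range(5)), 4))  # P(X <= 4)
```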
Probability - h1014v2_74
STATISTICS Higher Concepts of Mathematics
The significance of a normal distribution existing in a series of measurements is twofold. First,
it explains why such measurements tend to possess a normal distribution; and second, it provides a valid basis for statistical inference. Many estimators and decision makers that are used to make
inferences about large numbers of data, are really sums or averages of those measurements. When these measurements are taken, especially if a large number of them exist, confidence can
be gained in the values, if these values form a bell-shaped curve when plotted on a distribution basis. Probability
If E_1 is the number of heads, and E_2 is the number of tails, E_1/(E_1 + E_2) is an experimental determination of the probability of heads resulting when a coin is flipped. In general, P(E_1) = n/N, where n is the number of occurrences of the event and N is the total number of trials.
By definition, the probability of an event must be greater than or equal to 0, and less than or equal to 1. In addition, the sum of the probabilities of all outcomes over the entire "event" must add up to equal 1. For example, the probability of heads in a flip of a coin is 50%, and the probability of tails is 50%. If we assume these are the only two possible outcomes, the sum over the two outcomes, 50% + 50%, equals 100%, or 1. The concept of probability is used in statistics when considering the reliability of the data or the
measuring device, or in the correctness of a decision. To have confidence in the values measured or decisions made, one must have an assurance that the probability is high of the measurement
being true, or the decision being correct. To calculate the probability of an event, the number of successes (s), and failures (f), must be
determined. Once this is determined, the probability of the success can be calculated by: p s s f where s + f = n = number of tries. Example:
Using a die, what is the probability of rolling a three on the first try?
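Applying p = s/(s + f): rolling a three is one success against five failures. A minimal check, with a small simulation alongside:

```python
import random

s, f = 1, 5            # one success (a three), five failures
p = s / (s + f)
print(round(p, 4))     # 0.1667

# Empirical check: the observed frequency of threes approaches 1/6.
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(60_000)]
print(round(rolls.count(3) / len(rolls), 2))  # close to 1/6
```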
In this unit students will build upon their experiences with geometry in earlier grades. Seventh grade students use these skills to informally construct geometric figures.
Manipulatives, dynamic geometry, and tools like rulers and protractors will be particularly helpful with this unit. A particular focus in this unit is the construction of triangles when given
combinations of measures of three angles and/or sides. Students will investigate which of these combinations create unique triangles, more than one triangle, or no triangle at all. Students will use
the angle-angle criterion to determine similarity.
Angle relationships generated by intersecting lines including supplementary, complementary, adjacent, and vertical angles are also used in problem solving. Using these relationships, students will
make conjectures and solve multistep problems with angles created by parallel lines cut by a transversal. They will also examine both angle sums of polygons and exterior angles.
Students will know and use formulas for the area and circumference of a circle and be able to determine the relationship between them.
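The relationship between the two circle formulas comes from eliminating the radius: with C = 2πr and A = πr², substituting r = C/(2π) gives A = C²/(4π). A short numerical check (the radius here is an arbitrary example):

```python
from math import pi

r = 3.0
C = 2 * pi * r   # circumference
A = pi * r**2    # area

# Eliminating r relates the two formulas: A = C^2 / (4*pi).
print(abs(A - C**2 / (4 * pi)) < 1e-9)  # True
```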
Liviya Racine-Creekmore
Atmospheric rotating rig testing of a swept blade tip and comparison with multi-fidelity aeroelastic simulations
Articles | Volume 7, issue 5
© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.
One promising design solution for increasing the energy production of modern horizontal axis wind turbines is the installation of curved tip extensions. However, since the aeroelastic response of
such geometrical add-ons has not been characterized yet, there are currently uncertainties in the application of traditional aerodynamic numerical models. The objective of the present work is
twofold. On the one hand, it represents the first effort in the experimental characterization of curved tip extensions in atmospheric flow. On the other hand, it includes a comprehensive validation
exercise, accounting for different numerical models for aerodynamic load prediction. The experiments consist of controlled field tests at the outdoor rotating rig at the Risø campus of the Technical
University of Denmark (DTU), and consider a swept tip shape. This geometry is the result of an optimized design, focusing on locally maximizing power performance within load constraints compared to
an optimal straight tip. The tip model is instrumented with spanwise bands of pressure sensors and is tested in atmospheric inflow conditions. A range of fidelities of aerodynamic models are then
used to aeroelastically simulate the test cases and to compare with the measurement data. These aerodynamic codes include a blade element momentum (BEM) method, a vortex-based method coupling a
near-wake model with a far-wake model (NW), a lifting-line hybrid wake model (LL), and fully resolved Navier–Stokes computational fluid dynamics (CFD) simulations. Results show that the measured mean
normal loading can be captured well with the vortex-based codes and the CFD solver. The observed trends in mean loading are in good agreement with previous wind tunnel tests of a scaled and stiff
model of the tip extension. The CFD solution shows a highly three-dimensional flow at the very outboard part of the curved tip that leads to large changes of the angle of the resultant force with
respect to the chord. Turbulent simulations using the BEM code and the vortex codes resulted in a good match with the measured standard deviation of the normal force, with some deviations of the BEM
results due to the missing root vortex effect.
Received: 31 Mar 2022 – Discussion started: 06 May 2022 – Revised: 05 Aug 2022 – Accepted: 13 Sep 2022 – Published: 10 Oct 2022
The trend of reducing the levelized cost of energy (LCOE) of horizontal axis wind turbines through increasing rotor size has long been established. To achieve this, the challenges of scaling must be
overcome through innovative turbine design and control strategies (Veers et al., 2019). One promising blade design concept is advanced aeroelastically optimized blade tip extensions, which could
drive rotor upscaling in a modular and cost-effective way.
Existing bibliography relevant to wind turbine applications typically focuses on winglets and aerodynamic tip shapes, with limited testing of generalized curved shapes in controlled or atmospheric
conditions (Johansen and Sørensen, 2006; Gaunaa and Johansen, 2007; Gertz et al., 2012; Hansen and Mühle, 2018; Sessarego et al., 2020).
Previous related work by the authors focused on the aeroelastic optimization of curved tip extensions (Barlas et al., 2021a) and wind tunnel testing (Barlas et al., 2021b). In the present work, the
aeroelastic response of a swept tip extension is investigated for application to horizontal axis wind turbines. Controlled field testing is performed using the outdoor rotating test rig (RTR) at the
Technical University of Denmark (DTU). The swept tip shape in focus is the result of a design optimization focusing on locally maximizing power performance within load constraints compared to an
optimal straight tip for testing in the RTR. The tip model is instrumented with spanwise bands of pressure sensors and is tested in atmospheric inflow conditions. A range of fidelities of aerodynamic
models are used to aeroelastically simulate the test cases and results are compared with the measurement data, namely a blade element momentum (BEM) model, a vortex-based method coupling a near-wake
model with a far-wake model (NW), a lifting-line hybrid wake model (LL), and fully resolved Navier–Stokes simulations (CFD).
The tip shape presented in this work is an aeroelastically optimized tip which is mounted on DTU's rotating test rig (RTR) (Madsen et al., 2015; Ai et al., 2019), whereas a scaled stiff version of it
has been tested in the wind tunnel (Barlas et al., 2021b). The design-optimization method used is described in Barlas et al. (2021a) for a tip extension on a full-scale wind turbine. The method of
optimizing the tip for the RTR is essentially the same, while the baseline geometry and load envelope is defined by a reference straight tip designed for optimal BEM performance on a three-bladed
rotor (Table 1). Additionally, the structural sectional layup of the tip is parameterized using the BECAS software (Blasques et al., 2015). The reference tip is designed using the FFA-W3-211 airfoil
with fully turbulent wind tunnel polars (Bertagnolio et al., 2001) for a Reynolds number of 1.78×10^6, with a predefined length of 3 m (practical design constraint for testing on the outdoor rotating rig) mounted on the 8 m cylindrical boom of the RTR. The cylindrical sections of the boom are modeled with a drag coefficient of 0.8 and zero lift. The chord and twist distributions of the straight tip were determined from the BEM performance for an optimal power coefficient in operation at 30 rpm with 6 m s^−1 inflow wind speed (tip speed ratio of 4.5–6 along the tip). The resulting
aeroelastically optimized tip maximizing power performance within load constraints, utilizing sweep (Fig. 1), achieved a 19.58% increase in power with the baseline ultimate flapwise bending moment
at the boom root and tip connection, when evaluated at an extreme turbulence case (class III-C) at 6ms^−1 in the aeroelastic code HAWC2 (Larsen and Hansen, 2007) using the near wake (NW) model (
Madsen and Rasmussen, 2004; Pirrung et al., 2016, 2017a, b; Li et al., 2022). Since the RTR is a powered setup, the local power changes cannot be translated to any meaningful full-scale turbine
application, but are simply used in the design optimization herein. The Pareto front of the design-optimization solutions is shown in Fig. 2. The design features a highly swept (in-plane offset)
centerline (Fig. 3), slender chord distribution, and negative twist (to feather) distribution (Fig. 4) compared to the reference straight tip, whereas the airfoil sections are aligned perpendicular
to the centerline. The very last tapering of the tip chord is only relevant for the fully resolved Navier–Stokes modeling, whereas the optimized chord distribution from the NW model stops at a finite
chord of 0.41m at the 12m span location. The optimized mass and flapwise stiffness distribution as well as the structural layup are shown in Figs. 5 and 6.
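The operating point quoted above (30rpm, 6ms^−1 inflow) can be sanity-checked against the stated tip speed ratio range with the usual local speed ratio ωr/V. The radii below are illustrative values spanning the tip region, not design data:

```python
import math

def local_speed_ratio(r_m, rpm=30.0, v_inf=6.0):
    """Local speed ratio omega*r/V at radius r_m for the quoted operating point."""
    omega = rpm * 2.0 * math.pi / 60.0  # 30 rpm -> pi rad/s
    return omega * r_m / v_inf

# illustrative radii spanning the tip region
ratios = [round(local_speed_ratio(r), 2) for r in (8.6, 10.0, 11.5)]
# the values span roughly 4.5 to 6, consistent with the quoted range
```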
3 Rotating rig test setup
In order to fill the gap between full-scale MW experiments and wind tunnel tests, the 95kW Tellus turbine (Madsen and Petersen, 1990) situated at the test field at the DTU Risø campus plays an
important role as a test bed for aerodynamic and aero-servo-elastic experiments (Madsen et al., 2015; Ai et al., 2019). The 5^∘ tilted rotor on the turbine has been replaced by an elastic beam, upon
which different test components can be mounted on the outer part for validation of aerofoil characteristics based on pressure measurements and testing of aerodynamic sensor and control systems.
Besides the main beam, a counterweight is mounted to balance the beam and the aerofoil section. During the measurements, the rotor shaft is motored and a frequency converter controls the rotational
speed. The tip is mounted at the end of the boom via an adaptor (Figs. 7 and 8). The yaw angle, rotational speed, and pitch angle are controlled to defined settings and logged, together with the wind
speed and direction of the nearby met mast.
Four chordwise bands of 1mm inner diameter pressure taps are installed on the tip at spanwise locations of 9.09, 9.79, 10.49, and 11.18m from the boom root (15%, 35%, 55%, and 75% of the tip
span, respectively). The 32 taps on each section are connected via tubing to one 1psi and one 5psi range DMT pressure scanner with an accuracy of 0.05psi, located on the joint piece inboard of the
tip root. Sets of strain gauges are installed at the sides of the spar cap and leading edge and trailing edge at two sections at spanwise locations of 9.8 and 10.6m from the boom root. A 6-hole
Pitot tube is also mounted at the joint piece inboard of the tip root, measuring the local inflow. The data acquisition (DAQ) system is based on cRIO from National Instruments. Finally, two cameras
are mounted inboard of the joint piece connected to the end of the boom.
The pressure distributions from the surface taps are numerically integrated into normal and chordwise aerodynamic forces in the local airfoil section reference frame, while an average pressure from
the two nearby points is added to the trailing edge (Fig. 9). The statistics for each measurement case are processed, together with the corresponding operation settings and inflow from the met mast.
The target result is the statistical distribution of the aerodynamic forces at each section for a set of statistical distributions of operation (wind speed, yaw, pitch). The parameters of the
measured 300s cases are shown in Table 2, with the averaged values based on the pitch angle settings in Table 3. Essentially, the average results of the 16 cases are condensed into the four
idealized cases, which do not necessarily represent the physics but allow for a model-to-model comparison with low influence of the specific inflow conditions.
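The integration of tap pressures into sectional forces can be sketched as a panel-wise contour integral. This is a minimal illustration of the principle, assuming taps ordered around a closed contour; it is not the campaign's actual processing chain:

```python
import numpy as np

def sectional_forces(x, y, p):
    """
    Integrate surface pressures p at tap positions (x, y), ordered around the
    airfoil contour, into forces per unit span in the section reference frame.
    The contour is closed by repeating the first tap.
    """
    xs, ys, ps = (np.append(a, a[0]) for a in (x, y, p))
    dx, dy = np.diff(xs), np.diff(ys)
    pm = 0.5 * (ps[:-1] + ps[1:])      # panel-averaged pressure
    # force = -integral of p * n ds, with n ds = (dy, -dx) for a ccw contour
    f_chordwise = np.sum(-pm * dy)
    f_normal = np.sum(pm * dx)
    return f_normal, f_chordwise
```

As a quick consistency check, a uniform pressure over any closed contour must integrate to zero net force.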
4 Aeroelastic simulations setup
The time domain aeroelastic simulations performed within the framework of the present study were orchestrated by the multi-body finite-element code HAWC2. All the computations shared the same
structural modeling, which is described in Sect. 4.1. Several aerodynamic models were considered. In ascending order of fidelity, they are the BEM model (Sect. 4.2), the near-wake model NW (Sect. 4.3), the lifting-line model LL (Sect. 4.4), and the fully resolved CFD model (Sect. 4.5).
4.1 Common structural model
All the presented methods were coupled with a common multi-body finite-element HAWC2 model. For simplicity, the tower was considered to be stiff (the maximum tower deflection is estimated to be
within 10cm, with the ratio of its first natural frequency to the rotational frequency of around 1.2). Together with the ensemble of the boom and the tip, which was considered to be a single body,
the shaft and the counterweight were also modeled. The mechanical properties of the latter two components of the model are further described in Madsen and Petersen (1990). The boom and the tip were
modeled by means of 32 bodies. Their mechanical properties, which are summarized in Fig. 5, were introduced as a fully populated matrix.
4.2 BEM aerodynamic model
The BEM method in the present study corresponds to the model described in Madsen et al. (2020) that is implemented in the in-house finite-element multi-body aeroelastic code HAWC2 (Larsen and Hansen
, 2007). The BEM method is implemented using a polar grid approach, which is capable of modeling turbulent inflow conditions. In addition, various submodels are included to model different effects,
for example dynamic inflow, yawed inflow, and unsteady 2D airfoil aerodynamics. The blade is discretized radially into 80 sections using cosine spacing, as shown in Fig. 10. Both the boom and the tip
are included, while the cylindrical boom is modeled with zero lift and a drag coefficient of 0.8, as described in Sect. 2. The uniform inflow simulations were performed for 50s with a time step size
of 0.01s. The unsteady airfoil aerodynamic model (Hansen et al., 2004; Pirrung and Gaunaa, 2018) (usually referred to as the dynamic stall model) was used for all load cases. The Prandtl tip
correction described in Madsen et al. (2020) is included, which is not able to model the root vortex effect. In addition, the blade is assumed to be straight in the Prandtl tip correction, which
means the impact of the blade sweep on the tip loss is not included.
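For reference, the straight-blade Prandtl factor that the correction builds on can be written as follows. All values here are illustrative (single blade, a representative inflow angle), not the HAWC2 implementation:

```python
import math

def prandtl_tip_loss(r, r_tip=11.5, n_blades=1, phi_rad=0.2):
    """
    Straight-blade Prandtl tip-loss factor F = (2/pi) * acos(exp(-f)) with
    f = B (R - r) / (2 r sin(phi)). F drops from ~1 inboard to 0 at the tip;
    a swept centerline is not representable in this formula, which is the
    limitation noted above.
    """
    f = n_blades * (r_tip - r) / (2.0 * r * math.sin(phi_rad))
    return (2.0 / math.pi) * math.acos(math.exp(-f))
```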
4.3 Near-wake aerodynamic model
The coupled near- and far-wake model (Madsen and Rasmussen, 2004; Pirrung et al., 2014, 2016, 2017a) is a computationally efficient vortex-based method. The near wake is defined as the first quarter
revolution of the trailed vortices from the blade itself. It is modeled using non-expanding helical vortex filaments. The helix pitch is iterated within every time step using the indicial function
approach and the steady-state induction is based on pre-calculated empirical functions that are fitted to the Biot–Savart law. The far wake is modeled using a far-wake BEM method that is without the
tip-loss correction. The far-wake induction is calculated from a down-scaled thrust coefficient using a coupling factor. The near-wake induction and the far-wake induction are summed together to get
the total induction. The coupling factor is calculated so that the rotor thrust is comparable to that computed with the BEM method (Andersen et al., 2010; Pirrung et al., 2016). The near-wake model
was recently modified to model the blade sweep effects (Li et al., 2022), which also accounted for the curved bound vortex influence (Li et al., 2020). As for the BEM method, the unsteady airfoil
aerodynamic model (Hansen et al., 2004; Pirrung and Gaunaa, 2018) is included for all load cases. As in the BEM method, the blade is discretized radially into 80 sections using cosine spacing, and
the cylindrical boom is modeled with zero lift and a drag coefficient of 0.8, as described in Sect. 2. The uniform inflow simulations were performed for 50s with a time step size of 0.01s. Unlike
the BEM method that uses the Prandtl tip correction, NW models the near-wake trailed vortices with helical vortex filaments and is able to model the root vortex effect.
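The superposition of near- and far-wake induction described above can be sketched as below. The momentum inversion ct = 4a(1−a) and the fixed coupling factor are simplifications for illustration; the actual model uses empirical induction functions and iterates the coupling factor so that the rotor thrust matches the BEM method:

```python
import math

def total_induction(a_near_wake, ct_local, coupling_factor):
    """
    Far-wake induction from a down-scaled thrust coefficient (simple momentum
    relation ct = 4a(1-a), no tip-loss correction), added to the near-wake
    induction to form the total induction.
    """
    ct_fw = coupling_factor * ct_local
    a_far_wake = 0.5 * (1.0 - math.sqrt(max(0.0, 1.0 - ct_fw)))
    return a_near_wake + a_far_wake
```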
4.4 Lifting-line aerodynamic model
Medium-fidelity simulations were carried out with the in-house vortex solver MIRAS (Ramos-García et al., 2016, 2017). The lifting line (LL) aerodynamic model is used in the present study in
combination with a hybrid filament particle–mesh flow model (Ramos-García et al., 2019). In the LL model, the rotor blades are modeled as discrete filaments to account for the bound vortex strength
and release shed and trailing vorticity into the flow. In the hybrid method, the vortex filaments are transformed into vortex particles, the vorticity of which is later on interpolated into an
auxiliary Cartesian mesh. The motion of the vortex elements is determined by superposition of the free-stream velocity, the velocity induced by the blade-bound vortex, the filament wake, and the
particle wake. The flow is governed by the vorticity equation obtained by taking the curl of the Navier–Stokes equation. It describes the evolution of the vorticity of a fluid particle as it moves
with the flow. Moreover, MIRAS has been recently modified to accurately account for blade curvature effects (Li et al., 2020). The dynamic stall model of Øye (1994) is used to account for the stall
delay related to dynamic inflow changes seen by the airfoils. The coupling between MIRAS and HAWC2 (Ramos-García et al., 2020) accounts for the wind turbine structural dynamics, a collective pitch
and torque control, and the hydrodynamic loads. In a simplified manner, the coupling Python interface is in charge of transferring the blade aerodynamic loading computed by MIRAS, i.e., forces and
moments at each aerodynamic section shown in Fig. 10, to HAWC2. Moreover, the kinematics of the blade computed by HAWC2, i.e., both the rigid body motion of the root and the blade axis translations
and rotations at every aerodynamic section, are transferred to MIRAS via the same framework. In general, the coupling provides a common framework between the different numerical codes, paving the way
for a seamless comparison. A 12R × 4R × 4R Cartesian mesh has been employed in all cases, with a constant spacing of 0.5m in the x, y, and z directions, which adds up to a total of more than 2 million
cells. The bound vortex is discretized with 80 straight segments with constant spacing. Both the boom and the tip are included in the LL model. All uniform inflow simulations have been performed for
200s with a time step size of 0.008s, adding up to a total of 25000 time steps. A free boundary condition was used in all directions. Moreover, an eighth-order stencil is used to spatially
discretize the vorticity equation, a particle re-meshing is forced every time step to maintain a smooth field, and a periodic re-projection of the vorticity field is applied to maintain a
divergence-free field.
4.5 EllipSys3D aerodynamic model
Higher-fidelity simulations were performed with the three-dimensional computational fluid dynamics software EllipSys3D (Michelsen, 1992, 1994; Sørensen, 1995). EllipSys3D is a multi-block
finite-volume code for structured grids, accounting for a wide range of modeling capabilities. In the present work, the incompressible Reynolds-Averaged Navier–Stokes (RANS) equations were solved in
general curvilinear coordinates. The solution was advanced in time with a dual time stepping method using a time step of 4ms. To accelerate the convergence of the solution, grid sequencing was used.
The k–ω SST turbulence model (Menter, 1994) was selected due to its performance when dealing with airfoil flows. Two distinct sets of simulations were carried out: one assumed fully turbulent flow,
while the other accounted for a correlation-based transition model (Sørensen, 2009). These two sets of computations are labeled in the present document as turb and trans, respectively.
The deflections of the boom centerline and the mounted-tip centerline, computed by HAWC2, were introduced in the CFD solution during run time. These deflections were subsequently applied to the
surface grid through a spline interpolation. The surface grid deflection was then smoothed into the inner domain through a mesh-deformation algorithm based on the distance to the blade surface.
Rotation was simulated by applying a rotational motion to the full computational grid as a solid body. At every time step, the CFD loading was computed and injected into the HAWC2 solution. This was
done by integrating the pressure and frictional loads (including forces and moments) in a series of sectional planes which are normal to the local blade axes. The location of such sectional cuts was
forced to correspond to the position of the aerodynamic sections that were defined in the rest of the methods included in this work (see Fig. 10). For more details about the EllipSys3D and HAWC2
aeroelastic coupling implementation, the reader is referred to Horcas et al. (2020).
The grid was generated in two consecutive steps. First, a structured mesh around the cylindrical boom and the tip surfaces was generated with the Parametric Geometry Library (PGL) tool (Zahle, 2019).
A total of 128 cells were used in the spanwise direction, and the chordwise direction was discretized with 256 cells (with 8 of them lying on the trailing edge). Secondly, the surface mesh was
radially extruded with the hyperbolic mesh generator Hypgrid (Sørensen, 1998) to create a spherical volume grid. A total of 128 cells were used in this process, and the resulting outer domain was
located approximately 30m away from the tip surface. A boundary layer clustering was taken into account, with an imposed first cell height of 1×10^−5m, in order to
target y^+ values lower than unity. The resulting volume mesh had a total of 5.2 million cells. An inlet/outlet strategy was followed for the boundary conditions of the outer limit of the domain,
and both the boom and the tip itself were modeled as walls. A sketch of the ensemble of the boundary conditions is depicted in Fig. 11, together with a visualization of the mesh.
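A flat-plate estimate shows that the chosen first cell height is consistent with a y^+ below unity. The relative velocity, chord, and viscosity below are assumed representative values, not campaign data:

```python
import math

def y_plus_estimate(first_cell_height, u_rel, chord, nu=1.5e-5):
    """
    Flat-plate y+ estimate using the turbulent skin-friction correlation
    Cf = 0.026 * Re^(-1/7): u_tau = u_rel * sqrt(Cf/2), y+ = y1 * u_tau / nu.
    """
    re = u_rel * chord / nu
    cf = 0.026 * re ** (-1.0 / 7.0)
    u_tau = u_rel * math.sqrt(0.5 * cf)
    return first_cell_height * u_tau / nu

# ~35 m/s relative speed and 0.75 m chord give Re ~ 1.75e6, close to the
# design Reynolds number quoted in Sect. 2; the estimate stays below unity
yp = y_plus_estimate(1e-5, 35.0, 0.75)
```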
5 Comparison of test and simulation results
This section contains the comparison of measured loads with the aeroelastic loads predicted using the aerodynamic models of varying fidelity. Because no wake rake was mounted on the rotating rig,
only the measured forces normal to the chord based on surface pressures are used. Insufficient wind speed measurements were available to accurately estimate shear coefficients, so no mean shear
profile is present in the simulations. The effect of shear is, however, assumed to be minor, due to the small rotor diameter of the rotating test rig. All the simulations shown here use transitional
polars. Since the state of the boundary layer during operation in the field is unknown, the transitional polars have been selected, since these were closer to the results. It has been shown that the
comparison of fully turbulent CFD and LL using fully turbulent polars is consistent, and this is briefly demonstrated. But generally, the loading was found to be underpredicted using fully turbulent
polars and fully turbulent CFD. These results are omitted here for brevity.
The section is organized as follows: in Sect. 5.1, aeroelastic simulations of all fidelity levels are compared to the mean experimental results. A comparison of sectional loads as a function of pitch
angle is included and comparisons to the earlier wind tunnel tests of a scaled geometry in Barlas et al. (2021b) are made. In Sect. 5.2, the effect of turbulence on the spanwise load distribution and
the distributed standard deviation of the loading is evaluated using the BEM, NW, and LL solvers.
5.1 Uniform inflow
Unless otherwise stated, the results shown here are averaged from four simulations at the four slightly different operating conditions for each pitch angle, see Table 2.
5.1.1 Spanwise load distribution
The averaged normal force from measurements and simulations and the averaged simulated chordwise force are shown in Fig. 12. Similar observations can be made for −5 and 0^∘ pitch angle (negative sign
is pitch towards feather). Although the codes capture the inboard measured normal forces, there is a clear outlier at the most outboard section for the −5^∘ pitch case and a slight underprediction
generally. The shape of the normal force is predicted similarly well by NW, LL, and CFD, which all predict a smaller slope than the BEM simulations. The largest difference in slope is found to be
inboard, where there is a clear effect of the root vortex visible in the results of all codes except BEM. Towards the tip, the normal loading predicted by LL and NW is larger than the loading
predicted by CFD, which is largely due to the smoothed chord in the CFD simulations, see Fig. 4. The CFD simulations of the chordwise force show a large peak towards the tip, which will be
investigated later in this section.
At 5^∘ pitch, no peak is observed in the chordwise loading of the CFD simulations. The measured normal force agrees very well with the predictions by LL and NW. The beginning of stall is seen to lead
to a less smooth load distribution of the NW simulations outboard of 10.5m span. This is less visible in LL, possibly due to the use of a vortex core and a less fine point distribution towards the tip.
At 10^∘ pitch, shown in the last row of Fig. 12, some stall delay becomes visible in the CFD simulations and the experiment, leading to much higher normal forces than predicted by the codes that rely
on airfoil data (average loads increase due to higher than maximum lift values dictated by stall delay). The NW and, to a lesser extent, LL simulations now show non-smooth force distributions along
the whole span due to stalling flow. Because the operation here is close to maximum lift, at a small dC_L/dα slope in the polars, the influence of the
root vortex and tip vortex becomes much smaller in the codes using airfoil data. Therefore, the difference between BEM and LL/NW is smaller than for the lower pitch angle cases.
5.1.2 Sectional loads as a function of pitch angle
The purpose of Fig. 13 is to evaluate the predicted and measured slopes of the normal forces as a function of pitch angle. This enables the comparison of trends with the wind tunnel work in Barlas et
al. (2021b). At section S1, the effect of missing root vortex in the BEM simulations is clearly visible, causing an overprediction of the loading in the linear region. This is in good agreement with
the data presented in Barlas et al. (2021b). The slope in the experiment and CFD is linear up to 10^∘, while the codes using airfoil data predict the onset of stall.
At sections S2 and S3, the experiment shows indications of stall occurrence towards 10^∘. The CFD-based simulation also predicts a linear behavior, while the other codes stall too early. At
sections S3 and S4, all airfoil data-based codes predict almost the same loading, while the experiment shows a lower slope than all codes in the attached flow region, especially at S4, and a clear
stall at 10^∘.
The generally very good agreement between NW and LL computations in attached flow was also observed in the previous comparison with wind tunnel measurements. At section S4, the agreement is improved
in the present work because the swept tip shape is taken into account by the NW model due to the modifications described briefly in Sect. 4.3. In the present campaign, the measured loads are
generally higher than the predicted loads. In the wind tunnel campaign, the agreement between measurements and simulations was better, likely due to the more accurate knowledge of the inflow
conditions. The better performance in stall predicted by CFD compared to the measurements and the inability of the airfoil data-based models to predict performance in stall are also consistent with
the wind tunnel campaign in Barlas et al. (2021b).
5.1.3 Deflections
The torsional and flapwise deflections for cases 3 and 4 are shown in Fig. 14. Because all aerodynamic models are coupled to the same structural solver, the very similar aerodynamic forces in case 3
(see Fig. 12) cause very similar structural deflections. In case 4, the delayed stall predicted by the CFD leads to comparably larger deflections. The deflections are generally small, but the
torsional deflection will have an influence on the mean loading due to its close relationship with the angle of attack. The agreement of the predicted deflections in cases 1 and 2, the plots of which
are not included here for brevity, is very similar to case 3 but at a lower overall level due to the smaller aerodynamic forcing.
5.1.4 Investigation of 3D flow at the very tip and transitional versus turbulent simulations
Figure 15 sheds some light on the origin of the peaks in the chordwise force distribution predicted by CFD in Fig. 12, and compares transitional and turbulent simulations. The LL results are also
shown here, because they represent the highest fidelity code using airfoil data in this study. The left column shows the resultant force due to the combination of the normal force and chordwise force,
and the right column shows the angle of the resultant force with respect to the airfoil chord. It becomes clear that the rotation of the resultant force in the CFD results towards the tip causes the
peak in the chordwise loading. This rotation is probably due to the highly three-dimensional flow at the tip, as illustrated in Fig. 16 for case 3. Due to the lack of flow
visualization during the tests, we can only rely on the CFD results to explain the 3D flow at the boom transition and tip.
LL is unable to predict the near-tip direction change of the load, and actually, these angles and forces would not be possible to achieve based on airfoil data, because there is a significant
spanwise flow in the CFD simulations. Along these lines, it could be speculated that one of the factors explaining why all the other methods showed higher loading when compared to CFD could also lie
in this three-dimensional behavior.
As already mentioned, there seems to be generally larger tip loss in the CFD simulations than in the LL simulations. This is in part due to the rounded tip geometry (see Fig. 4), and was a common
feature for both turbulent and transitional simulations. In attached flow (see case 3 in Fig. 15), the differences between laminar and turbulent profile data agree very well with the differences in the
CFD simulations between transitional and fully turbulent flow. The agreement becomes worse at a pitch angle of 5^∘, where light stall is already present. At 10^∘ the flow is stalled in LL, which
leads to very small differences between the transitional and turbulent simulations. In the CFD simulations, the stall is delayed and, thus, the difference between laminar and turbulent flow is much larger.
5.2 Turbulent inflow
Simulations accounting for an inflow turbulence that matches the measured one during the experimental campaign were carried out. The averaged cases based on pitch angle defined in Table 3 were
numerically reproduced with three different fidelity simulations, i.e., BEM, NW, and LL. Fully resolved CFD simulations were not carried out due to their high computational cost, which was beyond the resources allocated for the present work.
First of all, the turbulence generator of Mann (1998) was used to create four turbulence boxes with an objective turbulence intensity of 15.4%, which matches the average turbulence intensity of the
four cases. Seed numbers 202, 302, 402, and 502 were used to account for different turbulence realizations. The generated boxes have a size of 4096×32×32 cells in the
streamwise, vertical, and lateral directions, respectively, with a constant cell size of 2m. The following constants were used to account for land-based turbulence generation: αϵ=1.0, L=40, and γ=3.9.
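The targeted turbulence intensity can be verified directly from any streamwise velocity series extracted from a box, since TI = σ_u/⟨u⟩. The synthetic Gaussian series below is only for illustration; the boxes themselves come from the Mann generator:

```python
import numpy as np

def turbulence_intensity(u):
    """TI = sigma_u / <u> for a streamwise velocity series u (m/s)."""
    u = np.asarray(u, dtype=float)
    return np.std(u) / np.mean(u)

# synthetic check at 6 m/s mean wind and a 15.4 % target TI (seed arbitrary)
rng = np.random.default_rng(0)
u = 6.0 + 6.0 * 0.154 * rng.standard_normal(100_000)
```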
Cases 1 to 3 in Table 3 have been modeled in LL, each one including a different pitch angle, wind speed, and yaw angle. Case 4 was not run with turbulence due to the high angle of attack (AoA), which
causes stall and leads to issues with the NW and LL vortex solvers. All four generated turbulent boxes (four seeds) were run in each one of the LL setups, adding up to 12 simulation cases.
LL simulations with and without the rotating test rig were performed, adding up to a total of 24 cases.
In MIRAS, the turbulent boxes are transformed into a particle cloud by computing the curl of the velocity field. The turbulent particles are released one diameter upstream of the rotor plane and
interact freely with the turbine wake, if existent. The vortex solver accounts for turbulence development as it convects downstream towards the rotor plane. In the simulations without a turbine, the
local velocities are calculated in every time step at the rotor plane position, and a 64×64 mesh with a cell size of approximately 1.5m is used for the velocity extraction. This velocity field is
loaded in the NW and BEM simulations, allowing the same turbulent structures as simulated in MIRAS to be accounted for. In this way, it is possible to closely mimic the turbulence seen by the turbine
in MIRAS, although the influence of the turbine and its wake on the inflow turbulence field cannot be accounted for in the lower-fidelity models.
All codes (BEM, NW, and LL) simulate each seed for 900s at a time step of 0.01s. The initial 100s are discarded in the postprocessing.
In the following, the mean and standard deviation of the loads from the turbulent simulations are compared to the experimental values.
5.2.1 Spanwise mean loading distribution
The spanwise mean loading in the normal and chordwise directions obtained from measurements and simulations is shown in Fig. 17. The mean forces are computed as mean[seed](mean[time](F^s(t))), where
the inner mean operation is performed on the time series for a given turbulent seed and the outer mean operation is performed between the turbulence seeds which are indicated by the superscript “s”.
The shaded area in Fig. 17 represents the standard deviation of the mean values of the results obtained using four different turbulence seeds: sd[seed](mean[time](F^s(t))). They have an almost
constant width along the span. This shows that the different seeds lead mainly to different offsets of the mean load distribution and not to different slopes. An exception is the region towards the
tip, where the shaded areas narrow, especially for the chordwise force. The error bars in the experiments represent the standard deviation of the mean values obtained from the experiment: sd[case]
(mean[time](F^c(t))), where the superscript “c” indicates the measurement case number. The averaged experimental values differ both in operating points (see Table 2) and in turbulent realization.
Therefore, the standard deviations of the measured mean load distributions are not directly comparable to the standard deviations for the simulations, which are only due to different turbulent seeds.
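The nested statistics used in Fig. 17 can be written compactly. This is a small numpy sketch of the definitions above, with synthetic data standing in for the load time series:

```python
import numpy as np

def seed_statistics(f):
    """
    f has shape (n_seeds, n_time): one load time series per turbulence seed.
    Returns mean[seed](mean[time](F)) and sd[seed](mean[time](F)), i.e., the
    mean curve value and the seed-to-seed spread of the mean.
    """
    per_seed_means = f.mean(axis=1)          # mean[time] for each seed
    return per_seed_means.mean(), per_seed_means.std()

# four seeds with constant offsets stand in for four load time series
f = np.stack([np.full(10, v) for v in (1.0, 2.0, 3.0, 4.0)])
```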
The observations regarding slopes of the loading and comparison to the experiment are similar to the uniform inflow cases shown in Fig. 12. The NW and LL results show excellent agreement except at
the very tip, where the NW method predicts higher loading. The BEM results have a steeper slope with higher loading inboard due to the missing root vortex effect, and lower loading outboard due to
the missing effect of the backward sweep on the tip loss.
Generally, the spread between the results for different turbulence seeds indicates that a large part of the deviation between experiments and simulations may be explained by variations between
turbulence realizations, with the exception of the outermost section at a pitch of −5^∘.
5.2.2 Standard deviation of spanwise loading
The distribution of the standard deviations of the normal loading and chordwise loading are shown in Fig. 18. These standard deviations are computed as mean[seed](sd[time](F^s(t))), with the
superscript “s” indicating a turbulence seed. The shaded area represents the standard deviation of the four standard deviations of the simulations using four turbulent seeds: sd[seed](sd[time](F^s(t
))). As above, the error bars in the experiments represent the standard deviation of the mean standard deviation values obtained from the experiment: sd[case](sd[time](F^c(t))), where the respective
four measured time series for the cases “c” differ in terms of both mean operating point and turbulence. This is not directly comparable to the simulations, which were performed at the mean operating
conditions to reduce computational cost. Therefore, the error bars from the experiment, which include variations due to mean operating point variation and turbulence realization, are generally wider
than the shaded areas from the simulations, which vary only due to turbulence.
The measured standard deviation of the normal force is generally similar to the simulated standard deviation. An exception is case 2 at roughly 0^∘ pitch, where the simulated standard deviations are
about 20% larger than the measured values. The slope of the standard deviation of the loads as a function of the span seems to be overpredicted by BEM compared to the experiment. The NW and
LL results are in better agreement with the measured data. This is likely due to the dynamic modeling of the root vortex influence in the NW and LL simulations, which clearly reduces the standard
deviation of the loading at the inboard part. The root vortex influence is generally more prominent in the chordwise force, because the induced vorticity causes a change in AoA that changes both the
magnitude and direction of the aerodynamic forces. The magnitude affects both the normal and chordwise forces, while a change of the angle mainly leads to differences in the chordwise force. The
effect of beginning separation is clearly visible in the NW simulations at 5^∘, especially close to the tip.
The shaded area, which represents the spread in standard deviations between turbulence seeds, agrees very well between LL and NW, with the BEM predicting a much larger spread in case 3 (5^∘ pitch).
It is unclear why the four different seeds lead to almost exactly the same standard deviations of the chordwise force at a pitch angle of 0^∘, even though the standard deviation of the normal force
varies with turbulence seed. But the effect is consistent across the model fidelities, and it was confirmed that the four seeds lead to four different time series of the chordwise loading, which
happen to have almost exactly identical standard deviations.
6 Conclusions
The aeroelastic response of a swept tip is investigated for application to wind turbine tip extensions by controlled field testing in the outdoor rotating rig at the Technical University of Denmark
(DTU). The swept tip shape in focus is the result of design optimization focusing on locally maximizing power performance within load constraints compared to an optimal straight tip. The tip model is
instrumented with spanwise bands of pressure sensors and is tested in atmospheric inflow conditions. A range of fidelities of aerodynamic models are used to simulate the test cases and results are
compared with the measurement data, namely a blade element momentum (BEM) model, a coupled near- and far-wake model (NW), a lifting-line hybrid wake model (LL), and fully resolved Navier–Stokes
computational fluid dynamics (CFD) simulations. The first simulations tackled a series of idealized inflow conditions that were obtained by averaging several time windows of the experimental data.
Results show that the measured mean normal loading can be captured well with the vortex-based codes, whereas the CFD solver seemed to generally underpredict the measured mean loading for
these idealized conditions in attached flow. However, this higher-fidelity method computed a similar stall delay to that seen in the measurements at a high angle of attack. Similar trends to those
seen in earlier wind tunnel measurements were observed when plotting the measured and simulated loading against the pitch angle. The CFD solution shows a highly three-dimensional flow at the very tip
that leads to large changes in the angle of the resultant force with respect to the chord at the very outboard part of the curved tip. These angle changes cannot be predicted by any model using 2D
airfoil data. No measurements were available at these outboard stations and, therefore, we were not able to validate this phenomenon with measurements. In a second stage of the analysis, the
influence of turbulence on the definition of the ideal cases was addressed. Simulations with four different turbulence realizations indicated that a large part of the deviations between measured and
simulated mean loading by the higher-fidelity codes can be due to seed-to-seed variations. These turbulent simulations show that the measured standard deviations of the normal force match those
predicted by the vortex codes well. There are some deviations when comparing to the BEM simulations, especially towards the root section. Future work should focus on full-scale validation of
aeroelastically optimized tip shapes, with focus on further enabling structural tailoring features and topologies, and possible combination with active aerodynamic add-on features.
Code and data availability
Pre-/post-processing scripts and datasets are available upon request. The codes HAWC2, MIRAS, and EllipSys3D are available with a license.
TB performed the tip design optimization, contributed to model preparation, performed the tests, and contributed to the model setup and comparison. GP contributed to the tip design optimization and
model setup and comparison. NRG contributed to the model setup and comparison. SGH contributed to the model setup and comparison. AL developed and implemented the updated near-wake model used in this
work and contributed to the comparison. HAM contributed to the tip design optimization and model preparation and testing.
The contact author has declared that none of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This research was supported by the project Smart Tip (Innovation Fund Denmark 7046-00023B), in which DTU Wind Energy and Siemens Gamesa Renewable Energy explore optimized tip designs. The following
persons also contributed to the presented work: Flemming Rasmussen, Niels N. Sørensen, Frederik Zahle, Peder B. Enevoldsen, and Jesper M. Laursen.
This research has been supported by the Innovationsfonden (grant no. 7046-00023B).
This paper was edited by Sandrine Aubrun and reviewed by Vasilis A. Riziotis and one anonymous referee.
Ding Zou et al. “Multi-level Cross-view Contrastive Learning for Knowledge-aware Recommender System.” in SIGIR 2022.
Reference code:
class recbole.model.knowledge_aware_recommender.mcclk.Aggregator(item_only=False, attention=True)
Bases: torch.nn.modules.module.Module
calculate_sim_hrt(entity_emb_head, entity_emb_tail, relation_emb)
The calculation method of attention weight here follows the code implementation of the author, which is slightly different from that described in the paper.
forward(entity_emb, user_emb, relation_emb, edge_index, edge_type, inter_matrix)
Defines the computation performed at every call.
Should be overridden by all subclasses.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the
registered hooks while the latter silently ignores them.
training: bool
class recbole.model.knowledge_aware_recommender.mcclk.GraphConv(config, embedding_size, n_relations, edge_index, edge_type, inter_matrix, device)
Bases: torch.nn.modules.module.Module
Graph Convolutional Network
class recbole.model.knowledge_aware_recommender.mcclk.MCCLK(config, dataset)
Bases: recbole.model.abstract_recommender.KnowledgeRecommender
MCCLK is a knowledge-based recommendation model. It focuses on the contrastive learning in KG-aware recommendation and proposes a novel multi-level cross-view contrastive learning mechanism. This
model comprehensively considers three different graph views for KG-aware recommendation, including global-level structural view, local-level collaborative and semantic views. It hence performs
contrastive learning across three views on both local and global levels, mining comprehensive graph feature and structure information in a self-supervised manner.
A Comparison of New and Old Algorithms for a Mixture Estimation Problem
We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised
learning and give simple derivations for many of the standard iterative algorithms like gradient projection and EM. In this framework, the distance between the new and old proportion vectors is used
as a penalty term. The square distance leads to the gradient projection update, and the relative entropy to a new update which we call the exponentiated gradient update (EG[η]). Curiously, when a
second order Taylor expansion of the relative entropy is used, we arrive at an update EM[η] which, for η = 1, gives the usual EM update. Experimentally, both the EM[η]-update and the EG[η]-update for
η > 1 outperform the EM algorithm and its variants. We also prove a polynomial bound on the rate of convergence of the EG[η] algorithm.
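The two updates the abstract compares can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: `dens[i][j]` holds the given, fixed component density p_j(x_i), the EM step reweights by posterior responsibilities, and the EG step multiplies each proportion by the exponentiated gradient of the average log-likelihood and renormalizes; the exact scaling of η is an assumption here.

```python
import math

def em_step(w, dens):
    """One EM update of the mixture proportions w, given fixed
    component densities dens[i][j] = p_j(x_i)."""
    n = len(dens)
    new = [0.0] * len(w)
    for row in dens:
        mix = sum(wj * dj for wj, dj in zip(w, row))
        for j, dj in enumerate(row):
            new[j] += w[j] * dj / mix   # posterior responsibility of component j
    return [v / n for v in new]

def eg_step(w, dens, eta=1.0):
    """One exponentiated gradient (EG_eta) update: multiply each
    proportion by exp(eta * gradient) and renormalize."""
    n = len(dens)
    grad = [0.0] * len(w)
    for row in dens:
        mix = sum(wj * dj for wj, dj in zip(w, row))
        for j, dj in enumerate(row):
            grad[j] += dj / (n * mix)   # d/dw_j of the average log-likelihood
    unnorm = [wj * math.exp(eta * gj) for wj, gj in zip(w, grad)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Both steps return a valid proportion vector (non-negative, summing to one), and on a small example each increases the sample log-likelihood, which is the behavior the experiments in the paper compare across η.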
All Science Journal Classification (ASJC) codes
• Software
• Artificial Intelligence
• EM
• Exponentiated gradient algorithms
• Maximum likelihood
• Mixture models
Traversable Klein 'bottle' paths
Here's a depiction of the different orientations the back cross-section can have for the different varieties of 4D Klein Strip.
The blue diameter rotates between the z-axis and the y-axis. The red diameter rotates between the w-axis and the z-axis.
The parametric equations for these are:
w = sin(u) * cos(t)
x = 0
y = cos(u) * sin(t)
z = (sin(u) * sin(t) + cos(u) * cos(t))
Note: Although it might look like similar to a 3D tumble it isn't. In a 3D tumble there are two rotations but each changes the axis of the other's rotation while they spin.
Now if I can only work out how to connect the front of the Klein Strip to these rear cross sections...
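For concreteness, here is a small Python check of these equations (unit radius assumed). It confirms the described rotation: the blue diameter's endpoint (u = 0) swings from the z-axis to the y-axis as t goes from 0 to 90°, the red diameter's endpoint (u = π/2) swings from the w-axis to the z-axis, and every point keeps x = 0, so the whole motion stays inside one 3-space:

```python
import math

def rear_cross_section(u, t):
    """Point (w, x, y, z) on the rear cross-section; u steps around
    the curve, t steps through the varieties (unit radius assumed)."""
    w = math.sin(u) * math.cos(t)
    x = 0.0
    y = math.cos(u) * math.sin(t)
    z = math.sin(u) * math.sin(t) + math.cos(u) * math.cos(t)
    return (w, x, y, z)

def close(p, q, tol=1e-12):
    return all(abs(a - b) < tol for a, b in zip(p, q))

# Blue endpoint (u = 0): z-axis at t = 0, y-axis at t = pi/2.
assert close(rear_cross_section(0.0, 0.0), (0.0, 0.0, 0.0, 1.0))
assert close(rear_cross_section(0.0, math.pi / 2), (0.0, 0.0, 1.0, 0.0))
# Red endpoint (u = pi/2): w-axis at t = 0, z-axis at t = pi/2.
assert close(rear_cross_section(math.pi / 2, 0.0), (1.0, 0.0, 0.0, 0.0))
assert close(rear_cross_section(math.pi / 2, math.pi / 2), (0.0, 0.0, 0.0, 1.0))
```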
Re: Traversable Klein 'bottle' paths
Ok, gonegahgah. Don't really get what is going on here. But it's a movement in 3D, definitely. 3 coordinates are involved here (x=0).
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
That's cool. I'll show the working one day soon.
I realised I could use the same formula on different axes to render something similar to what I used to have.
In this I've added the w-axis to the z-axis. Otherwise this should be one version of the Klein Strip.
If I could have rainbow colours for the w-axis (and still use the x-axis which is otherwise only used for stepping around the ring) then it would be better.
Anything you see bulging in the x-axis should actually be bulging in the w-axis.
Anything bulging in the y-axis and z-axis is bulging where it should be.
This is only one version of 360° of versions. I should be able to do another version (90° out from this one). I'll do that tomorrow.
Then somehow I need to work out a general version to cater for all the combinations...
Anyhow, the parametric equations are very similar to the last one:
w = r * sin(v) * cos(u)
x = R * sin(v * 2)
y = (R + r * (cos(u) * cos(v) + sin(u) * sin(v))) * cos(v * 2) [Note: I actually had "cos(u) * cos(v) - sin(u) * sin(v)" in this when I rendered it; that just seems to turn it into the 270° version]
z = r * sin(u) * cos(v)
In the case of the depicted version I combined w and x as: x = (6 + 2 * sin(v) * cos(u)) * sin(v * 2)
It is interesting to see the extra kink at the back on the right hand side...
You can see that I only do it for half the Klein Strip and I don't draw the second side. I should actually try that to see if it makes any difference?
I realised (now) that you are correct about the 'insides' being the walking surface Teragon.
Just as we walk on the consecutive line cross-sections of a mobius strip; the 4Der walks on the consecutive circle cross-sections of the mobius strip.
Re: Traversable Klein 'bottle' paths
I did a wireframe animation and noticed that something was wrong.
So I redid my working diagram this time rather than trying to cobble it together from the diagram I had done for the rear aspects.
That produced a variation on the formulas; but alack that still didn't fix the problem.
It took me a while and it finally dawned on me that I wasn't really adding w to x and therein lay the problem...
The formulas are now fixed and that mysterious kink is now gone; and it now more resembles the original depictions I made.
The wireframe is showing a different view then the filled image. This view seems to best show how the circle cross-section of the path rotates the entire way.
And, here are the corrected equations:
w = (r * cos(u) * cos(V))
x = (R * sin(V * 2))
y = (R + r * (sin(u) * cos(V) + cos(u) * sin(V))) * cos(V * 2)
z = r * sin(u) * sin(V)
In the animation I have put x = (r * cos(u) * cos(V)) + (R * sin(V * 2)).
Again if we could have rainbow colours then any bulge to the x-left at all parts of the ring would range to purple and any bulge to the x-right would range to red.
So the left side and the right inside would look purple ranging to the left inside and the right side looking red.
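A quick numerical check of the corrected equations (Python sketch, with R = 6 and r = 2 as in the renders) shows the rotation of the circle cross-section directly: at V = 0 the cross-section is a circle of radius r in the w-y plane, and at V = π/2 it has rotated into the y-z plane at the back of the ring:

```python
import math

R, r = 6.0, 2.0  # ring radius and tube radius used in the animations

def klein_strip(u, V):
    """Corrected Klein Strip point (w, x, y, z)."""
    w = r * math.cos(u) * math.cos(V)
    x = R * math.sin(2 * V)
    y = (R + r * (math.sin(u) * math.cos(V) + math.cos(u) * math.sin(V))) * math.cos(2 * V)
    z = r * math.sin(u) * math.sin(V)
    return (w, x, y, z)

# Front cross-section (V = 0): a circle of radius r in the w-y plane.
for k in range(8):
    u = 2 * math.pi * k / 8
    w, x, y, z = klein_strip(u, 0.0)
    assert abs(x) < 1e-9 and abs(z) < 1e-9
    assert abs(w * w + (y - R) ** 2 - r * r) < 1e-9

# Back cross-section (V = pi/2): the circle has rotated into the y-z plane.
for k in range(8):
    u = 2 * math.pi * k / 8
    w, x, y, z = klein_strip(u, math.pi / 2)
    assert abs(w) < 1e-9 and abs(x) < 1e-9
    assert abs(z * z + (y + R) ** 2 - r * r) < 1e-9
```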
Re: Traversable Klein 'bottle' paths
Even that had an error in it. I noticed as I was trying to do the Klein 0 type.
Here are the Klein 0 type and 90 type looking down at them on the ground:
That little bulge at the front is the backside of the connecting planes. In some respects it looks better just drawing the circles...
I'll add the animations of these when I get home as well as the equations.
And here they are:
The left is the Klein 0 type again and the right is the Klein 90 type. I've shown these from the best angle to show off the rotation of the cross-sections.
The left one is shown with PHI 90 and PSI 310, the right one is shown with PHI 0 and PSI 50.
Here they both are from the same angle to compare the difference between them:
If you look closely at the left one you can see that the blue line goes from sideways (w-axis) to vertical (z-axis) and the red line goes from forwards (y-axis) to sideways (w-axis).
If you look closely at the right one you can see that the blue lines goes from sideways (w-axis) to forwards (y-axis) and the red line goes from forwards (y-axis) to vertical (z-axis).
Here are the equations for the left one:
w = (r * (-sin(u) * sin(v / 2) + cos(u) * cos(v / 2)))
x = (R * sin(v))
y = (R * cos(v)) + (r * sin(u) * cos(v / 2))
z = r * cos(u) * sin(v / 2)
In the animation I've made x = (R * sin(v)) + (r * (-sin(u) * sin(v / 2) + cos(u) * cos(v / 2)))
Here are the equations for the right one:
w = (r * cos(u) * cos(v / 2))
x = (R * sin(v))
y = (R * cos(v)) + (r * (sin(u) * cos(v / 2) + cos(u) * sin(v / 2)))
z = r * sin(u) * sin(v / 2)
In the animation I've made x = (R * sin(v)) + (r * cos(u) * cos(v / 2))
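Both sets of equations can be sanity-checked numerically (Python sketch, R = 6 and r = 2 assumed as before). For each type, one full trip around the ring (v from 0 to 2π) brings the circle cross-section back onto itself rotated by half a turn, i.e. f(u, 2π) = f(u + π, 0), which is exactly the Moebius-style gluing:

```python
import math

R, r = 6.0, 2.0  # assumed radii, matching the earlier renders

def klein0(u, v):
    """The 'Klein 0 type' equations."""
    w = r * (-math.sin(u) * math.sin(v / 2) + math.cos(u) * math.cos(v / 2))
    x = R * math.sin(v)
    y = R * math.cos(v) + r * math.sin(u) * math.cos(v / 2)
    z = r * math.cos(u) * math.sin(v / 2)
    return (w, x, y, z)

def klein90(u, v):
    """The 'Klein 90 type' equations."""
    w = r * math.cos(u) * math.cos(v / 2)
    x = R * math.sin(v)
    y = R * math.cos(v) + r * (math.sin(u) * math.cos(v / 2) + math.cos(u) * math.sin(v / 2))
    z = r * math.sin(u) * math.sin(v / 2)
    return (w, x, y, z)

# After one trip around the ring the cross-section circle lands on
# itself rotated by half a turn: f(u, 2*pi) == f(u + pi, 0).
two_pi = 2 * math.pi
for f in (klein0, klein90):
    for k in range(8):
        u = two_pi * k / 8
        a = f(u, two_pi)
        b = f(u + math.pi, 0.0)
        assert all(abs(p - q) < 1e-9 for p, q in zip(a, b))
```

So the two types differ in which planes the cross-section rotates through, but both glue the circle to itself with the same half-turn after one loop.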
Re: Traversable Klein 'bottle' paths
It seems like what you call the 0° type is what I called the 90° type (which angle is the name referring to?).
And what you call the 90° ain't what I called the 0° type, but some more complex version with a tumble in it.
I must say that I can't really follow your new animations, although they're made very nicely, because the only thing that makes sense for me to define the orientation of a surface with a circular
cross section is the surface normal vector which is not shown here.
Anyway this "tumbled" band is quite some interesting object. It occurred to me that even a common Moebius strip in 3D has kind of a tumble in it, if you trace out the orientation of the surface normal
throughout the course of the strip in an outer coordinate system. The special thing about the 4D version is, that the tumble occurs even in local coordinates, i.e. if you move along the loop without
doing the twist and observe the orientation of the surface normal relative to yourself. So what the time is for a tumbling object in 3D, is a spatial dimension for this kind of tumble!
This tumbled strip even seems to have a handedness built in, even though the right-handed version may be transferred into the left-handed version by a small deformation.
Wow! I just discovered a whole new type of Moebius strips in 4D, so there may be three distinct families of Moebius strips in 4D. Originally I thought that it's impossible to derive Moebius strips
from torispheres, because three loop directions need at least 5 dimensions, but now it seems there is some special way to cut it, so you get just 2 loops with one direction each. It's a really
bizarre and complex object.
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
Some of this may seem trivial, but I guess it's not bad to start by making the concepts very clear in 3D...
Moebius 2-torus ("Moebius strip")
Full name: n-Moebius 2-torus
Family: Moebius 2-torus
Cut of a full 2-torus, twisted ½+n times
Line extruded, twisted ½+n times and closed to a loop
Cross section: Line
Open directions: 1
Closed directions: 1
Twisted directions: 1
The figure shows how a full torus is cut in order to obtain a Moebius 2-torus. You can imagine the red plane tracing out the whole loop by revolving around the black axis at the center. The position
of the red surface on the torus can be described by one angle. On the surface one dimension is reserved for the loop, indicated by the red arrow. For every position on the loop there are two
directions left. The open direction of the surface, which is cut by the thick black line, and the normal direction of the surface, indicated by the arrow in orange*. The arrow in orange may point in
any direction inside the red plane and the direction it points to may change in the course of one revolution around the loop. This is exactly what happens with a Moebius strip. Within one revolution
of the red plane through the loop, the surface normal vector does one 180°-turn, the thick black line traces out a Moebius strip.
A 180° turn is just the simplest possibility. The figure below shows the simplest three versions of the Moebius 2-torus (1-Moebius 2-torus (180°), 2-Moebius 2-torus (540°) and 3-Moebius 2-torus (900°)).
We may go one step further. The red plane in the first figure of this post defines the lateral plane of the strip, which of course depends on the position on the loop. So the orientation of the
surface normal vector inside the lateral plane is defined by just one angle and the surface may be color-coded with respect to this angle:
Now, for the schematic representation of the Moebius strip one coordinate is redundant. We can just draw a circle and color code the orientation of the surface in the lateral plane (with a certain
width of course, so that we are able to see it). Starting from the blue side and going around the loop clockwise one time, we reach the complementary color (red in this color scheme) and thus arrive
at the opposite side of the surface.
Colors can also help us to visualize the actual structure of the Moebius strip - as a 4D being would perceive it.
Here the strip lies inside the 3 dimensions perpendicular to the line of sight. Every point on the Moebius strip is the same color and therefore lies at the same distance to the beholder. Keep in
mind that the observer is nowhere in the 3-space the object is plotted in, but outside in the color direction. The pictures below show the Moebius band inclined by 90° towards the beholder. The
object now looks flat, but it has a depth. The redder, the closer to the observer; the bluer, the further away. The right version of each represents exactly the same perspective for the 4D being, but
a different one for us, as we can't perceive the whole 3D perspective at once.
*(Edit:) If the concept of a normal vector is unclear, just consult https://en.wikipedia.org/wiki/Normal_%28geometry%29 or some other basic explanation.
Last edited by Teragon on Mon Aug 01, 2016 7:59 pm, edited 6 times in total.
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
Teragon wrote:It seems like what you call the 0° type is what I called the 90° type (which angle is the name referring to?).
And what you call the 90° ain't what I called the 0° type, but some more complex version with a tumble in it.
My apologies Teragon. I went from starting the u-value stepping from the red line in my original animations to starting the u-value stepping from the blue line; so this has shifted everything 90°.
Teragon wrote:I must say that I can't really follow your new animations, although they're made very nicely, because the only thing that makes sense for me to define the orientation of a surface
with a circular cross section is the surface normal vector which is not shown here.
Looking this up I still only have a bare idea what a surface normal vector is but hopefully the following animations will help. They are the 0° (formerly 90°) and 90° (formerly 0°) from front on:
This shows the x-direction to left-right, the y-direction into the screen, and the z-direction as up/down. The x-direction basically follows the middle of the ring so I've also added the w-direction
to this.
The thinner section (or cross-over point) of the Band is the front for both and the cross-section travels left along the front around to the fatter section at the back where it travels right.
You'll see for both that the blue line starts off horizontal (because it is w=-2...2, y=0, z=0) and the red line starts off pointing into the screen (because it is w=0, y=-2...2, z=0).
The left image shows: blue line rotating in the vertical from being full w-sideways at front to vertical at back; and red line rotating in the horizontal from pointing into the screen y-wards at
front to being full w-sideways at back.
The right image shows: blue line rotating in the horizontal from being full w-sideways at the front to pointing into the screen y-wards at the back; and red line rotating our way from pointing into
the screen at front to being vertical at back.
The circle cross-section for each just fits itself to the orientation of the blue and red axes combined at each point.
Teragon wrote:Anyway this "tumbled" band is quite some interesting object.
Thanks Teragon. I feel we are getting closer now...
Teragon wrote:It ocurred to me that even a common Moebius strip in 3D has kind of a tumble in it, if you trace out the orientation of the surface normal throughout the course of the strip in an
outer coordinate system. The special thing about the 4D version is, that the tumble occurs even in local coordinates, i.e. if you move along the loop without doing the twist and observe the
orientation of the surface normal relative to yourself. So what the time is for a tumbling object in 3D, is a spatial dimension for this kind of tumble!
Yes it is now that you mention it. The line cross sections turn with the ring and in the horizontal at the same time!
Teragon wrote:This tumbled strip even seems to have a handedness build in, even though the righthanded version may be transferred into the lefthanded version by a small deformation.
Wow! I just discovered a whole new type of Moebius strips in 4D, so there may be three distinct families of Moebius strips in 4D. Originally I thought that it's impossible to derive Moebius
strips from torispheres, because three loop directions need at least 5 dimensions, but now it seems there is some special way to do cut it, so you get just 2 loops with one direction each. It's a
really bizarre and complex object.
There might be a scary number. That's why I'll stick to just a basic one. Even it has 360° of varieties! Though I'll certainly be interested in what you find.
Also, I put together a general equation. It might be one way (of four) to show all the basic Klein Band varieties. I want to test how the cross-section moves at some of the different angles before I
can say.
But anyhow, this is what the cobbled together equation generates. It might or might not be the true equation. I said four ways (using placed negatives) but they all ultimately produce the same
overall results anyway.
The parametric equations I cobbled together for this are:
w = (2 * (cos(t) * -sin(u) * sin(v / 2) + cos(u) * cos(v / 2)))
x = (6 * sin(v))
y = (6 * cos(v)) + (2 * (sin(t) * cos(u) * sin(v / 2) + sin(u) * cos(v / 2)))
z = 2 * cos(u + t) * sin(v / 2)
In the graphing program I added x and w together as: (6 * sin(v)) + (2 * (cos(t) * -sin(u) * sin(v / 2) + cos(u) * cos(v / 2)))
The v steps around the ring, u steps around each cross-section circumference, and t steps through the basic 4-twist Klein Band varieties...
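One quick way to test the cobbled-together general equation is to check that it reduces to a known special case. The Python sketch below inlines the earlier "Klein 0 type" equations (with R = 6, r = 2) for comparison and confirms that setting t = 0 in the general equation reproduces them exactly:

```python
import math

def general(u, v, t):
    """The general equation; t steps through the basic varieties."""
    w = 2 * (math.cos(t) * (-math.sin(u)) * math.sin(v / 2) + math.cos(u) * math.cos(v / 2))
    x = 6 * math.sin(v)
    y = 6 * math.cos(v) + 2 * (math.sin(t) * math.cos(u) * math.sin(v / 2) + math.sin(u) * math.cos(v / 2))
    z = 2 * math.cos(u + t) * math.sin(v / 2)
    return (w, x, y, z)

def klein0(u, v):
    """The earlier 'Klein 0 type' equations with R = 6, r = 2."""
    w = 2 * (-math.sin(u) * math.sin(v / 2) + math.cos(u) * math.cos(v / 2))
    x = 6 * math.sin(v)
    y = 6 * math.cos(v) + 2 * math.sin(u) * math.cos(v / 2)
    z = 2 * math.cos(u) * math.sin(v / 2)
    return (w, x, y, z)

# t = 0 should reproduce the Klein 0 type exactly.
for i in range(6):
    for j in range(6):
        u = 2 * math.pi * i / 6
        v = 2 * math.pi * j / 6
        a = general(u, v, 0.0)
        b = klein0(u, v)
        assert all(abs(p - q) < 1e-9 for p, q in zip(a, b))
```

At t = 90° the general equation matches the earlier "Klein 90 type" except that z changes sign, which looks like one of the "placed negatives" mentioned above.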
[BTW: Noticed your post after I posted. Very cool! I'll have more look tomorrow.]
Last edited by gonegahgah on Fri Aug 28, 2015 8:57 pm, edited 2 times in total.
Re: Traversable Klein 'bottle' paths
Teragon wrote:Anyway this "tumbled" band is quite some interesting object.
Teragon wrote:Wow! I just discovered a whole new type of Moebius strips in 4D, so there may be three distinct families of Moebius strips in 4D. Originally I thought that it's impossible to derive
Moebius strips from torispheres, because three loop directions need at least 5 dimensions, but now it seems there is some special way to cut it, so you get just 2 loops with one direction
each. It's a really bizarre and complex object.
I just realised what you were meaning Teragon.
It dawned on me from your depictions that we could have a basic 3-twist Klein Band or "Moebius 2-torus" that intersects our 3D plane as a Moebius Band.
There would only be a left-handed and a right handed variety I guess? Plus the n+½ varieties of course.
With this a path of lines would intersect our 3D-plane and each of these lines would connect to the rest of the circle cross-section that extends out into the 4th dimension.
This would be the simplest harmonic Klein Band in 4D? Is that correct?
I realised this afternoon that the equation for the 3-twist Klein Band may be as simple as:
w = r * sin(u)
x = (R + r * (cos(u) * cos(v / 2))) * sin(v)
y = (R + r * (cos(u) * cos(v / 2))) * cos(v)
z = r * (cos(u) * sin(v / 2))
This is simply the combination of my Mobius Band equations with a w-component. So for each visible line cross-section it is part of a circle cross-section that extends in the 4th dimension.
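These equations can also be checked numerically (Python sketch, R = 6 and r = 2 assumed). Here one trip around the ring glues the circle cross-section to itself with a mirror flip rather than a half-turn rotation, i.e. f(u, 2π) = f(π - u, 0), which is consistent with the 3D slice being a Moebius Band:

```python
import math

R, r = 6.0, 2.0  # assumed radii

def band(u, v):
    """The 3-twist Klein Band point (w, x, y, z)."""
    w = r * math.sin(u)
    x = (R + r * math.cos(u) * math.cos(v / 2)) * math.sin(v)
    y = (R + r * math.cos(u) * math.cos(v / 2)) * math.cos(v)
    z = r * math.cos(u) * math.sin(v / 2)
    return (w, x, y, z)

# One trip around the ring glues the circle cross-section to itself
# with a mirror flip: f(u, 2*pi) == f(pi - u, 0).
for k in range(8):
    u = 2 * math.pi * k / 8
    a = band(u, 2 * math.pi)
    b = band(math.pi - u, 0.0)
    assert all(abs(p - q) < 1e-9 for p, q in zip(a, b))
```

The gluing map u ↦ π - u is a reflection of the circle, so it reverses the circle's orientation, unlike the half-turn gluing of the earlier Klein Strip types.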
Here is the Mobius Band itself with no w-component. This is what we would see in our 3D slice of the 3twist Klein Band.
The base Mobius form leaves no direction untouched so when I graphed the Klein Band I first added the w-component to only the x-axis then only to the y-axis for comparison.
The rainbow effect would again perhaps be helpful if we had it...
The following images are (left) the 3twist Klein Band with the w-axis added to the x-axis, and (right) the same Klein Band with the w-axis added to the y-axis:
The PHI is set as 180 and PSI as 60.
Does this all look okay Teragon?
Re: Traversable Klein 'bottle' paths
I realised when doing some new pictures that 3-twist is not a good name for the simplest Klein Bands. That descriptor was probably just confusing things...
I'll try to think of a better descriptor.
Anyhow, these new pictures just show off some simple Mobius Bands with different numbers of half twists, i.e. 1 half twist, 3 half twists and 5 half twists.
The parametric equations that draw these for the number of twists are:
x = (6 + 2 * (cos(u) * cos(v * n / 2))) * sin(v)
y = (6 + 2 * (cos(u) * cos(v * n / 2))) * cos(v)
z = 2 * (cos(u) * sin(v * n / 2))
The n should always be an odd number otherwise it is not a Mobius Band.
I used a PHI of 180 and a PSI of 75 to depict these Bands from a good perspective.
The formula for the w-direction would remain as "w = r * sin(u)" for each of these if we wanted to make them into '3-plane' Klein Bands...
The '3-plane' hopefully conveys that they only really change in the 3 planes rather than in all 4 planes...
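Here's how I'd write those as a quick Python sketch (my own wording; note I've written the z line as sin(v * n / 2) so it matches the cos(v * n / 2) pattern in x and y):

```python
import math

def mobius_point(u, v, n=1, R=6.0, r=2.0):
    """Mobius Band with n half twists (n odd); returns (x, y, z)."""
    x = (R + r * math.cos(u) * math.cos(v * n / 2)) * math.sin(v)
    y = (R + r * math.cos(u) * math.cos(v * n / 2)) * math.cos(v)
    z = r * math.cos(u) * math.sin(v * n / 2)
    return (x, y, z)

def klein_point(u, v, n=1, R=6.0, r=2.0):
    """'3-plane' Klein Band: the same band with the extra w = r * sin(u)."""
    return (r * math.sin(u),) + mobius_point(u, v, n, R, r)
```

For odd n the strip closes up with a flip (the u = 0 edge at v = 2π meets the u = π edge at v = 0); for even n it closes without the flip, so it's just a twisted ring, not a Mobius Band.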
Re: Traversable Klein 'bottle' paths
Could you explain what you mean by "3-plane Klein bands" in distinction from "4-plane" Klein bands or what you mean when you say, the object "only really changes" in 3 dimensions?
gonegahgah wrote:It dawned on me from your depictions that we could have a basic 3-twist Klein Band or "Moebius 2-torus" that intersects our 3D plane as a Moebius Band.
There would only be a left-handed and a right handed variety I guess? Plus the n+½ varieties of course.
With this a path of lines would intersect our 3D-plane and each of these lines would connect to the rest of the circle cross-section that extends out into the 4th dimension.
This would be the simplest harmonic Klein Band in 4D? Is that correct?
Almost. First of all, we need either a full 2-torus (without 'Moebius') twisted in 4D to get a Moebius 3-spheritorus or we need to cut a spheritorus. The cross section is a circular one, that's
right, but the object is achiral in 4D (explained in previous posts).
I'm going to discuss the possibilities how to twist such a band in detail in some of my next posts. There are good ways to visualize them.
Firstly there is the number of twists that can vary.
Secondly there are 3 coordinates for a Moebius 3-spheritorus where the surface normal can go to, 2 perpendicular to the plane where the loop lies, one inside the plane. So the normal vector can lie perpendicular to the plane of the loop everywhere on the band (90°-band). Then there is the band, where the surface normal switches between completely in the loop-plane and completely outside of the loop-plane, just as the common Moebius strip in 3D (0°-band). There is a full 360° range of off-plane directions to go to for a 0°-band, but all these versions are identical - they're all rotated versions of the 0°-band. Finally the band can move between one direction perpendicular to the loop plane and any combination of the direction in plane and the second direction perpendicular to the loop plane, yielding an arbitrary minimum angle of the surface normal to the loop plane between 0° and 90°.
This is the 1-Moebius 90°2-torus. I'm going to visualize the 0°- and the 45°-bands for you later.
Thirdly there is the possibility that the surface normal doesn't move straight between the two extreme directions, but shows an inclination to either side towards the third possible direction halfway. This is what you call a tumble. There are chiral tumbles and achiral tumbles, but it's an artifact of the deformation of the Moebius band to a tumble, not built into the band itself. If we allow for toroidal rotations the only parameter that's relevant for the Moebius 3-spheritorus is the number of twists.
gonegahgah wrote:There might be a scary number. That's why I'll stick to just a basic one. Even it has 360° of varieties! Though I'll certainly be interested in what you find.
There are really just 90°, the rest are rotated versions of the 0-90°-bands. You know, there are only 2 basic toratopes in 3D: The 2-torus and the sphere. It is impossible to get a Moebius strip out of the sphere, even in higher dimensions. In 4D there are 5 basic toratopes: The spheritorus, the 3-torus, the tiger, the torisphere and the glome. These are the possible shapes to start from. That's all. Everything else would have to be some variation of these basic shapes. According to my level of knowledge each of these figures can produce cuts that are Moebius strips. Whether two or more of them produce the same Moebius strips is another story. The spheritoric Moebius bands have just one loop, but an additional lateral dimension. That's why there are so many possibilities to twist them and they also occur in different shapes. Having two looped directions, the rest of the Moebius strips in 4D are simpler in this regard, but harder to visualize.
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
I'll go through a few questions at a time so that I can understand it better... (I also have to sing early in the morning tomorrow and also have a concert to sing in in the afternoon!)
Teragon wrote:Could you explain what you mean by "3-plane Klein bands" in distinction from "4-plane" Klein bands or what you mean when you say, the object "only really changes" in 3 dimensions?
I'm not quite sure to be honest. I need to look at the rotations a bit better; which I'll do here in this post...
Are you happy with a 4D-shape that passes through our 3D-plane appearing to us only as a Mobius Band?
These equations would seem to suggest that it is okay: w = r * sin(u), x = (R + r * (cos(u) * cos(v / 2))) * sin(v), y = (R + r * (cos(u) * cos(v / 2))) * cos(v), z = r * (cos(u) * sin(v / 2))
Do these equations make sense?
If you are happy that this is a shape and that it passes through our 3D-plane as a Mobius Band then the next question is: Is it a Klein Band?
Let's take a look...
It appears the red-line moves in a circle cross-section from facing into the screen (y-axis) at the front to being vertical at the back (z-axis) ie.
at v=0 (0°), u=0 (0°), π (180°): (w,x,y,z) = (0,0,8,0), (0,0,4,0) ie. red-line is 2r in the y-axis at the front
at v=π/2 (90°), u=0 (0°), π (180°): (w,x,y,z) = (0,7.4,0,1.4), (0,4.6,0,-1.4) ie. red-line is 2r at a 45° angle in x-z
at v=π (180°), u=0 (0°), π (180°): (w,x,y,z) = (0,0,-6,2), (0,0,-6,-2) ie. red-line is 2r in the z-axis at the back
It appears the blue-axis doesn't change its orientation ie.
at v=0 (0°), u=π/2 (90°), 3π/2 (270°): (w,x,y,z) = (2,0,6,0), (-2,0,6,0) ie. blue-line is 2r in the w-axis at the front
at v=π/2 (90°), u=π/2 (90°), 3π/2 (270°): (w,x,y,z) = (2,6,0,0), (-2,6,0,0) ie. blue-line is 2r in the w-axis 1/4 of the way around
at v=π (180°), u=π/2 (90°), 3π/2 (270°): (w,x,y,z) = (2,0,-6,0), (-2,0,-6,0) ie. blue-line is 2r in the w-axis at the back
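Here's a little Python script that recomputes those red-line and blue-line endpoints from the equations (assuming R = 6 and r = 2), in case anyone wants to verify the arithmetic:

```python
import math

def point(u, v, R=6.0, r=2.0):
    # (w, x, y, z) for the proposed Klein Band equations
    w = r * math.sin(u)
    x = (R + r * math.cos(u) * math.cos(v / 2)) * math.sin(v)
    y = (R + r * math.cos(u) * math.cos(v / 2)) * math.cos(v)
    z = r * math.cos(u) * math.sin(v / 2)
    return (w, x, y, z)

# Red line (u = 0 and u = pi) at the front, a quarter of the way around, and the back:
front = [point(u, 0.0) for u in (0.0, math.pi)]
quarter = [point(u, math.pi / 2) for u in (0.0, math.pi)]
back = [point(u, math.pi) for u in (0.0, math.pi)]
# Blue line (u = pi/2 and u = 3*pi/2) at the same three stations:
blue = [point(u, v) for v in (0.0, math.pi / 2, math.pi)
        for u in (math.pi / 2, 3 * math.pi / 2)]
```

Running this gives front ≈ (0, 0, 8, 0) and (0, 0, 4, 0), quarter ≈ (0, 7.41, 0, 1.41) and (0, 4.59, 0, -1.41), back ≈ (0, 0, -6, ±2), while every blue-line point keeps w = ±2: the red line swings from y to z while the blue line stays locked along w.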
So the red-axis is changing orientation in our 3D-slice for each cross-section while the blue-axis remains locked.
In 4D a flat ground is perpendicular to the sky no matter what angle line we take in the ground 'cube'.
I extrapolate from that that even though the red lines are different orientations they are still all perpendicular to the rigid blue-axis.
And because they each intersect the blue-axis midway, they can all form legitimate circle cross-sections.
Also, the shape described by those circles above goes from flat on the ground at the front to on its side at the back.
I believe that this means that we should be able to conclude that it is a Klein Band.
If that is all correct then each cross-section around the ring is only changing its one orientation axis and that is within 3D.
So that's why I referred to it as a 3-plane Klein Band.
I have to go get ready for tomorrow. Otherwise I'd like to set out a similar table for one of my other Klein shapes.
What it does show for them is that both axes are changing and so the red and blue axes - which are the rotation - are changing through the whole 4D space.
So that's why I referred to them as 4-plane Klein Bands.
Does that all sound correct? Is that a suitable reason to call them such or would there be a better descriptor?
I'll look at this more soon...
Re: Traversable Klein 'bottle' paths
I can start to see what you mean though Teragon.
If we build a path and put rectangle pavers down then it doesn't matter if they are left to right or right to left when laying them.
Equally if we put straight round pipes together it doesn't matter what their rotation is to each other.
So the 3-plane Klein band may just be a simplified version of one 4-plane Klein band... Is that correct?
[Edit: Although looking at the animation of the 4-plane Klein Band that twists to a cross-circle in the y&z-planes at the back; that doesn't appear to be the case. The 3-plane and 4-plane Klein bands seem to be distinct...]
Re: Traversable Klein 'bottle' paths
It would also seem that you could have matching 3-plane Klein Band varieties for each type of Mobius Band that we have. ie. 1/2 twist, 3/2 twist, 5/2 twist, etc.
Even more interesting, it would seem that you can have 4-plane Klein Bands with different twists for the blue and red-lines.
You could have 1/2 red twist + 1/2 blue twist, 1 red twist + 1/2 blue twist, 3/2 red twist + 1/2 blue twist, 1 red twist + 3/2 blue twist, 1/2 red twist + 1 blue twist, etc...
Our Mobius Bands only have one direction of twist but a Klein Band has two directions of available twist. Does that sound okay Teragon?
Re: Traversable Klein 'bottle' paths
gonegahgah wrote:Are you happy with a 4D-shape that passes through our 3D-plane appearing to us only as a Mobius Band?
These equations would seem to suggest that it is okay: w = r * sin(u), x = (R + r * (cos(u) * cos(v / 2))) * sin(v), y = (R + r * (cos(u) * cos(v / 2))) * cos(v), z = r * (cos(u) * sin(v / 2))
Do these equations make sense?
I see, that would be a 0°-band, looking like an ordinary Moebius band, just growing in width (r) and shrinking again (but only if it passes our 3-plane under the proper angle).
The form of the equations looks very promising, but you can't just go and plot 3 random coordinates out of the four, because there are only two parameters and the way the surface cuts the shown cross
section doesn't always make sense (just try). As a basis for further calculations I suggest the following parameter form:
x(t,u,v) = sin(v)*(r(2u-1)*sin(v/2)*cos(t)-R)
y(t,u,v) = cos(v)*(r(1-2u)*sin(v/2)*cos(t)+R)
z(t,u,v) = r(2u-1)*sin(v/2)*cos(t)
w(t,u,v) = r*sin(t)
v: [0,2Pi]
u: [0,1]
t: [0, 2Pi]
If you want to plot a cross section in the x,y,z-plane that's moving in w I suggest this form (t is the time coordinate in this case):
x(t,u,v) = sin(v)*(r(2u-1)*sin(v/2)*sin(t)-R)
y(t,u,v) = cos(v)*(r(1-2u)*sin(v/2)*sin(t)+R)
z(t,u,v) = r(2u-1)*sin(v/2)*sin(t)
v: [0,2Pi]
u: [0,1]
t: [0, Pi]
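A Python transcription of the first parameter form above (my own sketch, again using R = 6 and r = 2):

```python
import math

def teragon_point(t, u, v, R=6.0, r=2.0):
    """Teragon's parameter form for the 0-degree band.

    v in [0, 2*pi] runs around the loop, u in [0, 1] spans the band
    width, t in [0, 2*pi] sweeps the circular 4th coordinate.
    Returns (x, y, z, w)."""
    a = r * (2 * u - 1) * math.sin(v / 2) * math.cos(t)
    x = math.sin(v) * (a - R)
    y = math.cos(v) * (R - a)   # r(1 - 2u) = -r(2u - 1)
    z = a
    w = r * math.sin(t)
    return (x, y, z, w)
```

The sin(v/2) factor is what makes the visible band's width grow from zero at v = 0 up to r at v = π and shrink again, matching the "growing in width and shrinking again" description above.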
gonegahgah wrote:Also, the shape described by those circles above goes from flat on the ground at the front to on its side at the back.
I believe that this means that we should be able to conclude that it is a Klein Band.
If that is all correct then each cross-section around the ring is only changing its one orientation axis and that is within 3D.
So that's why I referred to it as a 3-plane Klein Band.
I have to go get ready for tomorrow. Otherwise I'd like to set out a similar table for one of my other Klein shapes.
What it does show for them is that both axes are changing and so the red and blue axes - which are the rotation - are changing through the whole 4D space.
So that's why I referred to them as 4-plane Klein Bands.
Does that all sound correct? Is that a suitable reason to call them such or would there be a better descriptor?
I'll look at this more soon...
It's fine to make use of different models to get a better picture, but one should be aware of their limitations and how they fit together as well. Your distinction is based on the concept of surface tangential vectors, which I think may be misleading at some points and is rather complicated. As I pointed out more than once, defining vectors inside a rotationally symmetric surface is redundant, because their positions inside the surface are not defined by any feature and thus are meaningless, if taken by themselves. More precisely there is one redundant degree of freedom in the description (in 3D; in 4D there are even two): the red and the blue bar can swap sides inside the surface, rotate by some angle or stay the same within one loop - the surface they describe is the same in any case. I could redefine the path of the blue and the red bar to describe the same object and get different results for what is a 3-plane and what is a 4-plane. In fact you can define the red and blue line in a way that they show a tumble for any kind of twist, namely when one of them goes at 180° where the other one has started at 0°. Our cross section may be described by either one normal vector or a second rank tensor, but taking two vectors is dangerous, if we're not careful enough.
When I look at the surface normal vector instead, it traces out a 2-surface in any case. This surface lies in 3D in the case of the 0°-bands (and also for the versions between 0° and 90°), probably in 4D in case of the tumbled band, but only in 2D in case of the 90° band. So the outcome is similar, although its definition is a different one.
In any case it's a worthwhile observation that there is any number 1/2+n possible for both rotations. I don't understand this kind of tumbles yet, so I can't tell how to characterize them systematically. The problem is that there are many other possibilities to twist a band without a continuous tumble. In this case the sense of rotation would simply change throughout the course of the loop. For example the surface normal could go from z at 0° to y at 90°, to w at 180°, to -y at 270° and to -z at 360°. Or imagine a symmetric tumble, where the red line rotates in a normal manner, but the blue line reverts its sense of rotation halfway! So I've got no clear classification for all types of possible twists, but as you'll see later, what I can offer is a way to depict a twist visually in a simple way!
From my charts you will (in theory) be able to see
- if two bands are identical (maybe it's not that obvious in every case, I'll have to think about it)
- which minimum angle the surface normal confines to the plane of the loop
- if a band is simple or twisted in some more complicated way
- if the band is chiral or achiral
- the full range of possible twists that lead to Moebius strips
Some twists can get unclear on the other hand, as lines start to overlie, e.g. the 1 1/2-fold twist.
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
I look forward to your category and comparison chart Teragon.
I'm getting ready for our convention in Melbourne so I'll be a bit scarce for a while but I'll keep checking in...
Re: Traversable Klein 'bottle' paths
Just check the forum once in a while. There are many things to do off the computer too, so I'm writing every now and then.
Moebius 3-spheritorus (gonegahgah's Moebius strip)
Full name: x°n-Moebius 3-torus
Family: Moebius 3-spheritorus
Cut of a full 3-spheritorus, twisted ½+n times
Circle extruded, twisted ½+n times and closed to a loop
Cross section: Circle
Open directions: 2
Closed directions: 1
Twisted directions: 1
Chiral (90°) / achiral (0°)
We are again cutting a torus, but now a 4D spheritorus. The first picture shows a cross section through the spheritorus.
Again the red plane is moving around the loop of the torus, but now it's actually a 3-plane and we can see only a 2D cross section of it. The red arrow is again normal to the red plane, always
pointing in the direction of the loop. The black line inside the torus indicates how the torus may be cut at this location, but as we can only see 3 out of the 4 dimensions we cannot see all the
possible orientations of the cut at this point.
So now that we know where we are on the loop we continue by replacing the direction of the red arrow by the 4th dimension and get this picture:
We see now that the red plane is actually 3D (for the sake of clarity only drawn inside the torus where it intersects). The axis the red plane revolves around in the first picture is actually a plane
/2-line that shows up here as the grey plane between the two spheres. What is actually cutting the spheritorus at any position of the red plane is the dark red plane (the thick black line in the
first picture was only a cross section of it). The surface normal, shown as an orange arrow, has two degrees of freedom - it can trace out any path on the red sphere within one revolution around the
spheritorus as long as it ends 180° from where it originated.
If the orange arrow is pointing in the direction of the grey line it's lying inside the plane of the loop (0°-direction). If the orange arrow is pointing in any direction on the grey circle, it's
perpendicular to the plane of the loop (90°-direction). We see now that in 4D the surface normal can do a twist fully outside the plane of the loop. Mind that the 0°-direction is dependent on the
position on the torus in an outer coordinate system, while the 90°-directions are fixed.
We could depict the kind of the twist the strip does in a color diagram as we did for the Moebius 2-torus (I'm not going to do this) with the difference that we need a more complex color scheme to
represent it. The lightness now shows the angle relative to the off-loop 2-plane, while hue shows the angle along the off-loop plane. (The figure is from the internet)
There's a far clearer way to do it. We can simply draw the path of the surface normal (orange arrow) on the red sphere while moving one time around the spheritorus.
The poles of the loop represent the 0°-directions pointing towards the center of the loop, while the equator represents the 90°-directions. The spheres on the top both represent 0°-bands. As you can
see the minimum angle to the plane of the loop here is 0°. As the equator plane is outside the plane of the loop we can rotate the sphere around the axis going through the poles. The plane of the
loop is not affected. Thus the two diagrams on the top represent the same Moebius strip and there is only one possible 0°-band!
Besides, that's why the Moebius 3-spheritorus is an achiral object. Any band with a twist from z to x can be transferred into a band with the twist going from z to -x by a rotation of 180° inside the equator plane.
The left sphere on the bottom represents a 90°-band, the right one represents a 45°-band. All these bands are represented by half great circles on a sphere. Mind that it doesn't matter where on the great circle the half circles start as we could define the starting point of our diagram anywhere on the loop.
The figure below shows Moebius bands with different tumbles, meaning that the axis of rotation itself is rotated within the course of the loop. Some of them are 0°-bands, some of them descend from the 90°-band and have minimum angles between 0 and 90°. Intuitively we might say that these tumbles do have a chirality. You cannot rotate a lefthanded loop into a righthanded loop by spinning the sphere. It may be a bit more subtle; it's not clear enough to me yet.
Identical tumbles look different depending on the starting point you choose for the diagram. What distinguishes the tumbles on the top from the tumbles in the middle is only the starting points I've
chosen for them. Of course the chirality and the inclination towards the plane of the loop also varies but it doesn't affect the shape of the curves.
There are only certain twists allowed on the sphere. If we do a second loop around the band the tumble has to continue smoothly and the surface normal has to point always in the opposite direction to where it pointed in the round before! You can easily check this if you prolong the twist up to the point where it closes itself. (I can post some pictures later if there is general interest. This post is already long enough.) Moreover it turns out that for every orientation of the "untumbled" twist there are multiple possibilities to do a tumble - the rotational axis has to do an integer number of full rotations within one loop (one half loop on the sphere diagrams). These higher modes can be seen as an analog to the different possible numbers of twists within one loop, but the condition is n instead of n+1/2. While multiple twists cannot be shown on the sphere diagrams, multiple tumbles look really nice (again maybe later).
It turns out that the spheres on the top and in the middle show classical smooth tumbles while the ones at the bottom do not. They show a kind of tumble too, but because there is only one half tumble
within one revolution around the band, the direction of the tumble changes its sense at the starting-/endpoint of the loop. There are also different modes for these tumbles, the rotational axis has
to do n+1/2 rotations. In total we may characterize the mode of the tumble by just one number m = 2n with the even numbers representing the tumbles that keep their sense of rotation ("even tumbles")
and the odd numbers representing the tumbles that change their sense of rotation ("odd tumbles").
Finally there are twists that are neither straightforward nor tumbled. Every trace on the sphere is possible, as long as every point on it (indicating a direction) mirrored on the center of the sphere gives the point we get to when we go one time around the strip.
I see now that it would be better to show the whole closed path on the sphere diagrams, i.e. two revolutions around the strip, for more than one reason. Identical curves with different starting points will look identical. The feature of every point on the curve being 180° to another point will become much clearer. (On the other hand the creator has to take care that this is actually the case.) It will become clearer whether the object is chiral or not, because you see if you can get the other version by rotating the sphere.
I guess that is a lot of information in a short text. Hope it's not too confusing. Enough for today, I'll do the rest of the Moebius 3-spheritorus another time.
Last edited by Teragon on Tue Aug 30, 2016 9:19 pm, edited 3 times in total.
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
I find myself swinging back to my original equations now but with a better understood interpretation.
As a reminder they were:
x(u,v,t) = (R + r * (sin(v / 2) * sin(u + t) * cos(t) * -sin(v / 2) + cos(u + t) * cos(v / 2))) * sin(v)
y(u,v,t) = (R + r * (sin(v / 2) * sin(u + t) * cos(t) * -sin(v / 2) + cos(u + t) * cos(v / 2))) * cos(v)
z(u,v,t) = r * (sin(v / 2) * sin(u + t) * cos(t) * cos(v / 2) + cos(u + t) * sin(v / 2))
where 0 ≤ t ≤ 2π (though that doesn't really matter as the pattern repeats) which looked like:
My interpretation now is that these are just middle-section 3D cuts of the varieties of Klein Strip available in 4D.
You could call it a footprint if the Klein Strip were embedded in the ground up to its middle.
I'll try to explain this soon but a couple of new realisations have cemented this.
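To make the equations above concrete, here they are as a single Python function (my transcription, with R = 6 and r = 2; t picks out the variety):

```python
import math

def klein_slice(u, v, t, R=6.0, r=2.0):
    """One 3D middle-slice point of the t-th Klein variety.

    u sweeps the cross-section, v runs around the ring, t selects
    the variety; returns (x, y, z)."""
    common = (math.sin(v / 2) * math.sin(u + t) * math.cos(t) * -math.sin(v / 2)
              + math.cos(u + t) * math.cos(v / 2))
    x = (R + r * common) * math.sin(v)
    y = (R + r * common) * math.cos(v)
    z = r * (math.sin(v / 2) * math.sin(u + t) * math.cos(t) * math.cos(v / 2)
             + math.cos(u + t) * math.sin(v / 2))
    return (x, y, z)
```

As noted above, the pattern repeats: t and t + 2π give exactly the same point, so sweeping t over [0, 2π] covers every variety.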
The only way for a 2Der to view a whole cross-section of a Mobius Strip is to stand it on its side.
If they then embed it into their 2D slice at midway the Mobius Strip will appear as a simple circle.
If the Mobius Strip moves our 3D left or right then it will appear to become an incomplete almost circular line.
Something similar to:
Which decreases at slower intervals to a dot and then disappears.
We figure a 4D Klein Strip should do a similar thing in our 3D world.
This time however we don't have to stand it on its side; we can lay it flat.
That aside, if the Klein Strip matches its Mobius Strip counterparts, we should get a plane figure in our 3D slice.
The cool thing is that if their orientations match then in our 3D middle slice of the Klein we will see a 3D Mobius Strip.
How cool is that! Which we get to see in the above animations (at what would be 90° and 270°).
Moving the Klein Strip sideways in 4D would see a similar effect as well.
The full Mobius Strip we see would separate at its flat point and shrink towards the opposite side.
One end would appear to move outwards slightly and the other end would appear to move inwards.
The opposite side would stay in place but grow narrower.
The whole path would grow narrower overall as well again to nothing by the opposite side.
But, what about the other angles of orientation into 4D?
A Mobius Strip can have only left and right twists but a Klein Strip can twist in any and all of the 360° of sideways available in 4D.
Well I figured that if we were keeping a plain slice philosophy then what can be the only results for the middle slice?
The only result would be that we would morph (for the different varieties) between seeing a Klein Strip and a simple path ring.
I considered this seriously...
I then considered what this would look like if we moved those varieties sideways in 4D...
I've come to the conclusion that they would appear no different to the 4Der than a 3D-donut no matter what we do.
So to me this means that a flat ring path will not be a 3D slice of a Klein Strip.
I believe such transformations would appear to the 4Der as a Mobius strip morphing to a 3D-donut and back again.
So again, what about the other 360° of varieties of a Klein Strip. How would these appear to us?
Well, if we consider the 0° (180° & 360°) phase of my animation above...
If we were to take them as a 4D footprint (rather than our 3D slice) and you were to project them up vertically, what would you get?
The flat front, as a footprint to the 4D, is like us trying to walk on a line. To them it is an edge.
If we extend the flat front upwards (& downwards) it becomes a wall (a very thin wall, but so is the back of a Mobius Strip).
The circular opposite side, as a footprint to the 4D, is like a path to them.
If we don't extend that path upwards it remains a path.
So here we then have an example of an object that is a path at one orientation and a wall at the opposite angle.
Sounds very like a Klein Strip. My conclusion is that it is.
As we know 4D footprints and 3D slices are very similar objects. They are just taking a different set of 3-axes.
So my conclusion is that such a 3D slice (ribbon on one side; cylinder on the other) can in fact be a 3D slice of two varieties of Klein Strip.
My discussions with Teragon had convinced me that I was wrong.
These further dissections of the matter in my mind however lead me to believe that the formulas were actually correct all along if just a little misconstrued on my part.
Instead of the animation being various rotated slices of the same Klein strip I now conclude that it is just middle slices of all the 360° of Klein varieties.
I should add that without my discussions with Teragon I may never have been able to get back around to what is hopefully a more correct interpretation of my original idea.
I would welcome recommencing these discussions, if I may, to determine whether that is now the correct conclusion...
It may take some time but hopefully I can submit supporting pictures for the above; when I can work them out.
Re: Traversable Klein 'bottle' paths
I should add the following comparisons too:
This one extends into the 4th dimension primarily from flat front progressively rotating around to extending only up/downwards (as can be seen in our 3D slice above).
The back section which is (and we see as) a 3D-cylinder forms the thin Klein wall (no extension into 4D sideways).
The front section (we see as a line) is part of the Klein foot path (a cylinder unseen off into 4D sideways).
This one extends into the 4th dimension from all points.
The back section is a thin 2D wall in our view and is a cylinder wall into the 4th dimension.
The front section (we see as a line) is part of the Klein foot path (a cylinder unseen off into 4D sideways).
Both of these varieties have a footpath at the front and a wall at the back! Cool.
However, they are not identical to each other. They are actually only two varieties of a unique circle (or semi-circle) of varieties...
If you paint one half side one colour and the other half side another colour then I guess you could consider it to be a full circle of varieties then.
Re: Traversable Klein 'bottle' paths
The following parametric equations maybe depict the sideways movement through the Klein Strips:
x(u,v,s,t) = (R + r * cos(s) * (sin(v / 2) * sin(u + t) * cos(t) * -sin(v/2) + cos(u + t) * cos(v / 2))) * sin(v * cos(s))
y(u,v,s,t) = (R + r *cos(s) * (sin(v / 2) * sin(u + t) * cos(t) * -sin(v/2) + cos(u + t) * cos(v / 2))) * cos(v * cos(s))
z(u,v,s,t) = r * cos(s) * (sin(v / 2) * sin(u + t) * cos(t) * cos(v/2) + cos(u + t) * sin(v / 2))
where t is the variety of 4D Klein Ring and s steps us from the ana side of the object to the kata side of the object.
ie. 0 ≤ t ≤ 2π and -π/2 ≤ s ≤ π/2
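A Python transcription of these (same R = 6, r = 2 assumption as before). Two things fall out numerically: at s = 0 it reduces to the middle-slice family, and at s = ±π/2 every point collapses to the single point (0, R, 0), so the shape vanishes to a point rather than shrinking like a proper cross-section:

```python
import math

def klein_4d_slice(u, v, s, t, R=6.0, r=2.0):
    """Point of the t-th Klein variety at sideways position s.

    s = 0 is the middle slice; s = +/-pi/2 are the ana/kata extremes.
    Returns (x, y, z)."""
    common = (math.sin(v / 2) * math.sin(u + t) * math.cos(t) * -math.sin(v / 2)
              + math.cos(u + t) * math.cos(v / 2))
    radial = R + r * math.cos(s) * common
    x = radial * math.sin(v * math.cos(s))
    y = radial * math.cos(v * math.cos(s))
    z = r * math.cos(s) * (math.sin(v / 2) * math.sin(u + t) * math.cos(t) * math.cos(v / 2)
                           + math.cos(u + t) * math.sin(v / 2))
    return (x, y, z)
```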
The following is possibly an animation of the type 0 Klein Ring passing 4D sideways through our 3D space:
The following is possibly an animation of the type 90 Klein Ring passing 4D sideways through our 3D space:
I'll have to look at this closer but its heading in the right direction I hope...
I realise, looking at this again, that there is definitely one error to correct so far...
I'll try changing the equations tomorrow to:
x(u,v,s,t) = (R + r * cos(s) * (sin(v / 2) * sin(u + t) * cos(t) * -sin(v/2) + cos(u + t) * cos(v / 2))) * sin(v * cos(s * cos(t)))
y(u,v,s,t) = (R + r *cos(s) * (sin(v / 2) * sin(u + t) * cos(t) * -sin(v/2) + cos(u + t) * cos(v / 2))) * cos(v * cos(s * cos(t)))
z(u,v,s,t) = r * cos(s) * (sin(v / 2) * sin(u + t) * cos(t) * cos(v/2) + cos(u + t) * sin(v / 2))
I should explain the parts of the equations one day... Surprisingly it does make some sense...
The reason for the proposed change is because the Klein Ring version, that reveals itself as a Mobius Ring in our 3D space, should not retreat to the front; as it is presently doing.
The version, that reveals itself as the fattest middle version in our 3D space, should retreat as it is more in our middle 3D space already and less in the 4th dimension than the Mobius Ring-looking version.
I suspect I also have to add an extra element that is similar to the Mobius Strip passing through a 2Der's plane space. I'll keep playing around with this...
The equation just keeps getting longer...
Re: Traversable Klein 'bottle' paths
Probably worthwhile to have an examination of the equations in their present forms:
x(u,v,s,t) = (R + r * cos(s) * (cos(u + t) * cos(v / 2) - sin(v / 2) * sin(u + t) * cos(t) * sin(v/2))) * sin(v * cos(s * cos(t)))
y(u,v,s,t) = (R + r * cos(s) * (cos(u + t) * cos(v / 2) - sin(v / 2) * sin(u + t) * cos(t) * sin(v/2))) * cos(v * cos(s * cos(t)))
z(u,v,s,t) = r * cos(s) * (cos(u + t) * sin(v / 2) + sin(v / 2) * sin(u + t) * cos(t) * cos(v/2))
You can notice they all basically share common parts, which I'll break down piece by piece below:
First off comes:
1. (R ~ This is the radius of the Klein Ring. It appears only in the x and y equations as our ring only circles in the horizontal.
Then the first common part starts with the following (which also affects the second common part):
2. + r ~ This is the maximum path cross-section radius.
3. * cos(s) ~ This is not correct and I'm working on fixing it. It was meant to simulate moving the shape sideways in 4D.
The rest of the first common part creates the central mobius plane which is common to all the simple Klein Ring varieties:
4. + (cos(u ~ Creates the base mobius plane by stepping around its cross-section.
5. + t) ~ Rotates the shape around its cross-section just for effect (1>0>-1>0>1). Doesn't affect the shape.
The first common part is then multiplied by either:
6a. * cos(v/2) - ~ The horizontal plane component of the mobius ie. full flat cross section at front to add nothing at back (1>0>-1).
6b. * sin(v/2) + ~ The vertical plane component of the mobius ie. no vertical at front to fully vertical at back (0>1>0).
The second common part that appears is to create the axial component perpendicular to the mobius plane above:
7. * (sin(v/2) ~ This allows the varieties to never add anything to the front around to adding the full variety fatness at the back (0>1>0).
8. * sin(u ~ Lets us step around each cross-section to draw its outline.
9. + t) ~ Rotates the shape around its cross-section just for effect (1>0>-1>0>1). Doesn't affect the shape.
10. * cos(t) ~ This gives us our variety of Klein Rings by making the visible bulk squish and expand (1>0>-1>0>1).
This second common part is then multiplied by either:
11a. * sin(v/2))) ~ The horizontal plane component of the fatness ie. no fatness at front to full current fatness at back (0>1>0).
11b. * cos(v/2))) ~ The vertical plane component of the fatness. Adds, in a mobius oval fashion, nothing at front to current max at back (1>0>1).
Finally for the horizontal plane the resultant x and y are pushed to their correct orientation around the ring:
12a. * sin(v ~ for the x plane to get the x component of the cross section at the current place in the ring.
12b. * cos(v ~ for the y plane to get the y component of the cross section at the current place in the ring.
with the v modified by:
13. * cos(s ~ Replicates the receding that occurs if we move our 3D slice ana or kata-wards of the object (1>0>-1>0>1).
14. * cos(t))) ~ As we step towards the two Mobius-like varieties the receding reduces to zero, as it doesn't need to recede (1>0>-1>0>1).
As mentioned, I know that (3) is wrong but I'm fairly confident that 13 and 14 are correct.
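For anyone wanting to experiment, the three equations can be evaluated directly. The sketch below is not from the original post; it simply transcribes the equations into Python as written, including the cos(s) term flagged above as not yet correct, with R and r as arbitrary sample radii.

```python
import math

R = 3.0  # ring radius (arbitrary sample value)
r = 1.0  # maximum cross-section radius (arbitrary sample value)

def klein_ring_point(u, v, s, t):
    """Evaluate the posted x, y, z equations at one parameter point.

    u steps around the cross-section, v around the ring,
    s is the (still-incorrect) 4D sideways term, t selects the variety."""
    cross = (math.cos(u + t) * math.cos(v / 2)
             - math.sin(v / 2) * math.sin(u + t) * math.cos(t) * math.sin(v / 2))
    ring_angle = v * math.cos(s * math.cos(t))
    x = (R + r * math.cos(s) * cross) * math.sin(ring_angle)
    y = (R + r * math.cos(s) * cross) * math.cos(ring_angle)
    z = r * math.cos(s) * (math.cos(u + t) * math.sin(v / 2)
                           + math.sin(v / 2) * math.sin(u + t) * math.cos(t) * math.cos(v / 2))
    return x, y, z
```

At u = v = s = t = 0 this returns (0, R + r, 0), the point at the front of the ring, which makes a quick sanity check before sweeping the full parameter ranges for plotting.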
The following animation shows how one Mobius axis is created and another axis is created that is perpendicular to that axis.
When you add these together you get the Klein Ring varieties:
Re: Traversable Klein 'bottle' paths
I'm late to the game, but just wanted to point out that the so-called Klein "bottle" is actually not a bottle at all, at least not in the sense a 4Der would think of a container that can hold liquid.
I think you guys have probably figured this out, but just wanted to confirm.
Basically, the only reason it's called a "bottle" is because it consists of a closed 2D surface that wraps around in 4D back upon itself with a "twist", so that it becomes non-orientable. It's pretty
much a direct analogue of the Mobius strip in 3D. In 3D, the Mobius strip is a line (expanded to a strip in order to make its ends have horizontal extent) that wraps around back upon itself, twisted
so that what was originally the "top" surface connects with what was originally the "bottom" surface. Similarly, in 4D, the Klein "bottle" is made by taking a cylindrical tube (in the 3D sense),
assigning, say, a clockwise orientation to one of its ends, and twisting it through 4D such that it connects back upon itself in counter-clockwise orientation. How is such a "twist" possible? It's
because in 4D, a 3D object can be flipped onto its mirror image (just like in 3D, a 2D object can be flipped onto its mirror image, while in 2D that's impossible). The cylindrical tube, being a 3D
object, can therefore be flipped onto its mirror image. Or, since we assume a flexible tube, one end of the tube can be twisted onto its 3D mirror image, so that it connects back to the other end in
the "wrong" flipped-ness.
But since a 3D tube in 4D can't hold 4D water -- just like a Mobius strip in 3D can't hold any 3D water either -- the Klein "bottle", being a twisted 3D tube, is actually not a bottle at all. A 2D
"surface" in 4D behaves more like a string than a surface; it doesn't divide space (i.e., separate one side of 4D space from the other, which is required in order for a container to hold fluid), and
it can be knotted, etc. 3D objects in 4D are "flat", so both the original 3D tube and the Klein "bottle" are flat objects in 4D, and cannot hold any water. The Klein "bottle" is just the 4D analogue
of a Mobius strip, really.
Re: Traversable Klein 'bottle' paths
And interestingly enough, the Klein bottle construction in 4D actually doesn't require the original 3D tubing to be hollow at all. It can be a solid 3D cylinder, and as long as it's flexible, you can
twist it in 4D such that the circular "lid" on one end is twisted onto its mirror image when it connects back to the other "lid". This produces a non-orientable 3D solid, basically the equivalent of
a "filled" Klein "bottle". Except that it's not really filled in the sense of filling with liquid, it's just the solid version of the Mobius strip vs. just a pair of parallel edges without the middle
of the strip between them.
Or another way to think about it, is that 3D objects embedded in 4D have two "sides", not in the 3D sense, but in the 4D sense of having a surface facing the 4D halfspace on one side of the
hyperplane that the 3D object sits in, vs. the opposite side of that surface facing the other 4D halfspace on the other side of the hyperplane. You may say this is the ana side and the kata side of a
3D object. The Klein bottle twist is then just a matter of twisting the 3D hyperplane such that these two sides are interchanged, and then wrapping it around so that the two ends of the cylinder
connects. So the ana side of one end connects to the kata side of the other end, and vice versa. Exactly like how a Mobius strip connects the "top" side of one end to the "bottom" side of the other
end, and vice versa.
Re: Traversable Klein 'bottle' paths
I hadn't really thought about the bottle aspect so that is interesting QuickFur.
It is fortuitous that you speak about twisting Mobius Strips as well.
As you say, a Mobius Strip can be made from a long rectangular piece of paper.
If you take this paper, give it a half twist along its length and join the two ends, voila!
The principle is similar for a Klein Ring.
You start with a long cylinder which is just a strip to a 4Der and you join the ends.
A continuous twist still needs to be involved I believe.
I don't think you can just twist a donut into the 4th dimension at one point and call it a Klein Ring.
That would be like attaching the 2D paper above like a chain and adding an afterthought twist.
It wouldn't be a Mobius Strip; and the same goes for a Klein Strip.
So the approach would be the same as you mention QuickFur.
Looking at 'perfect' Klein Rings allows us to presume the 180° of varieties.
A Mobius Strip has only two varieties which are left and right varieties.
In 4D those two varieties become one, as it is possible to rotate it 180° in 4D, leaving its mirror image in our 3D.
The same goes for the 'perfect' Klein Rings.
There are only 180° of varieties because they can be flipped through 4D to get the other 180° of identical varieties.
I've rejigged the formulas a bit because I wanted to have the drawing begin from the fattest middle.
The formulas tended to lend themselves more to the drawing beginning from the flat section.
I wanted to do this so I could give the impression of rotating the part that we see the most bulk of.
The result is the following:
The left image shows the shape starting with the fat middle cut through, one side twisted backwards and the other side forwards. We then twist this oppositely into 4D, which does rotate the whole
fat section.
So I wanted to simulate that behaviour.
I've included the image on the right to show how the two axes are created.
These are then added together to achieve the result on the left.
I'll work to add the modified formulas tomorrow...
I also want to show how these look when we move the Klein Rings sideways into 4D.
I have a picture in my head of what this will look like in our 3D space.
I just have to work out the formulas to do this.
That's partially done as I mentioned with the receding.
I just have to work out how to depict the rapid reduction correctly...
I've got a picture in my mind of how this works...
Re: Traversable Klein 'bottle' paths
Well yes, it would have to be a continuous twist, not just at a single slice. And you can't start with something already attached, since it wouldn't have the right non-orientable topology. You have
to start with unattached ends of a tube / cylinder, give it a twist, then glue the two ends together.
Anyway, in 3D there's a whole set of interesting and bizarre looping shapes that you can obtain by cutting a Mobius strip. Well, more precisely, dividing a Mobius strip, i.e., cut parallel to the
edges, not to break the loop. Depending on whether you cut it exactly halfway in the middle, or 1/3 of the way, or 1/4 of the way, etc., you can obtain various new Mobius strips and sets of
interlocking Mobius strips. A friend showed it to me once... it's pretty fascinating (and mind-boggling!). I wonder what kind of interlocking shapes we'd get if we did analogous cuttings on the Klein
"bottle" (or Klein Ring -- I like that name).
(And yes, now I'm just begging to know what happens if we cut a projective plane this way...)
Re: Traversable Klein 'bottle' paths
Thanks Quickfur. Sounds challenging. Hopefully do-able at some point in the future as well as the side stepping first suggested by ICN5D.
It just dawned on me this morning that I need to change the way I depict the rotation itself.
It occurred to me that the rotation I am showing does not give a good representation of the actual process.
I'm happy with the overall shape but somehow I need to taper/untaper, and not just transport, the drawing lines to show how the rotation more accurately occurs.
If a part of the path rotates from our space into the 4th space then it needs to appear to taper off while being replaced with path that's coming out of 4th space.
That shape itself shouldn't be the only thing that tapers, it would appear. I have to think about how that will look.
I guess once again the 2Der observing a Mobius Ring standing on its edge with the ring in their plane is the best analogy.
All the 2Der sees is what we call an edge. So painting one side of the Mobius one colour and the other side another colour is useless as they won't see our two colours.
Instead what we have to do is paint each molecule in the Mobius Ring so that they align with the paper's orientation.
Each molecule has to be painted in a rainbow fashion around its 360° in line with the paper.
So the rainbow paint is impregnated into the paper itself.
By doing this the 2D slice the 2Der sees is a rainbow. This goes from the front upwards in say a clockwise fashion and down in an anti-clockwise fashion through the colours.
If we were to do this with just two colours the line would be one colour from the middle front to the top and bottom and the second colour around the back.
The 2Der basically only sees two points that are perpendicular to the Mobius surface; that is, a point right at the front and a point opposite at the back if they get behind it.
All other points are rotated out of the surface perpendicular.
Our 3D view will do a similar thing for the 4D Klein Ring.
However there is not just one central line rotation but a plane of rotations.
So it is probably more worthwhile to use a rainbow of colours for the different untwisted angles in the cylinder path.
Then work out how these will look in our 3D slice when they are twisted through 4D in the different ways.
If we only had left and right rotations for the Klein Ring then we would only see an edge too; as in the Mobius appearing form of the Klein Ring.
But whereas we can only rotate clockwise and anti-clockwise, the 4Der can rotate in any of 360° of sideways ways.
This is why we get a greater variety of 3D views of a Klein Ring than would a 2Der of a Mobius Ring.
If a 2Der were to observe a Mobius Ring from a 4D space this would be altogether different and the number of varieties would be equal to Klein Rings.
Though the 2Der trapped to a line ring would probably see little difference once they aligned correctly.
If we were to just rotate a cylinder from our 3D space into the 4th Space along one axis we would see it ovalise along its length (if done perpendicular to the axis) until it becomes just a rectangle.
This is different to moving the object sideways into 4th Space where the cylinder would just appear to grow and shrink in size.
The rectangle would be the actual perpendicular surface to the 4Der. When looking at the full cylinder we are looking at what to them is the edge.
Same as a 2Der looking at a square. Their face is our edge.
If we treat the square as 0 thin (impossible and we wouldn't see it but for math purposes) then some interesting things arise.
If the square is rotated into our sideways, the 2Der is left no longer looking at their considered square face (edge), or even at our considered square face, but is left looking at an orientation of
the square that we cannot see.
Mathematically they are looking at a line that has greater depth than looking perpendicular to the square as we would look at it, even though we are referring to the square as 0 thin.
If you give the square some sideways depth then this is easier to understand. Looking at the face perpendicular to its plane is less deep than looking at it from an angle.
The following demonstrates:
The left 2Der looking at a square perpendicular from our perspective sees a line that has less bulk behind it than does the 2Der looking at it from an angle on the right.
But if the object is moved sideways into our 3D space perpendicular to their view it will run out faster to compensate for that.
Based upon a zero thin lower space model, in line with this, I assume this is why we see changing bulk sizes when we rotate objects into 4th space.
Also, the principle of angle of view is important I believe.
The left and right 2D viewers above don't see exactly the same line even though it is through the same cut point.
I am certain that their space sees a different orientation of that higher space line.
And the same goes for us I feel.
Unless we are looking from the 4Der's view directly perpendicular to the object we don't see exactly the same thing as them.
On any angle other than perpendicular we see a slice of that object in a way that they cannot see.
A close analogy is if we had red wood painted blue, they would always see blue, but we would see red except at the perpendicular (under a zero thin model).
So for the Klein Ring, the only part that we see the same as the 4Der are the flat areas as they are exactly as seen by the 4Der.
To them the whole thing is flat. Anything we see as not flat is our unique angle of view that is unavailable to the 4Der.
They can see more at once but not from the angles that we get to see it from.
That's why I'm looking to use more than two colours to show up the different orientations we are seeing relative to how they are rotated into 4D.
Come to think of it, I believe Taragon was doing something similar.
The main problem is that I need to represent two axes of orientation as the cylinder rotates between y-z + w-y and y-w + w-z to create the various varieties of Klein Ring.
So rather than the use of a rainbow circumference I need to use a rainbow sphere somehow to leave a trail around the Klein path...
The main thing is that I think this is needed to evoke a sense of the rotation that is involved into 4D...
What does all the above jibba jabba mean? Essentially it is that it is for us as it is for a 2Der...
If they look at the midline of our Mobius Ring they will have very little idea of the actual rotation that is occurring along that line.
Each point along the line is at a different orientation to them, but they don't have diagonal arrows to show this rotation, being only in a 2D plane themselves.
At this orientation only a single mid-point at the front and one at the back are orientated the same in both the 2Der's and our world.
They and we look at all the other points around the middle ring from a different possible exposure angle.
They see the points 'surface' from an angle that is inside the object whereas we see only the perpendicular surface of those points.
Doesn't seem like much of a difference but I think it is.
They could use a spread of dots to show that the rotation is more clockwise left to right around to the top looking up, or more clockwise right to left around to the bottom looking down (or both).
These would taper towards being closer together at the middle point in front of them to show that that is the least twisted section of the Mobius Ring.
They could alternately use colour or shade too if that makes any sense to them.
In 3D we have a little more detail to play with...
However the spiral I was using is not enough to show how the path from the front point rotates around to either up or down while the hidden perpendicular point at the front rotates around to either
forwards or back (fat form).
Nor its perpendicular counterpart where the path from the front point rotates around to hidden in the forth dimension while the hidden perpendicular point at the front rotates around to up or down
(Mobius form).
Or one of the varieties in between of course.
It is these that I want to depict and provide easier recognition of that process. I will have to think how best to do this?
For the 2Der's Mobius they could use a continuous rainbow.
For our 3D slice of a Klein Ring we could add shade as well so around to top would be lighter and around to bottom would be darker. The rainbow itself would circle between the x, y and w axis.
I don't have the luxury of co-ordinating that much colour so I'll have to think up something simpler...
Re: Traversable Klein 'bottle' paths
That's a lot of written words. For now this is just a reply to your first post. In the meantime I've updated the images in my previous two posts. I hadn't hosted them somewhere they were safe. You
might want to have a look at them again.
It’s in the nature of slicing that not all of the information about an object is obtained from a single slice. You have to move or rotate the slice through the object to get all of it. What I prefer
to do is making a projection into 3D. What you get then is one 3D image of your object seen from one specific angle.
I've written a program to visualize flat objects in 4D that way. These are objects that correspond to wires in 3D. (Working on a program that can do solid objects in 4D too.) 4D Objects are projected
onto 3D just as 3D objects are projected onto 2D when we take a photo. The 3D image is then projected onto the plane of the computer screen. In order to get a correct perception of the image we have
to make ourselves aware of the 3D shape and also what the interior looks like (flat objects don't have an interior, but solid objects do). To get a feel for it, here's just a common 3D Moebius
strip, rotating through four dimensions:
The shading helps to get the shape of the image, while the colors code the distance to the beholder (and to the volume of projection). You can also see that the closer the individual parts of the
object get, the bigger they appear. The shape of the 3D-image alternates between a Moebius band with all points at the same distance (object lying in the three lateral dimensions, which constitute the
field of vision of a 4D being) and a totally flat sheet with one close end covering the far end for a moment. After one half revolution back and front change their roles.
gonegahgah wrote:Well I figured that if we were keeping a plain slice philosophy then what can be the only results for the middle slice?
The only result would be that we would morph (for the different varieties) between seeing a Klein Strip and a simple path ring.
It depends on the mentioned twist of the Moebius strip. Only a 0°-band will yield a slice that is identical to the 3D Moebius strip. But as you state anyway in the end, every frame of your animation
is the same cut taken for a different Moebius strip.
gonegahgah wrote:I've come to the conclusion that they would appear no different to the 4Der than a 3D-donut no matter what we do.
More precisely, a 4D being would see a twisted torus. With the difference that the torus is flat to it and what looks like the interior for unversed 3D beings is actually the surface.
On the rest I do agree.
Last edited by Teragon on Thu Aug 04, 2016 12:11 pm, edited 1 time in total.
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
Considering your second post: Exactly! These are the two extreme cases of what I like to call Moebius-Spheritori - the 90° and the 0° one.
The 90°-object is the more symmetric one, as all the directions the surface normal points at look identical. The surface normal is always pointing outside of the loop. It just came to me that this
means that you could rotate the object in the plane of the loop by some angle, then rotate it by the same angle in the plane perpendicular to the plane of the loop and retain the exact same shape!
That means in the same way a torus has a rotational symmetry (=invariance under rotations), the 90°-Moebius-Spheritorus has a double-rotational symmetry (=invariance under double rotations).
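As an illustrative aside (not from the thread): a double rotation is easy to compute if you pair the four coordinates into two complex numbers, with one rotation acting in the x-y plane and the other in the z-w plane. Equal angles give the isoclinic case described above.

```python
import cmath
import math

def double_rotate(point, theta1, theta2):
    """Apply a 4D double rotation: theta1 in the x-y plane, theta2 in the z-w plane."""
    x, y, z, w = point
    p = complex(x, y) * cmath.exp(1j * theta1)  # rotation in the x-y plane
    q = complex(z, w) * cmath.exp(1j * theta2)  # rotation in the z-w plane
    return (p.real, p.imag, q.real, q.imag)

# Equal angles -> an isoclinic (double) rotation:
rotated = double_rotate((1.0, 0.0, 1.0, 0.0), math.pi / 2, math.pi / 2)
# approximately (0, 1, 0, 1): both planes rotated by the same quarter turn
```

This is only a coordinate sketch of the symmetry, not a model of the Moebius-Spheritorus surface itself.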
What is deep in our world is superficial in higher dimensions.
Re: Traversable Klein 'bottle' paths
Hi Teragon,
Nice to see you again
I'm getting there with my new depiction. The following is a hastily put together first frame.
The equations are getting longer and longer and hopefully I can tidy them down somewhat...
I've managed to get half of the new animation (one red and one blue section) mostly working now.
I think only two colours (total four sections) will be necessary but I'll see how that feels when done.
I'll keep plugging away and hopefully get it completed soon...
I've noticed already that the join at the fattest part is not as smoothly continuous as I would like.
I suspect I'll have to rejig some part of the formulas in some way.
The initial animation will at least give the idea hopefully once complete and then I can look to find what needs to be re-engineered.
Re: Traversable Klein 'bottle' paths
Teragon wrote:That's a lot of written words. For now this is just a reply to your first post. In the meantime I've updated the images in my previous two posts. I hadn't hosted them somewhere
they were safe. You might want to have a look at them again.
Thank you Teragon. When I came back to this thread I was sad to see your images gone. Looking at them again I have a better feel for them now.
Teragon wrote:It’s in the nature of slicing that not all of the information about an object is obtained from a single slice. You have to move or rotate the slice through the object to get all of it.
Absolutely, I feel I'm more on track to add sideways movement as an animations. Hopefully rotation as well at some point.
Teragon wrote:What I prefer to do is making a projection into 3D. What you get then is one 3D image of your object seen from one specific angle.
Me too, but I can't do that with what I have yet sadly.
Teragon wrote:I've written a program to visualize flat objects in 4D that way. These are objects that correspond to wires in 3D. (Working on a program that can do solid objects in 4D too.)
Awesome Teragon
Teragon wrote:4D Objects are projected onto 3D just as 3D objects are projected onto 2D when we take a photo. The 3D image is then projected onto the plane of the computer screen.
Maybe one day you can also help me to create a program to show 4D objects using my rotated projection model. That's something I hope to make one day.
Teragon wrote:In order to get a correct perception of the image we have to make ourselves aware of the 3D shape and also what the interior looks like (flat objects don't have an interior, but
solid objects do).
Like we see a square as having an interior (or two sides) but we don't think of a line in the same way; whereas a 2Der does see a line that way.
Teragon wrote:To get a feel about it, here's just a common 3D Moebius strip, rotating through four dimensions:
That also nicely shows how the Moebius strip can go from the left version to the right version by rotating it through 4th Space. [Which I notice you mention next...]
Teragon wrote:The shading helps to get the shape of the image, while the colors code the distance to the beholder (and to the volume of projection). You can also see that the closer the
individual parts of the object get, the bigger they appear. The shape of the 3D-image alternates between a Moebius band with all points at the same distance (object lying in the three lateral
dimensions, which constitute the field of vision of a 4D being) and a totally flat sheet with one close end covering the far end for a moment. After one half revolution back and front change
their roles.
These colour versions make clearer sense to me now, cool! I guess that is the process of my brain adapting to new models?
Teragon wrote:
gonegahgah wrote:I've come to the conclusion that they would appear no different to the 4Der than a 3D-donut no matter what we do.
More precisely, a 4D being would see a twisted torus. With the difference that the torus is flat to it and what looks like the interior for unversed 3D beings is actually the surface.
I've had another conclusion (or realisation) as well. We discussed a lot about interiors previously somewhere in this thread...
We spoke about how the solid part of the Klein Ring representation contained an 'inside'.
I had, for a bit, thought that they must be hollow and you explained that they weren't; and you were absolutely correct.
My new realisation is that when we see a 'solid' of a 4D object in our 3D slice that it is either one of: 1) a 4D solid, or 2) alternately that we are looking at a flat 4D object edge on.
The distinction is that the solid seen is inside but not surface. It's all about the angle of viewing.
Teragon wrote:The 90°-object is the more symmetric one, as all the directions the surface normal points at look identical. The surface normal is always pointing outside of the loop. It just came
to me that this means that you could rotate the object in the plane of the loop by some angle, then rotate it by the same angle in the plane perpendicular to the plane of the loop and retain the
exact same shape! That means in the same way a torus has a rotational symmetry (=invariance under rotations), the 90°-Moebius-Spheritorus has a double-rotational symmetry (=invariance under
double rotations).
I might need you to explain this a bit more please Teragon. | {"url":"http://hi.gher.space/forum/viewtopic.php?f=27&t=2086&start=90","timestamp":"2024-11-10T01:16:48Z","content_type":"application/xhtml+xml","content_length":"138533","record_id":"<urn:uuid:cb7bc1a5-d330-41aa-bd4f-2bafb4c6b725>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00307.warc.gz"} |
What Is Compounding Or The Refinansiering Effektiv Rente? - Magazinepaper.net
When considering whether or not to take out a loan or put money in savings, one of the very first figures that people look at is the interest rate. However, there are situations when the number that
you see isn’t always the number that you receive.
This is because there is more than one way to represent interest, and not all of them take the impact of compounding into consideration. You need to look at compounding,
because it may provide you with a more realistic picture of the rate that you are being charged.
Interest Compounding
Nominal interest rates and effective interest rates are the two most common methods to describe a given rate, whether for saving or borrowing. The concept of compounding is not taken into
consideration while calculating nominal interest, but it is for effective interest.
When you were doing mathematics in high school, you may have been exposed to the concept of compounding. The term “compounding” refers to a process in which interest is first computed on your funds,
and then the interest that was calculated is added to your initial investment. When it comes time to recalculate, it will be based on a proportion of both your original principal and the
interest that has already been accrued.
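The process just described is the standard compound-interest formula, A = P(1 + r/n)^(nt). A quick sketch with illustrative values:

```python
def compound(principal, annual_rate, periods_per_year, years):
    """A = P * (1 + r/n) ** (n * t): interest is repeatedly added back to the balance."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# $1000 at a 5% nominal rate, compounded monthly for 2 years:
amount = compound(1000, 0.05, 12, 2)
print(round(amount, 2))  # about 1104.94
```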
The frequency at which interest is compounded varies from bank to bank and lender to lender. This is described by the compounding period, which is the amount of time that passes between each
computation. Interest is sometimes compounded much more or less often than once a year. In fact, the following are all examples of typical compounding periods:
• Daily
• Weekly
• Monthly
• Quarterly
• Semiannually
If you want to receive daily compounding, the bank rate on your investment will be computed each day. If your rate is compounded semiannually, then you will only need to compute it once every six months.
The difference between simple interest and compound interest
Compound interest is applied to certain loans and savings accounts, but not all of them. Instead of compounding, some lenders employ simple interest, in which case you will only ever be
charged interest on the original principal amount.
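The contrast is easy to see numerically; this comparison is illustrative and not tied to any particular lender:

```python
def simple_total(principal, rate, years):
    # Simple interest: every period's interest is computed on the original principal only.
    return principal * (1 + rate * years)

def compound_total(principal, rate, years):
    # Annual compounding: each year's interest is added to the balance first.
    return principal * (1 + rate) ** years

# $1000 at 10% over 3 years:
print(simple_total(1000, 0.10, 3))    # about 1300.00
print(compound_total(1000, 0.10, 3))  # about 1331.00
```

The $31 gap is interest earned on earlier interest, and it widens every additional year.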
The difference between effective and nominal interest
As was said before, the impact of compounding is not taken into account when calculating a nominal bank rate. To put it another way, the calculation is based on the assumption that the compounding
period is once a year.
However, that is not the case in the majority of situations. Compounding periods that are not yearly are used by a number of lenders and institutions. If you just consider the nominal bank rate,
though, you won’t have a realistic picture of the total charges that will add up over the course of the loan. As it happens, compounding may make a significant difference in the rate.
The impact of interest compounded over time
You may recall from your high school calculus course that the interest you earn is greater when the compounding period is shorter. For illustration purposes, if you compound your earnings once per
day rather than once per month, you will accumulate more interest. Likewise, a monthly frequency will earn more than a yearly frequency would have.
Say, for example, you have multiple loans with a 10% rate, but one of them compounds once a year and the other twice a year. Even if both loans have a stated rate of 10%, the loan that is compounded
twice per year will have a higher effective yearly rate. This is because the interest is compounded more often.
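That 10% example can be checked with the usual effective-annual-rate formula, EAR = (1 + i/n)^n − 1, where i is the nominal rate and n the number of compounding periods per year. A sketch:

```python
def effective_annual_rate(nominal, periods_per_year):
    """EAR = (1 + i/n) ** n - 1."""
    return (1 + nominal / periods_per_year) ** periods_per_year - 1

annual     = effective_annual_rate(0.10, 1)  # about 0.10   -> effectively 10.00%
semiannual = effective_annual_rate(0.10, 2)  # about 0.1025 -> effectively 10.25%
```

So two loans both advertised at 10% nominal really cost 10.00% and 10.25% per year depending on whether they compound once or twice annually.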
It is essential for borrowers to be aware of the effective yearly rate or interest rate because, if they are unaware of it, they are more likely to underestimate the total cost of the loan. Also,
it’s essential for estimating how much money an investment like a corporate bond is likely to make back.
What You Can Learn From the Annual Effective Interest Rate
Both the nominal and the effective yearly rates on a CD, savings account, or loan may be advertised. Neither the compounding impact of interest nor the fees associated with these financial
instruments are reflected in the nominal rate. The effective yearly rate is what tells you the real return.
That’s why knowing your effective yearly rate is crucial in the world of finance. You will not be able to do an appropriate comparison of the different options unless you are aware of the effective
yearly rate that each one carries.
Influence of the Fraction of a Year Between Compounding Periods
The effective yearly rate grows as the compounding period shrinks: quarterly compounding returns more than semi-annual compounding, monthly returns more than quarterly, and daily returns more than monthly.
Compounding’s Capacity Restraints
Compounding can raise the effective rate only up to a point. Even if interest were compounded infinitely often — not merely every second or microsecond — the effective rate would approach a fixed limit, known as continuous compounding.
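That limit is e^r − 1 for a nominal rate r, and the convergence can be checked numerically; a small sketch assuming a 10% nominal rate:

```python
import math

def effective_annual_rate(nominal_rate, periods_per_year):
    """EAR = (1 + r/n)**n - 1 for n compounding periods per year."""
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

r = 0.10
daily      = effective_annual_rate(r, 365)                # ≈ 10.516%
per_second = effective_annual_rate(r, 365 * 24 * 3600)    # ≈ 10.517%
limit      = math.exp(r) - 1  # continuous-compounding limit, ≈ 10.517%
```

Compounding every second gains almost nothing over daily compounding: the effective rate is already pressed up against the continuous limit.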
What does it mean for an interest rate to be nominal?
The nominal rate is typically the rate announced by financial institutions. The effective rate adjusts the nominal rate for compounding and, in some cases, other costs (a comparison of the two can be seen here: https://refinansiere.net/effektiv-og-nominell-rente/). When calculating charges on a loan or deposit, compound interest takes into account not only the original principal but also all interest accrued in prior periods, so how many times you compound really matters.
The Bottom Line
Financial institutions and other organizations often promote their money market rates using the nominal rate, which accounts for neither fees nor compounding. The nominal rate is lower than the effective yearly rate because the latter factors in compounding: the more compounding periods involved, the larger the effective rate will end up.
A higher effective yearly rate is favorable for people who save or invest, but unfavorable for people who borrow. If you are charged compound interest on a purchase, that works against you; if you hold an asset that earns compound interest, it works for you. When comparing deposit and loan rates, consumers should therefore focus on the effective yearly rate rather than the attention-grabbing nominal rate.
Nuzulan Naim Zulkipli*1, Ahmad Zuri Sha’ameri2, Zulfakar Aspar3 and Ghazali Hussin4
1, 2, 3 Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor, MALAYSIA (E-mail: nnaimzulkipli@gmail.com, ahmadzuri@utm.my, zulfakar@utm.my)
4 Keysight Technologies Malaysia Sdn. Bhd, Bayan Lepas, Penang, MALAYSIA (E-mail: ghazali_hussin@keysight.com)
ABSTRACT
Error in RF power measurement is due to both additive white Gaussian noise (AWGN) and 1/f noise. Thus, the objective of this study is to implement a radio frequency power measurement processor on a field programmable gate array (FPGA) to minimize the noise in the signal and improve power measurement accuracy. The processor consists of five main modules, which are whitening, wavelet decomposition, denoising, signal recovery and power estimation. The justification for implementation on FPGA is that it provides flexibility, reprogrammability and a high rate of throughput by exploiting bit parallelism and pipelining techniques. The resulting waveforms, RTL notation, and data flow graphs are presented and compared with Matlab simulations to verify its accuracy, function and performance. Implementation results show a 1.74% percentage of error at 10 dBm signal power, utilizing about 2.84% of logic elements (LE), 3,125 out of 110,000.
Key words: RF power measurement, FPGA, Wavelet transform, Mallat’s
Radio frequency (RF) power sensors are used to measure the power of various types of signal in the fields of communication, aerospace, defence and signal detection. Depending on the application, the signal of interest can be either a continuous wave or a pulse signal. To meet the user specification, the RF power measurement should be conducted with the smallest possible error [1]. The objective of this paper is to implement an RF power measurement processor in the presence of 1/f noise. Unlike applications where noise is assumed to be additive white Gaussian noise (AWGN), noise in RF power measurement follows a 1/f power spectrum characteristic. This property results in the noise power accumulating at low frequency, which is the same frequency range as the signal of interest. To reduce power measurement error, the architecture for the RF power measurement processor is shown in Figure 1.
There are five main modules in the processor: whitening, wavelet decomposition, denoising, signal recovery and power estimation. The whitening module is implemented using decimation of the input signal [3]. In the decimation process, the first step is to analyze the noise at the sensor; from the autocorrelation function of the noise, the decimation rate for the whitening module is determined. The second step is the wavelet decomposition module, where the Haar wavelet is used as the basis function and the selected decomposition level is five. Mallat’s wavelet decomposition and reconstruction scheme is used for efficient implementation of the wavelet transform. At a given decomposition level, a pair of low pass and high pass filters called a quadrature mirror filter (QMF) is used. By using the integer form as proposed in [2], the detail coefficients Dk[n] and approximation coefficients Ak[n] can be expressed as

Ak[n] = X[2n] + X[2n+1]    (1)
Dk[n] = X[2n] − X[2n+1]    (2)

Third, denoising is done on the detail coefficients Dk[n] using universal soft thresholding. After that, the signal undergoes a signal recovery process to reconstruct the decomposed signal. Finally, the power in the signal is measured in the power estimation module. The complete RF power measurement processor is first verified in Matlab for its functionality. After that, the proposed processor is represented as a data flow graph and algorithmic state machine for register transfer level (RTL) transformation. The final step is to implement it on the FPGA and verify it using ModelSIM software.
Figure 1. Proposed architecture of RF power measurement processor
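Equations (1) and (2) amount to pairwise sums and differences of the input samples. As a rough illustration — not the authors' FPGA implementation — one level of this integer Haar decomposition and its exact inverse can be sketched as:

```python
def haar_level(x):
    """One level of the integer Haar decomposition of Eqs. (1)-(2):
    A[n] = x[2n] + x[2n+1],  D[n] = x[2n] - x[2n+1]."""
    approx = [x[2 * n] + x[2 * n + 1] for n in range(len(x) // 2)]
    detail = [x[2 * n] - x[2 * n + 1] for n in range(len(x) // 2)]
    return approx, detail

def haar_level_inverse(approx, detail):
    """Exact reconstruction: x[2n] = (A+D)/2, x[2n+1] = (A-D)/2.
    A+D and A-D are always even, so integer division is exact."""
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) // 2, (a - d) // 2]
    return x

a, d = haar_level([4, 2, 7, 7, 1, 5])  # a = [6, 14, 6], d = [2, 0, -4]
```

Cascading `haar_level` on successive approximation outputs gives the five-level decomposition described in the paper.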
To ensure that the processor is suitable for real applications, the noise used (data size 192,000 samples) was obtained from the industry collaborator, Keysight Technologies Malaysia Sdn. Bhd. Figure 2 shows the input and output waveforms of the processor implemented on FPGA. The output is obtained after 5,723 cycles: 4,960 cycles for decimation, 251 cycles for wavelet decomposition, denoising and reconstruction, and 512 cycles for power estimation. The latency of the decimation module depends on the signal length; longer signals require more cycles to process. The performance analysis is shown in Table 1. The noise power is a fixed parameter; by varying the signal power, the performance of the processor is measured at the output of the FPGA implementation. The percentage of error [4] of the proposed architecture reduces to 1.74% as the signal power increases. The proposed processor consumed 3,125 LE out of 110,000.
Table 1. Performance analysis of FPGA implementation

Power of signal (W)   Power of signal (dBm)   Power estimated by RF power processor (W)   Percentage of error (%)
100 m                 10.00                   98.26 m                                     1.74
100 µ                 -10.00                  97.65 µ                                     2.35
82.05 µ               -20.86                  91.55 µ                                     11.59

Figure 2. Output waveform of RF power measurement processor
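The error column in Table 1 appears to be the relative error between the true and estimated power, in the spirit of [4]; a quick check of the first two rows (the helper name is mine):

```python
def percent_error(true_power, estimated_power):
    """Relative measurement error, in percent."""
    return abs(true_power - estimated_power) / true_power * 100

row1 = percent_error(100, 98.26)  # mW values -> ≈ 1.74 %
row2 = percent_error(100, 97.65)  # µW values -> ≈ 2.35 %
```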
Implementation results show a significant reduction in power measurement error, down to 1.74% at 10 dBm signal power, with 2.84% utilization of LE. The implementation took 5,723 cycles to complete. As the proposed architecture consumed only a small amount of the FPGA's resources, this method can be further developed and analyzed for future use.
Acknowledgment: We would like to thank Keysight Technologies Sdn. Bhd for their support of this study in terms of finance, technical assistance and raw materials. We would also like to express our appreciation to Collaborative Research in Engineering, Science and Technology (CREST) for providing us with grants P03C1-14 and UTM VOT R.J130000.7323.4B183, which were fully utilized to complete this study.
1. M. R. Chaurasia and S. K. Patel, ‘Recent advancement in RF and microwave power measurements’, 2014 2nd International Conference on Emerging Technology Trends in Electronics, Communication and Networking, Surat, 2014, pp. 1-5.
2. R. Stojanović, D. Karadaglić, M. Mirković, et al., ‘A FPGA system for QRS complex detection based on Integer Wavelet Transform’, Measurement Science Review, 11.4 (2011): 131-138.
3. A. A. b. Zali, A. Z. b. Sha’ameri, Y. Y. Mohammed, G. b. Hussin and Chee Yen Mei, ‘Enhancement of power measurement by decimation in 1/f noise’, 2015 IEEE Student Conference on Research and Development (SCOReD), Kuala Lumpur, 2015, pp. 610-614.
4. S. Jaiswal, M. G. Wath and M. S. Ballal, ‘Modeling the measurement error of energy meter using NARX model’, 2016 IEEE International Instrumentation and Measurement Technology Conference Proceedings, Taipei, 2016, pp. 1-6.
KEYNOTE 3
ASSOC. PROF. DR. AHMAD ZURI SHA’AMERI
Assoc. Prof. Dr. Ahmad Zuri bin Sha’ameri obtained his B.Sc. in Electrical Engineering from the University of Missouri-Columbia, USA in 1984, and his M.Eng. in Electrical Engineering and PhD from UTM in 1991 and 2000 respectively. At present, he is a member of the Digital Signal and Image Processing (DSIP) Research Group and Academic Coordinator for the DSP Lab, Electronic and Computer Engineering Department, Faculty of Electrical Engineering, UTM. His research interests include signal theory, signal processing for radar and communication, signal analysis and classification, and information security. The subjects he teaches at both undergraduate and postgraduate levels include digital signal processing, advanced digital signal processing, advanced digital communications and information security. He has also conducted short courses for both government and private sectors. To date, he has published 160 papers in his areas of interest in national and international conferences and journals.
Besides applications in civil aviation, maritime, defence and homeland security, wireless positioning systems have found use in a variety of other applications and services such as enhanced-911, improved fraud detection, location-based services, location-sensitive billing, intelligent transport systems and improved traffic management. Active implementations such as radar, with their relatively high transmit power, are not suitable for indoor use and have the potential to cause interference or health hazards to users. Passive implementations such as the global navigation satellite system (GNSS) have limitations of their own, such as high power consumption and blocking of the RF signal by foliage and buildings. Since tracked objects usually emit electromagnetic signals, it is possible to perform passive wireless positioning by intercepting and analyzing these signals to perform identification based on the signal parameters or information content. By employing multiple receivers, the spatial difference between the intercepted signals can be exploited by estimating the differences in received signal strength indication (RSSI), angle of arrival (AOA) and time difference of arrival (TDOA) to determine the position of the tracked object. To complete the process, an efficient backbone network should be in place to enable an efficient and error-free data link between all the receiving stations and a centralized processing system. Current IP-based infrastructure and the internet of things (IoT) concept can be used to form the backbone network for a wireless positioning system.
Despite its benefits, a passive wireless positioning system has its own share of challenges. In a non-cooperative environment, where prior knowledge of the parameters of the possible signals within the band is unavailable, the challenges include noise in the intercepted signal, multipath fading, and signals with a combination of the following characteristics: large bandwidth, short duration, and low peak power. Thus, the objective of this presentation is to highlight possible signal reception technologies through channelized receiver configurations and high-speed scanning, signal detection and enhancement with de-noising techniques, and the use of time-frequency analysis to estimate signal parameters for identification. Once a signal is identified, the next step is to estimate the position of the tracked object by first estimating position-related parameters such as RSSI, AOA or TDOA, which are then used by the position estimation process. Field trial results will be presented from the interception of signals at the campus of Universiti Teknologi Malaysia, Johor Bahru and at Gunung Raya, Langkawi, for automatic dependent surveillance broadcast (ADS-B) signals from aircraft, short-range locating by RSSI fingerprinting, and drone locating by TDOA. The results will demonstrate the use of wireless positioning in different applications.
What Are APAs?
A Consultant Clinical Academic (CCA) or Senior Academic General Practitioner (SAGP) working full time will work ten (10) programmed activities per week.
Extra programmed activities are referred to as Additional Programmed Activities (APAs), and these may be either academic or clinical. Further information regarding APAs, applicable to staff employed on the CCA or SAGP contract, may be found in Annex E of the terms and conditions of employment.
Calculating the value of additional programmed activities:
The value of an additional programmed activity is variable depending on:
(1) your pay threshold (please see Clinical Academic salary scales), and,
(2) whether or not you hold discretionary points, a distinction award or a clinical excellence award (please click here for current rates of awards).
Calculation 1
If you do NOT hold a discretionary point, distinction award or a clinical excellence award, perform the following calculation:
• Take the value of your basic full time pay (no other payments should be added) and divide this payment by 10.
e.g. basic salary of £69,991 p.a. / 10 = APA allowance of £6,999 p.a. (per APA undertaken)
Calculation 2
If you hold a discretionary point, distinction award or a clinical excellence award, perform the following calculation:
Calculate the annual value of one additional PA per week:
• Take the value of your basic full time pay (no other payments should be added) and divide this payment by 10. This provides figure A .
e.g. basic salary of £84,154 p.a. / 10 = Figure A rate of £8,415 p.a.
Proceed to the next step:
2. If you hold a discretionary point, distinction award or a clinical excellence award, perform the following calculation to provide figure B :
• If Discretionary Points held: Divide the annual value of your DPs by 10 = figure B
• If Distinction Award held: Divide the value of 8 DPs by 10 = figure B
• If Clinical Excellence Award level 1 - 9 held: Divide the value of your CEA by 10 = figure B
• If Clinical Excellence Award level 10 - 12 held: Divide the value of level 9 by 10 = figure B
Outcome: The annual value for each APA undertaken = A + B
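The two calculations above can be expressed as one small routine. The sketch below is only illustrative: rounding down to whole pounds mirrors the worked examples on this page, but the official rounding rule should be confirmed with HR.

```python
def apa_value(basic_salary, award_value=0):
    """Annual value of one APA: one tenth of basic full-time pay
    (figure A), plus one tenth of any discretionary point,
    distinction award or clinical excellence award held (figure B)."""
    figure_a = basic_salary / 10
    figure_b = award_value / 10
    return int(figure_a + figure_b)  # assumed: rounded down to whole pounds

apa_value(69991)  # Calculation 1 example -> 6999
apa_value(84154)  # figure A in the Calculation 2 example -> 8415
```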
Error Bars in Excel (Examples) | How To Add Excel Error Bar?
Updated May 10, 2023
Error Bars in Excel
Error bars are a graphical representation of the variability or uncertainty in data. Error bars can point in the plus direction, the minus direction, or both, and can be drawn with or without caps. To insert error bars, first create an Excel chart (typically a bar or column chart) from the Insert menu tab. Then click the plus (+) Chart Elements button at the top right corner of the chart and select Error Bars from there. To customize the error bars further, choose More Options from the same menu list.
In Error Bar, we have three options which are listed as follows:
• Errors Bars with Standard Error.
• Error Bars with Percentage.
• Error Bars with Standard Deviation.
How to Add Error Bars in Excel?
Adding error bars in Excel is very simple and easy. Let’s understand how to add error bars in Excel with a few different examples.
Example #1
The standard error is a statistical term that measures how accurately a sample represents a population. A sample statistic, such as the sample mean, deviates from the actual population value, and this expected deviation is called the standard error. In Microsoft Excel, the standard error option draws bars based on the standard error of the plotted series.
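For reference, the standard error of the mean is the sample standard deviation divided by the square root of the sample size. A small sketch of the computation conceptually (not Excel's exact internal routine):

```python
import math

def standard_error(values):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    sample_sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sample_sd / math.sqrt(n)

standard_error([2, 4, 4, 4, 5, 5, 7, 9])  # ≈ 0.756
```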
In this example, we will learn how to add error bars to the chart. Let’s consider the example below, which shows the sales actual figure and forecasting figure.
Now we will apply Excel error bars by following the below steps.
• First, select the Actual and Forecast figure and the month to get the graph below.
• Go to the Insert menu and choose the Line chart.
• We will get the below line chart with an actual and forecasting figure.
• Now click on the forecast bar to select the actual line with a dotted line, as shown below.
• Once we can click on the forecast, we can see the layout menu. Click on the Layout menu to see the Error bar option.
• Click on the Error Bars option so that we will get the below option as shown below.
• In the error bar, click on the second option, “Error Bar with Standard Error“.
• Once we click on the standard error option, the graph changes as shown below. The screenshot shows standard error bars, which indicate the expected fluctuation of the Actual and Forecast figures.
Example #2
This example will teach adding an Excel Error bar with a percentage in the graphical chart.
Let’s consider the example below, which shows a chemistry lab trial test as follows.
Now calculate the Average and standard deviation to apply for the error bar by following the below steps.
• Create two new columns as Average and Standard Deviation.
• Insert Average formula =AVERAGE(B2:F2) by selecting B2: F2.
• We will get the average output as follows.
• Now calculate the Standard Deviation by using the formula =STDEVA(B2:F2)
• We will get the standard deviation as follows.
• Now select the Sucrose Concentration Column and AVG column by holding the CTRL-key.
• Go to the Insert menu. Select the Scatter Chart that needs to be displayed. Click on the scatter chart.
• We will get the result as follows.
• The dotted lines show the average trail figure.
• Now click on the blue dots to get the layout menu.
• Click on the Layout Menu to find the error bar option, as displayed in the screenshot below.
• Click on the Error Bar to get the error bar options. Click on the third option called “Error Bar with Percentage”.
• Once we click on the Error Bar with the percentage, the above graph will change, as shown below.
• In the above screenshot, we can see the error bars drawn at the 5 percent setting.
• By default, the Error Bar with Percentage takes as 5 Percentage. We can change the Error Bar Percentage by applying custom values in Excel, as shown below.
• Click on the Error Bar to see the More Error Bar Option, as shown below.
• Click on the “More Error Bar option”.
• We will get the dialogue box below; in the percentage column, we can see that Excel defaults to 5 percent.
• In this example, we will increase the Error Bar Percentage by 15 %, as shown below.
• Close the dialogue box so that the above Error bar with the percentage will increase by 15 percent.
• The screenshot below shows the error bars at the 15 percent setting, with measurements of 1.0, 1.0, 0.0, -1.3, and finally -1.4 at the blue color dots.
Example #3
In this example, we will see how to add an Error Bar with Standard Deviation in Excel.
Consider the same example where we have already calculated the Standard deviation as shown in the screenshot below showing chemistry labs test trails with average and standard deviation.
We can apply an Error Bar with a Standard Deviation by following the below steps.
• First, select the Sucrose Concentration column and Standard deviation Column.
• Go to the Insert menu to choose the chart type and select the Column chart as shown below.
• We will get the below chart as follows.
• Now click on the Blue Colour bar to get the dotted lines as shown below.
• Once we click on the selected dots, we will get the chart layout with the Error Bar option. Choose the fourth option, “Error Bars with Standard Deviation”.
• So that we will get the below standard deviation measurement chart as shown in the below screenshot.
• Follow the same process illustrated above to show the Error bar in Red Bar.
As we can notice, the error bars show a standard deviation of 0.5, 0.1, 0.2, 0.4 and 1.0
Things to Remember
• Error Bars in Excel can be applied only for chart formats.
• In Microsoft Excel, error bars are mostly used in chemistry labs and biology to describe the data set.
• While using Error bars in Excel, ensure you use the full axis, i.e., numerical values must start at zero.
Recommended Articles
This has been a guide to Error Bars in Excel. Here we discuss how to add Error Bars in Excel, along with Excel examples and a downloadable Excel template. You can also go through our other suggested
articles –
Area of a kite calculator
Area of a Kite Calculator
Select the term you want to calculate, enter values, and click calculate button to find area and perimeter of kite using area of kite calculator
Area of a Kite Calculator
Area of a Kite Calculator is an online tool that helps to quickly find the kite's area and perimeter. it used the trigonometric formula to get the kite's area as well.
Geometrical Explanation of Kite
From a geometric perspective, a kite is a quadrilateral with two pairs of adjacent sides of equal length and at least one pair of opposite angles of equal measure.
The two diagonals of a kite intersect at a point called the "kite point"; at this point the main diagonal bisects the cross diagonal.
The Diagonals of a Kite
In geometry, a diagonal is a line segment that connects two non-adjacent vertices of a polygon or quadrilateral. In the case of a kite, there are two diagonals, each connecting a pair of opposite vertices of the
The longer diagonal of a kite is called the "main diagonal" (d[1]), while the shorter diagonal is called the "cross diagonal" (d[2]).
The "kite point", where the two diagonals intersect, is the midpoint of the cross diagonal.
Formulas of area and perimeter of kite
For finding area
Area of kite = 1/2 × d[1] × d[2]
• d[1] and d[2] are the length of the diagonal of a kite.
For finding area using trigonometry
Area of Kite Using Trigonometry = a × b × Sin(C)
For finding perimeter
Perimeter of kite = 2(a + b)
• "a" and "b" are the two distinct side lengths of the kite, and "C" is the angle between the two unequal sides a and b.
How to calculate the area and perimeter of the kite?
Find the area and perimeter of a kite whose longer diagonal is 10 units and shorter diagonal is 7 units, and whose side lengths are 15 and 12.
Step 1: Write the data from the above.
d[1] = 10, d[2] = 7, a = 15, b = 12,
Area of kite =?
Perimeter of kite =?
Step 2: Write the formula of the Area and Perimeter of a kite.
Area of kite = 1/2 × d1 × d2
While, Perimeter of kite = 2(a + b)
Step 3: Put the values from “step 1” in the above formulas carefully.
Area of Kite:
d[1] = 10, d[2] = 7
Area of kite = 1/2 × d1 × d2
Area of kite = 1/2 × 10 × 7
Area of kite = 1/2 × 70
Area of kite = 35
The perimeter of Kite:
a = 15, b = 12
Perimeter of kite = 2(a + b)
Perimeter of kite = 2(15 + 12)
Perimeter of kite = 2(27)
Perimeter of kite = 54
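The worked example above can be reproduced in a few lines; this is a small sketch of the same formulas (function names are mine, not part of the calculator):

```python
import math

def kite_area(d1, d2):
    """Area of a kite from its diagonals: (1/2) * d1 * d2."""
    return 0.5 * d1 * d2

def kite_area_trig(a, b, angle_c_degrees):
    """Trigonometric form: a * b * sin(C), where C is the angle
    between the two unequal sides a and b."""
    return a * b * math.sin(math.radians(angle_c_degrees))

def kite_perimeter(a, b):
    """Perimeter of a kite with side lengths a and b: 2 * (a + b)."""
    return 2 * (a + b)

kite_area(10, 7)        # -> 35.0, matching the worked example
kite_perimeter(15, 12)  # -> 54, matching the worked example
```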
Some related examples
Here are the results of a few other examples:

Diagonal (d[1])   Diagonal (d[2])   Area of Kite   Side (a)   Side (b)   Perimeter of Kite
125               67                4187.5         34         28         124
12577             97889             6.1557e^8      2347       2889       10472
Using two conditions for a Proc Optmodel constraint
Hello all! I am using SAS Enterprise Guide. I have an optimization program with many constraints and decision variables. It is a facility location problem using multimodal transport (truck and rail), and I have created routes using locations j and k. I currently have two constraints that impose the condition that if j or k does not have rail access, it cannot ship by rail. I tried to combine the two constraints (freight_depot and freight_bioref) into one, but it only considered the first condition before the constraint and not the one I joined using AND. Is there any way I can use the AND operator to join two conditions before a constraint, so that I can merge the two constraints into one? I don't have all my constraints here; I have just included some of them.
This is how I want to write them together:
Con freight {<j,k> in ROUTES2: (s[j]=0 and l[k] =0)}:
sum{<f,'1'> in tr} Amount2[j,k,f,'1'] = 0;
Also, my code is very memory intensive, as it deals with inputs of 2000 rows and millions of constraints and decision variables. Is there any way I can solve the out-of-memory issue? Right now, after reaching an error gap of 8%, I am going out of memory.
Thank you so much for all of your help.
@RobPratt, any help from you would be greatly appreciated. Thank you again!
proc optmodel ;
*****DECLARE SETS AND PARAMETERS*******;
set <str> Fields;
set <str> Depots init {};
set <str> Bioref;
set <str> Feedstock;
set <str> Price;
set Mode = {'0', '1'};
set <num> Capacity;
set <str,str,str> TRIPLES;
set DOUBLES = setof {<i,f,p> in TRIPLES} <i,f>;
set <str,str,str> ROUTES;
set <str,str> ROUTES2;
set <str,str> ROUTES3;
set <str,str> tr;
*Parameters for fields;
num sifp{Fields,Feedstock,Price} init 0; *Available biomass in field i of feedstock f at price p;
num maif{Fields,Feedstock} init 0; *binary assignment of type f from field i;
num gp {Price} init 0; *Farmgate price including grower payment and harvest and collection cost;
num fsb {Feedstock}init 0; *field site storage in bales;
num dmloss {Feedstock}init 0; *dry matter loss from bales;
num aifp{i in Fields,f in Feedstock,p in Price} = round(sifp[i,f,p]*(1-dmloss[f])); *available biomass in the fields after considering the dry matter loss;
*Transportation parameters;
num x{Fields union Depots}; *union combines both field and depot;
num y{Fields union Depots};
num s{Depots};
num l{Bioref};
*Calculating distances between fields and depots;
num a{i in Fields, j in Depots}= (sin((y[j]-y[i])*&C/2))**2+cos(y[i]*&C)*cos(y[j]*&C)*(sin((x[j]-x[i])*&C/2))**2;
num sij{i in Fields, j in Depots} = ifn(x[i]=x[j] and y[i]=y[j],0,2*atan2(sqrt(a[i,j]),sqrt(1-a[i,j]))*3957.143);*converted the distance to miles;
num dij{i in Fields, j in Depots} = round(sij[i,j],0.01);
*Calculating distances between depots and biorefineries;
num b{j in Depots, k in Bioref}= (sin((y[k]-y[j])*&C/2))**2+cos(y[j]*&C)*cos(y[k]*&C)*(sin((x[k]-x[j])*&C/2))**2;
num sjk{j in Depots, k in Bioref} = ifn(x[j]=x[k] and y[j]=y[k],0,2*atan2(sqrt(b[j,k]),sqrt(1-b[j,k]))*3957.143);*converted the distance to miles;
num djk{j in Depots, k in Bioref} = round(sjk[j,k],0.01);
*Transportation costs;
num vfb {Feedstock}; *variable cost of transporting bales of feedstock f;
num cfb {Feedstock}; *fixed/constant cost of transporting bales of feedstock f;
num vfp {Feedstock, Mode}; *variable cost of transporting pellets of feedstock f using mode t;
num cfp {Feedstock, Mode}; *fixed/constant cost of transporting pellets of feedstock f using mode t;
num tijf {i in Fields, j in Depots, f in Feedstock}= ifn (dij[i,j] = 0, 0, cfb[f] + vfb[f]*1.2*dij[i,j]); *Transportation cost for bales;
num tjkf_pellets {j in Depots, k in Bioref, f in Feedstock, t in Mode}= ifn (djk[j,k] = 0, 0, cfp[f,t] + vfp[f,t]*1.2*djk[j,k]); *Transportation cost for pellets;
*Parameters for Depots;
num qh{Feedstock}init 0; *Handling and queuing of bales at depot: $1.21 for CS and $1.34 for SW;
num pf{Feedstock}init 0; *preprocessing cost at depot: $22.65-3.18=$19.47 for CS and $22.05-3.18=$18.77 for SW;
num ds{Feedstock}init 0; *depot storage in pellet form;
num U = 0.9; *depot utilization factor;
num max_distance1 = 200;
num max_distance2 = 1300;
num max_distance3 = 400;
num min_distance = 300;
*quality parameters;
num Ash{Feedstock};
num Moisture{Feedstock};
num Carb{Feedstock};
num Ash_dock{Feedstock};
num Moist_dock{Feedstock};
num Ash_diff{f in Feedstock} = ifn(&Max_Ash>=Ash[f],0, Ash[f] - &Max_Ash);
num Moist_diff{f in Feedstock} = ifn(&Min_moisture<=Moisture[f],0, &Min_moisture-Moisture[f]);
read data out.fields_&year into Fields = [fips] x y;
read data out.INLdepots_&year into Depots = [fips] x y s=site;
read data out.INLdepots_&year into Bioref = [fips] x y l=site;
read data out.Price into Price = [name] gp = Pr ; *gp= grower payment;
read data out.feedstockpropertiesbales_MS into Feedstock=[Feed] vfb=TranspCostVar cfb=TranspCostFixed fsb=StorageCost
qh=HandlingQueuingCost dmloss=BiomassLoss;
read data out.feedstockpropertiespellets_MS into Feedstock=[Feed] ds=StorageCost pf=ProcessingCost;
read data out.Supplymod_&year into TRIPLES=[fips Feed Price] sifp=Supply; *same as line commented below;
read data out.Supplymin_&year into [fips Feed] maif=MinAssign; *minimum assignment (binary);
read data out.quality_MS into Feedstock = [feed] Ash Moisture carb Ash_dock Moist_dock;
read data out.transport into tr=[Feed Mode] vfp=TranspCostVar cfp=TranspCostFixed;
ROUTES = {<i,f> in DOUBLES, j in DEPOTS: dij[i,j] < max_distance1};
ROUTES2 = {j in DEPOTS, k in BIOREF: djk[j,k] <max_distance2};
*****DECLARE MODEL ELEMENTS*******;
******DECISION VARIABLES*****;
var Build {DEPOTS} binary; *binary value to build depots with specific capacity;
var CapacityChunks {DEPOTS} >= 0; *integer value to determine depot capacity;
var Build2 {Bioref} binary; *binary value to build biorefineries with fixed capacity of 725000 dry tons;
var CapacityBioRef {Bioref} >= 0; *to determine biorefinery capacity;
var AnyPurchased {TRIPLES} binary;
var AmountPurchased {TRIPLES} >= 0;
var AmountShipped {ROUTES} >= 0;
var Amount2{ROUTES2,tr} >= 0;
impvar BuildingCost {j in DEPOTS} =
132717 * Build[j] + 2.297 * 25000 * CapacityChunks[j];
impvar VariableCost = sum{<i,f,p> in TRIPLES} 0.977*gp[p] * AmountPurchased[i,f,p] +
sum{<i,f,j> in ROUTES} (fsb[f]+ tijf[i,j,f]+ qh[f]+ ds[f]+ pf[f]) * AmountShipped[i,f,j]
+ sum{<j,k> in ROUTES2, <f,t> in tr}
+ (Moist_dock[f]*Moist_diff[f])) * Amount2[j,k,f,t];
impvar FixedCost = sum {j in Depots} BuildingCost[j];
*****OBJECTIVE FUNCTION******;
max Supply = sum{<j,k> in ROUTES2,<f,t> in tr} Amount2[j,k,f,t];
*Flow balance between field-depot and depot-biorefinery;
Con Depot_avail {j in Depots,f in Feedstock}:
sum {<i,(f),(j)> in ROUTES} AmountShipped[i,f,j]
= sum{<(j),k> in ROUTES2, <(f),t> in tr} Amount2[j,k,f,t];
*Depot locations which do not have freight access;
Con freight_depot {j in Depots: s[j]=0}:
sum{<f,'1'> in tr} Amount2[j,k,f,'1'] = 0;
*Biorefinery locations which do not have freight access;
Con freight_bioref {k in Bioref: l[k]=0}:
sum{<j,(k)> in ROUTES2,<f,'1'> in tr} Amount2[j,k,f,'1'] = 0;
Con rail_minimum {<j,k> in ROUTES2: djk[j,k]<= min_distance}:
sum{<f,'1'> in tr} Amount2[j,k,f,'1'] = 0;
Con truck_maximum {<j,k> in ROUTES2: djk[j,k] > max_distance3 }:
sum{<f,'0'> in tr} Amount2[j,k,f,'0'] = 0;
solve obj Supply with milp / maxtime=&MaxRunTime relobjgap=0.03; * To force the code to stop after maxtime;
Duration when interest rates fall
When interest rates rise, bond prices fall, and when interest rates fall, bond prices rise. Duration quantifies this sensitivity: a bond with a modified duration of 5 will drop about 5% in price if the market interest rate increases by 1%, and a longer-duration bond's price rises more when rates fall and falls more when rates rise. If rates rise, the value of bonds on the secondary market will fall; if rates decrease, interim coupon payments must be reinvested at the lower rates. A manager can immunize a portfolio from interest rate risk by matching the duration of assets to that of liabilities; an alternative method for measuring interest-rate risk is the duration gap. There is, however, a high level of gray area to this rule, particularly for shorter-duration bonds with high levels of credit risk. The two standard methods of measuring interest-rate risk are duration and convexity. When interest rates drop, a non-callable bond's price increases: if rates were to fall 2%, a bond whose value rises about 9% on a 1% fall would rise by approximately twice as much (18%), and convexity gauges how this sensitivity itself changes with rates. Duration can increase or decrease given an increase in the time to maturity, and if bond yields fall a lot, duration exposure can deliver large capital gains.
So when interest rates drop, and mortgage bond duration starts to shorten, the investors will scramble to compensate by adding duration to their holdings, in a phenomenon known as convexity hedging.
These many factors are calculated into one number that measures how sensitive a bond’s value may be to interest rate changes. How investors use duration. Generally, the higher a bond’s duration, the
more its value will fall as interest rates rise, because when rates go up, bond values fall and vice versa. If interest rates were to fall, the value of a bond with a longer duration would rise more
than a bond with a shorter duration. Therefore, in our example above, if interest rates were to fall by 1%, the 10-year bond with a duration of just under 9 years would rise in value by approximately
9%. On the other hand, the bond fund will increase in value by 10 percent if interest rates fall one percent. If a fund’s duration is two years, then a one percent rise in interest rates will result
in a two percent decline in the bond fund's value. A two percent increase in the bond fund's value would follow if interest rates fall by one percent. When interest rates rise, prices of traditional
bonds fall, and vice versa. So if you own a bond that is paying a 3% interest rate (in other words, yielding 3%) and rates rise, that 3% yield doesn't look as attractive. It's lost some appeal (and
value) in the marketplace. Duration is measured in years. Most bonds pay a fixed interest rate, if interest rates in general fall, the bond's interest rates become more attractive, so people will bid
up the price of the bond. Likewise, if interest rates rise, people will no longer prefer the lower fixed interest rate paid by a bond, and their price will fall. Interest rate risk arises when the
absolute level of interest rates fluctuate. Interest rate risk directly affects the values of fixed income securities. Since interest rates and bond prices are inversely related, the risk associated
with a rise in interest rates causes bond prices to fall and vice versa. Bond prices rise when interest rates fall, and bond prices fall when interest rates rise. Think of it like a price war; the
price of the bond adjusts to keep the bond competitive in light of current market interest rates. Let's see how this works. A dollars and cents example offers the best explanation of the relationship
between bond prices
To take on convexity, we need to first grasp what's known as duration. As interest rates drop, bond prices will rise and vice versa; the extent of the move depends on the bond's duration.
Long-term interest rates in Europe fell sharply in 2014 to historically low levels. For portfolios whose liabilities have a longer duration than the duration of assets, this gap widens nonlinearly with a fall in rates.
Duration and convexity are two tools used to manage the risk exposure of fixed-income investments. Duration measures the bond's sensitivity to interest rate changes.
Algorithms/Print version - Wikibooks, open books for an open world
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License".
This book covers techniques for the design and analysis of algorithms. The algorithmic techniques covered include: divide and conquer, backtracking, dynamic programming, greedy algorithms, and hill climbing.
Any solvable problem generally has at least one algorithm of each of the following types:
1. the obvious way;
2. the methodical way;
3. the clever way; and
4. the miraculous way.
On the first and most basic level, the "obvious" solution might try to exhaustively search for the answer. Intuitively, the obvious solution is the one that comes easily if you're familiar with a
programming language and the basic problem solving techniques.
The second level is the methodical level and is the heart of this book: after understanding the material presented here you should be able to methodically turn most obvious algorithms into better
performing algorithms.
The third level, the clever level, requires more understanding of the elements involved in the problem and their properties, or even a reformulation of the algorithm (e.g., numerical algorithms exploit mathematical properties that are not obvious). A clever algorithm may be hard to understand because it is not obvious that it is correct, or because it is not obvious that it runs faster than it would seem to require.
The fourth and final level of an algorithmic solution is the miraculous level: this is reserved for the rare cases where a breakthrough results in a highly non-intuitive solution.
Naturally, all of these four levels are relative, and some clever algorithms are covered in this book as well, in addition to the methodical techniques. Let's begin.
To understand the material presented in this book you need to know a programming language well enough to translate the pseudocode in this book into a working solution. You also need to know the
basics about the following data structures: arrays, stacks, queues, linked-lists, trees, heaps (also called priority queues), disjoint sets, and graphs.
Additionally, you should know some basic algorithms like binary search, a sorting algorithm (merge sort, heap sort, insertion sort, or others), and breadth-first or depth-first search.
If you are unfamiliar with any of these prerequisites you should review the material in the Data Structures book first.
When is Efficiency Important?
Not every problem requires the most efficient solution available. For our purposes, the term efficient is concerned with the time and/or space needed to perform the task. When either time or space is
abundant and cheap, it may not be worth it to pay a programmer to spend a day or so working to make a program faster.
However, here are some cases where efficiency matters:
• When resources are limited, a change in algorithms could create great savings and allow limited machines (like cell phones, embedded systems, and sensor networks) to be stretched to the frontier
of possibility.
• When the data is large a more efficient solution can mean the difference between a task finishing in two days versus two weeks. Examples include physics, genetics, web searches, massive online
stores, and network traffic analysis.
• Real time applications: the term "real time applications" actually refers to computations that give time guarantees, versus meaning "fast." However, the quality can be increased further by
choosing the appropriate algorithm.
• Computationally expensive jobs, like fluid dynamics, partial differential equations, VLSI design, and cryptanalysis can sometimes only be considered when the solution is found efficiently enough.
• When a subroutine is common and frequently used, time spent on a more efficient implementation can result in benefits for every application that uses the subroutine. Examples include sorting,
searching, pseudorandom number generation, kernel operations (not to be confused with the operating system kernel), database queries, and graphics.
In short, it's important to save time when you do not have any time to spare.
When is efficiency unimportant? Examples of these cases include prototypes that are used only a few times, cases where the input is small, when simplicity and ease of maintenance is more important,
when the area concerned is not the bottleneck, or when there's another process or area in the code that would benefit far more from efficient design and attention to the algorithm(s).
Inventing an Algorithm
Because we assume you have some knowledge of a programming language, let's start with how we translate an idea into an algorithm. Suppose you want to write a function that will take a string as input
and output the string in lowercase:
// tolower -- translates all alphabetic, uppercase characters in str to lowercase
function tolower(string str): string
What first comes to your mind when you think about solving this problem? Perhaps these two considerations crossed your mind:
1. Every character in str needs to be looked at
2. A routine for converting a single character to lower case is required
The first point is "obvious" because a character that needs to be converted might appear anywhere in the string. The second point follows from the first because, once we consider each character, we
need to do something with it. There are many ways of writing the tolower function for characters:
function tolower(character c): character
There are several ways to implement this function, including:
• look c up in a table—a character indexed array of characters that holds the lowercase version of each character.
• check if c is in the range 'A' ≤ c ≤ 'Z', and then add a numerical offset to it.
These techniques depend upon the character encoding. (As an issue of separation of concerns, perhaps the table solution is stronger because it's clearer you only need to change one part of the code.)
However such a subroutine is implemented, once we have it, the implementation of our original problem comes immediately:
// tolower -- translates all alphabetic, uppercase characters in str to lowercase
function tolower(string str): string
  let result := ""
  for-each c in str:
    result := result + tolower(c)
  return result
The loop is the result of our ability to translate "every character needs to be looked at" into our native programming language. It became obvious that the tolower subroutine call should be in the
loop's body. The final step required to bring the high-level task into an implementation was deciding how to build the resulting string. Here, we chose to start with the empty string and append
characters to the end of it.
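The same idea rendered in Python, as a minimal sketch (the names `char_tolower` and `string_tolower` are ours; the range-check variant assumes an ASCII-style encoding, as the text notes):

```python
def char_tolower(c):
    """Range-check variant: if c is in 'A'..'Z', add the numerical offset
    between 'a' and 'A'; otherwise return c unchanged."""
    if 'A' <= c <= 'Z':
        return chr(ord(c) + (ord('a') - ord('A')))
    return c

def string_tolower(s):
    """Look at every character, appending its lowercase form to the result."""
    result = ""
    for c in s:
        result = result + char_tolower(c)
    return result
```

For example, `string_tolower("Hello, World!")` returns `"hello, world!"`.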
Now suppose you want to write a function for comparing two strings that tests if they are equal, ignoring case:
// equal-ignore-case -- returns true if s and t are equal, ignoring case
function equal-ignore-case(string s, string t): boolean
These ideas might come to mind:
1. Every character in strings s and t will have to be looked at
2. A single loop iterating through both might accomplish this
3. But such a loop should be careful that the strings are of equal length first
4. If the strings aren't the same length, then they cannot be equal because the consideration of ignoring case doesn't affect how long the string is
5. A tolower subroutine for characters can be used again, and only the lowercase versions will be compared
These ideas come from familiarity both with strings and with the looping and conditional constructs in your language. The function you thought of may have looked something like this:
// equal-ignore-case -- returns true if s and t are equal, ignoring case
function equal-ignore-case(string s[1..n], string t[1..m]): boolean
  if n != m:
    return false \if they aren't the same length, they aren't equal\
  for i := 1 to n:
    if tolower(s[i]) != tolower(t[i]):
      return false
  return true
Or, if you thought of the problem in terms of functional decomposition instead of iterations, you might have thought of a function more like this:
// equal-ignore-case -- returns true if s and t are equal, ignoring case
function equal-ignore-case(string s, string t): boolean
  return tolower(s).equals(tolower(t))
Alternatively, you may feel neither of these solutions is efficient enough, and you would prefer an algorithm that only ever made one pass of s or t. The above two implementations each require
two-passes: the first version computes the lengths and then compares each character, while the second version computes the lowercase versions of the string and then compares the results to each
other. (Note that for a pair of strings, it is also possible to have the length precomputed to avoid the second pass, but that can have its own drawbacks at times.) You could imagine how similar
routines can be written to test string equality that not only ignore case, but also ignore accents.
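A single-pass variant can be sketched in Python (the name `equal_ignore_case` is ours): each character pair is lowered and compared as it is visited, so neither string is scanned twice.

```python
def equal_ignore_case(s, t):
    """True if s and t are equal ignoring case, touching each character once."""
    if len(s) != len(t):          # different lengths can never be equal
        return False
    for a, b in zip(s, t):
        if a.lower() != b.lower():
            return False          # mismatch found; stop early
    return True
```

In Python, `len` is constant-time, so this really is one pass; in a language with C-style strings, the length check would itself require a scan.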
Already you might be getting the spirit of the pseudocode in this book. The pseudocode language is not meant to be a real programming language: it abstracts away details that you would have to
contend with in any language. For example, the language doesn't assume generic types or dynamic versus static types: the idea is that it should be clear what is intended and it should not be too hard
to convert it to your native language. (However, in doing so, you might have to make some design decisions that limit the implementation to one particular type or form of data.)
There was nothing special about the techniques we used so far to solve these simple string problems: such techniques are perhaps already in your toolbox, and you may have found better or more elegant
ways of expressing the solutions in your programming language of choice. In this book, we explore general algorithmic techniques to expand your toolbox even further. Taking a naive algorithm and
making it more efficient might not come so immediately, but after understanding the material in this book you should be able to methodically apply different solutions, and, most importantly, you will
be able to ask yourself more questions about your programs. Asking questions can be just as important as answering questions, because asking the right question can help you reformulate the problem
and think outside of the box.
Understanding an Algorithm
Computer programmers need an excellent ability to reason with multiple-layered abstractions. For example, consider the following code:
function foo(integer a):
  if (a / 2) * 2 == a:
    print "The value " a " is even."
To understand this example, you need to know that integer division uses truncation and therefore when the if-condition is true then the least-significant bit in a is zero (which means that a must be
even). Additionally, the code uses a string printing API and is itself the definition of a function to be used by different modules. Depending on the programming task, you may think on the layer of
hardware, on down to the level of processor branch-prediction or the cache.
Often an understanding of binary is crucial, but many modern languages have abstractions far enough away "from the hardware" that these lower-levels are not necessary. Somewhere the abstraction
stops: most programmers don't need to think about logic gates, nor is the physics of electronics necessary. Nevertheless, an essential part of programming is multiple-layer thinking.
But stepping away from computer programs toward algorithms requires another layer: mathematics. A program may exploit properties of binary representations. An algorithm can exploit properties of set
theory or other mathematical constructs. Just as binary itself is not explicit in a program, the mathematical properties used in an algorithm are not explicit.
Typically, when an algorithm is introduced, a discussion (separate from the code) is needed to explain the mathematics used by the algorithm. For example, to really understand a greedy algorithm
(such as Dijkstra's algorithm) you should understand the mathematical properties that show how the greedy strategy is valid for all cases. In a way, you can think of the mathematics as its own kind
of subroutine that the algorithm invokes. But this "subroutine" is not present in the code because there's nothing to call. As you read this book, try to think about mathematics as an implicit subroutine.
Overview of the Techniques
The techniques this book covers are highlighted in the following overview.
• Divide and Conquer: Many problems, particularly when the input is given in an array, can be solved by cutting the problem into smaller pieces (divide), solving the smaller parts recursively (
conquer), and then combining the solutions into a single result. Examples include the merge sort and quicksort algorithms.
• Randomization: Increasingly, randomization techniques are important for many applications. This chapter presents some classical algorithms that make use of random numbers.
• Backtracking: Almost any problem can be cast in some form as a backtracking algorithm. In backtracking, you consider all possible choices to solve a problem and recursively solve subproblems
under the assumption that the choice is taken. The set of recursive calls generates a tree in which each set of choices in the tree is considered consecutively. Consequently, if a solution
exists, it will eventually be found.
Backtracking is generally an inefficient, brute-force technique, but there are optimizations that can be performed to reduce both the depth of the tree and the number of branches. The technique is
called backtracking because after one leaf of the tree is visited, the algorithm will go back up the call stack (undoing choices that didn't lead to success), and then proceed down some other branch.
To be solved with backtracking techniques, a problem needs to have some form of "self-similarity," that is, smaller instances of the problem (after a choice has been made) must resemble the original
problem. Usually, problems can be generalized to become self-similar.
• Dynamic Programming: Dynamic programming is an optimization technique for backtracking algorithms. When subproblems need to be solved repeatedly (i.e., when there are many duplicate branches in
the backtracking algorithm) time can be saved by solving all of the subproblems first (bottom-up, from smallest to largest) and storing the solution to each subproblem in a table. Thus, each
subproblem is only visited and solved once instead of repeatedly. The "programming" in this technique's name comes from programming in the sense of writing things down in a table; for example,
television programming is making a table of what shows will be broadcast when.
• Greedy Algorithms: A greedy algorithm can be useful when enough information is known about possible choices that "the best" choice can be determined without considering all possible choices.
Typically, greedy algorithms are not challenging to write, but they are difficult to prove correct.
• Hill Climbing: The final technique we explore is hill climbing. The basic idea is to start with a poor solution to a problem, and then repeatedly apply optimizations to that solution until it
becomes optimal or meets some other requirement. An important case of hill climbing is network flow. Despite the name, network flow is useful for many problems that describe relationships, so
it's not just for computer networks. Many matching problems can be solved using network flow.
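To make the backtracking-versus-dynamic-programming contrast concrete, here is a small Python sketch (our illustration, not from the book) using the classic Fibonacci numbers: the naive recursion re-solves duplicate subproblems, while the bottom-up table solves each subproblem exactly once, smallest to largest.

```python
def fib_naive(n, calls):
    """Backtracking-style recursion; `calls` counts how often we recurse."""
    calls[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, calls) + fib_naive(n - 2, calls)

def fib_dp(n):
    """Dynamic programming: fill a table bottom-up, each entry computed once."""
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

For n = 20 the naive version makes 21,891 recursive calls to produce 6765; the table version does just 19 additions.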
Algorithm and code example
Level 1 (easiest)
1. Find maximum With algorithm and several different programming languages
2. Find minimum With algorithm and several different programming languages
3. Find average With algorithm and several different programming languages
4. Find mode With algorithm and several different programming languages
5. Find total With algorithm and several different programming languages
6. Counting With algorithm and several different programming languages
7. Find mean With algorithm and several different programming languages
Level 2
1. Talking to computer Lv 1 With algorithm and several different programming languages
2. Sorting-bubble sort With algorithm and several different programming languages
Level 3
1. Talking to computer Lv 2 With algorithm and several different programming languages
Level 4
1. Talking to computer Lv 3 With algorithm and several different programming languages
2. Find approximate maximum With algorithm and several different programming languages
Level 5
1. Quicksort
Mathematical Background
Before we begin learning algorithmic techniques, we take a detour to give ourselves some necessary mathematical tools. First, we cover mathematical definitions of terms that are used later on in the
book. By expanding your mathematical vocabulary you can be more precise and you can state or formulate problems more simply. Following that, we cover techniques for analysing the running time of an
algorithm. After each major algorithm covered in this book we give an analysis of its running time as well as a proof of its correctness.
Asymptotic Notation
In addition to correctness another important characteristic of a useful algorithm is its time and memory consumption. Time and memory are both valuable resources and there are important differences
(even when both are abundant) in how we can use them.
How can you measure resource consumption? One way is to create a function that describes the usage in terms of some characteristic of the input. One commonly used characteristic of an input dataset
is its size. For example, suppose an algorithm takes an input as an array of ${\displaystyle n}$ integers. We can describe the time this algorithm takes as a function ${\displaystyle f}$ written in
terms of ${\displaystyle n}$ . For example, we might write:
${\displaystyle f(n)=n^{2}+3n+14}$
where the value of ${\displaystyle f(n)}$ is some unit of time (in this discussion the main focus will be on time, but we could do the same for memory consumption). Rarely are the units of time
actually in seconds, because that would depend on the machine itself, the system it's running, and its load. Instead, the units of time typically used are in terms of the number of some fundamental
operation performed. For example, some fundamental operations we might care about are: the number of additions or multiplications needed; the number of element comparisons; the number of
memory-location swaps performed; or the raw number of machine instructions executed. In general we might just refer to these fundamental operations performed as steps taken.
Is this a good approach to determine an algorithm's resource consumption? Yes and no. When two different algorithms are similar in time consumption a precise function might help to determine which
algorithm is faster under given conditions. But in many cases it is either difficult or impossible to calculate an analytical description of the exact number of operations needed, especially when the
algorithm performs operations conditionally on the values of its input. Instead, what really is important is not the precise time required to complete the function, but rather the degree that
resource consumption changes depending on its inputs. Concretely, consider these two functions, representing the computation time required for each size of input dataset:
${\displaystyle f(n)=n^{3}-12n^{2}+20n+110}$
${\displaystyle g(n)=n^{3}+n^{2}+5n+5}$
They look quite different, but how do they behave? Let's look at a few plots of the function (${\displaystyle f(n)}$ is in red, ${\displaystyle g(n)}$ in blue):
[Plots of f and g over the ranges 0 to 5, 0 to 15, 0 to 100, and 0 to 1000.]
In the first, very limited plot the curves appear somewhat different. In the second plot they start off in much the same way; in the third there is only a very small difference, and at last they are virtually identical. In fact, they approach ${\displaystyle n^{3}}$ , the dominant term. As ${\displaystyle n}$ gets larger, the other terms become much less significant in comparison to ${\displaystyle n^{3}}$ .
As you can see, modifying a polynomial-time algorithm's low-order coefficients doesn't help much. What really matters is the highest-order term (the notation ignores constant factors on that term as well). This is why we've adopted a notation for this kind of analysis. We say that:
${\displaystyle f(n)=n^{3}-12n^{2}+20n+110=O(n^{3})}$
We ignore the low-order terms. We can say that:
${\displaystyle O(\log {n})\leq O({\sqrt {n}})\leq O(n)\leq O(n\log {n})\leq O(n^{2})\leq O(n^{3})\leq O(2^{n})}$
This gives us a way to more easily compare algorithms with each other. Running an insertion sort on ${\displaystyle n}$ elements takes steps on the order of ${\displaystyle O(n^{2})}$ . Merge sort
sorts in ${\displaystyle O(n\log {n})}$ steps. Therefore, once the input dataset is large enough, merge sort is faster than insertion sort.
In general, we write
${\displaystyle f(n)=O(g(n))}$
${\displaystyle \exists c>0,\exists n_{0}>0,\forall n\geq n_{0}:0\leq f(n)\leq c\cdot g(n).}$
That is, ${\displaystyle f(n)=O(g(n))}$ holds if and only if there exist constants ${\displaystyle c}$ and ${\displaystyle n_{0}}$ such that for all ${\displaystyle n\geq n_{0}}$ , ${\displaystyle f(n)}$ is nonnegative and less than or equal to ${\displaystyle cg(n)}$ .
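The definition can be checked numerically for the earlier example ${\displaystyle f(n)=n^{3}-12n^{2}+20n+110}$ . A short Python sketch, using witness constants ${\displaystyle c=1}$ and ${\displaystyle n_{0}=4}$ that we verified by hand:

```python
def f(n):
    return n**3 - 12 * n**2 + 20 * n + 110

c, n0 = 1, 4  # witness constants: f(n) <= 1 * n^3 once n >= 4
assert all(0 <= f(n) <= c * n**3 for n in range(n0, 2001))
assert f(3) > 3**3   # the bound fails below n0, which is why n0 is needed
```

Any pair of witnesses suffices; the definition does not ask for the best ones.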
Note that the equal sign used in this notation describes a relationship between ${\displaystyle f(n)}$ and ${\displaystyle g(n)}$ instead of reflecting a true equality. In light of this, some define
Big-O in terms of a set, stating that:
${\displaystyle f(n)\in O(g(n))}$
${\displaystyle f(n)\in \{f(n):\exists c>0,\exists n_{0}>0,\forall n\geq n_{0}:0\leq f(n)\leq c\cdot g(n)\}.}$
Big-O notation is only an upper bound; these two are both true:
${\displaystyle n^{3}=O(n^{4})}$
${\displaystyle n^{4}=O(n^{4})}$
If we use the equal sign as an equality we can get very strange results, such as:
${\displaystyle n^{3}=n^{4}}$
which is obviously nonsense. This is why the set-definition is handy. You can avoid these things by thinking of the equal sign as a one-way equality, i.e.:
${\displaystyle n^{3}=O(n^{4})}$
does not imply
${\displaystyle O(n^{4})=n^{3}}$
Always keep the O on the right hand side.
Big Omega
Sometimes, we want more than an upper bound on the behavior of a certain function. Big Omega provides a lower bound. In general, we say that
${\displaystyle f(n)=\Omega (g(n))}$
${\displaystyle \exists c>0,\exists n_{0}>0,\forall n\geq n_{0}:0\leq c\cdot g(n)\leq f(n).}$
i.e. ${\displaystyle f(n)=\Omega (g(n))}$ if and only if there exist constants ${\displaystyle c}$ and ${\displaystyle n_{0}}$ such that for all ${\displaystyle n\geq n_{0}}$ , ${\displaystyle f(n)}$ is nonnegative and greater than or equal to ${\displaystyle cg(n)}$ .
So, for example, we can say that
${\displaystyle n^{2}-2n=\Omega (n^{2})}$ (with ${\displaystyle c=1/2}$ , ${\displaystyle n_{0}=4}$ ), or
${\displaystyle n^{2}-2n=\Omega (n)}$ (with ${\displaystyle c=1}$ , ${\displaystyle n_{0}=3}$ ),
but it is false to claim that
${\displaystyle n^{2}-2n=\Omega (n^{3}).}$
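The witnesses quoted above can be spot-checked numerically. The helper name `check_big_omega` is our own, and a finite check is only a sanity test, not a proof:

```python
# Sanity-check a claimed big-Omega witness (c, n0) on a finite range.
def check_big_omega(f, g, c, n0, limit=10_000):
    return all(0 <= c * g(n) <= f(n) for n in range(n0, limit))

f = lambda n: n * n - 2 * n
assert check_big_omega(f, lambda n: n * n, c=0.5, n0=4)  # n^2 - 2n = Omega(n^2)
assert check_big_omega(f, lambda n: n,     c=1,   n0=3)  # n^2 - 2n = Omega(n)

# n^2 - 2n is not Omega(n^3): even a tiny c fails once n grows
assert not check_big_omega(f, lambda n: n**3, c=0.001, n0=1)
```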
Big Theta
When a given function is both O(g(n)) and Ω(g(n)), we say it is Θ(g(n)), and we have a tight bound on the function. A function f(n) is Θ(g(n)) when
${\displaystyle \exists c_{1}>0,\exists c_{2}>0,\exists n_{0}>0,\forall n\geq n_{0}:0\leq c_{1}\cdot g(n)\leq f(n)\leq c_{2}\cdot g(n),}$
but most of the time, when we're trying to prove that a given ${\displaystyle f(n)=\Theta (g(n))}$ , instead of using this definition, we just show that it is both O(g(n)) and Ω(g(n)).
Little-O and Omega
When the asymptotic bound is not tight, we can express this by saying that ${\displaystyle f(n)=o(g(n))}$ or ${\displaystyle f(n)=\omega (g(n)).}$ The definitions are:
f(n) is o(g(n)) iff ${\displaystyle \forall c>0,\exists n_{0}>0,\forall n\geq n_{0}:0\leq f(n)<c\cdot g(n)}$ and
f(n) is ω(g(n)) iff ${\displaystyle \forall c>0,\exists n_{0}>0,\forall n\geq n_{0}:0\leq c\cdot g(n)<f(n).}$
Note that a function f is in o(g(n)) when, for every constant factor applied to g, g eventually grows larger than f; for O(g(n)), it is enough that a single constant factor exists for which g eventually stays at least as large as f.
Big-O with multiple variables
Given functions of two parameters ${\displaystyle f(n,m)}$ and ${\displaystyle g(n,m)}$ ,
${\displaystyle f(n,m)=O(g(n,m))}$ if and only if ${\displaystyle \exists c>0,\exists n_{0}>0,\exists m_{0}>0,\forall n\geq n_{0},\forall m\geq m_{0}:0\leq f(n,m)\leq c\cdot g(n,m)}$ .
For example, ${\displaystyle 5n+3m=O(n+m)}$ , and ${\displaystyle n+10m+3nm=O(nm)}$ .
Algorithm Analysis: Solving Recurrence Equations
Merge sort of n elements: ${\displaystyle T(n)=2T(n/2)+cn}$. This describes one level of the merge sort recursion: the problem of size ${\displaystyle n}$ is reduced to two halves (${\displaystyle 2T(n/2)}$), which are then merged back together after the recursive calls return (the ${\displaystyle cn}$ term). This notation system is the bread and butter of algorithm analysis, so get used to it.
There are some theorems you can use to estimate the big Oh time for a function if its recurrence equation fits a certain pattern.
Substitution method
Formulate a guess about the big Oh time of your equation. Then use proof by induction to prove the guess is correct.
Draw the Tree and Table
This is really just a way of getting an intelligent guess. You still have to go back to the substitution method in order to prove the big Oh time.
The Master Theorem
Consider a recurrence equation that fits the following formula:
${\displaystyle T(n)=aT\left({\frac {n}{b}}\right)+O(n^{k})}$
for a ≥ 1, b > 1 and k ≥ 0. Here, a is the number of recursive calls made per call to the function, n is the input size, b is the factor by which the input shrinks on each recursive call, and k is the polynomial order of the work done each time the function is called (except for the base cases). For example, in the merge sort algorithm covered later, we have
${\displaystyle T(n)=2T\left({\frac {n}{2}}\right)+O(n)}$
because two subproblems are called for each non-base case iteration, and the size of the array is divided in half each time. The ${\displaystyle O(n)}$ at the end is the "conquer" part of this divide
and conquer algorithm: it takes linear time to merge the results from the two recursive calls into the final result.
Thinking of the recursive calls of T as forming a tree, there are three possible cases to determine where most of the algorithm is spending its time ("most" in this sense is concerned with its
asymptotic behaviour):
1. the tree can be top heavy, and most time is spent during the initial calls near the root;
2. the tree can have a steady state, where time is spread evenly; or
3. the tree can be bottom heavy, and most time is spent in the calls near the leaves
Depending upon which of these three states the tree is in T will have different complexities:
The Master Theorem
Given ${\displaystyle T(n)=aT\left({\frac {n}{b}}\right)+O(n^{k})}$ for a ≥ 1, b > 1 and k ≥ 0:
• If ${\displaystyle a<b^{k}}$ , then ${\displaystyle T(n)=O(n^{k})\ }$ (top heavy)
• If ${\displaystyle a=b^{k}}$ , then ${\displaystyle T(n)=O(n^{k}\cdot \log n)}$ (steady state)
• If ${\displaystyle a>b^{k}}$ , then ${\displaystyle T(n)=O(n^{\log _{b}a})}$ (bottom heavy)
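The three cases can be packaged as a tiny classifier. The function name `master_theorem` and its output strings are our own convention, not standard notation:

```python
import math

# Classify T(n) = a*T(n/b) + O(n^k) by the three master-theorem cases above.
def master_theorem(a, b, k):
    if a < b**k:
        return f"O(n^{k})"               # top heavy
    if a == b**k:
        return f"O(n^{k} log n)"         # steady state
    return f"O(n^{math.log(a, b):g})"    # bottom heavy: exponent log_b(a)

assert master_theorem(2, 2, 1) == "O(n^1 log n)"  # merge sort
assert master_theorem(1, 2, 0) == "O(n^0 log n)"  # binary search
assert master_theorem(4, 2, 1) == "O(n^2)"        # naive split-in-two multiply
```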
For the merge sort example above, where
${\displaystyle T(n)=2T\left({\frac {n}{2}}\right)+O(n)}$
we have
${\displaystyle a=2,b=2,k=1\implies b^{k}=2}$
thus ${\displaystyle a=b^{k}}$, so this is the "steady state" case. By the master theorem, the complexity of merge sort is therefore
${\displaystyle T(n)=O(n^{1}\log n)=O(n\log n)}$ .
Amortized Analysis
[Start with an adjacency list representation of a graph and show two nested for loops: one over each node n, and nested inside it one over each edge e. If there are n nodes and m edges, this could lead you to say the loops take O(nm) time. However, the inner loop can only take that long once in total across all nodes, and a tighter bound is O(n+m).]
Divide and Conquer
The first major algorithmic technique we cover is divide and conquer. Part of the trick of making a good divide and conquer algorithm is determining how a given problem could be separated into two or
more similar, but smaller, subproblems. More generally, when we are creating a divide and conquer algorithm we will take the following steps:
Divide and Conquer Methodology
1. Given a problem, identify a small number of significantly smaller subproblems of the same type
2. Solve each subproblem recursively (the smallest possible size of a subproblem is a base-case)
3. Combine these solutions into a solution for the main problem
The first algorithm we'll present using this methodology is the merge sort.
Merge Sort
The problem that merge sort solves is general sorting: given an unordered array of elements that have a total ordering, create an array that has the same elements sorted. More precisely, for an array
a with indexes 1 through n, if the condition
for all i, j such that 1 ≤ i < j ≤ n, it holds that a[i] ≤ a[j]
holds, then a is said to be sorted. Here is the interface:
// sort -- returns a sorted copy of array a
function sort(array a): array
Following the divide and conquer methodology, how can a be broken up into smaller subproblems? Because a is an array of n elements, we might want to start by breaking the array into two arrays of size n/2 elements. These smaller arrays will also be unsorted, and it is meaningful to sort these smaller problems; thus we can consider these smaller arrays "similar". Ignoring the base case for a moment, this reduces the problem to a different one: given two sorted arrays, how can they be combined to form a single sorted array that contains all the elements of both?
// merge—given a and b (assumed to be sorted) returns a merged array that
// preserves order
function merge(array a, array b): array
So far, following the methodology has led us to this point, but what about the base case? The base case is the part of the algorithm concerned with what happens when the problem cannot be broken into
smaller subproblems. Here, the base case is when the array only has one element. The following is a sorting algorithm that faithfully sorts arrays of only zero or one elements:
// base-sort -- given an array of one element (or empty), return a copy of the
// array sorted
function base-sort(array a[1..n]): array
assert (n <= 1)
return a.copy()
Putting this together, here is what the methodology has told us to write so far:
// sort -- returns a sorted copy of array a
function sort(array a[1..n]): array
if n <= 1: return a.copy()
let sub_size := n / 2
let first_half := sort(a[1,..,sub_size])
let second_half := sort(a[sub_size + 1,..,n])
return merge(first_half, second_half)
And, other than the unimplemented merge subroutine, this sorting algorithm is done! Before we cover how this algorithm works, here is how merge can be written:
// merge -- given a and b (assumed to be sorted) returns a merged array that
// preserves order
function merge(array a[1..n], array b[1..m]): array
    let result := new array[n + m]
    let i := 1
    let j := 1
    for k := 1 to n + m:
        if i > n: result[k] := b[j]; j += 1
        else-if j > m: result[k] := a[i]; i += 1
        else-if a[i] <= b[j]: result[k] := a[i]; i += 1
        else: result[k] := b[j]; j += 1
    return result
[TODO: how it works; including correctness proof] This algorithm uses the fact that, given two sorted arrays, the smallest element is always in one of two places. It's either at the head of the first
array, or the head of the second.
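This recursive structure translates directly into a Python sketch, where lists and slicing stand in for the book's array notation:

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    result.extend(a[i:])  # at most one of these two extends is non-empty
    result.extend(b[j:])
    return result

def merge_sort(a):
    """Return a sorted copy of a: divide, recurse, merge."""
    if len(a) <= 1:
        return a[:]           # base case: zero or one element
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

assert merge_sort([5, 2, 4, 6, 1, 3]) == [1, 2, 3, 4, 5, 6]
```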
Let ${\displaystyle T(n)}$ be the number of steps the algorithm takes to run on input of size ${\displaystyle n}$ .
Merging takes linear time and we recurse each time on two sub-problems of half the original size, so
${\displaystyle T(n)=2\cdot T\left({\frac {n}{2}}\right)+O(n).}$
By the master theorem, we see that this recurrence has a "steady state" tree. Thus, the runtime is:
${\displaystyle T(n)=O(n\cdot \log n).}$
This can be seen intuitively by asking how many times n must be halved before the array being sorted has size 1. Call that number m: then ${\displaystyle 2^{m}=n}$, and taking logarithms gives ${\displaystyle m\cdot \log _{2}2=\log _{2}n}$, so, since ${\displaystyle \log _{2}2=1}$, ${\displaystyle m=\log _{2}n}$.
Since m is the number of halvings before the array is chopped into bite-sized one-element pieces, there are m levels of merging sub-arrays with their neighbors, and the sub-arrays at each level have total size n. Merging one level therefore takes ${\displaystyle O(n)}$ comparisons, and with ${\displaystyle m=\log _{2}n}$ levels the total is ${\displaystyle O(n\log n)}$.
Iterative Version
This merge sort algorithm can be turned into an iterative algorithm by iteratively merging each subsequent pair, then each group of four, et cetera. Due to the absence of function-call overhead, iterative algorithms tend to be faster in practice. However, because the recursive version's call tree is logarithmically deep, it does not require much run-time stack space: even sorting 4 billion items would require only 32 call entries on the stack, a very modest amount: even at 256 bytes per call frame, that is only 8 kilobytes.
The iterative version of mergesort is a minor modification to the recursive version - in fact we can reuse the earlier merging function. The algorithm works by merging small, sorted subsections of
the original array to create larger subsections of the array which are sorted. To accomplish this, we iterate through the array with successively larger "strides".
// sort -- returns a sorted copy of array a
function sort_iterative(array a[1..n]): array
    let result := a.copy()
    for power := 0 to ceiling(log2(n)) - 1
        let unit := 2^power
        for i := 1 to n by unit*2
            if i+unit <= n:
                let a1[1..unit] := result[i..i+unit-1]
                let a2 := result[i+unit..min(i+unit*2-1, n)]
                result[i..min(i+unit*2-1, n)] := merge(a1, a2)
    return result
This works because each sublist of length 1 in the array is, by definition, sorted. Each iteration through the array (using counting variable i) doubles the size of sorted sublists by merging
adjacent sublists into sorted larger versions. The current size of sorted sublists in the algorithm is represented by the unit variable.
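The same bottom-up idea can be sketched in Python; `merge` is repeated here so the sketch is self-contained, and slice assignment stands in for copying the merged run back into place:

```python
def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i]); i += 1
        else:
            result.append(b[j]); j += 1
    return result + a[i:] + b[j:]

def merge_sort_iterative(a):
    """Bottom-up merge sort: merge runs of width 1, 2, 4, ... instead of recursing."""
    result = list(a)
    width = 1
    while width < len(result):
        for i in range(0, len(result), 2 * width):
            left = result[i:i + width]
            right = result[i + width:i + 2 * width]   # may be shorter, or empty
            result[i:i + 2 * width] = merge(left, right)
        width *= 2
    return result

assert merge_sort_iterative([9, 1, 8, 2, 7, 3, 6]) == [1, 2, 3, 6, 7, 8, 9]
```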
Space Inefficiency
A straightforward merge sort requires 2 × n space: n to store the two sorted smaller arrays, and n to store the final result of merging them. However, merge sort still lends itself to batching of merges.
Binary Search
Once an array is sorted, we can quickly locate items in the array by doing a binary search. Binary search is different from other divide and conquer algorithms in that it is mostly divide based
(nothing needs to be conquered). The concept behind binary search will be useful for understanding the partition and quicksort algorithms, presented in the randomization chapter.
Finding an item in an already sorted array is similar to finding a name in a phonebook: you can start by flipping the book open toward the middle. If the name you're looking for is on that page, you
stop. If you went too far, you can start the process again with the first half of the book. If the name you're searching for appears later than the page, you start from the second half of the book
instead. You repeat this process, narrowing down your search space by half each time, until you find what you were looking for (or, alternatively, find where what you were looking for would have been
if it were present).
The following algorithm states this procedure precisely:
// binary-search -- returns the index of value in the given array, or
// -1 if value cannot be found. Assumes array is sorted in ascending order
function binary-search(value, array A[1..n]): integer
return search-inner(value, A, 1, n + 1)
// search-inner -- search subparts of the array; end is one past the
// last element
function search-inner(value, array A, start, end): integer
if start == end:
return -1 // not found
let length := end - start
if length == 1:
if value == A[start]:
return start
return -1
let mid := start + (length / 2)
if value == A[mid]:
return mid
else-if value > A[mid]:
return search-inner(value, A, mid + 1, end)
return search-inner(value, A, start, mid)
Note that all recursive calls made are tail-calls, and thus the algorithm is iterative. We can explicitly remove the tail-calls if our programming language does not do that for us already by turning
the argument values passed to the recursive call into assignments, and then looping to the top of the function body again:
// binary-search -- returns the index of value in the given array, or
// -1 if value cannot be found. Assumes array is sorted in ascending order
function binary-search(value, array A[1..n]): integer
    let start := 1
    let end := n + 1
    while true:
        if start == end: return -1 fi // not found
        let length := end - start
        if length == 1:
            if value == A[start]: return start
            else: return -1 fi
        let mid := start + (length / 2)
        if value == A[mid]:
            return mid
        else-if value > A[mid]:
            start := mid + 1
        else:
            end := mid
Even though we have an iterative algorithm, it's easier to reason about the recursive version. If the number of steps the algorithm takes is ${\displaystyle T(n)}$ , then we have the following
recurrence that defines ${\displaystyle T(n)}$ :
${\displaystyle T(n)=1\cdot T\left({\frac {n}{2}}\right)+O(1).}$
The size of each recursive call made is on half of the input size (${\displaystyle n}$ ), and there is a constant amount of time spent outside of the recursion (i.e., computing length and mid will
take the same amount of time, regardless of how many elements are in the array). By the master theorem, this recurrence has values ${\displaystyle a=1,b=2,k=0}$ , which is a "steady state" tree, and
thus we use the steady state case that tells us that
${\displaystyle T(n)=\Theta (n^{k}\cdot \log n)=\Theta (\log n).}$
Thus, this algorithm takes logarithmic time. Typically, even when n is large, it is safe to let the stack grow by ${\displaystyle \log n}$ activation records through recursive calls.
Difficulty in Initially Correct Binary Search Implementations
The Wikipedia article on binary search mentions the difficulty of writing a correct binary search algorithm: for instance, the Java Arrays.binarySearch(..) overloaded function implementation performed an iterative binary search which didn't work when large integers overflowed the simple midpoint expression mid = (end + start) / 2, i.e. when end + start exceeded the maximum positive integer. Hence the above algorithm is more correct in computing length = end - start and adding half the length to start. The Java binary search algorithm also gives a return value useful for finding the position of the nearest key greater than the search key, i.e. the position where the search key could be inserted: it returns -(keypos + 1) if the search key wasn't found exactly but an insertion point was needed (so insertion_point = -return_value - 1). Looking at boundary values, an insertion point could be at the front of the list (ip = 0, return value = -1) or just after the last element (ip = length(A), return value = -length(A) - 1).
As an exercise, trying to implement this functionality on the above iterative binary search can be useful for further comprehension.
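As one possible sketch of that exercise, here is an iterative Python version (0-based indexing rather than the book's 1-based) implementing the Java-style insertion-point contract; the function name is our own:

```python
def binary_search(value, A):
    """Return the index of value in sorted list A, else -(insertion_point + 1)."""
    start, end = 0, len(A)                 # end is one past the last element
    while end - start > 0:
        # start + (end - start) // 2 avoids the overflow pitfall of
        # (start + end) // 2 in fixed-width languages (harmless in Python)
        mid = start + (end - start) // 2
        if value == A[mid]:
            return mid
        elif value > A[mid]:
            start = mid + 1
        else:
            end = mid
    return -(start + 1)                    # start is now the insertion point

data = [2, 3, 5, 7, 11, 13]
assert binary_search(7, data) == 3
assert binary_search(4, data) == -(2 + 1)   # 4 would be inserted at index 2
assert binary_search(1, data) == -1         # insertion point 0, so return -1
assert binary_search(99, data) == -(6 + 1)  # just past the last element
```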
Integer Multiplication
If you want to perform arithmetic with small integers, you can simply use the built-in arithmetic hardware of your machine. However, if you wish to multiply integers larger than those that will fit
into the standard "word" integer size of your computer, you will have to implement a multiplication algorithm in software or use a software implementation written by someone else. For example, RSA
encryption needs to work with integers of very large size (that is, large relative to the 64-bit word size of many machines) and utilizes special multiplication algorithms.^[1]
Grade School Multiplication
How do we represent a large, multi-word integer? We can have a binary representation by using an array (or an allocated block of memory) of words to represent the bits of the large integer. Suppose
now that we have two integers, ${\displaystyle X}$ and ${\displaystyle Y}$ , and we want to multiply them together. For simplicity, let's assume that both ${\displaystyle X}$ and ${\displaystyle Y}$
have ${\displaystyle n}$ bits each (if one is shorter than the other, we can always pad on zeros at the beginning). The most basic way to multiply the integers is to use the grade school
multiplication algorithm. This is even easier in binary, because we only multiply by 1 or 0:
x6 x5 x4 x3 x2 x1 x0
× y6 y5 y4 y3 y2 y1 y0
x6 x5 x4 x3 x2 x1 x0 (when y0 is 1; 0 otherwise)
x6 x5 x4 x3 x2 x1 x0 0 (when y1 is 1; 0 otherwise)
x6 x5 x4 x3 x2 x1 x0 0 0 (when y2 is 1; 0 otherwise)
x6 x5 x4 x3 x2 x1 x0 0 0 0 (when y3 is 1; 0 otherwise)
... et cetera
As an algorithm, here's what multiplication would look like:
// multiply -- return the product of two binary integers, both of length n
function multiply(bitarray x[1,..n], bitarray y[1,..n]): bitarray
bitarray p = 0
for i:=1 to n:
if y[i] == 1:
p := add(p, x)
x := pad(x, 0) // add another zero to the end of x
return p
The subroutine add adds two binary integers and returns the result, and the subroutine pad adds an extra digit to the end of the number (padding on a zero is the same thing as shifting the number to
the left; which is the same as multiplying it by two). Here, we loop n times, and in the worst-case, we make n calls to add. The numbers given to add will at most be of length ${\displaystyle 2n}$ .
Further, we can expect that the add subroutine can be done in linear time. Thus, if n calls to a ${\displaystyle O(n)}$ subroutine are made, then the algorithm takes ${\displaystyle O(n^{2})}$ time.
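The pseudocode above can be sketched in Python. Here the bit lists are least-significant-bit first, and the helper names (`add`, `to_bits`, `from_bits`) are our own, not part of the book's pseudocode:

```python
# Grade-school binary multiplication on bit lists (least significant bit first).
def add(a, b):
    """Ripple-carry addition of two LSB-first bit lists."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 2)
        carry = s // 2
    if carry:
        result.append(carry)
    return result

def multiply(x, y):
    """For each 1 bit of y, add the appropriately shifted x into the product."""
    p = [0]
    for bit in y:
        if bit == 1:
            p = add(p, x)
        x = [0] + x          # pad: shift x one place left (multiply by two)
    return p

def to_bits(n):              # int -> LSB-first bit list (testing helper)
    return [int(b) for b in bin(n)[2:][::-1]]

def from_bits(bits):         # LSB-first bit list -> int (testing helper)
    return sum(b << i for i, b in enumerate(bits))

assert from_bits(multiply(to_bits(13), to_bits(11))) == 143
```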
Divide and Conquer Multiplication
As you may have figured, this isn't the end of the story. We've presented the "obvious" algorithm for multiplication; so let's see if a divide and conquer strategy can give us something better. One
route we might want to try is breaking the integers up into two parts. For example, the integer x could be divided into two parts, ${\displaystyle x_{h}}$ and ${\displaystyle x_{l}}$ , for the
high-order and low-order halves of ${\displaystyle x}$ . For example, if ${\displaystyle x}$ has n bits, we have
${\displaystyle x=x_{h}\cdot 2^{n/2}+x_{l}}$
We could do the same for ${\displaystyle y}$ :
${\displaystyle y=y_{h}\cdot 2^{n/2}+y_{l}}$
But from this division into smaller parts, it's not clear how we can multiply these parts such that we can combine the results for the solution to the main problem. First, let's write out what ${\displaystyle x\times y}$ would be in such a system:
${\displaystyle x\times y=x_{h}\times y_{h}\cdot (2^{n/2})^{2}+(x_{h}\times y_{l}+x_{l}\times y_{h})\cdot (2^{n/2})+x_{l}\times y_{l}}$
This comes from simply multiplying the new high/low representations of ${\displaystyle x}$ and ${\displaystyle y}$ together. The multiplications of the smaller pieces are marked by the "${\displaystyle \times }$ " symbol. Note that multiplying by ${\displaystyle 2^{n/2}}$ and ${\displaystyle (2^{n/2})^{2}=2^{n}}$ does not require a real multiplication: we can just pad on the right number of zeros instead. This suggests the following divide and conquer algorithm:
// multiply -- return the product of two binary integers, both of length n
function multiply(bitarray x[1,..n], bitarray y[1,..n]): bitarray
if n == 1: return x[1] * y[1] fi // multiply single digits: O(1)
let xh := x[n/2 + 1, .., n] // array slicing, O(n)
let xl := x[1, .., n / 2] // array slicing, O(n)
let yh := y[n/2 + 1, .., n] // array slicing, O(n)
let yl := y[1, .., n / 2] // array slicing, O(n)
let a := multiply(xh, yh) // recursive call; T(n/2)
let b := multiply(xh, yl) // recursive call; T(n/2)
let c := multiply(xl, yh) // recursive call; T(n/2)
let d := multiply(xl, yl) // recursive call; T(n/2)
b := add(b, c) // regular addition; O(n)
a := shift(a, n) // pad on zeros; O(n)
b := shift(b, n/2) // pad on zeros; O(n)
return add(add(a, b), d) // two regular additions; O(n)
We can use the master theorem to analyze the running time of this algorithm. Assuming that the algorithm's running time is ${\displaystyle T(n)}$ , the comments show how much time each step takes.
Because there are four recursive calls, each with an input of size ${\displaystyle n/2}$ , we have:
${\displaystyle T(n)=4T(n/2)+O(n)}$
Here, ${\displaystyle a=4,b=2,k=1}$ , and given that ${\displaystyle 4>2^{1}}$ we are in the "bottom heavy" case and thus plugging in these values into the bottom heavy case of the master theorem
gives us:
${\displaystyle T(n)=O(n^{\log _{2}4})=O(n^{2}).}$
Thus, after all of that hard work, we're still no better off than the grade school algorithm! Luckily, numbers and polynomials are a data set we know additional information about. In fact, we can
reduce the running time by doing some mathematical tricks.
First, let's replace the ${\displaystyle 2^{n/2}}$ with a variable, z:
${\displaystyle x\times y=x_{h}*y_{h}z^{2}+(x_{h}*y_{l}+x_{l}*y_{h})z+x_{l}*y_{l}}$
This appears to be a quadratic formula, and we know that only three coefficients, or points on a graph, are needed to uniquely describe a quadratic formula. However, in our above algorithm we've been using four multiplications total. Let's try recasting ${\displaystyle x}$ and ${\displaystyle y}$ as linear functions:
${\displaystyle P_{x}(z)=x_{h}\cdot z+x_{l}}$
${\displaystyle P_{y}(z)=y_{h}\cdot z+y_{l}}$
Now, for ${\displaystyle x\times y}$ we just need to compute ${\displaystyle (P_{x}\cdot P_{y})(2^{n/2})}$ . We'll evaluate ${\displaystyle P_{x}(z)}$ and ${\displaystyle P_{y}(z)}$ at three points.
Three convenient points to evaluate the function will be at ${\displaystyle (P_{x}\cdot P_{y})(1),(P_{x}\cdot P_{y})(0),(P_{x}\cdot P_{y})(-1)}$ :
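One standard way to finish this idea is Karatsuba's algorithm, which recovers the middle coefficient from a single extra product, ${\displaystyle (x_{h}+x_{l})(y_{h}+y_{l})}$, instead of two. Here is a Python sketch on ordinary integers (the function name is ours; the book's bit-array version would be analogous):

```python
# Karatsuba multiplication: three recursive multiplies instead of four.
def karatsuba(x, y):
    if x < 16 or y < 16:                 # small base case: hardware multiply
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    xh, xl = x >> half, x & ((1 << half) - 1)   # split x on 2^half
    yh, yl = y >> half, y & ((1 << half) - 1)   # split y on 2^half
    a = karatsuba(xh, yh)                # x_h * y_h
    d = karatsuba(xl, yl)                # x_l * y_l
    # (x_h + x_l)(y_h + y_l) - a - d  ==  x_h*y_l + x_l*y_h
    e = karatsuba(xh + xl, yh + yl) - a - d
    return (a << (2 * half)) + (e << half) + d

assert karatsuba(1234, 5678) == 1234 * 5678
assert karatsuba(2**64 + 1, 2**64 - 1) == 2**128 - 1
```

The recurrence becomes ${\displaystyle T(n)=3T(n/2)+O(n)}$, which the master theorem puts in the bottom-heavy case: ${\displaystyle T(n)=O(n^{\log _{2}3})\approx O(n^{1.585})}$, a genuine improvement over ${\displaystyle O(n^{2})}$.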
[TODO: show how to make the two-parts breaking more efficient; then mention that the best multiplication uses the FFT, but don't actually cover that topic (which is saved for the advanced book)]
Base Conversion
[TODO: Convert numbers from decimal to binary quickly using DnC.]
Along with binary, computer science employs bases 8 and 16: it is very easy to convert between the three, and bases 8 and 16 considerably shorten number representations.
To represent the first 8 digits in the binary system we need 3 bits. Thus we have 0=000, 1=001, 2=010, 3=011, 4=100, 5=101, 6=110, 7=111. Assume M = ${\displaystyle (2065)_{8}}$. In order to obtain its binary representation, replace each of the four digits with the corresponding triple of bits: 010 000 110 101. After removing the leading zeros, the binary representation is immediate: M = ${\displaystyle (10000110101)_{2}}$. (For the hexadecimal system the conversion is quite similar, except that now one should use the 4-bit representation of numbers below 16.) This fact follows from the general conversion algorithm and the observation that 8 = ${\displaystyle 2^{3}}$ (and, of course, 16 = ${\displaystyle 2^{4}}$). Thus it appears that the shortest way to convert numbers into the binary system is to first convert them into either octal or hexadecimal representation. Now let us see how to implement the general algorithm programmatically.
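Before the general algorithm, the digit-triple replacement just described can be sketched directly (`octal_to_binary` is our own helper name):

```python
# Convert an octal digit string to binary by replacing each digit with 3 bits.
OCT_TO_BITS = {str(d): format(d, "03b") for d in range(8)}

def octal_to_binary(digits):
    bits = "".join(OCT_TO_BITS[d] for d in digits)
    return bits.lstrip("0") or "0"   # drop leading zeros, but keep "0" itself

assert octal_to_binary("2065") == "10000110101"   # the example M above
```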
For the sake of reference, representation of a number in a system with base (radix) N may only consist of digits that are less than N.
More accurately, if
${\displaystyle M=a_{k}N^{k}+a_{k-1}N^{k-1}+\dots +a_{1}N^{1}+a_{0}}$ (1)
with ${\displaystyle 0\leq a_{i}<N}$, we have a representation of M in the base-N system and write
${\displaystyle M=(a_{k}a_{k-1}\dots a_{0})_{N}}$
If we rewrite (1) as
${\displaystyle M=a_{0}+N\cdot (a_{1}+N\cdot (a_{2}+N\cdot \dots ))}$ (2)
the algorithm for obtaining the coefficients ${\displaystyle a_{i}}$ becomes more obvious. For example, ${\displaystyle a_{0}=M{\bmod {N}}}$ and ${\displaystyle a_{1}=(M/N){\bmod {N}}}$ (with integer division), and so on.
Recursive Implementation
Let's represent the algorithm mnemonically: (result is a string or character variable where I shall accumulate the digits of the result one at a time)
1. result = ""
2. if M < N, result = 'M' + result. Stop.
3. S = M mod N, result = 'S' + result
4. M = M/N
5. goto 2
A few words of explanation.
"" is an empty string; you may remember it is the zero element for string concatenation. In step 2 we check whether the conversion procedure is over: it is over if M is less than N, in which case M is a digit (with some qualification for N > 10) and no additional action is necessary. Just prepend it in front of all other digits obtained previously. The '+' sign stands for string concatenation. If we got this far, M is not less than N. First we extract the remainder of its division by N and prepend this digit to the result as described previously, then reassign M to be M/N. The whole process is then repeated starting with step 2. I would like to have a function, say called Conversion, that takes two arguments M and N and returns the representation of the number M in base N. The function might look like this:
String Conversion(int M, int N)  // return string, accept two integers
{
    if (M < N)                   // see if it's time to return
        return new String("" + M);   // ""+M makes a string out of a digit
    else                         // the time is not yet ripe
        return Conversion(M / N, N) +
               new String("" + (M % N));  // continue
}
This is virtually a working Java function and it would look very much the same in C++ and require only a slight modification for C. As you see, at some point the function calls itself with a
different first argument. One may say that the function is defined in terms of itself. Such functions are called recursive. (The best known recursive function is factorial: n!=n*(n-1)!.) The function
calls (applies) itself to its arguments, and then (naturally) applies itself to its new arguments, and then ... and so on. We can be sure that the process will eventually stop because the sequence of
arguments (the first ones) is decreasing. Thus sooner or later the first argument will be less than the second and the process will start emerging from the recursion, still a step at a time.
Iterative Implementation
Not all programming languages allow functions to call themselves recursively. Recursive functions may also be undesirable if process interruption might be expected for whatever reason. For example,
in the Tower of Hanoi puzzle, the user may want to interrupt the demonstration being eager to test his or her understanding of the solution. There are complications due to the manner in which
computers execute programs when one wishes to jump out of several levels of recursive calls.
Note however that the string produced by the conversion algorithm is obtained in the wrong order: all digits are computed first and then written into the string, the last digit first. The recursive implementation easily got around this difficulty: with each invocation of the Conversion function, the computer creates a new environment in which the passed values of M, N, and the newly computed S are stored. When the function call completes, i.e. on returning from the function, we find the environment as it was before the call. Recursive functions store a sequence of computations implicitly. Eliminating recursive calls implies that we must manage to store the computed digits explicitly and then retrieve them in reversed order.
In computer science such a mechanism is known as LIFO - Last In, First Out. It is best implemented with a stack data structure. A stack admits only two operations: push and pop. Intuitively, a stack can be visualized as indeed a stack of objects: objects are stacked on top of each other so that to retrieve an object one has to remove all the objects above the needed one. Obviously, the only object available for immediate removal is the top one, i.e. the one that got onto the stack last.
The iterative implementation of the Conversion function might then look like the following.
String Conversion(int M, int N)  // return string, accept two integers
{
    Stack stack = new Stack();   // create a stack
    while (M >= N)               // now the repetitive loop is clearly seen
    {
        stack.push(M % N);       // store a digit
        M = M / N;               // find new M
    }
    // now it's time to collect the digits together
    String str = new String("" + M);  // create a string with a single digit M
    while (!stack.empty())
        str = str + stack.pop(); // get the next digit from the stack
    return str;
}
The function is by far longer than its recursive counterpart; but, as I said, sometimes it's the one you want to use, and sometimes it's the only one you may actually use.
Closest Pair of Points
For a set of points on a two-dimensional plane, if you want to find the closest two points, you could compare all of them to each other, at ${\displaystyle O(n^{2})}$ time, or use a divide and
conquer algorithm.
[TODO: explain the algorithm, and show the n^2 algorithm]
[TODO: write the algorithm, include intuition, proof of correctness, and runtime analysis]
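As a baseline for comparison, here is a sketch of the quadratic brute-force algorithm in Python (the function name is ours):

```python
import math

# The quadratic baseline: compare every pair of points and keep the closest.
def closest_pair_brute(points):
    best = (math.inf, None, None)          # (distance, point, point)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            if d < best[0]:
                best = (d, points[i], points[j])
    return best

d, p, q = closest_pair_brute([(0, 0), (5, 5), (1, 1), (9, 0)])
assert (p, q) == ((0, 0), (1, 1))
assert abs(d - math.sqrt(2)) < 1e-12
```

With two nested loops over n points, this performs n(n-1)/2 distance computations, hence the ${\displaystyle O(n^{2})}$ running time the divide-and-conquer approach below improves on.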
Closest Pair: A Divide-and-Conquer Approach
The brute force approach to the closest pair problem (i.e. checking every possible pair of points) takes quadratic time. We would now like to introduce a faster divide-and-conquer algorithm for solving the closest pair problem. Given a set of points in the plane S, our approach will be to split the set into two roughly equal halves (S1 and S2) for which we already have the solutions, and then to merge the halves in linear time to yield an O(n log n) algorithm. However, the actual solution is far from obvious. It is possible that the desired pair might have one point in S1 and one in S2; does this not force us once again to check all possible pairs of points? The divide-and-conquer approach presented here generalizes directly from the one-dimensional algorithm we presented in the previous section.
Closest Pair in the Plane
Alright, we'll generalize our 1-D algorithm as directly as possible (see figure 3.2). Given a set of points S in the plane, we partition it into two subsets S1 and S2 by a vertical line l such that
the points in S1 are to the left of l and those in S2 are to the right of l.
We now recursively solve the problem on these two sets obtaining minimum distances of d1 (for S1), and d2 (for S2). We let d be the minimum of these.
Now, identical to the 1-D case, if the closest pair of the whole set consists of one point from each subset, then these two points must be within d of l. This area is represented as the two strips P1 and P2 on either side of l.
Up to now, we are completely in step with the 1-D case. At this point, however, the extra dimension causes some problems. We wish to determine if some point in, say, P1 is less than d away from another point in P2. However, in the plane, we don't have the luxury that we had on the line when we observed that only one point in each set can be within d of the median. In fact, in two dimensions, all of the points could be in the strip! This is disastrous, because we would have to compare ${\displaystyle n^{2}}$ pairs of points to merge the set, and hence our divide-and-conquer algorithm wouldn't save us anything in terms of efficiency. Thankfully, we can make another life-saving observation at this point. For any particular point p in one strip, only points that meet the following constraints in the other strip need to be checked:
• those points within d of p in the direction of the other strip
• those within d of p in the positive and negative y directions
Simply because points outside of this bounding box cannot be less than d units from p (see figure 3.3). It just so happens that because every point in this box is at least d apart, there can be at
most six points within it.
Now we don't need to check all n² points. All we have to do is sort the points in the strip by their y-coordinates and scan the points in order, checking each point against a maximum of six of its neighbors. This means at most 6n comparisons are required to check all candidate pairs. However, since we sorted the points in the strip by their y-coordinates, the process of merging our two subsets is not linear, but in fact takes O(nlogn) time. Hence our full algorithm is not yet O(nlogn), but it is still an improvement on the quadratic performance of the brute force approach (as we shall see in the next section). In section 3.4, we will demonstrate how to make this algorithm even more efficient by strengthening our recursive sub-solution.
Summary and Analysis of the 2-D Algorithm
We present here a step-by-step summary of the algorithm presented in the previous section, followed by a performance analysis. The algorithm is simply written in list form because I find pseudo-code to be burdensome and unnecessary when trying to understand an algorithm. Note that we pre-sort the points according to their x-coordinates, and maintain another structure which holds the points sorted by their y-values (for step 4), which in itself takes O(nlogn) time.
ClosestPair of a set of points:
1. Divide the set into two equal sized parts by the line l, and recursively compute the minimal distance in each part.
2. Let d be the minimal of the two minimal distances.
3. Eliminate points that lie farther than d from l.
4. Consider the remaining points according to their y-coordinates, which we have precomputed.
5. Scan the remaining points in y order and compute the distance from each point to each of its neighbors that is no more than d away (this is why we need the points sorted by y). Note that there are no more than six such points (see the previous section).
6. If any of these distances is less than d then update d.
• Let us note T(n) as the running time of our algorithm
• Step 1 takes 2T(n/2) (we apply our algorithm for both halves)
• Step 3 takes O(n) time
• Step 4 takes O(nlogn) time, since it sorts the points in the strip by their y-coordinates
• Step 5 takes O(n) time (as we saw in the previous section)
Hence the merging of the sub-solutions is dominated by the sorting at step 4, and takes O(nlogn) time. This gives the recurrence
${\displaystyle T(n)=2T(n/2)+O(n\log n)}$
This O(nlogn) merge must be repeated once for each level of recursion in the divide-and-conquer algorithm,
hence the whole of algorithm ClosestPair takes O(logn*nlogn) = O(nlog2n) time.
Improving the Algorithm
We can improve on this algorithm slightly by reducing the time it takes to achieve the y-coordinate sorting in Step 4. This is done by asking that the recursive solution computed in Step 1 return the points in sorted order by their y-coordinates. This yields two sorted lists of points which need only be merged (a linear time operation) in Step 4 in order to yield a complete sorted list. Hence the revised algorithm involves making the following changes:
Step 1: Divide the set into..., and recursively compute the distance in each part, returning the points in each set in sorted order by y-coordinate.
Step 4: Merge the two sorted lists into one sorted list in O(n) time.
Hence the merging process is now dominated by the linear time steps, thereby yielding an O(nlogn) algorithm for finding the closest pair of a set of points in the plane.
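The linear-time merge in the revised Step 4 is an ordinary merge of two sorted lists; a small sketch (names are illustrative):

```python
def merge_by_y(left, right):
    """Merge two lists of (x, y) points, each already sorted by
    y-coordinate, into one y-sorted list in linear time."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i][1] <= right[j][1]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # at most one of these two is non-empty
    out.extend(right[j:])
    return out
```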
Towers Of Hanoi Problem
There are n discs of distinct sizes and three pegs, with the discs placed on the left peg in order of their sizes: the smallest is at the top and the largest at the bottom. The game is to move all the discs from the left peg to the middle peg, subject to the following rules:
1) Only one disc can be moved in each step.
2) Only the disc at the top can be moved.
3) Any disc can only be placed on the top of a larger disc.
Intuitive Idea
In order to move the largest disc from the left peg to the middle peg, the smaller discs must first be moved to the right peg. After the largest one is moved, the smaller discs are then moved from the right peg to the middle peg.
Suppose n is the number of discs.
To move n discs from peg a to peg b,
1) If n>1 then move n-1 discs from peg a to peg c
2) Move the n-th disc from peg a to peg b
3) If n>1 then move n-1 discs from peg c to peg b
void hanoi(n, src, aux, dst) {
    if (n > 1)
        hanoi(n-1, src, dst, aux);
    print "move n-th disc from src to dst";
    if (n > 1)
        hanoi(n-1, aux, src, dst);
}
The analysis is trivial. ${\displaystyle T(n)=2T(n-1)+O(1)=O(2^{n})}$
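The recursion above can be sketched as a runnable Python function that records its moves (the peg names and the `moves` list are illustrative):

```python
def hanoi(n, src, aux, dst, moves):
    """Append the moves needed to shift n discs from src to dst,
    using aux as the spare peg."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # clear the way for the largest disc
    moves.append((n, src, dst))          # move disc n directly
    hanoi(n - 1, aux, src, dst, moves)   # pile the smaller discs back on top

moves = []
hanoi(3, 'left', 'middle', 'right', moves)
print(len(moves))   # 2^3 - 1 = 7 moves, matching T(n) = O(2^n)
```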
Randomization
As deterministic algorithms are driven to their limits when one tries to solve hard problems with them, a useful technique to speed up the computation is randomization. In randomized algorithms, the algorithm has access to a random source, which can be imagined as tossing coins during the computation. Depending on the outcome of a toss, the algorithm may split up its computation path.
There are two main types of randomized algorithms: Las Vegas algorithms and Monte Carlo algorithms. A Las Vegas algorithm may use randomness to speed up the computation, but it must always return the correct answer for its input. Monte Carlo algorithms do not have the latter restriction; that is, they are allowed to return wrong values. However, a wrong return value must have small probability, otherwise that Monte Carlo algorithm would not be of any use.
Many approximation algorithms use randomization.
Ordered Statistics
Before covering randomized techniques, we'll start with a deterministic problem that leads to a problem that utilizes randomization. Suppose you have an unsorted array of values and you want to find
• the maximum value,
• the minimum value, and
• the median value.
In the immortal words of one of our former computer science professors, "How can you do?"
First, it's relatively straightforward to find the largest element:
// find-max -- returns the maximum element
function find-max(array vals[1..n]): element
    let result := vals[1]
    for i from 2 to n:
        result := max(result, vals[i])
    return result
An initial assignment of ${\displaystyle -\infty }$ to result would work as well, but this would be a useless call to the max function, since the first element compared would simply become result. By initializing result as above, the function requires only n-1 comparisons. (Moreover, in languages capable of metaprogramming, the data type may not be strictly numerical and there might be no good way of assigning ${\displaystyle -\infty }$; using vals[1] is type-safe.)
A similar routine to find the minimum element can be written by calling the min function instead of the max function.
But now suppose you want to find the min and the max at the same time; here's one solution:
// find-min-max -- returns the minimum and maximum element of the given array
function find-min-max(array vals): pair
    return pair {find-min(vals), find-max(vals)}
Because find-max and find-min both make n-1 calls to the max or min functions (when vals has n elements), the total number of comparisons made in find-min-max is ${\displaystyle 2n-2}$ .
However, some redundant comparisons are being made. These redundancies can be removed by "weaving" together the min and max functions:
// find-min-max -- returns the minimum and maximum element of the given array
function find-min-max(array vals[1..n]): pair
    let min := ${\displaystyle \infty }$
    let max := ${\displaystyle -\infty }$
    if n is odd:
        min := max := vals[1]
        vals := vals[2,..,n] // we can now assume n is even
        n := n - 1
    for i:=1 to n by 2: // consider pairs of values in vals
        if vals[i] < vals[i + 1]:
            let a := vals[i]
            let b := vals[i + 1]
        else:
            let a := vals[i + 1]
            let b := vals[i] // invariant: a <= b
        if a < min: min := a fi
        if b > max: max := b fi
    return pair {min, max}
Here, we only loop ${\displaystyle n/2}$ times instead of n times, but for each iteration we make three comparisons. Thus, the number of comparisons made is ${\displaystyle (3/2)n=1.5n}$, so the new routine needs only ${\displaystyle 3/4}$ as many comparisons as the original algorithm.
Only three comparisons need to be made instead of four because, by construction, it's always the case that ${\displaystyle a\leq b}$ . (In the first part of the "if", we actually know more
specifically that ${\displaystyle a<b}$ , but under the else part, we can only conclude that ${\displaystyle a\leq b}$ .) This property is utilized by noting that a doesn't need to be compared with
the current maximum, because b is already greater than or equal to a, and similarly, b doesn't need to be compared with the current minimum, because a is already less than or equal to b.
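The weaving idea translates directly into Python (a sketch with illustrative names, not the pseudocode verbatim):

```python
def find_min_max(vals):
    """Pairwise scan: about 1.5n comparisons instead of the 2n - 2
    made by separate find-min and find-max passes."""
    assert len(vals) > 0
    if len(vals) % 2 == 1:
        lo = hi = vals[0]            # odd length: peel off the first element
        start = 1
    else:
        if vals[0] < vals[1]:
            lo, hi = vals[0], vals[1]
        else:
            lo, hi = vals[1], vals[0]
        start = 2
    for i in range(start, len(vals), 2):   # consider the values in pairs
        a, b = vals[i], vals[i + 1]
        if b < a:
            a, b = b, a              # invariant: a <= b
        if a < lo:                   # only a can lower the minimum
            lo = a
        if b > hi:                   # only b can raise the maximum
            hi = b
    return lo, hi
```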
In software engineering, there is a struggle between using libraries versus writing customized algorithms. In this case, the min and max functions weren't used in order to get a faster find-min-max
routine. Such an operation would probably not be the bottleneck in a real-life program: however, if testing reveals the routine should be faster, such an approach should be taken. Typically, the
solution that reuses libraries is better overall than writing customized solutions. Techniques such as open implementation and aspect-oriented programming may help manage this contention to get the
best of both worlds, but regardless it's a useful distinction to recognize.
Finally, we need to consider how to find the median value. One approach is to sort the array then extract the median from the position vals[n/2]:
// find-median -- returns the median element of vals
function find-median(array vals[1..n]): element
    assert (n > 0)
    sort(vals)
    return vals[n / 2]
Unless our values are integers in a small range (or can otherwise be sorted by a radix sort), the sort above is going to require ${\displaystyle O(n\log n)}$ steps.
However, it is possible to extract the kth order statistic in ${\displaystyle O(n)}$ time. The key is eliminating the sort: we don't actually require the entire array to be sorted in order to find the median, so there is some waste in sorting the entire array first. One technique we'll use to accomplish this is randomness.
Before presenting a non-sorting find-median function, we introduce a divide and conquer-style operation known as partitioning. What we want is a routine that finds a random element in the array and
then partitions the array into three parts:
1. elements that are less than or equal to the random element;
2. elements that are equal to the random element; and
3. elements that are greater than or equal to the random element.
These three sections are denoted by two integers: j and i. The partitioning is performed "in place" in the array:
// partition -- break the array into three partitions based on a randomly picked element
function partition(array vals): pair{j, i}
Note that when the random element picked is actually represented three or more times in the array it's possible for entries in all three partitions to have the same value as the random element. While
this operation may not sound very useful, it has a powerful property that can be exploited: When the partition operation completes, the randomly picked element will be in the same position in the
array as it would be if the array were fully sorted!
This property might not sound so powerful, but recall the optimization for the find-min-max function: we noticed that by picking elements from the array in pairs and comparing them to each other
first we could reduce the total number of comparisons needed (because the current min and max values need to be compared with only one value each, and not two). A similar concept is used here.
While the code for partition is not magical, it has some tricky boundary cases:
// partition -- break the array into three ordered partitions from a random element
function partition(array vals): pair{j, i}
    let n := vals.length - 1          // index of the last element
    let irand := random(0, n)         // returns any value from 0 to n
    let x := vals[irand]              // the randomly picked partition element
    swap(irand, n)                    // store the partition element at the right end,
                                      // where it also acts as a sentinel for the left scan
    let m := 0
    let k := n - 1
    while m <= k:
        while vals[m] < x: m := m + 1             // stops at the partition element at worst
        while k > 0 and x <= vals[k]: k := k - 1  // stops if vals[k] belongs in the left partition
        if m >= k: break
        swap(m, k)                    // exchange vals[m] and vals[k]
        m := m + 1                    // don't rescan swapped elements
        k := k - 1
    swap(m, n)                        // put the partition element between the partitions;
                                      // vals[m] is now where it would be if vals were sorted
    return pair {m + 1, m - 1}        // j := m + 1, i := m - 1
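A runnable version of the same idea (using the simpler single-scan partition rather than the two-pointer scan above; names are illustrative) makes the "already in its sorted position" property easy to check:

```python
import random

def partition3(vals):
    """Partition vals in place around a randomly picked element.
    Returns (j, i): j is the first index of the right part and i the
    last index of the left part; the pivot sits between them, in the
    position it would occupy if vals were fully sorted."""
    r = len(vals) - 1
    p = random.randint(0, r)
    vals[p], vals[r] = vals[r], vals[p]   # park the pivot at the end
    x = vals[r]
    m = 0
    for i in range(r):
        if vals[i] < x:                   # grow the left (smaller) part
            vals[i], vals[m] = vals[m], vals[i]
            m += 1
    vals[m], vals[r] = vals[r], vals[m]   # pivot into its sorted position m
    return m + 1, m - 1

data = [31, 4, 15, 9, 26, 5]
j, i = partition3(data)
pivot_pos = i + 1
# the pivot really is where sorting would put it:
assert data[pivot_pos] == sorted(data)[pivot_pos]
```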
We can use partition as a subroutine for a general find operation:
// find -- moves elements in vals such that location k holds the value it would hold when sorted
function find(array vals, integer k)
    assert (0 <= k < vals.length) // k must be a valid index
    if vals.length <= 1:
        return
    let pair (j, i) := partition(vals)
    if k <= i:
        find(vals[0,..,i], k)
    else-if j <= k:
        find(vals[j,..,vals.length-1], k - j)
    // otherwise i < k < j, so vals[k] is already in its sorted position
Which leads us to the punch-line:
// find-median -- returns the median element of vals
function find-median(array vals): element
    assert (vals.length > 0)
    let median_index := vals.length / 2
    find(vals, median_index)
    return vals[median_index]
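An iterative Python sketch of this randomized find (often called quickselect; the names below are illustrative, and the partition is inlined):

```python
import random

def quickselect(vals, k):
    """Return the element that would be at index k if vals were sorted,
    in O(n) average time."""
    vals = list(vals)                        # work on a copy
    lo, hi = 0, len(vals) - 1
    while True:
        if lo == hi:
            return vals[lo]
        p = random.randint(lo, hi)           # random pivot
        vals[p], vals[hi] = vals[hi], vals[p]
        x = vals[hi]
        m = lo
        for i in range(lo, hi):
            if vals[i] < x:
                vals[i], vals[m] = vals[m], vals[i]
                m += 1
        vals[m], vals[hi] = vals[hi], vals[m]  # pivot now at its sorted index m
        if k == m:
            return vals[m]
        elif k < m:
            hi = m - 1                       # recurse into the left part
        else:
            lo = m + 1                       # recurse into the right part

def find_median(vals):
    return quickselect(vals, len(vals) // 2)
```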
One consideration that might cross your mind is "is the random call really necessary?" For example, instead of picking a random pivot, we could always pick the middle element instead. Given that our
algorithm works with all possible arrays, we could conclude that the running time on average for all of the possible inputs is the same as our analysis that used the random function. The reasoning
here is that under the set of all possible arrays, the middle element is going to be just as "random" as picking anything else. But there's a pitfall in this reasoning: Typically, the input to an
algorithm in a program isn't random at all. For example, the input has a higher probability of being sorted than just by chance alone. Likewise, because it is real data from real programs, the data
might have other patterns in it that could lead to suboptimal results.
To put this another way: for the randomized median finding algorithm, there is a very small probability it will run suboptimally, independent of what the input is; while for a deterministic algorithm
that just picks the middle element, there is a greater chance it will run poorly on some of the most frequent input types it will receive. This leads us to the following guideline:
Randomization Guideline:
If your algorithm depends upon randomness, be sure you introduce the randomness yourself instead of depending upon the data to be random.
Note that there are "derandomization" techniques that can take an average-case fast algorithm and turn it into a fully deterministic algorithm. Sometimes the overhead of derandomization is so much
that it requires very large datasets to get any gains. Nevertheless, derandomization in itself has theoretical value.
The randomized find algorithm was invented by C. A. R. "Tony" Hoare. While Hoare is an important figure in computer science, he may be best known in general circles for his quicksort algorithm, which
we discuss in the next section.
The median-finding partitioning algorithm in the previous section is actually very close to the implementation of a full-blown sorting algorithm. Building a quicksort algorithm is left as an exercise for the reader, and is recommended first, before reading the next section (quicksort is trickier to get right than merge sort, and merge sort gains nothing from a randomization step).
A key part of quicksort is choosing a good partition element (ideally the median). To get it up and running quickly, start with the assumption that the array is unsorted and that the rightmost element of each sub-array is as likely to be the median as any other element, optimistically hoping that the rightmost element doesn't happen to be the largest key. If it were, we would remove only one element (the partition element) at each step, leaving no right sub-array to sort and an (n-1)-element left sub-array to sort.
This is where randomization is important for quicksort: choosing a more optimal partition key is pretty important for quicksort to work efficiently.
Compare the number of comparisons that are required for quicksort versus insertion sort.
With insertion sort, inserting the i-th element into the sorted prefix of a randomized array takes on average i/2 comparisons: the second element takes 1/2 a comparison on average, the third 2/2, and the last (n-1)/2. The total number of comparisons is [1 + 2 + ... + (n-1)]/2 = n(n-1)/4, which is O(n squared).
In quicksort, the size of each partition halves at each step if the true median is chosen, and the left half partition never needs to be compared with the right half partition; but at each level, the total number of elements across all partitions created by the previous level of partitioning is still n.
The number of levels of comparing n elements is the number of times n can be divided by two until n = 1. In reverse, 2 ^ m ~ n, so m = log[2]n.
So the total number of comparisons is n (elements) x m (levels of scanning), or n x log[2]n.
So the number of comparisons is O(n x log[2](n)), which is smaller than insertion sort's O(n^2) or O(n x n).
(Comparing O(n x log[2](n)) with O(n x n), the common factor n can be eliminated, and the comparison is log[2](n) vs n, which grow apart exponentially as n becomes larger: e.g. for n = 2^16, compare 16 with 65536; for n = 2^32, compare 32 with about 4 billion.)
To implement the partitioning in-place on a part of the array determined by a previous recursive call, what is needed is a scan from each end of the part, swapping whenever the value at the left scan's current location is greater than the partition value and the value at the right scan's current location is less than the partition value. The initial step is:
Assign the partition value to the rightmost element, swapping if necessary.
So the partitioning step is:
• increment the left scan pointer while the current value is less than the partition value;
• decrement the right scan pointer while the current value is more than (or equal to) the partition value, and the location is still greater than the leftmost location;
• exit if the pointers have crossed (l >= r);
• otherwise perform a swap where the left and right pointers have stopped, i.e. where the left pointer's value is greater than the partition and the right pointer's value is less than the partition.
Finally, after exiting the loop because the left and right pointers have crossed, swap the rightmost partition value with the value at the left scan pointer's final location, so that the partition value ends up between the left and right partitions.
Make sure at this point that, after the final swap, the cases of a 2-element in-order array and a 2-element out-of-order array are handled correctly, which should mean all cases are handled correctly. This is a good debugging step for getting quicksort to work.
For the in-order two-element case, the left pointer stops on the partition (second) element, as the partition value is found. The right pointer, scanning backwards, starts on the first element before the partition, and stops because it is in the leftmost position.
The pointers cross, and the loop exits before doing a loop swap. Outside the loop, the contents of the left pointer at the rightmost position and the partition, also at the rightmost position, are swapped, achieving no change to the in-order two-element case.
For the out-of-order two-element case, the left pointer scans and stops at the first element, because it is greater than the partition (the left scan stops in order to swap values greater than the partition). The right pointer starts and stops at the first element because it has reached the leftmost element.
The loop exits because the left and right pointers are equal at the first position, and the contents of the left pointer at the first position and the partition at the rightmost (other) position are swapped, putting the previously out-of-order elements into order.
Another implementation issue is how to move the pointers during scanning. Moving them at the end of the outer loop seems logical.
partition(a, l, r) {
    v = a[r];                  // the partition value
    i = l;
    j = r - 1;
    while (i <= j) {           // must also scan when i == j, not just i < j: in the
                               // 2-element in-order case, i must advance to the partition
                               // position so that the final swap with a[r] changes nothing
        while (a[i] < v) ++i;
        while (v <= a[j] && j > l) --j;
        if (i >= j) break;
        swap(a, i, j);         // exchange a[i] and a[j]
        ++i; --j;              // don't rescan swapped elements
    }
    swap(a, i, r);             // put the partition value between the partitions
    return i;
}
With the pre-increment/decrement unary operators, scanning can be done just before testing within the test condition of the while loops, but this means the pointers should be offset -1 and +1 respectively at the start. The algorithm then looks like:
partition(a, l, r) {
    v = a[r];          // v is the partition value, at a[r]
    i = l - 1;         // offset by -1: ++i scans from l
    j = r;             // offset by +1: --j scans from r-1
    while (true) {
        while (a[++i] < v);
        while (v <= a[--j] && j > l);
        if (i >= j) break;
        swap(a, i, j);
    }
    swap(a, i, r);
    return i;
}
And the qsort algorithm is:
qsort(a, l, r) {
    if (l >= r) return;
    p = partition(a, l, r);
    qsort(a, l, p - 1);
    qsort(a, p + 1, r);
}
Finally, randomization of the partition element:
random_partition(a, l, r) {
    p = random_int(r - l) + l;      // a random index between l and r-1
    // use the median of a[l], a[p], a[r] as the partition element
    if (a[p] < a[l]) swap(a, p, l);
    if (a[r] < a[l]) swap(a, r, l);
    if (a[r] < a[p]) swap(a, r, p); // now a[p] holds the median of the three
    swap(a, p, r);                  // move it into the partition slot at a[r]
}
This can be called just before calling partition in qsort().
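The same partition-and-recurse structure, with the randomized pivot folded in, can be sketched in Python (a translation of the pseudocode above with illustrative names, not a drop-in library routine):

```python
import random

def qsort(a, l, r):
    """In-place quicksort of a[l..r] with a randomized partition element."""
    if l >= r:
        return
    p = random.randint(l, r)
    a[p], a[r] = a[r], a[p]        # move the random pivot into the partition slot
    v = a[r]
    i, j = l - 1, r                # offsets for the pre-increment/decrement scans
    while True:
        i += 1
        while a[i] < v:            # v at a[r] acts as the sentinel
            i += 1
        j -= 1
        while v <= a[j] and j > l:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[i], a[r] = a[r], a[i]        # pivot between the partitions
    qsort(a, l, i - 1)
    qsort(a, i + 1, r)

data = [5, 3, 8, 1, 9, 2, 7]
qsort(data, 0, len(data) - 1)
assert data == [1, 2, 3, 5, 7, 8, 9]
```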
Shuffling an Array
// this keeps the data during the shuffle
temporaryArray = { }
// this records whether an item has been placed
usedItemArray = { }
// number of items in the array
itemNum = 0
while (itemNum != lengthOf(inputArray)) {
    usedItemArray[itemNum] = false   // none of the positions have been used yet
    itemNum = itemNum + 1
}
itemNum = 0   // we'll use this counter again
while (itemNum != lengthOf(inputArray)) {
    itemPosition = randomNumber(0 --- (lengthOf(inputArray) - 1))
    while (usedItemArray[itemPosition] != false) {
        itemPosition = randomNumber(0 --- (lengthOf(inputArray) - 1))
    }
    usedItemArray[itemPosition] = true
    temporaryArray[itemPosition] = inputArray[itemNum]
    itemNum = itemNum + 1
}
inputArray = temporaryArray
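The version above needs a second array and retries already-used positions. A standard alternative worth knowing (not from the original text) is the Fisher-Yates shuffle, which works in place, in one pass, with no retries:

```python
import random

def shuffle_in_place(items):
    """Fisher-Yates shuffle: each of the n! permutations is equally likely."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)               # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]

deck = list(range(10))
shuffle_in_place(deck)
assert sorted(deck) == list(range(10))         # a permutation of the input
```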
Equal Multivariate Polynomials
[TODO: as of now, there is no known deterministic polynomial time solution, but there is a randomized polytime solution. The canonical example used to be IsPrime, but a deterministic, polytime
solution has been found.]
Hash tables
Hashing relies on a hashcode function to distribute keys evenly across the available slots. In Java, this is done in a fairly straightforward way: a hash code is built by starting from a small seed (e.g. 17), repeatedly multiplying by a small prime (31) and adding the next component, and then taking the result modulo the size of the hash table. For string keys, the hash number is obtained by iterating over the characters, multiplying the running hash by 31 and adding each character's ordinal value.
The wikibook Data Structures/Hash Tables chapter covers the topic well.
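A sketch of that string-hashing scheme (Java's actual String.hashCode wraps to 32 bits; this illustrative Python version uses unbounded integers):

```python
def string_hash(key, table_size):
    """Multiply-and-add string hash: h = h*31 + ord(c) for each character,
    reduced modulo the table size to yield a slot index."""
    h = 0
    for c in key:
        h = h * 31 + ord(c)
    return h % table_size

assert string_hash("a", 101) == 97      # ord('a') = 97
assert 0 <= string_hash("hello", 101) < 101
```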
Skip Lists
A dictionary, or map, is a general concept where a value is inserted under some key and later retrieved by that key. For instance, in some languages the dictionary concept is built in (Python); in others, it is in the core libraries (the C++ STL, and the Java standard collections library). Languages providing such libraries usually let the programmer choose between a hash-based implementation and a balanced binary tree implementation (red-black trees). Recently, skip lists have also been offered, because they can be implemented to be highly concurrent for multithreaded applications.
Hashing is a technique that depends on the randomness of keys when passed through a hash function, to find a hash value that corresponds to an index into a linear table. Hashing works as fast as the hash function, but only works well if the inserted keys spread out evenly in the array, as any keys that hash to the same index have to be dealt with as a hash collision, e.g. by keeping a linked list of collisions for each slot in the table and iterating through the list to compare the full key of each key-value pair with the search key.
The disadvantage of hashing is that in-order traversal is not possible with this data structure.
Binary trees can also be used to represent dictionaries, and in-order traversal of binary trees is possible by visiting nodes recursively (visit left child, visit current node, visit right child). Binary trees can suffer from poor search when they are "unbalanced", e.g. when the keys of inserted key-value pairs arrive in ascending or descending order, so the tree effectively looks like a linked list with no left children and all right children. Self-balancing of binary trees can be done probabilistically (using randomness) or deterministically (using child link coloring as red or black), through local tree rotation operations. A rotation is simply swapping a parent with a child node while preserving order; e.g. for a left-child rotation, the left child's right child becomes the parent's left child, and the parent becomes the left child's right child.
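A rotation is easy to express directly; here is a minimal sketch of a right rotation (promoting the left child), with illustrative names:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(parent):
    """Promote parent's left child; the in-order sequence is preserved."""
    child = parent.left
    parent.left = child.right   # child's right subtree becomes parent's left subtree
    child.right = parent        # parent becomes the child's right child
    return child                # the new root of this subtree

# 5 with left child 3 (which has children 2 and 4):
root = rotate_right(Node(5, left=Node(3, Node(2), Node(4))))
assert (root.key, root.left.key, root.right.key) == (3, 2, 5)
assert root.right.left.key == 4   # 4 moved under 5; in-order unchanged
```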
Red-black trees can be understood more easily if corresponding 2-3-4 trees are examined. A 2-3-4 tree is a tree where nodes can have 2, 3, or 4 children, with 3-child nodes holding 2 keys between the 3 children, and 4-child nodes holding 3 keys between the 4 children. 4-nodes are actively split into three single-key 2-nodes, and the middle 2-node is passed up to be merged with the parent node, which, if a one-key 2-node, becomes a two-key 3-node, or, if a two-key 3-node, becomes a 4-node, which will be later split (on the way up). The act of splitting a three-key 4-node is actually a re-balancing operation that prevents a string of three nodes (grandparent, parent, child) from occurring without a balancing rotation happening. 2-3-4 trees are a limited example of B-trees, which usually have enough keys per node to fill a physical disk block, to facilitate caching of very large indexes that can't fit in physical RAM (which is much less common nowadays).
A red-black tree is a binary tree representation of a 2-3-4 tree, where 3-nodes are modeled by a parent with one red child, and 4-nodes by a parent with two red children. Splitting a 4-node is represented by the parent with two red children flipping the children to black and itself to red. There is never a case where the parent is already red, because balancing operations also occur: if there is a grandparent with a red parent with a red child, the grandparent is rotated to be a child of the parent, the parent is made black, and the grandparent is made red; this unifies with the previous flipping scenario of a 4-node represented by two red children. It may actually be this standardization of 4-nodes, with mandatory rotation of skewed or zigzag 4-nodes, that results in the re-balancing of the binary tree.
A newer optimization is to left-rotate any single right red child into a single left red child, so that only right rotations of left-skewed inline 4-nodes (three red nodes inline) would ever occur, simplifying the re-balancing code.
Skip lists are modeled after singly linked lists, except that nodes are multilevel. Tall nodes are rarer, but the insert operation ensures nodes are connected at each level.
Implementation of skip lists requires creating randomly tall multilevel nodes and then inserting them.
Nodes are created using iteration of a random function, where a high-level node occurs later in an iteration and is rarer, because the iteration has survived a number of random thresholds (e.g. 0.5, if the random value is between 0 and 1).
Insertion requires a temporary previous-node array with the height of the generated inserting node. It is used to store, for a given level, the last pointer whose key precedes the insertion key.
The scanning begins at the head of the skip list, at the highest level of the head node, and proceeds across until a node is found whose key passes the insertion key, and the previous pointer is stored in the temporary previous-node array. Then the next lower level is scanned from that node, and so on, walking zig-zag down until the lowest level is reached.
Then a list insertion is done at each level of the temporary previous-node array, so that the previous node's next node at each level becomes the next node at that level for the inserting node, and the inserting node becomes the previous node's next node.
Search involves iterating from the highest level of the head node to the lowest level, scanning along the next pointers at each level until a node beyond the search key is found, moving down to the next level and proceeding with the scan, until the lowest level has been scanned or the search key is found.
The creation of less-frequent-when-taller randomized-height nodes, and the process of linking in all nodes at every level, is what gives skip lists their advantageous overall structure.
A method of skip list implementation: implement a look-ahead singly-linked list, test it, then transform it into the skip list implementation, run the same test, then compare performance.
What follows is an implementation of skip lists in Python. A singly-linked list, which always treats the next node as the current node, is implemented first; the skip list implementation follows, attempting minimal modification of the former, and the comparison helps clarify the implementation.
#copyright SJT 2014, GNU
#start by implementing a one lookahead single-linked list :
#the head node has a next pointer to the start of the list, and the current node examined is the next node.
#This is much easier than having the head node one of the storage nodes.
class LN:
    "a list node, so we don't have to use dict objects as nodes"
    def __init__(self):
        self.k = None
        self.v = None
        self.next = None

class single_list2:
    def __init__(self):
        self.h = LN()

    def insert(self, k, v):
        prev = self.h
        while not prev.next is None and k < prev.next.k:
            prev = prev.next
        n = LN()
        n.k, n.v = k, v
        n.next = prev.next
        prev.next = n

    def show(self):
        prev = self.h
        while not prev.next is None:
            prev = prev.next
            print prev.k, prev.v, ' '

    def find(self, k):
        prev = self.h
        while not prev.next is None and k < prev.next.k:
            prev = prev.next
        if prev.next is None or prev.next.k != k:
            return None
        return prev.next.v
#then after testing the single-linked list, model SkipList after it.
# The main conditions to remember when trying to transform single-linked code to skiplist code:
# * multi-level nodes are being inserted
# * the head node must be as tall as the node being inserted
# * walk backwards down levels from highest to lowest when inserting or searching,
#   since this is the basis for the algorithm's efficiency: taller nodes are less frequent and widely dispersed.
import random
class SkipList3:
    def __init__(self):
        self.h = LN()
        self.h.next = [None]

    def insert(self, k, v):
        # toss coins to choose a height for the new node: taller is rarer
        ht = 1
        while random.randint(0, 10) < 5:
            ht += 1
        # the head node must be as tall as the node being inserted
        if ht > len(self.h.next):
            self.h.next.extend([None] * (ht - len(self.h.next)))
        prev = self.h
        prev_list = [self.h] * len(self.h.next)
        # instead of just prev.next as in the single linked list,
        # each level i has its own prev.next[i]
        for i in xrange(len(self.h.next) - 1, -1, -1):
            while not prev.next[i] is None and prev.next[i].k > k:
                prev = prev.next[i]
            # record the previous pointer for each level
            prev_list[i] = prev
        n = LN()
        n.k, n.v = k, v
        # create the next pointers up to the height of the new node
        n.next = [None] * ht
        # instead of linking in one node as in the single-linked list
        # (n.next = prev.next; prev.next = n), do it for each level of n.next
        # using n.next[i] and prev_list[i].next[i]: there may be a different
        # prev node per level, but the same level must be linked, hence the
        # [i] index occurring twice in prev_list[i].next[i]
        for i in xrange(0, ht):
            n.next[i] = prev_list[i].next[i]
            prev_list[i].next[i] = n

    def show(self):
        prev = self.h
        while not prev.next[0] is None:
            print prev.next[0].k, prev.next[0].v
            prev = prev.next[0]

    def find(self, k):
        prev = self.h
        h = len(self.h.next)
        for i in xrange(h - 1, -1, -1):
            while not prev.next[i] is None and prev.next[i].k > k:
                prev = prev.next[i]
            if prev.next[i] is not None and prev.next[i].k == k:
                return prev.next[i].v
            # otherwise drop down a level and continue scanning
        return None

    def clear(self):
        self.h = LN()
        self.h.next = [None]
if __name__ == "__main__":
    # l = single_list2()
    l = SkipList3()
    test_dat = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    m = list(enumerate(test_dat))
    while len(m) > 0:
        i = random.randint(0, len(m) - 1)
        print("inserting", m[i])
        l.insert(m[i][0], m[i][1])
        del m[i]
    l.clear()

    n = int(input("How many elements to test? "))
    if n < 0:
        n = -n
    import time
    l2 = list(range(n))
    for x in l2:
        l.insert(x, x)
    print("finding..")
    f = 0
    t1 = time.time()
    for x in l2:
        if l.find(x) == x:
            f += 1
    t2 = time.time()
    td1 = t2 - t1
    print("time", td1)
    print("found", f)

    # repeat the same test with the plain single-linked list for comparison
    sl = single_list2()
    for x in l2:
        sl.insert(x, x)
    print("finding..")
    f = 0
    t1 = time.time()
    for x in l2:
        if sl.find(x) == x:
            f += 1
    t2 = time.time()
    td2 = t2 - t1
    print("time", td2)
    print("found", f)
    print("factor difference time", td2 / td1)
Role of Randomness
The idea of making higher nodes geometrically rarer means there are fewer keys to compare with the higher the level of comparison, and since heights are randomly selected, this should remove the
problems of degenerate input that make tree balancing necessary in tree algorithms. The higher-level lists have more widely separated elements, but since the search algorithm moves down a level each
time a search at one level terminates, the higher levels help "skip" over the need to search earlier elements in the lower lists. Because there are multiple levels of skipping, it becomes unlikely
that a meagre skip at a higher level won't be compensated by better skips at lower levels, and Pugh claims O(log n) performance overall.
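To make the geometric rarity concrete, here is a minimal sketch (the function `random_height` is illustrative, not part of the implementation above) that samples node heights with a coin flip per extra level and tallies how often each height occurs; roughly half the nodes end up at height 1, a quarter at height 2, and so on:

```python
import random

def random_height(p=0.5, max_height=32):
    """Each extra level is added with probability p (a biased coin flip)."""
    ht = 1
    while random.random() < p and ht < max_height:
        ht += 1
    return ht

# Tally heights over many samples: the counts fall off geometrically.
counts = {}
for _ in range(100000):
    h = random_height()
    counts[h] = counts.get(h, 0) + 1

for h in sorted(counts):
    print(h, counts[h] / 100000.0)
```

Since each level halves the expected number of nodes, the total expected extra storage is a constant factor, while searches can skip over about half the remaining candidates per level.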
Conceptually, is it easier to understand than balancing trees and hence easier to implement? The development of ideas from binary trees, balanced binary trees, 2-3 trees, red-black trees, and
B-trees makes a stronger conceptual network, but it is progressive in development; so arguably, once red-black trees are understood, they have more conceptual context to aid memory, or to refresh it.
Concurrent access application
Apart from using randomization to enhance a basic memory structure of linked lists, skip lists can also be extended for use as a global data structure in a multiprocessor application. See the
supplementary topic at the end of the chapter.
Idea for an exercise
Replace the Linux completely fair scheduler's red-black tree implementation with a skip list, and see how your brand of Linux runs after recompiling.
A treap is a two-keyed binary tree that uses a second, randomly generated key and the previously discussed tree operation of parent-child rotation to randomly rotate the tree so that, overall, a
balanced tree is produced. Recall that binary search trees work by having all nodes in the left subtree smaller than a given node, and all nodes in the right subtree greater. Also recall that node
rotation does not break this order (some people call it an invariant), but changes the relationship of parent and child: if the parent was smaller than its right child, then the parent becomes the
left child of the formerly right child. The idea of a tree-heap, or treap, is that a binary heap relationship is maintained between parent and child; that is, a parent node has higher priority than
its children. This is independent of the left-right key order of the binary tree, so a recently inserted leaf node that happens to have a high random priority can be rotated until it sits relatively
high in the tree, with no parent of lower priority.
A treap is an alternative to both red-black trees and skip lists as a self-balancing sorted storage structure.
Java example of a treap implementation
// Treap example: 2014 SJT, copyleft GNU.
import java.util.Iterator;
import java.util.LinkedList;
import java.util.Random;

public class Treap1<K extends Comparable<K>, V> {
    public Treap1(boolean test) {
        this.test = test;
    }

    public Treap1() {}

    boolean test = false;
    static Random random = new Random(System.currentTimeMillis());

    class TreapNode {
        int priority = 0;
        K k;
        V val;
        TreapNode left, right;

        public TreapNode() {
            if (!test) {
                priority = random.nextInt();
            }
        }
    }

    TreapNode root = null;

    void insert(K k, V val) {
        root = insert(k, val, root);
    }

    TreapNode insert(K k, V val, TreapNode node) {
        TreapNode node2 = new TreapNode();
        node2.k = k;
        node2.val = val;
        if (node == null) {
            node = node2;
        } else if (k.compareTo(node.k) < 0) {
            node.left = insert(k, val, node.left);
        } else {
            node.right = insert(k, val, node.right);
        }
        if (node.left != null && node.left.priority > node.priority) {
            // right rotate (rotate left node up, current node becomes right child)
            TreapNode tmp = node.left;
            node.left = node.left.right;
            tmp.right = node;
            node = tmp;
        } else if (node.right != null && node.right.priority > node.priority) {
            // left rotate (rotate right node up, current node becomes left child)
            TreapNode tmp = node.right;
            node.right = node.right.left;
            tmp.left = node;
            node = tmp;
        }
        return node;
    }

    V find(K k) {
        return findNode(k, root);
    }

    private V findNode(K k, Treap1<K, V>.TreapNode node) {
        if (node == null)
            return null;
        if (k.compareTo(node.k) < 0) {
            return findNode(k, node.left);
        } else if (k.compareTo(node.k) > 0) {
            return findNode(k, node.right);
        } else {
            return node.val;
        }
    }

    public static void main(String[] args) {
        LinkedList<Integer> dat = new LinkedList<Integer>();
        for (int i = 0; i < 15000; ++i) {
            dat.add(i);
        }
        testNumbers(dat, true); // no random priority balancing
        testNumbers(dat, false);
    }

    private static void testNumbers(LinkedList<Integer> dat,
            boolean test) {
        Treap1<Integer, Integer> tree = new Treap1<>(test);
        for (Integer integer : dat) {
            tree.insert(integer, integer);
        }
        long t1 = System.currentTimeMillis();
        Iterator<Integer> iter = dat.iterator();
        int found = 0;
        while (iter.hasNext()) {
            Integer j = iter.next();
            Integer i = tree.find(j);
            if (j.equals(i)) {
                ++found;
            }
        }
        long t2 = System.currentTimeMillis();
        System.out.println("found = " + found + " in " + (t2 - t1));
    }
}
Treaps compared and contrasted to Splay trees
Splay trees are similar to treaps in that rotation is used to bring a higher-priority node to the top without changing the main key order, except that instead of using a random key for priority, the
last accessed node is rotated to the root of the tree, so that more frequently accessed nodes will be near the top. This means that in treaps, inserted nodes only rotate up to the height given by
their random priority key, whereas in splay trees the inserted node is rotated all the way to the root; moreover, every search in a splay tree results in a re-balancing, but not so in a treap.
[TODO: Deterministic algorithms for Quicksort exist that perform as well as quicksort in the average case and are guaranteed to perform at least that well in all cases. Best of all, no randomization
is needed. Also in the discussion should be some perspective on using randomization: some randomized algorithms give you better confidence probabilities than the actual hardware itself! (e.g.
sunspots can randomly flip bits in hardware, causing failure, which is a risk we take quite often)]
[Main idea: Look at all blocks of 5 elements, and pick the median (O(1) to pick), put all medians into an array (O(n)), recursively pick the medians of that array, repeat until you have < 5 elements
in the array. This recursive median constructing of every five elements takes time T(n)=T(n/5) + O(n), which by the master theorem is O(n). Thus, in O(n) we can find the right pivot. Need to show
that this pivot is sufficiently good so that we're still O(n log n) no matter what the input is. This version of quicksort doesn't need rand, and it never performs poorly. Still need to show that
element picked out is sufficiently good for a pivot.]
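The pivot construction in the bracketed note can be sketched in a few lines; this is a hedged outline of the median-of-blocks-of-five idea only (the helper names `median_of_five` and `good_pivot` are ours), not a full deterministic quicksort, and as the note itself says, one still has to prove the chosen element is a sufficiently good pivot:

```python
def median_of_five(block):
    """Median of at most five elements: sorting the tiny block is O(1)."""
    s = sorted(block)
    return s[len(s) // 2]

def good_pivot(arr):
    """Take medians of blocks of five, then recurse on the medians array
    until fewer than 5 elements remain. The recursion satisfies
    T(n) = T(n/5) + O(n), which is O(n) by the master theorem."""
    if len(arr) < 5:
        return median_of_five(arr)
    medians = [median_of_five(arr[i:i + 5]) for i in range(0, len(arr), 5)]
    return good_pivot(medians)

print(good_pivot(list(range(25))))  # picks a central element, here 12
```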
1. Write a find-min function and run it on several different inputs to demonstrate its correctness.
Supplementary Topic: skip lists and multiprocessor algorithms
Multiprocessor hardware provides CAS (compare-and-set) or CMPXCHG (compare-and-exchange; Intel manual 253666.pdf, p. 3-188) atomic operations, in which an expected value is loaded into the accumulator
register and compared to a target memory location's contents; if they are the same, a source value is stored into the target location and the zero flag is set; otherwise the target memory's contents
are returned in the accumulator and the zero flag is unset, signifying, for instance, lock contention. In the Intel architecture, a LOCK prefix is issued before CMPXCHG, which either locks the cache
line from concurrent access if the memory location is being cached, or locks a shared memory location if it is not in the cache, for the duration of the instruction.
CMPXCHG can be used to implement locking; spinlocks, e.g. retrying until the zero flag is set, are the simplest design.
Lockless design increases efficiency by avoiding spinning waiting for a lock .
The Java standard library has an implementation of non-blocking concurrent skip lists, based on a paper titled "A Pragmatic Implementation of Non-Blocking Linked-Lists".
The skip list implementation is an extension of the lock-free single-linked list, a description of which follows:
The insert operation is: to insert N between X and Y (X -> Y), first set N -> Y, then set X -> N; the expected result is X -> N -> Y.
A race condition arises if M is being inserted between X and Y at the same time, M completes first, and then N completes, so that the situation is X -> N -> Y <- M:
M is not in the list. The CAS operation avoids this, because a copy of X's old next pointer (-> Y) is checked against the current value of X's next pointer before updating it.
If N gets to update X -> first, then when M tries to update X ->, its copy of X -> Y, which it took before setting M -> Y, does not match the current X -> N, so the CAS fails with the zero flag unset
(recall that CAS requires the user to load the accumulator with the expected value, the target location's assumed current value, and then atomically updates the target location from a source only if
the target location still contains the accumulator's value). The process that tried to insert M can then retry the insertion after X, and now the CAS checks that -> N is X's next pointer, so after
the retry the list is X -> M -> N -> Y, and neither insertion is lost.
If M updates X -> first, N's copy of X -> Y does not match X -> M, so the CAS fails here too, and the above retry by the process inserting N yields the serialized result X -> N -> M -> Y.
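The check-then-retry shape described above can be modelled in ordinary Python. Here `compare_and_set` is our own simulated stand-in for the hardware instruction (a real implementation would use an atomic primitive), so this sketch only illustrates the snapshot/CAS/retry structure, not true lock-freedom:

```python
class Node:
    def __init__(self, label, nxt=None):
        self.label = label
        self.next = nxt

def compare_and_set(node, expected_next, new_next):
    """Simulated CAS on node.next: succeed only if it still equals expected_next."""
    if node.next is expected_next:
        node.next = new_next
        return True
    return False

def insert_after(x, new_node):
    """Retry loop: snapshot x.next, point new_node at the snapshot, CAS it in."""
    while True:
        snapshot = x.next
        new_node.next = snapshot
        if compare_and_set(x, snapshot, new_node):
            return

# X -> Y; two inserts after X both end up in the list.
y = Node('Y')
x = Node('X', y)
insert_after(x, Node('N'))
insert_after(x, Node('M'))
labels = []
node = x
while node:
    labels.append(node.label)
    node = node.next
print(labels)

# Simulate the race from the text: a process whose snapshot (-> Y) is stale
# fails its CAS, because X's next pointer has changed in the meantime.
stale = y
assert not compare_and_set(x, stale, Node('M2', stale))
```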
The delete operation depends on a separate 'logical' deletion step before 'physical' deletion.
'Logical' deletion involves a CAS that changes the node's next pointer into a 'marked' pointer. The Java implementation instead atomically inserts a proxy marker node before the next node.
This prevents future insertions from inserting after a node whose next pointer is 'marked', making the latter node 'logically' deleted.
The insert operation relies on another function, search, which returns two node pointers that were unmarked at the time of the invocation: the first points to a node whose next pointer is equal to the
second. The first node is the node before the insertion point.
The insert CAS operation checks that the current next pointer of the first node corresponds to the unmarked reference of the second, so it will fail 'logically' if the first node's next pointer has
become marked after the call to the search function above, i.e. if the first node has been concurrently logically deleted.
This meets the aim of preventing an insertion from occurring concurrently after a node has been deleted.
If the insert operation fails the CAS of the previous node's next pointer, the search for the insertion point starts from the start of the entire list again, since a new unmarked previous node needs
to be found, and there are no backward pointers because the list nodes are singly linked.
The delete operation outlined above also relies on the search operation returning two unmarked nodes, and on the two CAS operations in delete: one for the logical deletion, or marking, of the second
node's next pointer, and the other for the physical deletion, which makes the first node's next pointer point to a copy of the second node's unmarked next pointer.
The first CAS of delete happens only after a check that the copy of the original second node's next pointer is unmarked, and it ensures that only one concurrent delete succeeds, namely the one that
also reads the second node's current next pointer as unmarked.
The second CAS checks that the previous node hasn't been logically deleted, i.e. that its next pointer still equals the unmarked pointer to the current second node returned by the search function,
so only an active previous node's next pointer is 'physically' updated to a copy of the original unmarked next pointer of the node being deleted (whose next pointer is already marked by the first
CAS).
If the second CAS fails, then the previous node has been logically deleted and its next pointer is marked, as is the current node's next pointer. A call to the search function again tidies things up,
because in endeavouring to find the key of the current node and return adjacent unmarked previous and current pointers, it truncates strings of logically deleted nodes.
Lock-free programming issues
Starvation is possible, since failed inserts have to restart from the front of the list. Wait-freedom is the property that an algorithm keeps every thread safe from starvation.
The ABA problem also exists: a garbage collector recycles the pointer A, but the address is reused, and the pointer is re-added at a point where another thread, which read A earlier and is doing a CAS
to check that A has not changed, finds the address the same and unmarked even though the contents of A have changed.
Backtracking is a general algorithmic technique that considers searching every possible combination in order to solve an optimization problem. It is closely related to depth-first search and to
branch and bound. By adding more knowledge of the problem, the search tree can be pruned to avoid considering cases that don't look promising. While backtracking is useful for hard problems to
which we do not know more efficient solutions, it is a poor solution for the everyday problems that other techniques are much better at solving.
However, dynamic programming and greedy algorithms can be thought of as optimizations of backtracking, so the general technique behind backtracking is useful for understanding these more advanced
concepts. Learning and understanding backtracking first provides a good stepping stone to these more advanced techniques, because you won't have to learn several new concepts all at once.
Backtracking Methodology
1. View picking a solution as a sequence of choices
2. For each choice, consider every option recursively
3. Return the best solution found
This methodology is generic enough that it can be applied to most problems. However, even when taking care to improve a backtracking algorithm, it will probably still take exponential rather than
polynomial time. Additionally, exact time analysis of backtracking algorithms can be extremely difficult; instead, simpler upper bounds that may not be tight are given.
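The three steps above can be sketched generically. This hedged example (the problem and all names are our own, chosen for illustration) picks a subset of item values maximizing their total without exceeding a capacity, by viewing a solution as a sequence of include/exclude choices, trying every option recursively, and returning the best result found:

```python
def best_subset(values, capacity, i=0):
    """Return (best total, chosen items) using items values[i:]."""
    if i == len(values):                      # no choices left: empty solution
        return 0, []
    # step 1+2: the choice at position i is include/exclude; try both recursively
    skip_total, skip_items = best_subset(values, capacity, i + 1)
    best = (skip_total, skip_items)
    if values[i] <= capacity:                 # option: take item i if it fits
        take_total, take_items = best_subset(values, capacity - values[i], i + 1)
        if take_total + values[i] > best[0]:
            best = (take_total + values[i], [values[i]] + take_items)
    return best                               # step 3: return the best found

print(best_subset([5, 4, 3], 7))  # → (7, [4, 3])
```

Note the exponential shape: each position doubles the number of explored branches, which is exactly what the pruning and table-filling techniques later in the chapter attack.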
Longest Common Subsequence (exhaustive version)
The LCS problem is similar to what the Unix "diff" program does. The diff command in Unix takes two text files, A and B, as input and outputs the differences line by line between A and B. For example,
diff can show you that lines missing from A have been added to B, and that lines present in A have been removed from B. The goal is to get a list of additions and removals that could be used to
transform A into B. An overly conservative solution would say that all lines from A were removed and all lines from B were added. While this would solve the problem in a crude sense, we are
concerned with the minimal number of additions and removals needed to achieve a correct transformation. Consider how you might implement a solution to this problem yourself.
The LCS problem, instead of dealing with lines in text files, is concerned with finding common items between two different arrays. For example,
let a := array {"The", "great", "square", "has", "no", "corners"}
let b := array {"The", "great", "image", "has", "no", "form"}
We want to find the longest subsequence possible of items that are found in both a and b in the same order. The LCS of a and b is
"The", "great", "has", "no"
Now consider two more sequences:
let c := array {1, 2, 4, 8, 16, 32}
let d := array {1, 2, 3, 32, 8}
Here, there are two longest common subsequences of c and d:
1, 2, 32; and
1, 2, 8
Note that
1, 2, 32, 8
is not a common subsequence, because it is only a valid subsequence of d and not c (because c has 8 before the 32). Thus, we can conclude that for some cases, solutions to the LCS problem are not
unique. If we had more information about the sequences available we might prefer one subsequence to another: for example, if the sequences were lines of text in computer programs, we might choose the
subsequences that would keep function definitions or paired comment delimiters intact (instead of choosing delimiters that were not paired in the syntax).
On the top level, our problem is to implement the following function
// lcs -- returns the longest common subsequence of a and b
function lcs(array a, array b): array
which takes in two arrays as input and outputs the subsequence array.
How do you solve this problem? You could start by noticing that if the two sequences start with the same word, then the longest common subsequence always contains that word. You can automatically put
that word on your list, and you have just reduced the problem to finding the longest common subsequence of the rest of the two lists. Thus the problem has been made smaller, which is good because it
shows progress was made.
But if the two lists do not begin with the same word, then one, or both, of the first element of a and the first element of b does not belong in the longest common subsequence. And yet one of them
might. How do you determine which one, if any, to add?
The solution can be thought of in terms of the backtracking methodology: try it both ways and see! Either way, the two sub-problems are manipulating smaller lists, so you know that the recursion will
eventually terminate. Whichever trial results in the longer common subsequence is the winner.
Instead of "throwing an element away" by deleting it from the array we use array slices. For example, the slice
a[1,..]
represents the elements
{a[1], a[2], a[3], a[4], a[5]}
of a six-element array as an array itself. If your language doesn't support slices you'll have to pass beginning and/or ending indices along with the full array. Here, the slices are only of the form
a[1,..]
which, when using 0 as the index of the first element of the array, results in an array slice that doesn't have the 0th element. (Thus, a non-sliced version of this algorithm would only need to pass
the beginning valid index around instead, and that value would have to be subtracted from the complete array's length to get the pseudo-slice's length.)
// lcs -- returns the longest common subsequence of a and b
function lcs(array a, array b): array
    if a.length == 0 OR b.length == 0:
        // if we're at the end of either list, then the lcs is empty
        return new array {}
    else-if a[0] == b[0]:
        // if the start element is the same in both, then it is on the lcs,
        // so we just recurse on the remainder of both lists.
        return append(new array {a[0]}, lcs(a[1,..], b[1,..]))
    else:
        // we don't know which list we should discard from. Try both ways,
        // pick whichever is better.
        let discard_a := lcs(a[1,..], b)
        let discard_b := lcs(a, b[1,..])
        if discard_a.length > discard_b.length:
            let result := discard_a
        else:
            let result := discard_b
        return result
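The pseudocode above translates almost line for line into the book's Python, using Python's own slicing in place of the a[1,..] notation; this is a sketch of the exhaustive version, so it takes exponential time on long inputs:

```python
def lcs(a, b):
    """Exhaustive longest common subsequence, mirroring the pseudocode."""
    if len(a) == 0 or len(b) == 0:
        return []                      # end of either list: the lcs is empty
    if a[0] == b[0]:
        # matching heads are always on the lcs; recurse on both remainders
        return [a[0]] + lcs(a[1:], b[1:])
    # we don't know which list to discard from: try both, keep the longer
    discard_a = lcs(a[1:], b)
    discard_b = lcs(a, b[1:])
    return discard_a if len(discard_a) > len(discard_b) else discard_b

print(lcs("The great square has no corners".split(),
          "The great image has no form".split()))
# → ['The', 'great', 'has', 'no']
```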
Shortest Path Problem (exhaustive version)
To be improved as Dijkstra's algorithm in a later section.
Largest Independent Set
Bounding Searches
If you've already found something "better" and you're on a branch that will never be as good as the one you already saw, you can terminate that branch early. (Example to use: sum of numbers beginning
with 1 2, and then each number following is a sum of any of the numbers plus the last number. Show performance improvements.)
Constrained 3-Coloring
This problem doesn't have immediate self-similarity, so the problem first needs to be generalized. Methodology: If there's no self-similarity, try to generalize the problem until it has it.
Traveling Salesperson Problem
Here, backtracking is one of the best solutions known.
Dynamic Programming
Dynamic programming can be thought of as an optimization technique for particular classes of backtracking algorithms where subproblems are repeatedly solved. Note that the term dynamic in dynamic
programming should not be confused with dynamic programming languages, like Scheme or Lisp. Nor should the term programming be confused with the act of writing computer programs. In the context of
algorithms, dynamic programming always refers to the technique of filling in a table with values computed from other table values. (It's dynamic because the values in the table are filled in by the
algorithm based on other values of the table, and it's programming in the sense of setting things in a table, like how television programming is concerned with when to broadcast what shows.)
Fibonacci Numbers
Before presenting the dynamic programming technique, it will be useful to first show a related technique, called memoization, on a toy example: The Fibonacci numbers. What we want is a routine to
compute the nth Fibonacci number:
// fib -- compute Fibonacci(n)
function fib(integer n): integer
By definition, the nth Fibonacci number, denoted ${\displaystyle {\textrm {F}}_{n}}$ is
${\displaystyle {\textrm {F}}_{0}=0}$
${\displaystyle {\textrm {F}}_{1}=1}$
${\displaystyle {\textrm {F}}_{n}={\textrm {F}}_{n-1}+{\textrm {F}}_{n-2}}$
How would one create a good algorithm for finding the nth Fibonacci number? Let's begin with the naive algorithm, which codes the mathematical definition:
// fib -- compute Fibonacci(n)
function fib(integer n): integer
    assert (n >= 0)
    if n == 0: return 0 fi
    if n == 1: return 1 fi
    return fib(n - 1) + fib(n - 2)
Note that this is a toy example because there is already a mathematically closed form for ${\displaystyle {\textrm {F}}_{n}}$ :
${\displaystyle F(n)={\phi ^{n}-(1-\phi )^{n} \over {\sqrt {5}}}}$
${\displaystyle \phi ={1+{\sqrt {5}} \over 2}}$
The number ${\displaystyle \phi }$ is known as the golden ratio. Thus, using this closed form, a program could efficiently calculate ${\displaystyle {\textrm {F}}_{n}}$ even for very large n. However,
it's instructive to understand what's so inefficient about the current algorithm.
To analyze the running time of fib we should look at a call tree for something even as small as the sixth Fibonacci number:
Every leaf of the call tree has the value 0 or 1, and the sum of these values is the final result. So, for any n, the number of leaves with value 1 in the call tree is ${\displaystyle {\textrm {F}}_{n}}$
itself! The closed form thus tells us that the number of leaves in fib(n) is approximately equal to
${\displaystyle \left({\frac {1+{\sqrt {5}}}{2}}\right)^{n}\approx 1.618^{n}=2^{\lg(1.618^{n})}=2^{n\lg(1.618)}\approx 2^{0.69n}.}$
(Note the algebraic manipulation used above to make the base of the exponent the number 2.) This means that there are far too many leaves, particularly considering the repeated patterns found in the
call tree above.
One optimization we can make is to save a result in a table once it's already been computed, so that the same result needs to be computed only once. The optimization process is called memoization and
conforms to the following methodology:
Memoization Methodology
1. Start with a backtracking algorithm
2. Look up the problem in a table; if there's a valid entry for it, return that value
3. Otherwise, compute the problem recursively, and then store the result in the table before returning the value
Consider the solution presented in the backtracking chapter for the Longest Common Subsequence problem. In the execution of that algorithm, many common subproblems were computed repeatedly. As an
optimization, we can compute these subproblems once and then store the result to read back later. A recursive memoization algorithm can be turned "bottom-up" into an iterative algorithm that fills in
a table of solutions to subproblems. Some of the subproblems solved might not be needed by the end result (and that is where dynamic programming differs from memoization), but dynamic programming can
be very efficient because the iterative version can better use the cache and have less call overhead. Asymptotically, dynamic programming and memoization have the same complexity.
So how would a Fibonacci program using memoization work? Consider the following program (f[n] contains the nth Fibonacci number if it has been calculated, -1 otherwise):
function fib(integer n): integer
    if n == 0 or n == 1:
        return n
    else-if f[n] != -1:
        return f[n]
    else:
        f[n] = fib(n - 1) + fib(n - 2)
        return f[n]
The code should be pretty obvious. If the value of fib(n) already has been calculated it's stored in f[n] and then returned instead of calculating it again. That means all the copies of the sub-call
trees are removed from the calculation.
The values in the blue boxes are values that already have been calculated and the calls can thus be skipped. It is thus a lot faster than the straight-forward recursive algorithm. Since every value
less than n is calculated once, and only once, the first time you execute it, the asymptotic running time is ${\displaystyle O(n)}$ . Any other calls to it will take ${\displaystyle O(1)}$ since the
values have been precalculated (assuming each subsequent call's argument is less than n).
The algorithm does consume a lot of memory. When we calculate fib(n), the values fib(0) to fib(n) are stored in main memory. Can this be improved? Yes it can, although the ${\displaystyle O(1)}$
running time of subsequent calls are obviously lost since the values aren't stored. Since the value of fib(n) only depends on fib(n-1) and fib(n-2) we can discard the other values by going bottom-up.
If we want to calculate fib(n), we first calculate fib(2) = fib(0) + fib(1). Then we can calculate fib(3) by adding fib(1) and fib(2). After that, fib(0) and fib(1) can be discarded, since we don't
need them to calculate any more values. From fib(2) and fib(3) we calculate fib(4) and discard fib(2), then we calculate fib(5) and discard fib(3), etc. etc. The code goes something like this:
function fib(integer n): integer
    if n == 0 or n == 1:
        return n
    let u := 0
    let v := 1
    for i := 2 to n:
        let t := u + v
        u := v
        v := t
    return v
We can modify the code to store the values in an array for subsequent calls, but the point is that we don't have to. This method is typical for dynamic programming. First we identify what subproblems
need to be solved in order to solve the entire problem, and then we calculate the values bottom-up using an iterative process.
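As a runnable sketch, the three versions discussed above (naive, memoized, bottom-up) look like this in the book's Python; timing them on even modest n makes the exponential blow-up of the naive version obvious:

```python
def fib_naive(n):
    # codes the mathematical definition directly: exponential time
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_memo(n, f={}):
    # the default-argument dict acts as a persistent cache (fine for a sketch)
    if n <= 1:
        return n
    if n not in f:
        f[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return f[n]

def fib_iter(n):
    # bottom-up: keep only the last two values, O(n) time and O(1) space
    if n <= 1:
        return n
    u, v = 0, 1
    for _ in range(2, n + 1):
        u, v = v, u + v
    return v

print(fib_naive(10), fib_memo(50), fib_iter(50))
```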
Longest Common Subsequence (DP version)
The problem of Longest Common Subsequence (LCS) involves comparing two given sequences of characters, to find the longest subsequence common to both the sequences.
Note that 'subsequence' is not 'substring': the characters appearing in the subsequence need not be consecutive in either of the sequences; however, the individual characters do need to appear in the
same order in both sequences.
Given two sequences, namely,
X = {x[1], x[2], x[3], ..., x[m]} and Y = {y[1], y[2], y[3], ..., y[n]}
we define:
Z = {z[1], z[2], z[3], ..., z[k]}
as a subsequence of X if all the characters z[1], z[2], z[3], ..., z[k] appear in X in strictly increasing order of position; i.e. z[1] appears in X before z[2], which in turn appears before z[3],
and so on. Once again, it is not necessary for the characters z[1], z[2], z[3], ..., z[k] to be consecutive; they must only appear in the same order in X as they do in Z. And thus, we can define
Z = {z[1], z[2], z[3], ..., z[k]} as a common subsequence of X and Y if Z appears as a subsequence of both X and Y.
The backtracking solution of LCS involves enumerating all possible subsequences of X and checking each subsequence to see whether it is also a subsequence of Y, keeping track of the longest
subsequence found [see Longest Common Subsequence (exhaustive version)]. Since X has m characters, this leads to 2^m possible subsequences. This approach therefore takes exponential time and is
impractical for long sequences.
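The standard remedy, sketched here under the assumption of the usual table formulation (the name `lcs_dp` is ours), is to fill an (m+1) by (n+1) table whose entry [i][j] holds the length of the LCS of the first i characters of X and the first j characters of Y, then walk back through the table to recover one actual subsequence; this takes O(mn) time instead of exponential:

```python
def lcs_dp(x, y):
    """Dynamic-programming LCS: table fill, then traceback."""
    m, n = len(x), len(y)
    # table[i][j] = length of the LCS of x[:i] and y[:j]
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    # walk back through the table to recover one longest common subsequence
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif table[i - 1][j] >= table[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

print(lcs_dp("XMJYAUZ", "MZJAWXU"))
```

Each table entry depends only on its left, upper, and upper-left neighbors, which is what makes the bottom-up fill possible.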
Matrix Chain Multiplication
Suppose that you need to multiply a series of ${\displaystyle n}$ matrices ${\displaystyle M_{1},\ldots ,M_{n}}$ together to form a product matrix ${\displaystyle P}$ :
${\displaystyle P=M_{1}\cdot M_{2}\cdots M_{n-1}\cdot M_{n}}$
This will require ${\displaystyle n-1}$ multiplications, but what is the fastest way we can form this product? Matrix multiplication is associative, that is,
${\displaystyle (A\cdot B)\cdot C=A\cdot (B\cdot C)}$
for any ${\displaystyle A,B,C}$ , and so we have some choice in what multiplication we perform first. (Note that matrix multiplication is not commutative, that is, it does not hold in general that $
{\displaystyle A\cdot B=B\cdot A}$ .)
Because you can only multiply two matrices at a time, the product ${\displaystyle M_{1}\cdot M_{2}\cdot M_{3}\cdot M_{4}}$ can be parenthesized in these ways:
${\displaystyle ((M_{1}M_{2})M_{3})M_{4}}$
${\displaystyle (M_{1}(M_{2}M_{3}))M_{4}}$
${\displaystyle M_{1}((M_{2}M_{3})M_{4})}$
${\displaystyle (M_{1}M_{2})(M_{3}M_{4})}$
${\displaystyle M_{1}(M_{2}(M_{3}M_{4}))}$
Two matrices ${\displaystyle M_{1}}$ and ${\displaystyle M_{2}}$ can be multiplied if the number of columns in ${\displaystyle M_{1}}$ equals the number of rows in ${\displaystyle M_{2}}$ . The
number of rows in their product will equal the number of rows in ${\displaystyle M_{1}}$ and the number of columns will equal the number of columns in ${\displaystyle M_{2}}$ . That is, if ${\displaystyle M_{1}}$
has dimensions ${\displaystyle a\times b}$ and ${\displaystyle M_{2}}$ has dimensions ${\displaystyle b\times c}$ , their product will have dimensions ${\displaystyle a\times c}$ .
To multiply two matrices with each other we use a function called matrix-multiply that takes two matrices and returns their product. We will leave the implementation of this function alone for the
moment, as it is not the focus of this chapter (how to multiply two matrices in the fastest way has been under intensive study for several years [TODO: propose this topic for the Advanced book]). The
time this function takes to multiply two matrices of size ${\displaystyle a\times b}$ and ${\displaystyle b\times c}$ is proportional to the number of scalar multiplications, which is proportional to
${\displaystyle abc}$ . Thus, parenthesization matters: say that we have three matrices ${\displaystyle M_{1}}$ , ${\displaystyle M_{2}}$ and ${\displaystyle M_{3}}$ . ${\displaystyle M_{1}}$ has
dimensions ${\displaystyle 5\times 100}$ , ${\displaystyle M_{2}}$ has dimensions ${\displaystyle 100\times 100}$ and ${\displaystyle M_{3}}$ has dimensions ${\displaystyle 100\times 50}$ . Let's
parenthesize them in the two possible ways and see which way requires the least amount of multiplications. The two ways are
${\displaystyle ((M_{1}M_{2})M_{3})}$ , and
${\displaystyle (M_{1}(M_{2}M_{3}))}$ .
To form the product in the first way requires 75000 scalar multiplications (5*100*100=50000 to form the product ${\displaystyle (M_{1}M_{2})}$ , which has dimensions ${\displaystyle 5\times 100}$ ,
and another 5*100*50=25000 for the last multiplication). This might seem like a lot, but in comparison to the 525000 scalar multiplications required by the second parenthesization
(100*100*50=500000 to form ${\displaystyle (M_{2}M_{3})}$ plus 5*100*50=25000 for the last multiplication) it is minuscule! You can see why
determining the parenthesization is important: imagine what would happen if we needed to multiply 50 matrices!
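The two cost calculations above can be checked with a few lines of Python (a quick sketch; the matrices themselves never need to be formed to compare costs):

```python
# Multiplying an (a x b) matrix by a (b x c) matrix costs a*b*c scalar multiplications.
# Dimensions from the text: M1 is 5x100, M2 is 100x100, M3 is 100x50,
# so a=5, b=100, c=100, d=50.
def chain_cost_left(a, b, c, d):
    # ((M1*M2)*M3): the inner product costs a*b*c and has shape a x c,
    # then multiplying by M3 (c x d) costs a*c*d
    return a * b * c + a * c * d

def chain_cost_right(a, b, c, d):
    # (M1*(M2*M3)): the inner product costs b*c*d and has shape b x d,
    # then multiplying M1 (a x b) by it costs a*b*d
    return b * c * d + a * b * d

print(chain_cost_left(5, 100, 100, 50))   # 75000
print(chain_cost_right(5, 100, 100, 50))  # 525000
```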
Forming a Recursive Solution
Note that we concentrate on finding how many scalar multiplications are needed instead of the actual order. This is because once we have a working algorithm to find the count, it is trivial
to create an algorithm for the actual parenthesization. It will, however, be discussed at the end.
So how would an algorithm for the optimum parenthesization look? By the chapter title you might expect that a dynamic programming method is in order (not to give the answer away or anything). So how
would a dynamic programming method work? Because dynamic programming algorithms are based on optimal substructure, what would the optimal substructure in this problem be?
Suppose that the optimal way to parenthesize
${\displaystyle M_{1}M_{2}\dots M_{n}}$
splits the product at ${\displaystyle k}$ :
${\displaystyle (M_{1}M_{2}\dots M_{k})(M_{k+1}M_{k+2}\dots M_{n})}$ .
Then the optimal solution contains the optimal solutions to the two subproblems
${\displaystyle (M_{1}\dots M_{k})}$
${\displaystyle (M_{k+1}\dots M_{n})}$
That is, just in accordance with the fundamental principle of dynamic programming, the solution to the problem depends on the solution of smaller sub-problems.
Let's say that matrix ${\displaystyle M_{i}}$ has dimensions ${\displaystyle p_{i-1}\times p_{i}}$ (consecutive matrices must share a dimension for the product to exist), and that ${\displaystyle f(m,n)}$ is the number of scalar
multiplications to be performed in an optimal parenthesization of the matrices ${\displaystyle M_{m}\dots M_{n}}$ . The definition of ${\displaystyle f(m,n)}$ is the first step toward a solution.
When ${\displaystyle n=m}$ , the chain is a single matrix and nothing needs to be multiplied, so ${\displaystyle f(m,m)=0}$ . But what is it for longer chains? Using the observation above, we can derive a formulation.
Suppose an optimal solution to the problem divides the matrices between ${\displaystyle k}$ and ${\displaystyle k+1}$ (i.e. ${\displaystyle (M_{m}\dots M_{k})(M_{k+1}\dots M_{n})}$ ). The two partial products have dimensions ${\displaystyle p_{m-1}\times p_{k}}$ and ${\displaystyle p_{k}\times p_{n}}$ , so the number of scalar multiplications is
${\displaystyle f(m,k)+f(k+1,n)+p_{m-1}p_{k}p_{n}}$
That is, the cost of forming the first product, the cost of forming the second product, and the cost of multiplying them together. But what is this optimal
value ${\displaystyle k}$ ? The answer is, of course, the value that makes the above formula assume its minimum value. We can thus form the complete definition for the function:
${\displaystyle f(m,n)={\begin{cases}\min _{m\leq k<n}f(m,k)+f(k+1,n)+p_{m-1}p_{k}p_{n}&{\mbox{if }}n>m\\0&{\mbox{if }}n=m\end{cases}}}$
A straightforward recursive solution to this would look something like this (the language is Wikicode):
function f(m, n) {
    if m == n
        return 0
    let minCost := ${\displaystyle \infty }$
    for k := m to n - 1 {
        // matrix Mi has dimensions p[i-1] x p[i]
        v := f(m, k) + f(k + 1, n) + p[m-1] * p[k] * p[n]
        if v < minCost
            minCost := v
    }
    return minCost
}
This rather simple solution is, unfortunately, not a very good one. It spends mountains of time recomputing data and its running time is exponential.
Using the same adaptation as above we get:
function f(m, n) {
    if m == n
        return 0
    else-if f[m,n] != -1
        return f[m,n]
    let minCost := ${\displaystyle \infty }$
    for k := m to n - 1 {
        // matrix Mi has dimensions p[i-1] x p[i]
        v := f(m, k) + f(k + 1, n) + p[m-1] * p[k] * p[n]
        if v < minCost
            minCost := v
    }
    f[m,n] := minCost
    return minCost
}
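Assuming, as above, that matrix Mi has dimensions p[i-1] x p[i], the memoized recurrence can be sketched in Python; here the memo table is handled by `functools.lru_cache` rather than an explicit array initialized to -1:

```python
from functools import lru_cache

def matrix_chain_cost(p):
    """Minimum number of scalar multiplications to compute M_1 ... M_n,
    where matrix M_i has dimensions p[i-1] x p[i]."""
    @lru_cache(maxsize=None)  # memoizes f(m, n), replacing the f[m,n] table
    def f(m, n):
        if m == n:
            return 0  # a single matrix needs no multiplication
        return min(f(m, k) + f(k + 1, n) + p[m - 1] * p[k] * p[n]
                   for k in range(m, n))
    return f(1, len(p) - 1)

# The three-matrix example from the text: 5x100, 100x100, 100x50
print(matrix_chain_cost([5, 100, 100, 50]))  # 75000
```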
Parsing Any Context-Free Grammar
Note that special types of context-free grammars can be parsed much more efficiently than with this general technique, but in terms of generality, the DP method is the only way to go.
Greedy Algorithms
In the backtracking algorithms we looked at, we saw algorithms that found decision points and recursed over all options from that decision point. A greedy algorithm can be thought of as a
backtracking algorithm where at each decision point "the best" option is already known and thus can be picked without having to recurse over any of the alternative options.
The name "greedy" comes from the fact that the algorithms make decisions based on a single criterion, instead of a global analysis that would take into account the decision's effect on further steps.
As we will see, such a backtracking analysis will be unnecessary in the case of greedy algorithms, so it is not greedy in the sense of causing harm for only short-term gain.
Unlike backtracking algorithms, greedy algorithms can't be made for every problem; not every problem is solvable using a greedy approach. Viewing the search for a solution to an optimization problem
as a hill-climbing problem, greedy algorithms can be used only for those hills where, at every point, taking the steepest step always leads to the peak.
Greedy algorithms tend to be very efficient and can be implemented in a relatively straightforward fashion, often in O(n) time, since there is a single choice at every point. However,
most attempts at creating a correct greedy algorithm fail unless a precise proof of the algorithm's correctness is first demonstrated. When a greedy strategy fails to produce optimal results on all
inputs, we instead refer to it as a heuristic rather than an algorithm. Heuristics can be useful when speed is more important than exact results (for example, when "good enough" results are sufficient).
Event Scheduling Problem
The first problem we'll look at that can be solved with a greedy algorithm is the event scheduling problem. We are given a set of events that each have a start time and a finish time, and we need to
produce a subset of these events such that no two events intersect (that is, have overlapping times), and such that the maximum possible number of events is scheduled.
Here is a formal statement of the problem:
Input: events: a set of intervals ${\displaystyle (s_{i},f_{i})}$ where ${\displaystyle s_{i}}$ is the start time, and ${\displaystyle f_{i}}$ is the finish time.
Solution: A subset S of Events.
Constraint: No events can intersect (start time exclusive). That is, for all intervals ${\displaystyle i=(s_{i},f_{i}),j=(s_{j},f_{j})}$ where ${\displaystyle s_{i}<s_{j}}$ it holds that ${\
displaystyle f_{i}\leq s_{j}}$ .
Objective: Maximize the number of scheduled events, i.e. maximize the size of the set S.
We first begin with a backtracking solution to the problem:
// event-schedule -- schedule as many non-conflicting events as possible
function event-schedule(events array of s[1..n], f[1..n]): set
if n == 0: return ${\displaystyle \emptyset }$ fi
if n == 1: return {events[1]} fi
let event := events[1]
let S1 := union(event-schedule(events - set of conflicting events), event)
let S2 := event-schedule(events - {event})
if S1.size() >= S2.size():
return S1
return S2
The above algorithm will faithfully find the largest set of non-conflicting events. It brushes aside details of how the set
events - set of conflicting events
is computed, but it would require ${\displaystyle O(n)}$ time. Because the algorithm makes two recursive calls on itself, each with an argument of size ${\displaystyle n-1}$ , and because removing
conflicts takes linear time, a recurrence for the time this algorithm takes is:
${\displaystyle T(n)=2\cdot T(n-1)+O(n)}$
which is ${\displaystyle O(2^{n})}$ .
But suppose instead of picking just the first element in the array we used some other criterion. The aim is to pick the "right" one, so that we wouldn't need two recursive calls. First, let's
consider the greedy strategy of picking the shortest events first, until we can add no more events without conflicts. The idea here is that the shortest events would likely interfere less than other,
longer events. There are scenarios where picking the shortest event first produces the optimal result. However, here's a scenario where that strategy is sub-optimal:
Above, the optimal solution is to pick event A and C, instead of just B alone. Perhaps instead of the shortest event we should pick the events that have the least number of conflicts. This strategy
seems more direct, but it fails in this scenario:
Above, we can maximize the number of events by picking A, B, C, D, and E. However, the events with the least conflicts are 6, 2 and 7, 3. But picking one of 6, 2 and one of 7, 3 means that we cannot
pick B, C and D, which includes three events instead of just two.
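For contrast with the failing strategies above, the greedy criterion that does yield an optimal schedule for this problem is to repeatedly take the compatible event with the earliest finish time. A minimal Python sketch, using hypothetical intervals A, B, C that mirror the shortest-event counterexample (two long events and a short event overlapping both):

```python
def earliest_finish_schedule(events):
    """Greedy: repeatedly take the event with the earliest finish time
    that does not conflict with what is already scheduled."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(events, key=lambda e: e[1]):
        if start >= last_finish:   # start time exclusive, per the constraint
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Hypothetical intervals: the short event B overlaps both A and C,
# so shortest-first would schedule only B, while the optimum is {A, C}.
A, B, C = (0, 10), (9, 11), (10, 20)
print(earliest_finish_schedule([A, B, C]))  # [(0, 10), (10, 20)]
```

Sorting by finish time dominates the running time, so this strategy runs in O(n log n) rather than the exponential time of the backtracking version.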
Longest Path Solution to Critical Path Scheduling of Jobs
Scheduling jobs with dependency constraints but allowing concurrency can use critical-path determination to find the minimum feasible time; this is equivalent to a longest-path problem in a directed
acyclic graph. By using relaxation and breadth-first search, the shortest path can serve as the longest path by negating the weights (time constraints), finding the solution, and then restoring the
positive weights. (Relaxation means determining, for each adjacent node scheduled to be visited, the parent with the least accumulated weight.)
Dijkstra's Shortest Path Algorithm
With two (high-level, pseudocode) transformations, Dijkstra's algorithm can be derived from the much less efficient backtracking algorithm. The trick here is to prove that the transformations maintain
correctness, but that's the whole insight into Dijkstra's algorithm anyway. [TODO: important to note the paradox that to solve this problem it's easier to solve a more-general version. That is,
shortest path from s to all nodes, not just to t. Worthy of its own colored box.]
To see the workings of Dijkstra's Shortest Path Algorithm, take an example:
There is a start node and an end node, with two paths between them: one path has cost 30 on the first hop and then 10 on the last hop to the target node, a total cost of 40. The other path has cost
10 on the first hop, 10 on the second hop, and 40 on the last hop, a total cost of 60.
The start node is given distance zero so it can be at the front of a shortest-distance queue; all the other nodes are given infinity or a large number, e.g. 32767.
This makes the start node the first current node in the queue.
With each iteration, the current node is the first node of the shortest-path queue, and it looks at all nodes adjacent to the current node.
For the start node, in the first path it will find an adjacent node of distance 30, and in the second path, an adjacent node of distance 10. The current node's distance, which is zero at the
beginning, is added to the distances of the adjacent nodes, and the distances from the start node are updated, so the adjacent nodes will have distance 30+0=30 in the 1st path and 10+0=10 in the 2nd path.
Importantly, a previous-pointer attribute is also updated for each adjacent node, so each of these nodes will point back to the current node, which is the start node for these two nodes.
Each node's priority is updated in the priority queue using the new distance.
That ends one iteration. The current node was removed from the queue before examining its adjacent nodes.
In the next iteration, the front of the queue will be the node in the second path of distance 10, and it has only one adjacent node of distance 10; that adjacent node's distance will be updated from
32767 to 10 (the current node's distance) + 10 (the distance from the current node) = 20.
In the next iteration, the second-path node of cost 20 will be examined, and it has one adjacent hop of 40 to the target node; the target node's distance is updated from 32767 to 20 + 40 = 60.
The target node has its priority updated.
In the next iteration, the shortest-path node will be the first-path node of cost 30, and the target node has not yet been removed from the queue. It is also adjacent to the target node, with a total
distance cost of 30 + 10 = 40.
Since 40 is less than 60, the previously calculated distance of the target node, the target node's distance is updated to 40, and the previous pointer of the target node is updated to the node on the
first path.
In the final iteration, the shortest path node is the target node, and the loop exits.
Looking at the previous pointers starting with the target node, the shortest path can be reconstructed in reverse as a list back to the start node.
Given the above example, what kind of data structures are needed for the nodes and the algorithm ?
# author, copyright under GFDL
class Node:
    def __init__(self, label, distance=32767):
        # NB: a mutable default argument (e.g. adjacency_distance_map={})
        # would be shared between instances, so the map is created here instead
        self.label = label
        self.adjacent = {}  # adjacency map, with keys nodes, and values the adjacent distance
        self.distance = distance  # the updated distance from the start node, used as the node's priority
        # default distance is 32767
        self.shortest_previous = None  # the previous node on the shortest path found so far

    def add_adjacent(self, local_distance, node):
        self.adjacent[node] = local_distance
        print("adjacency from", self.label, "to", node.label,
              "of distance", local_distance)

    def get_adjacent(self):
        return self.adjacent.items()

    def update_shortest(self, node):
        # node's adjacency map gives the adjacent distance for this node;
        # the new distance for the path to this (self) node is the adjacent
        # distance plus the other node's distance
        new_distance = node.adjacent[self] + node.distance
        print("for node", node.label, "updating", self.label,
              "with distance", node.distance,
              "and adjacent distance", node.adjacent[self])
        updated = False
        if new_distance < self.distance:
            # if it is the shortest distance so far, record it and make
            # that node the previous node on the path
            self.distance = new_distance
            self.shortest_previous = node
            updated = True
        return updated

MAX_IN_PQ = 100000

class PQ:
    def __init__(self, sign=-1):
        self.q = [None] * MAX_IN_PQ  # make the array preallocated
        self.sign = sign  # a negative sign is a minimum priority queue
        self.end = 1  # this is the next slot of the array (self.q) to be used
        self.map = {}  # maps each item to its position in the queue

    def insert(self, priority, data):
        self.q[self.end] = (priority, data)
        # sift up after insert
        p = self.end
        self.end = self.end + 1
        self.sift_up(p)

    def sift_up(self, p):
        # p is the current node's position;
        # q[p][0] is the priority, q[p][1] is the item or node.
        # while the parent exists (p // 2 != 0), and the parent's priority is
        # less than the current node's priority
        while p // 2 != 0 and self.q[p // 2][0] * self.sign < self.q[p][0] * self.sign:
            # swap the parent and the current node, and make the current
            # node's position the parent's position
            tmp = self.q[p]
            self.q[p] = self.q[p // 2]
            self.q[p // 2] = tmp
            self.map[self.q[p][1]] = p
            p = p // 2
        # this maps the node to its position in the priority queue
        self.map[self.q[p][1]] = p
        return p

    def remove_top(self):
        if self.end == 1:
            return (-1, None)
        (priority, node) = self.q[1]
        # put the end of the heap at the top of the heap, and sift it down to
        # adjust the heap after the heap's top has been removed. this takes
        # log2(N) time, where N is the size of the heap.
        self.q[1] = self.q[self.end - 1]
        self.end = self.end - 1
        if self.end > 1:
            self.sift_down(1)
        return (priority, node)

    def sift_down(self, p):
        while 1:
            l = p * 2
            # if the left child's position is past the end of the heap,
            # then neither the left nor the right child exists
            if l >= self.end:
                break
            r = l + 1
            # the selected child node should have the greatest priority
            t = l
            if r < self.end and self.q[r][0] * self.sign > self.q[l][0] * self.sign:
                t = r
            print("checking for sift down of", self.q[p][1].label, self.q[p][0],
                  "vs child", self.q[t][1].label, self.q[t][0])
            # if the selected child with the greatest priority has a higher
            # priority than the current node
            if self.q[t][0] * self.sign > self.q[p][0] * self.sign:
                # swap the current node with that child, and update the
                # mapping of the child node to its new position
                tmp = self.q[t]
                self.q[t] = self.q[p]
                self.q[p] = tmp
                self.map[tmp[1]] = p
                p = t
            else:
                break  # end the swap if the greatest priority child has a lesser priority
        # after the sift down, update the new position of the current node.
        self.map[self.q[p][1]] = p
        return p

    def update_priority(self, priority, data):
        p = self.map[data]
        print("priority prior update", p, "for priority", priority,
              "previous priority", self.q[p][0])
        if p is None:
            return -1
        self.q[p] = (priority, self.q[p][1])
        p = self.sift_up(p)
        p = self.sift_down(p)
        print("updated", self.q[p][1].label, p, "priority now", self.q[p][0])
        return p

class NoPathToTargetNode(BaseException):
    pass

def test_1():
    st = Node('start', 0)
    p1a = Node('p1a')
    p1b = Node('p1b')
    p2a = Node('p2a')
    p2b = Node('p2b')
    p2c = Node('p2c')
    p2d = Node('p2d')
    targ = Node('target')
    st.add_adjacent(30, p1a)
    #st.add_adjacent(10, p2a)
    st.add_adjacent(20, p2a)
    #p1a.add_adjacent(10, targ)
    p1a.add_adjacent(40, targ)
    p1a.add_adjacent(10, p1b)
    p1b.add_adjacent(10, targ)
    # testing alternative
    #p1b.add_adjacent(20, targ)
    p2a.add_adjacent(10, p2b)
    #chooses the alternate path
    pq = PQ()
    # st.distance is 0, but the others have the default starting distance 32767
    pq.insert(st.distance, st)
    pq.insert(p1a.distance, p1a)
    pq.insert(p2a.distance, p2a)
    pq.insert(p2b.distance, p2b)
    pq.insert(targ.distance, targ)
    pq.insert(p2c.distance, p2c)
    pq.insert(p2d.distance, p2d)
    pq.insert(p1b.distance, p1b)
    node = None
    while node != targ:
        (pr, node) = pq.remove_top()
        if node is None:
            print("target node not in queue")
            raise NoPathToTargetNode
        print("node", node.label, "removed from top")
        if pr == 32767:
            print("max distance encountered so no further nodes updated. No path to target node.")
            raise NoPathToTargetNode
        # update the distance to the start node for all nodes adjacent to this
        # node, and update an adjacent node's priority if a shorter distance
        # was found for it ( .update_shortest(..) returns True ).
        # this is the greedy part of Dijkstra's algorithm, always greedy for
        # the shortest distance using the priority queue.
        for adj_node, dist in node.get_adjacent():
            print("updating adjacency from", node.label, "to", adj_node.label)
            if adj_node.update_shortest(node):
                pq.update_priority(adj_node.distance, adj_node)
        print("node and targ", node, targ, node != targ)
    print("length of path", targ.distance)
    print("shortest path")
    # create a reverse list from the target node, through the shortest path
    # nodes, to the start node
    node = targ
    path = []
    while node is not None:
        path.append(node)
        node = node.shortest_previous
    for node in reversed(path):
        print(node.label)

if __name__ == "__main__":
    test_1()
Minimum spanning tree
Greedily looking for the minimum-weight edges can be achieved by sorting the edges into a list in ascending weight. Two well-known algorithms are Prim's algorithm and Kruskal's algorithm.
Kruskal's selects the next minimum-weight edge subject to the condition that no cycle is formed in the resulting updated graph. Prim's algorithm selects a minimum-weight edge subject to the condition
that exactly one of its end vertices is already connected to the tree. For both algorithms, it appears that most of the work is in verifying that an examined edge fits the primary condition. In
Kruskal's, a search-and-mark technique can be used on the candidate edge: searching any connected edges already selected, where encountering a marked edge means a cycle has been formed. In Prim's
algorithm, the candidate edge can be compared to the list of currently selected edges, which could be keyed on vertex number in a symbol table; if both end vertices are found, the candidate edge
would form a cycle and is rejected.
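A minimal sketch of Kruskal's algorithm in Python; the cycle test here uses a union-find structure (an alternative to the search-and-mark approach described above), and the vertex count and edge list in the example are hypothetical:

```python
def kruskal(num_vertices, edges):
    """Kruskal's algorithm: take edges in ascending weight order,
    skipping any edge that would close a cycle. Cycle detection uses
    union-find: two vertices in the same component would form a cycle."""
    parent = list(range(num_vertices))

    def find(v):
        # follow parent pointers to the component's root, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for weight, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:               # different components: no cycle is formed
            parent[ru] = rv        # merge the two components
            tree.append((weight, u, v))
    return tree

# A small hypothetical graph: 4 vertices, edges given as (weight, u, v)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]
print(kruskal(4, edges))  # [(1, 0, 1), (2, 2, 3), (3, 1, 2)]
```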
Maximum Flow in weighted graphs
In a flow graph, edges have a forward capacity, a direction, and a flow quantity in that direction that is less than or equal to the forward capacity. Residual capacity is capacity minus flow in the
direction of the edge, plus flow in the other direction.
Max flow in the Ford-Fulkerson method requires a step that searches for a viable path from the source to the sink vertex, with non-zero residual capacity at each step of the path. The minimum
residual capacity along the path then determines the maximum flow added for this path. Multiple iterations of searches using BFS can be done (the Edmonds-Karp algorithm), until the sink vertex is
not marked when the last node comes off the queue. All nodes marked in the last iteration are said to be on the source side of the minimum cut.
Here are two Java examples of the Ford-Fulkerson method using BFS. The first uses maps to map vertices to input edges, whilst the second avoids the Collections types Map and List by counting the
edges at each vertex, then allocating for each vertex an array of edges indexed by vertex number, and by using a primitive list-node class to implement the queue for BFS.
For both programs, the input is lines of "vertex_1, vertex_2, capacity", and the output is lines of "vertex_1, vertex_2, capacity, flow", which describe the initial and final flow graph.
// copyright GFDL and CC-BY-SA
package test.ff;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
public class Main {
    public static void main(String[] args) throws IOException {
        final String filename = args[0];
        BufferedReader br = new BufferedReader(new FileReader(filename));
        String line;
        ArrayList<String[]> lines = new ArrayList<>();
        while ((line = br.readLine()) != null) {
            String[] toks = line.split("\\s+");
            if (toks.length == 3)
                lines.add(toks);
        }
        br.close();
        int[][] edges = new int[lines.size()][4];
        // edges, 0 is from-vertex, 1 is to-vertex, 2 is capacity, 3 is flow
        for (int i = 0; i < edges.length; ++i)
            for (int j = 0; j < 3; ++j)
                edges[i][j] = Integer.parseInt(lines.get(i)[j]);
        Map<Integer, List<int[]>> edgeMap = new HashMap<>();
        // add both ends into the edge map for each edge
        int last = -1;
        for (int i = 0; i < edges.length; ++i)
            for (int j = 0; j < 2; ++j) {
                edgeMap.computeIfAbsent(edges[i][j],
                        k -> new LinkedList<int[]>()).add(edges[i]);
                // find the highest numbered vertex, which will be the sink.
                if (edges[i][j] > last)
                    last = edges[i][j];
            }
        while (true) {
            boolean[] visited = new boolean[last + 1];
            int[] previous = new int[last + 1];
            int[][] edgeTo = new int[last + 1][];
            LinkedList<Integer> q = new LinkedList<>();
            q.add(0);
            int v = 0;
            while (!q.isEmpty()) {
                v = q.removeFirst();
                visited[v] = true;
                if (v == last)
                    break;
                int prevQsize = q.size();
                for (int[] edge : edgeMap.get(v)) {
                    if (v == edge[0] &&
                            !visited[edge[1]] &&
                            edge[2] - edge[3] > 0)
                        q.add(edge[1]);
                    else if (v == edge[1] &&
                            !visited[edge[0]] &&
                            edge[3] > 0)
                        q.add(edge[0]);
                    else
                        continue;
                    edgeTo[q.getLast()] = edge;
                }
                for (int i = prevQsize; i < q.size(); ++i)
                    previous[q.get(i)] = v;
            }
            if (v == last) {
                int a = v;
                int b = v;
                int smallest = Integer.MAX_VALUE;
                while (a != 0) {
                    // get the path by following previous,
                    // also find the smallest forward capacity
                    a = previous[b];
                    int[] edge = edgeTo[b];
                    if (a == edge[0] && edge[2] - edge[3] < smallest)
                        smallest = edge[2] - edge[3];
                    else if (a == edge[1] && edge[3] < smallest)
                        smallest = edge[3];
                    b = a;
                }
                // fill the capacity along the path to the smallest
                b = last;
                a = last;
                while (a != 0) {
                    a = previous[b];
                    int[] edge = edgeTo[b];
                    if (a == edge[0])
                        edge[3] = edge[3] + smallest;
                    else
                        edge[3] = edge[3] - smallest;
                    b = a;
                }
            } else {
                // v != last, so no augmenting path was found:
                // max flow reached
                break;
            }
        }
        for (int[] edge : edges) {
            for (int j = 0; j < 4; ++j)
                System.out.printf("%d,", edge[j]);
            System.out.println();
        }
    }
}
// copyright GFDL and CC-BY-SA
package test.ff2;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
public class MainFFArray {
    static class Node {
        public Node(int i) {
            v = i;
        }
        int v;
        Node next;
    }
    public static void main(String[] args) throws IOException {
        final String filename = args[0];
        BufferedReader br = new BufferedReader(new FileReader(filename));
        String line;
        ArrayList<String[]> lines = new ArrayList<>();
        while ((line = br.readLine()) != null) {
            String[] toks = line.split("\\s+");
            if (toks.length == 3)
                lines.add(toks);
        }
        br.close();
        // edges: 0 is from-vertex, 1 is to-vertex, 2 is capacity, 3 is flow
        int[][] edges = new int[lines.size()][4];
        for (int i = 0; i < edges.length; ++i)
            for (int j = 0; j < 3; ++j)
                edges[i][j] = Integer.parseInt(lines.get(i)[j]);
        // the highest numbered vertex is the sink
        int last = 0;
        for (int[] edge : edges)
            for (int j = 0; j < 2; ++j)
                if (edge[j] > last)
                    last = edge[j];
        // count the edges touching each vertex ...
        int[] ne = new int[last + 1];
        for (int[] edge : edges)
            for (int j = 0; j < 2; ++j)
                ne[edge[j]]++;
        // ... and allocate an array of edges for each vertex
        int[][][] edgeFrom = new int[last + 1][][];
        for (int i = 0; i < last + 1; ++i)
            edgeFrom[i] = new int[ne[i]][];
        int[] ie = new int[last + 1];
        for (int[] edge : edges)
            for (int j = 0; j < 2; ++j)
                edgeFrom[edge[j]][ie[edge[j]]++] = edge;
        while (true) {
            // BFS from the source (vertex 0), using a primitive list as the queue
            Node head = new Node(0);
            Node tail = head;
            int[] previous = new int[last + 1];
            for (int i = 0; i < last + 1; ++i)
                previous[i] = -1;
            previous[0] = 0;
            int[][] pathEdge = new int[last + 1][];
            while (head != null) {
                int v = head.v;
                if (v == last)
                    break;
                int[][] edgesFrom = edgeFrom[v];
                for (int[] edge : edgesFrom) {
                    int nv = -1;
                    if (edge[0] == v && previous[edge[1]] == -1 && edge[2] - edge[3] > 0)
                        nv = edge[1];
                    else if (edge[1] == v && previous[edge[0]] == -1 && edge[3] > 0)
                        nv = edge[0];
                    if (nv == -1)
                        continue;
                    previous[nv] = v;
                    pathEdge[nv] = edge;
                    Node node = new Node(nv);
                    tail.next = node;
                    tail = tail.next;
                }
                head = head.next;
            }
            if (head == null)
                break; // no augmenting path was found: max flow reached
            // find the smallest residual capacity along the path found
            int v = last;
            int minCapacity = Integer.MAX_VALUE;
            while (v != 0) {
                int fv = previous[v];
                int[] edge = pathEdge[v];
                if (edge[0] == fv && minCapacity > edge[2] - edge[3])
                    minCapacity = edge[2] - edge[3];
                else if (edge[1] == fv && minCapacity > edge[3])
                    minCapacity = edge[3];
                v = fv;
            }
            // adjust the flow along the path by the minimum capacity
            v = last;
            while (v != 0) {
                int fv = previous[v];
                int[] edge = pathEdge[v];
                if (edge[0] == fv)
                    edge[3] += minCapacity;
                else if (edge[1] == fv)
                    edge[3] -= minCapacity;
                v = fv;
            }
        }
        for (int[] edge : edges) {
            for (int j = 0; j < 4; ++j)
                System.out.printf("%d-", edge[j]);
            System.out.println();
        }
    }
}
Hill Climbing
Hill climbing is a technique for certain classes of optimization problems. The idea is to start with a sub-optimal solution to a problem (i.e., start at the base of a hill) and then repeatedly
improve the solution (walk up the hill) until some condition is maximized (the top of the hill is reached).
Hill-Climbing Methodology
1. Construct a sub-optimal solution that meets the constraints of the problem
2. Take the solution and make an improvement upon it
3. Repeatedly improve the solution until no more improvements are necessary/possible
One of the most popular hill-climbing problems is the network flow problem. Although network flow may sound somewhat specific it is important because it has high expressive power: for example, many
algorithmic problems encountered in practice can actually be considered special cases of network flow. After covering a simple example of the hill-climbing approach for a numerical problem we cover
network flow and then present examples of applications of network flow.
Newton's Root Finding Method
An illustration of Newton's method: The zero of the f(x) function is at x. We see that the guess x[n+1] is a better guess than x[n] because it is closer to x. (from Wikipedia)
Newton's Root Finding Method is a three-centuries-old algorithm for finding numerical approximations to the roots of a function (that is, the points ${\displaystyle x}$ where the function
${\displaystyle f(x)}$ becomes zero), starting from an initial guess. You need to know the function ${\displaystyle f(x)\,}$ and its first derivative ${\displaystyle f'(x)\,}$ for this algorithm. The
idea is the following: in the vicinity of the initial guess ${\displaystyle x_{0}}$ we can form the Taylor expansion of the function
${\displaystyle f(x)=f(x_{0}+\epsilon )\,}$ ${\displaystyle \approx f(x_{0})+\epsilon f'(x_{0})}$ ${\displaystyle +{\frac {\epsilon ^{2}}{2}}f''(x_{0})+...}$
which gives a good approximation to the function near ${\displaystyle x_{0}}$ . Taking only the first two terms on the right hand side, setting them equal to zero, and solving for ${\displaystyle \
epsilon }$ , we obtain
${\displaystyle \epsilon =-{\frac {f(x_{0})}{f'(x_{0})}}}$
which we can use to construct a better solution
${\displaystyle x_{1}=x_{0}+\epsilon =x_{0}-{\frac {f(x_{0})}{f'(x_{0})}}.}$
This new solution can be the starting point for applying the same procedure again. Thus, in general a better approximation can be constructed by repeatedly applying
${\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.}$
As shown in the illustration, this is nothing else but the construction of the zero from the tangent at the initial guessing point. In general, Newton's root finding method converges quadratically,
except when the first derivative ${\displaystyle f'(x)\,}$ vanishes at the root.
Coming back to the "Hill climbing" analogy, we could apply Newton's root finding method not to the function ${\displaystyle f(x)\,}$ , but to its first derivative ${\displaystyle f'(x)\,}$ , that is
look for ${\displaystyle x}$ such that ${\displaystyle f'(x)=0\,}$ . This would give the extremal positions of the function, its maxima and minima. Starting Newton's method close enough to a maximum
this way, we climb the hill.
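The iteration rule above translates directly into code. A minimal Python sketch, applied to f(x) = x^2 - 2 so that the root found is the square root of 2 (the tolerance and iteration cap are arbitrary choices):

```python
def newton(f, fprime, x0, tolerance=1e-12, max_iterations=50):
    """Repeatedly apply x <- x - f(x)/f'(x), starting from the guess x0."""
    x = x0
    for _ in range(max_iterations):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tolerance:  # the correction has become negligible
            break
    return x

# Find the positive root of f(x) = x^2 - 2, i.e. the square root of 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(round(root, 10))  # 1.4142135624
```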
Example application of Newton's method
The net present value function is a function of time, an interest rate, and a series of cash flows. A related function is the internal rate of return (IRR). Each period contributes
${\displaystyle CF_{t}/(1+r)^{t}}$ , and summing over the periods gives the total discounted cash flow, a polynomial in ${\displaystyle 1/(1+r)}$ that equals zero when the interest rate
${\displaystyle r}$ equals the IRR. In using Newton's method, ${\displaystyle x}$ is the interest rate and ${\displaystyle y}$ is the total discounted cash flow, and the method uses the derivative of
this polynomial to find the slope of the graph at a given interest rate (the ${\displaystyle x}$ -value), which gives ${\displaystyle x_{n+1}}$ , a better interest rate to try in the next iteration
in search of the ${\displaystyle x}$ where ${\displaystyle y}$ (the total return) is zero.
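The IRR computation just described can be sketched in a few lines of Python; the cash-flow series in the example is hypothetical, and the net present value here discounts each cash flow CF[t] by (1+r)^t:

```python
def irr(cash_flows, guess=0.1, tolerance=1e-10):
    """Internal rate of return: the rate r at which the net present value
    sum(CF[t] / (1+r)**t) is zero, found by Newton's method."""
    r = guess
    for _ in range(100):
        npv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
        # derivative of the NPV polynomial with respect to r
        d_npv = sum(-t * cf / (1 + r) ** (t + 1)
                    for t, cf in enumerate(cash_flows))
        step = npv / d_npv
        r -= step
        if abs(step) < tolerance:
            break
    return r

# Hypothetical cash flows: pay 100 now, receive 60 in each of two years.
print(round(irr([-100, 60, 60]), 4))  # 0.1307
```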
Instead of regarding continuous functions, the hill-climbing method can also be applied to discrete networks.
Network Flow
Suppose you have a directed graph (possibly with cycles) with one vertex labeled as the source and another vertex labeled as the destination or the "sink". The source vertex only has edges coming out
of it, with no edges going into it. Similarly, the destination vertex only has edges going into it, with no edges coming out of it. We can assume that the graph has no dead ends;
i.e., for every vertex (except the source and the sink), there is at least one edge going into the vertex and one edge going out of it.
We assign a "capacity" to each edge, and initially we'll consider only integral-valued capacities. The following graph meets our requirements, where "s" is the source and "t" is the destination:
Now imagine a stream of units arriving at the source that we want to carry along the edges over to the sink. The number of units we can send on an edge at a time must be
less than or equal to the edge's capacity. You can think of the vertices as cities and the edges as roads between the cities: we want to send as many cars from the source city to the destination
city as possible, with the constraint that we cannot send more cars down a road than its capacity can handle.
The goal of network flow is to send as much traffic from ${\displaystyle s}$ to ${\displaystyle t}$ as the edge capacities allow.
To organize the traffic routes, we can build a list of different paths from city ${\displaystyle s}$ to city ${\displaystyle t}$ . Each path has a carrying capacity equal to the smallest capacity
value for any edge on the path; for example, consider the following path ${\displaystyle p}$ :
Even though the final edge of ${\displaystyle p}$ has a capacity of 8, that edge only has one car traveling on it because the edge before it only has a capacity of 1 (thus, that edge is at full
capacity). After using this path, we can compute the residual graph by subtracting 1 from the capacity of each edge:
(We subtracted 1 from the capacity of each edge in ${\displaystyle p}$ because 1 was the carrying capacity of ${\displaystyle p}$ .) We can say that path ${\displaystyle p}$ has a flow of 1.
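The bookkeeping for a single path can be sketched directly. The Python below is an illustrative helper, not the book's code, and since the figure's capacities are not reproduced here, the example values (3, 1, 8 along a path s → a → b → t) are assumptions:

```python
def augment(capacity, path):
    """Push flow along a path given as a list of (u, v) edges.

    capacity maps (u, v) -> remaining capacity.  The carrying capacity
    of the path is the smallest remaining capacity on it; pushing that
    much flow yields the residual graph.
    """
    bottleneck = min(capacity[e] for e in path)
    for (u, v) in path:
        capacity[(u, v)] -= bottleneck                            # forward residual shrinks
        capacity[(v, u)] = capacity.get((v, u), 0) + bottleneck   # backward residual grows
    return bottleneck

# A path whose edges have capacities 3, 1, 8 carries only 1 unit:
cap = {('s', 'a'): 3, ('a', 'b'): 1, ('b', 't'): 8}
p = [('s', 'a'), ('a', 'b'), ('b', 't')]
```

Note that the helper also credits the backward direction of each edge; this is exactly the residual-graph convention used by the formal definition that follows.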
Formally, a flow is an assignment ${\displaystyle f(e)}$ of values to the set of edges in the graph ${\displaystyle G=(V,E)}$ such that:
1. ${\displaystyle \forall e\in E:f(e)\in \mathbb {R} }$
2. ${\displaystyle \forall (u,v)\in E:f((u,v))=-f((v,u))}$
3. ${\displaystyle \forall u\in V,u\neq s,t:\sum _{v\in V}f(u,v)=0}$
4. ${\displaystyle \forall e\in E:f(e)\leq c(e)}$
Where ${\displaystyle s}$ is the source node and ${\displaystyle t}$ is the sink node, and ${\displaystyle c(e)\geq 0}$ is the capacity of edge ${\displaystyle e}$ . We define the value of a flow ${\
displaystyle f}$ to be:
${\displaystyle {\textrm {Value}}(f)=\sum _{v\in V}f((s,v))}$
The goal of network flow is to find an ${\displaystyle f}$ such that ${\displaystyle {\textrm {Value}}(f)}$ is maximal. To be maximal means that no other flow assignment obeying
constraints 1-4 has a higher value. The traffic example illustrates what the four flow constraints mean:
1. ${\displaystyle \forall e\in E:f(e)\in \mathbb {R} }$ . This rule simply defines a flow to be a function from edges in the graph to real numbers. The function is defined for every edge in the
graph. You could also consider the "function" to simply be a mapping: every edge can be an index into an array, and the value of the array at an edge is the value of the flow function at that edge.
2. ${\displaystyle \forall (u,v)\in E:f((u,v))=-f((v,u))}$ . This rule says that if some traffic flows from node u to node v, then the same amount is considered to flow negatively
from v to u. For example, if two cars are flowing from city u to city v, then negative two cars are going in the other direction. Similarly, if three cars are going from city u to city v and two
cars are going from city v to city u, then the net effect is the same as if one car was going from city u to city v and no cars were going from city v to city u.
3. ${\displaystyle \forall u\in V,u\neq s,t:\sum _{v\in V}f(u,v)=0}$ . This rule says that the net flow through every vertex (except the source and the destination) is zero: you will never have more
cars going into a city than coming out of it. New cars can only come from the source, and cars can only be stored at the destination; whatever flows out of s must
eventually flow into t. Note that if a city has three cars coming into it, it could send two cars to one city and the remaining car to a different city. Also, a city might receive cars
from multiple neighbors (although all ultimately originate at city s).
4. ${\displaystyle \forall e\in E:f(e)\leq c(e)}$ . This rule says that the flow on an edge can never exceed that edge's capacity.
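The four constraints can be checked mechanically. The Python sketch below is illustrative (the dictionary encoding of flows and capacities is an assumption, not from the book): constraint 1 is implicit in using numeric values, and the other three are checked directly.

```python
def is_valid_flow(f, c, vertices, s, t):
    """Check the flow constraints for flow f against capacity c.

    f and c map ordered vertex pairs (u, v) to numbers; pairs that are
    missing from a dictionary are treated as 0.
    """
    def flow(u, v):
        return f.get((u, v), 0)

    # 2. skew symmetry: f(u, v) = -f(v, u)
    skew = all(flow(u, v) == -flow(v, u) for u in vertices for v in vertices)
    # 3. conservation: net flow is zero except at the source and sink
    conserved = all(sum(flow(u, v) for v in vertices) == 0
                    for u in vertices if u not in (s, t))
    # 4. capacity: f(e) <= c(e) on every edge
    feasible = all(flow(u, v) <= cap for (u, v), cap in c.items())
    return skew and conserved and feasible
```

For example, pushing 2 units along a single path s → a → t with capacities 2 satisfies all constraints, while pushing 3 violates both conservation and capacity.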
The Ford-Fulkerson Algorithm
The following algorithm computes the maximal flow for a given graph with non-negative capacities. What the algorithm does is easy to understand, but showing that it terminates
and produces an optimal solution is non-trivial.
function net-flow(graph (V, E), node s, node t, cost c): flow
initialize f(e) := 0 for all e in E
loop while not done
for all e in E: // compute residual capacities
let cf(e) := c(e) - f(e)
let Gf := (V, {e : e in E and cf(e) > 0})
find a path p from s to t in Gf // e.g., use depth first search
if no path p exists: signal done
let path-capacities := map(p, cf) // a path is a set of edges
let m := min-val-of(path-capacities) // smallest residual capacity of p
for all (u, v) in p: // maintain flow constraints
f((u, v)) := f((u, v)) + m
f((v, u)) := f((v, u)) - m
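The pseudocode above can be turned into a short runnable sketch. The Python below is an illustrative implementation, not the book's code: it finds augmenting paths with breadth-first search and keeps residual capacities in a dictionary. The example graph at the end is an assumption for the demo.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).

    capacity: dict mapping (u, v) -> non-negative capacity.
    Returns the value of a maximal flow from s to t.
    """
    cf = dict(capacity)          # residual capacities
    adj = {}
    for (u, v) in capacity:
        cf.setdefault((v, u), 0) # reverse edges start at 0
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cf[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total         # no augmenting path: flow is maximal
        # walk back from t, find the bottleneck, then push flow
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        m = min(cf[e] for e in path)
        for (u, v) in path:
            cf[(u, v)] -= m      # maintain flow constraints
            cf[(v, u)] += m
        total += m

# Example: capacities out of s sum to 5, and 5 units can reach t.
cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
       ('a', 't'): 2, ('b', 't'): 3}
```

On this example graph `max_flow(cap, 's', 't')` returns 5, matching the capacity of the cut around s.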
The variant of Ford-Fulkerson described here uses repeated calls to breadth-first search (using a queue to schedule the children of a node to become the current node). Breadth-first search extends
each candidate path by one edge at a time, so the first path to reach the destination, a shortest path, is the first off the queue. This is in contrast with using a stack, which is depth-first
search and will come up with *some* path to the target, with the "descendants" of the current node examined first, but not necessarily a shortest one.
• Each search finds one path from source to target. All nodes are unmarked at the beginning of a new search; seen nodes are "marked" and are not searched again if encountered later.
Eventually, all reachable nodes have been scheduled on the queue, and no more unmarked nodes can be reached from the last node taken off the queue.
• During the search, each scheduled node records the edge that found it (consisting of the current node, the found node, a current flow, and a total capacity in the direction from first to second node).
• These records allow tracing a reverse path from the target node, once reached, back to the start node. The edges on the path are then examined to find the one with the minimum remaining
capacity; that amount becomes the flow along the path and is subtracted from the remaining capacity of each edge on it. At the "bottleneck" edge with the minimum remaining
capacity, no more flow is possible in the forward direction, but flow is still possible in the backward direction.
• This process of BFS for a path to the target node, then filling the path up to the bottleneck edge's residual capacity, is repeated until BFS cannot find a path to the target node (the node is
not reached because every sequence of edges leading to the target has had its bottleneck edge filled). The side effects of previously found paths are thus recorded in the flows of the
edges and affect the results of future searches.
• An important property of maximal flow is that flow can occur in the backward direction of an edge: the residual capacity in the backward direction equals the current flow in the forward direction,
while the residual capacity in the forward direction is the initial capacity less the forward flow. Intuitively, this gives later searches more options for maximizing flow when earlier augmenting
paths have blocked off shorter paths.
• On termination, the algorithm retains the marked and unmarked node states from the last BFS.
• The minimum cut consists of the two sets of marked and unmarked nodes formed by the last, unsuccessful BFS, which starts from the start node and never marks the target node. The start node belongs
to one side of the cut, and the target node belongs to the other. By convention, being "in the cut" means being on the start side, i.e., being a marked node. Recall how a node comes to be marked,
given an edge with a flow and a residual capacity.
Example application of Ford-Fulkerson maximum flow/ minimum cut
An example application of Ford-Fulkerson is baseball season elimination. The question is whether a team can still possibly win the season, given the combinations of wins available to the other teams.
The idea is to set up a flow graph in which no other team can exceed the total number of wins the target team can maximally achieve over the entire season. There are game nodes whose incoming edges
carry the number of remaining matches between two teams; each game node flows out to two team nodes via edges that do not limit forward flow, so a team node receives edges from all games it
participates in. Each team node then flows to the virtual target node through an edge whose capacity limits that team's wins. In a maximal-flow state where the target team's total wins exceed every
combination of wins of the other teams, the final search will cut off the start node from the rest of the graph, because no flow remains possible into any of the game nodes (recall what happens to
the flow in the second part of the algorithm after finding a path). This is because, in seeking the maximal flow along each path, the game edges' capacities are maximally drained by the win-limit
edges further along the path; any residual game capacity means there are more games to be played whose results would make at least one team overtake the target
team's maximal wins. If a team node is in the minimum cut, then there is an edge with residual capacity leading to that team. What does that mean, given the previous statements? What do the teams
found in a minimum cut represent (hint: consider the game node edges)?
Example: Maximum bipartite matching (intern matching)
This matching problem doesn't include preference weightings. A set of companies offers jobs, which are pooled into one big set, and interns apply to companies for specific jobs. The applications are
edges with a capacity of 1. To convert the bipartite matching problem to a maximum flow problem, virtual vertices s and t are created, with capacity-1 edges from s to all interns and from all
jobs to t. The Ford-Fulkerson algorithm is then used to sequentially saturate capacity-1 edges of the graph by augmenting found paths. An intermediate state may be reached where leftover
interns and jobs are unmatched, but backtracking along reverse edges, which have residual capacity = forward flow = 1, along longer paths found later during breadth-first search, undoes
previous suboptimal augmenting paths and provides further search options for matching. The process terminates only when maximal flow, i.e., a maximum matching, is reached.
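On the unit-capacity graph this specializes to the classic augmenting-path matching algorithm. The Python sketch below is illustrative (the intern and job names are invented for the demo): re-assigning a job's current holder along a reverse edge is exactly the "backtracking" described above.

```python
def max_bipartite_matching(applications):
    """Maximum matching of interns to jobs via augmenting paths.

    applications: dict mapping each intern to the list of jobs applied
    for.  This is Ford-Fulkerson on the unit-capacity graph
    s -> interns -> jobs -> t, where each augmenting path may flip
    earlier assignments along reverse edges.
    """
    match = {}  # job -> intern currently holding it

    def try_assign(intern, visited):
        for job in applications[intern]:
            if job in visited:
                continue
            visited.add(job)
            # take the job if it is free, or if its current holder
            # can be moved to some other job (the reverse edge)
            if job not in match or try_assign(match[job], visited):
                match[job] = intern
                return True
        return False

    matched = sum(try_assign(i, set()) for i in applications)
    return matched, match

# Three interns, three jobs; a perfect matching exists only after
# re-assignment along reverse edges:
apps = {'ann': ['dev'], 'bob': ['dev', 'qa'], 'cy': ['qa', 'ops']}
```

On this input all three interns are matched, even though the greedy first choices of 'bob' and 'cy' collide with earlier assignments.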
Ada Implementation
Welcome to the Ada implementations of the Algorithms Wikibook. For those who are new to Ada Programming a few notes:
• All examples are fully functional with all the needed input and output operations. However, only the code needed to outline the algorithms at hand is copied into the text - the full samples are
available via the download links. (Note: it can take up to 48 hours until the CVS is updated.)
• The algorithms in the book are written in a pseudolanguage. Every computer language has its own conventions for writing identifiers; some languages are case sensitive, Ada isn't; some write
identifiers in CamelCase. Ada uses the convention of separating words by underscores and capitalizing the first character of each word. For numerical values, Ada uses the convention of separating
digit groups by underscores for better readability - compare 10000000 to 10_000_000, or 5000001 to 50_000_01 (e.g. 50 thousand € and one ¢).
• We seldom use predefined types in the sample code but define special types suitable for the algorithms at hand.
• Ada allows for default function parameters; however, we always fill in and name all parameters, so the reader can see which options are available.
• We seldom use shortcuts - like using the attributes Image or Value for String <=> Integer conversions.
All these rules make the code more elaborate than perhaps needed. However, we also hope they make the code easier to understand.
Chapter 1: Introduction
The following subprograms are implementations of the Inventing an Algorithm examples.
To Lower
The Ada example code does not append to the array as the algorithm in the book does. Instead we create an empty array of the desired length and then replace the characters inside.
function To_Lower (C : Character) return Character renames
  Ada.Characters.Handling.To_Lower;

-- tolower - translates all alphabetic, uppercase characters
-- in str to lowercase
function To_Lower (Str : String) return String is
   Result : String (Str'Range);
begin
   for C in Str'Range loop
      Result (C) := To_Lower (Str (C));
   end loop;
   return Result;
end To_Lower;
Would the append approach be impossible with Ada? No, but it would be significantly more complex and slower.
Equal Ignore Case
-- equal-ignore-case -- returns true if s and t are equal,
-- ignoring case
function Equal_Ignore_Case
  (S : String;
   T : String)
   return Boolean
is
   O : constant Integer := S'First - T'First;
begin
   if T'Length /= S'Length then
      return False; -- if they aren't the same length,
                    -- they aren't equal
   end if;
   for I in S'Range loop
      if To_Lower (S (I)) /=
         To_Lower (T (I + O))
      then
         return False;
      end if;
   end loop;
   return True;
end Equal_Ignore_Case;
Chapter 6: Dynamic Programming
Fibonacci numbers
The following codes are implementations of the Fibonacci-Numbers examples.
Simple Implementation
To calculate Fibonacci numbers negative values are not needed, so we define an integer type which starts at 0. With this integer type you can calculate up to Fib (87); Fib (88) will result
in a Constraint_Error.
type Integer_Type is range 0 .. 999_999_999_999_999_999;
You might notice that there is no equivalent to the assert (n >= 0) from the original example. Ada will test the validity of the parameter before the function is called.
function Fib (n : Integer_Type) return Integer_Type is
begin
   if n = 0 then
      return 0;
   elsif n = 1 then
      return 1;
   else
      return Fib (n - 1) + Fib (n - 2);
   end if;
end Fib;
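As an aside, the claim that Fib (87) is the largest Fibonacci number fitting the declared range can be cross-checked with a short script; Python is used here simply because the arithmetic is language-independent, and the iterative loop is an assumption of the check, not the Ada code above.

```python
# Check the range claim for the Ada type above:
# Integer_Type'Last is 999_999_999_999_999_999 (eighteen nines).
LAST = 999_999_999_999_999_999

def fib(n):
    """Iterative Fibonacci with fib(0) = 0, fib(1) = 1."""
    u, v = 0, 1
    for _ in range(n):
        u, v = v, u + v
    return u

assert fib(87) <= LAST   # Fib (87) still fits the Ada type
assert fib(88) > LAST    # Fib (88) would raise Constraint_Error
```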
Cached Implementation
For this implementation we need a special cache type that can also store -1 as a "not calculated" marker:
type Cache_Type is range -1 .. 999_999_999_999_999_999;
The actual type for calculating the Fibonacci numbers continues to start at 0. As it is a subtype of the cache type, Ada will automatically convert between the two (the conversion is - of course -
checked for validity).
subtype Integer_Type is Cache_Type range
0 .. Cache_Type'Last;
In order to know how large the cache needs to be, we first read the actual value from the command line.
Value : constant Integer_Type :=
Integer_Type'Value (Ada.Command_Line.Argument (1));
The Cache array starts with element 2 since Fib (0) and Fib (1) are constants and ends with the value we want to calculate.
type Cache_Array is
array (Integer_Type range 2 .. Value) of Cache_Type;
The Cache is initialized to the first valid value of the cache type — this is -1.
F : Cache_Array := (others => Cache_Type'First);
What follows is the actual algorithm.
function Fib (N : Integer_Type) return Integer_Type is
begin
   if N = 0 or else N = 1 then
      return N;
   elsif F (N) /= Cache_Type'First then
      return F (N);
   else
      F (N) := Fib (N - 1) + Fib (N - 2);
      return F (N);
   end if;
end Fib;
This implementation is faithful to the original from the Algorithms book. However, in Ada you would normally do it a little differently:
use a slightly larger array which also stores the elements 0 and 1, initialized to the correct values,
type Cache_Array is
array (Integer_Type range 0 .. Value) of Cache_Type;
F : Cache_Array :=
(0 => 0,
1 => 1,
others => Cache_Type'First);
and then you can remove the first if path, so the cache lookup becomes the first statement of the function:

   if F (N) /= Cache_Type'First then
This will save about 45% of the execution-time (measured on Linux i686) while needing only two more elements in the cache array.
Memory Optimized Implementation
This version looks just like the original in WikiCode.
type Integer_Type is range 0 .. 999_999_999_999_999_999;
function Fib (N : Integer_Type) return Integer_Type is
   U : Integer_Type := 0;
   V : Integer_Type := 1;
begin
   for I in 2 .. N loop
      Calculate_Next : declare
         T : constant Integer_Type := U + V;
      begin
         U := V;
         V := T;
      end Calculate_Next;
   end loop;
   return V;
end Fib;
No 64 bit integers
Your Ada compiler does not support 64-bit integer numbers? Then you could try to use decimal numbers instead. Using decimal numbers results in a slower program (it takes about three times as long),
but the result will be the same.
The following example shows you how to define a suitable decimal type. Do experiment with the digits and range parameters until you get the optimum out of your Ada compiler.
type Integer_Type is delta 1.0 digits 18 range
0.0 .. 999_999_999_999_999_999.0;
You should know that floating-point numbers are unsuitable for the calculation of Fibonacci numbers. They will not report an error condition when the calculated number becomes too large; instead
they lose precision, which makes the result meaningless.
GNU Free Documentation License
As of July 15, 2009 Wikibooks has moved to a dual-licensing system that supersedes the previous GFDL only licensing. In short, this means that text licensed under the GFDL only can no longer be
imported to Wikibooks, retroactive to 1 November 2008. Additionally, Wikibooks text might or might not now be exportable under the GFDL depending on whether or not any content was added and not
removed since July 15.
Version 1.3, 3 November 2008 Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute
it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being
considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft
license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms
that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We
recommend this License principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice
grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the
public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall
subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not
explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position
regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If
a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any
Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover
Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text
formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been
arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent"
is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming
simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited
only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works
in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
The "publisher" means any person or entity that distributes copies of the Document to the public.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ
stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document
means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this
License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or
further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the
conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose
the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly
identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in
addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each
Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of
added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus
accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of
the Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the
Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the
Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History
section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors
of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
C. State on the Title page the name of the publisher of the Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page.
If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the
Modified Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous
versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the
original publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements
and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified version.
N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or
all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage
of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by
you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the
old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of
the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name
but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique
number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled
"Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy
that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this
License in all other respects regarding verbatim copying of that document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the
copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this
License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed
on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the
whole aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original
versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will
automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally
terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not
permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the
option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does
not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which
future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document.
11. RELICENSING
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A
public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site.
"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco,
California, as well as future copyleft versions of that license published by that same organization.
"Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document.
An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated
in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.
How to use this License for your documents
To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:
Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to
permit their use in free software.
1. A (mathematical) integer larger than the largest "int" directly supported by your computer's hardware is often called a "BigInt". Working with such large numbers is often called "multiple
precision arithmetic". There are entire books on the various algorithms for dealing with such numbers, such as:
□ Modern Computer Arithmetic, Richard Brent and Paul Zimmermann, Cambridge University Press, 2010.
□ Donald E. Knuth, The Art of Computer Programming, Volume 2: Seminumerical Algorithms (3rd edition), 1997.
People who implement such algorithms may
□ write a one-off implementation for one particular application
□ write a library that you can use for many applications, such as GMP (the GNU Multiple Precision Arithmetic Library), McCutchen's Big Integer Library, or the various libraries used to demonstrate RSA encryption
□ build those algorithms into the compiler or runtime of a programming language (such as Python or Lisp) that automatically switches from standard integers to BigInts when necessary
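Python takes the last approach: its built-in int is arbitrary-precision, so arithmetic that would overflow a fixed-width hardware integer simply keeps going. A minimal demonstration:

```python
# Python's built-in int promotes to arbitrary precision automatically:
# no separate BigInt type is needed, and integer overflow cannot occur.

def factorial(n):
    """Iterative factorial; the result quickly exceeds any 64-bit integer."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

f = factorial(30)      # 30! = 265252859812191058636308480000000
print(f > 2**64)       # True: far larger than any 64-bit hardware int
print(f.bit_length())  # bits needed to represent 30!
```

The same code would require an explicit multiple-precision library in C or Java; in Python the switch from machine-word arithmetic to BigInt arithmetic happens transparently inside the interpreter.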
A New Algorithm for Finding Mixed Layer Depths with Applications to Argo Data and Subantarctic Mode Water Formation
1. Introduction
The surface layer of the ocean records past winter mixing events and the subsequent onset of spring restratification, as well as the traces of all physical processes occurring above the ocean’s
permanent thermocline. Typical ocean observations reveal a well-mixed layer, in which temperature, salinity, and density are nearly vertically uniform, embedded in the surface layer. Turbulent mixing
processes powered by wind stress and heat exchange at the air–sea interface create this neutrally buoyant and thoroughly mixed column in the upper ocean. This turbulently mixed layer is highly
variable. In the summer, mixed layer depths (MLDs) may be only tens of meters deep or even absent. In the winter, deep convection driven by surface heat loss can mix the water column to 2000 m in select
locations (Marshall and Schott 1999). Coupled with the intense seasonal and spatial variation of the mixed layer is a complexity of structures in the ocean surface layer that can often obscure the
depth of the turbulently mixed layer (Sprintall and Roemmich 1999; Dong et al. 2008).
The mixed layer is important to a variety of ocean processes. The mixed layer responds to atmospheric fluxes and transmits those fluxes to the ocean interior. Wind forcing acts through the mixed
layer to drive ocean circulation (Chereskin and Roemmich 1991). The depth of the mixed layer establishes the volume of water over which the surface heat flux is distributed (Chen et al. 1994; Ohlmann
et al. 1996). In areas where deep convection occurs, winter mixed layer conditions set the properties of the deep and intermediate water masses of the ocean’s interior (Talley 1999).
Widespread interest in the processes at work in the mixed layer has spawned numerous arbitrary definitions of the mixed layer, as well as a corresponding number of schemes for finding its depth.
Because of the paucity of ocean turbulence and mixing measurements, these schemes use temperature and density profiles to find the mixed layer. In these schemes and in this paper, MLD refers to the
depth of the uniform surface layer that is assumed to owe its homogeneity to turbulent mixing.
The most widely favored and simplest scheme for finding the MLD is the threshold method. Threshold methods search for the depth at which the temperature or density profiles change by a predefined
amount relative to a surface reference value. Kara et al. (2000) and de Boyer Montégut et al. (2004) examined various threshold criteria used in the literature and determined their own optimal global
threshold definitions of the MLD. In deciding upon their own criteria, de Boyer Montégut et al. (2004) determined that the larger threshold values commonly used with averaged profiles, such as the
0.5°C threshold value used by Monterey and Levitus (1997) and the 0.8°C used by Kara et al. (2000), overestimated the MLD of individual profiles. Likewise, smaller criteria of 0.1°C underestimated
the MLD. After examining numerous profiles, de Boyer Montégut et al. (2004) concluded that 0.2°C was the optimal temperature threshold (TT). They similarly determined an optimal density threshold
(DT) value of 0.03 kg m^−3. To avoid diurnal heating in the surface layer, de Boyer Montégut et al. (2004) chose a surface reference level of 10 m. These values were recently employed by Oka et al.
(2007) in examining the seasonality of the MLD in the North Pacific and by Dong et al. (2008) in examining the mixed layer of the entire Southern Ocean.
Gradient methods, which are also widely used, work much like threshold methods; they assume that there is a strong gradient at the base of the mixed layer and therefore search for critical gradient
values (Lukas and Lindstrom 1991). Dong et al. (2008) report that commonly used values range from 0.0005 to 0.05 kg m^−4 for density gradients (DGs) and 0.025°C m^−1 for temperature gradients (TGs).
Threshold and gradient methods are limited by their dependence on the surface reference value and the chosen threshold value; it is difficult to decide on a single threshold value or gradient
criterion for all ocean profiles. Threshold methods, especially those based solely on temperature, inherently overestimate the MLD. Threshold methods using density falter in density-compensating
layers, and those using temperature falter in the presence of salinity barrier layers (Lukas and Lindstrom 1991; Sprintall and Tomczak 1992). Lukas and Lindstrom (1991) found that a density criterion
is more reliable for finding the MLD than a temperature criterion, yet there is an order of magnitude fewer density profiles than temperature profiles (Lorbacher et al. 2006).
A variety of more complex methods for finding the MLD have been developed. The “curvature method” proposed by Lorbacher et al. (2006) uses conditions for the second derivative and the gradient to
identify the MLD. Thomson and Fine (2003) introduced the “split and merge” method, which fits a variable number of linear segments to a profile. They found that their method performed similarly to
threshold methods. Chu et al. (1999) created a geometric model to determine the MLD of Arctic profiles. Lavender et al. (2002) used the intersection between a straight-line fit to the upper layer and
an exponential plus second-order-polynomial fit to the deep layer to estimate the MLD of individual temperature profiles in the Labrador Sea. This method apparently worked in the North Atlantic, but
efforts to implement the method in the Southern Ocean did not produce realistic MLDs.
This paper introduces a new algorithm for finding the MLD of individual profiles. The algorithm builds on traditional threshold and gradient methods by tying its estimate of the MLD to physical
features in the profile. It accomplishes this by first modeling the profile’s general shape; it approximates the seasonal thermocline and the mixed layer with best-fit lines. It then assembles a
suite of possible MLD values by calculating the threshold and gradient methods’ MLDs, identifying the intersection of the mixed layer and seasonal-thermocline fits (MLTFIT), locating profile maxima
or minima, and searching for intrusions at the base of the mixed layer. Finally, it looks for groupings and patterns within the possible MLDs to select the final MLD for each profile. Section 3
details how the algorithm calculates the possible MLDs and selects the final MLD estimate. The algorithm selection criteria were developed through subjective analysis of individual temperature,
salinity, and potential-density profiles from all oceans, though the greatest emphasis was placed on the southeast Pacific and southwest Atlantic. The algorithm initially produces a temperature MLD
estimate [referred to as the temperature algorithm (TA)]. If the profile also includes salinity, the algorithm subsequently determines the MLDs of the salinity and potential-density profiles. The
salinity MLD estimate mainly serves to verify the potential-density MLD estimate [referred to as the density algorithm (DA)] if they are at the same depth. The complete algorithm is provided in
online supplemental materials.
Visual examination of the numerous profiles and the algorithm MLDs confirms that the algorithm successfully identifies the MLD. The new algorithm avoids many of the pitfalls of threshold and gradient
methods; the threshold methods overestimate the MLD relative to the other methods and the gradient methods find more anomalous MLDs than the other methods. Section 4 compares the algorithm results to
those of standard threshold and gradient methods. For the dataset considered in this paper (introduced in section 2), the temperature algorithm especially improves upon the temperature threshold and
gradient methods. Assuming that density MLD estimates are more reliable than temperature MLD estimates (as found by Lukas and Lindstrom 1991), the standard deviations of the differences between the
density algorithm MLDs and the three temperature method MLDs can serve as a rough measure of each temperature method’s accuracy. The temperature algorithm MLDs nearly match the density algorithm
MLDs; the standard deviation of the difference between the temperature algorithm and density algorithm MLDs is 31 dbar, whereas for the temperature threshold and the temperature gradient methods the
standard deviation of the differences with the density algorithm MLDs are 62 and 121 dbar, respectively. The density algorithm tends to find slightly shallower MLDs than the density threshold method.
The density gradient method finds many anomalous MLDs and is less reliable than either of the other density methods. Preliminary results of applying the algorithms to a larger Southern Ocean dataset
(S. Dong 2006, personal communication) generally support the findings from the study region (Dong et al. 2008). The algorithm’s greatest utility lies in its ability to find accurate MLDs using only
temperature profiles. It can easily be adapted to work with XBT and other temperature-only profiles.
The new algorithm is used to examine Subantarctic Mode Water (SAMW) and Antarctic Intermediate Water (AAIW) formation using Argo data. SAMW is the name given to the waters encompassed by the deep
mixed layers immediately north of the Antarctic Circumpolar Current (ACC). AAIW, a subset of SAMW, can be traced as a relatively low-salinity tongue throughout almost all of the Southern Hemisphere
and the tropical oceans at about 1000-m depth (Deacon 1937). AAIW is believed to form in the southeast Pacific Ocean, upstream of the Drake Passage (McCartney 1977; England et al. 1993; Talley 1996).
The AAIW formation region is a good place to test the algorithm because it features a strong seasonal thermocline in the summer and deep winter mixed layers rivaled only by the North Atlantic, and it
is monitored by a collection of Argo floats. The region has been relatively unstudied during the winter. Section 5 discusses the results of applying the algorithm to Argo data from the SAMW–AAIW
formation region.
2. Data
This study uses temperature and salinity profiles from 277 profiling floats deployed in the southeast Pacific and southwest Atlantic Oceans as part of the Argo program (Roemmich et al. 2001). In
addition, randomly selected profiles from Argo floats in other oceans are used to test the algorithm. Argo is a global observing system of 3000 floats designed to give upper- and middle-layer fields
for temperature and salinity of the world’s oceans. Argo floats are designed to provide a temperature accuracy of 0.005°C and a salinity accuracy of 0.01 psu.
The region of interest for this study encompasses sections of the southern Pacific and Atlantic Oceans from 40° to 66°S and from 110° to 35°W (Fig. 1). Within this region, Argo floats collected 15
037 profiles between October 2002 and November 2008 (data available online at http://www.usgodae.org/argo/argo.html). In 2002, Canada deployed six floats in the South Pacific and the United Kingdom
deployed four floats in the South Atlantic. These floats were supplemented with larger deployments in March 2004 and April 2005. Since 2005, additional deployments and an influx of floats from the
growing Argo array have vastly increased the number of profiles in this region. Argo floats typically profile to 2000 m and measure temperature, salinity, and pressure at 70 depth levels. Sample
spacing for most floats is less than 20 m to depths of 400 m, below which the spacing increases to 50 m. Figure 2 shows the temperature, potential density, salinity, and sampling interval for a
typical Argo profile. The longest float record contained 95 profiles, whereas the shortest contained only 1 profile. All of the profiles included both temperature and salinity data.
Profiles collected before November 2005 were manually examined to remove inconsistencies in temperature, salinity, and pressure. Most floats sampled at regular pressure levels, though the Canadian
Argo floats often sampled at irregular pressures and required substantial editing. Float profiles that failed to meet basic quality controls or lacked locations or time stamps were eliminated.
Temperature–salinity (T–S) plots allowed the comparison of float data to two World Ocean Circulation Experiment (WOCE) sections: P17E, along 50°S in the southeastern Pacific, and P19C, along 88°W (
Tsuchiya and Talley 1998). Except for visually examining the float salinity profiles to confirm that they were largely consistent with the salinity observed during the WOCE cruises, no calibrations
of the floats’ salinities were performed. This quality-control process trimmed the field to 13 601 profiles. The locations of these profiles are shown in Fig. 1. Potential density was calculated for
each profile.
3. Methodology
This section outlines the algorithm’s procedure for finding MLDs. In brief, the algorithm models the profile’s general shape, calculates a suite of possible MLD values, and then looks for groupings
and patterns within the possible MLDs to select the final MLD estimate for each profile. It does this separately for each temperature, salinity, and potential-density profile to produce final MLD
estimates for the temperature and potential-density profiles. The temperature algorithm is detailed in this section because it offers a substantial improvement over its threshold and gradient
counterparts. The salinity and potential-density algorithms work in a similar fashion. The entire algorithm is supplied in online supplemental materials. The following description of the temperature
algorithm is divided into three parts: first, a description of how the algorithm calculates the five possible MLD values; second, an explanation of how the algorithm selects the final MLD estimate
from the pool of possible MLDs; and third, an example.
a. Assembling the possible MLD values
Examples of typical summer and winter profiles are shown in Fig. 3, as well as the five possible MLD values that the temperature algorithm calculates for each profile. For temperature profiles, the
five possible MLD measures are the MLTFIT, the temperature maximum (TM), the temperature gradient MLD estimate (DTM), nearly collocated temperature and temperature gradient maxima (TDTM; this
represents intrusions at the base of the mixed layer), and the temperature threshold MLD estimate (TTMLD). For reference, the five possible MLD values for temperature are listed in Table 1. For
salinity, the possible MLD values are the density threshold MLD estimate, the salinity minimum, the salinity gradient extreme, collocated salinity and salinity gradient minima (representing an
intrusion at the base of the mixed layer, if one exists), the intersection of the salinity mixed layer and thermocline fits, and the final temperature algorithm MLD. For density, the algorithm uses
the density threshold MLD estimate, the density gradient MLD estimate, and the intersection of the density mixed layer and thermocline fits, as well as the temperature threshold MLD estimate,
collocated temperature and temperature gradient maxima, the temperature maximum, and the final MLDs from the temperature and salinity algorithms.
The algorithm derives its five possible MLD values for temperature as follows:
1. The algorithm initially uses a simple threshold method to find the approximate MLDs of the temperature and potential-density profiles. Starting at the surface, threshold methods search
progressively deeper levels until they find a level where the temperature or potential density differs from the surface reference value by a specified threshold. To calculate TTMLD, the algorithm
looks for the minimum depth at which |T(p) − T(p[o])| ≥ ΔT[t], where T is the temperature, p is the pressure, p[o] is the reference pressure, and ΔT[t] is the temperature threshold. The algorithm
linearly interpolates the temperature profile between Argo measurements to find the depth that exactly matches the threshold criterion. For potential density, the algorithm implements the same
procedure but uses the potential-density anomaly σ[θ]. Following de Boyer Montégut et al. (2004), 0.2°C and 0.03 kg m^−3 are used as the threshold difference criteria and the Argo measurement
closest to 10 dbar is used as the surface reference value.
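As an illustration only (not the authors' code), the threshold search with linear interpolation might be sketched in Python as follows; the function name and the list-based profile layout are assumptions:

```python
def threshold_mld(pres, temp, dT=0.2, ref_pres=10.0):
    """Temperature-threshold MLD (TTMLD), after de Boyer Montegut et al. (2004).

    pres, temp: equal-length sequences ordered from the surface downward.
    Returns the shallowest pressure at which |T - T_ref| >= dT, linearly
    interpolated between measurements, or None if the threshold is never met.
    """
    # Surface reference value: the measurement closest to ref_pres (10 dbar).
    i_ref = min(range(len(pres)), key=lambda i: abs(pres[i] - ref_pres))
    t_ref = temp[i_ref]

    for i in range(i_ref, len(pres) - 1):
        d0 = abs(temp[i] - t_ref)
        d1 = abs(temp[i + 1] - t_ref)
        if d1 >= dT > d0:
            # Interpolate to the pressure where the difference equals dT exactly.
            frac = (dT - d0) / (d1 - d0)
            return pres[i] + frac * (pres[i + 1] - pres[i])
    return None
```

Applied to a toy profile with a uniform 15°C layer over a 0.05°C dbar^−1 thermocline starting at 30 dbar, this returns an interpolated MLD of 34 dbar, between the 30- and 40-dbar measurements.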
2. The algorithm then calculates the temperature, salinity, and potential-density gradients using a difference formula. For calculating the temperature gradient, the algorithm uses
   ∂T/∂p (i) = [T(i + 1) − T(i)] / [p(i + 1) − p(i)],
   where i is the depth measurement index from the surface (i = 1) to one level above the bottom of the profile (i = n − 1). The algorithm first uses the gradient to calculate gradient MLDs for temperature and potential density; it finds the depth at which the temperature and potential-density gradients exceed specified gradient criteria. Following Dong et al. (2008), the algorithm uses a potential-density gradient criterion of 0.0005 kg m^−4. It uses a temperature gradient criterion of 0.005°C dbar^−1, which was found to better approximate the MLD than larger criteria. For temperature, the algorithm looks for the depth at which |∂T/∂p| ≥ 0.005°C dbar^−1. If these gradient criteria are not met, the algorithm takes the depth of the maximum of the gradient's absolute value as the gradient MLD. To aid in identifying persistent change in each of the variables (such as the thermocline, which is identified in step 4), the algorithm then smooths the gradient with a three-point running mean to eliminate small vertical-scale spikes and small-scale intrusions.
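A minimal sketch of the gradient criterion and the three-point running mean (hypothetical helper names; the algorithm itself is in the paper's online supplement):

```python
def gradient_mld(pres, temp, crit=0.005):
    """Temperature-gradient MLD (DTM): first pressure at which |dT/dp| meets
    the criterion (0.005 degC/dbar here); falls back to the gradient extreme."""
    grad = [(temp[i + 1] - temp[i]) / (pres[i + 1] - pres[i])
            for i in range(len(pres) - 1)]
    for i, g in enumerate(grad):
        if abs(g) >= crit:
            return pres[i]
    # Criterion never met: take the depth of the gradient's absolute maximum.
    i_max = max(range(len(grad)), key=lambda i: abs(grad[i]))
    return pres[i_max]

def smooth3(values):
    """Three-point running mean (endpoints kept as-is), used to suppress
    small vertical-scale spikes before locating the seasonal thermocline."""
    out = list(values)
    for i in range(1, len(values) - 1):
        out[i] = (values[i - 1] + values[i] + values[i + 1]) / 3.0
    return out
```

For potential density the same function would be called with σ[θ] values and the 0.0005 kg m^−4 criterion; the sketch ignores Argo's irregular vertical spacing beyond the simple finite difference.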
3. The algorithm fits a straight line to the mixed layers of the temperature, salinity, and potential-density profiles. Starting at the surface, the algorithm uses the first two points of the
profile to calculate a straight-line least squares fit to the mixed layer. It increases the depth and the number of points used in the fit until it reaches the bottom of the profile. For each
fit, the algorithm calculates the error by summing the squared difference between the fit and the profile over the depth of the fit. For temperature, this is expressed as
   E(k) = Σ[j = 1..k] (T(j) − T[fit](j))^2.
   In this example, E(k) is the error for the kth fit (extending to depth index k), T[fit] is the straight-line temperature fit, and j indexes the depth of the fit and the fit itself. There is a different error and fit for each k. The algorithm only sums the error over the depth of the fit, so a straight-line fit no longer accurately describes the profile as the depth of the fit increases past the mixed layer and as the error increases. The algorithm normalizes the errors by dividing each E(k) by the total sum of the errors. The normalized error E′(k) is given by
   E′(k) = E(k) / Σ[m] E(m).
   Normalizing the error removes dependence on the magnitude of the seasonal thermocline and produces a unitless error. The algorithm takes the deepest mixed layer fit that satisfies a specified error tolerance, E′(k) ≤ 10^−10. This small error tolerance is used to ensure that the mixed layer fit closely matches the mixed layer and does not use any points in the seasonal thermocline; it consistently produces a straight-line fit to the mixed layer and results in the average use of 3.5 Argo measurements per mixed layer fit. Varying the error tolerance has little effect on the MLDs found by the algorithm (Fig. 4).
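The incremental fit-and-normalize procedure can be sketched as follows (a pure-Python illustration; the tolerance value and all names are assumptions, not the authors' code):

```python
def mixed_layer_fit(pres, temp, tol=1e-10):
    """Fit straight lines to the top k points (k = 2..n) and return the
    (slope, intercept, k) of the deepest fit whose normalized squared
    error stays below the tolerance tol."""
    n = len(pres)
    errors, fits = [], []
    for k in range(2, n + 1):
        xs, ys = pres[:k], temp[:k]
        # Ordinary least squares for y = a*x + b over the top k points.
        mx, my = sum(xs) / k, sum(ys) / k
        sxx = sum((x - mx) ** 2 for x in xs)
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
        b = my - a * mx
        err = sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))
        errors.append(err)
        fits.append((a, b, k))
    total = sum(errors) or 1.0          # guard: a perfectly linear profile
    best = None
    for err, fit in zip(errors, fits):
        if err / total <= tol:
            best = fit                  # deepest fit within tolerance wins
    return best
```

On a profile with a uniform 15°C layer over its top three measurements, the deepest fit within tolerance uses exactly those three points and has zero slope, mirroring the average of 3.5 measurements per fit reported in the text.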
4. Straight lines are fit to the seasonal thermoclines of each temperature, salinity, and potential-density profile. The algorithm later finds the intersection of the thermocline and mixed layer
fits as one possible measure of the MLD. The algorithm identifies the center of the seasonal thermocline for each profile as the depth of the maximum of the absolute values of the smoothed
temperature, salinity, and potential-density gradients (calculated in step 2). For temperature, this is expressed as
   i[TC] = argmax over i of the smoothed |∂T/∂p (i)|,
   where i[TC] is the depth index of the thermocline. This identification is quite successful in the summer, when the seasonal thermocline is easily identifiable as a large spike in the ∂T/∂p, ∂S/∂p, and ∂σ[θ]/∂p profiles. The algorithm uses i[TC] and the two neighboring points (i[TC] − 1 and i[TC] + 1) to fit a straight line to the seasonal thermocline. Because Argo floats record few data points in the thermocline, including more than three points in the fit skews the thermocline fit.
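Step 4 can be sketched as follows (illustrative only; treating the gradient index as a level index for the three-point fit is a simplifying assumption):

```python
def thermocline_fit(pres, temp):
    """Fit a line to the seasonal thermocline: locate the extremum of the
    smoothed gradient and least-squares-fit the three surrounding points.
    Returns (slope, intercept) of the thermocline line."""
    grad = [(temp[i + 1] - temp[i]) / (pres[i + 1] - pres[i])
            for i in range(len(pres) - 1)]
    # Three-point running mean of the gradient, endpoints kept as-is.
    sm = list(grad)
    for i in range(1, len(grad) - 1):
        sm[i] = (grad[i - 1] + grad[i] + grad[i + 1]) / 3.0
    i_tc = max(range(len(sm)), key=lambda i: abs(sm[i]))
    # Use the extremum point and its two neighbors (clipped to the profile).
    lo, hi = max(i_tc - 1, 0), min(i_tc + 1, len(pres) - 1)
    xs, ys = pres[lo:hi + 1], temp[lo:hi + 1]
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return a, my - a * mx
```

On a summer-like toy profile the smoothed-gradient spike lands in the thermocline and the three-point fit recovers its slope, exactly the behavior the text describes.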
5. The algorithm assembles the possible values of the MLD for each temperature, salinity, and potential-density profile. The five possible MLD values for the temperature algorithm are given in (i)–(v):
   □ (i) The first possible MLD is from the TTMLD calculation. This is represented as the minimum pressure p at which |T(p) − T(p[o])| ≥ 0.2°C, where T(p[o]) is the surface reference temperature. The salinity and density algorithms use the density threshold, the minimum pressure at which σ[θ](p) ≥ σ[θ](p[o]) + 0.03 kg m^−3, where σ[θ](p[o]) is the surface reference potential density.
   □ (ii) The second MLD value for temperature is the result of the DTM calculation. This is represented as the minimum pressure at which |∂T/∂p| ≥ 0.005°C dbar^−1. If the gradient criterion is not met, the algorithm takes the gradient extreme, the pressure at which |∂T/∂p| attains its maximum. The potential-density gradient MLD is calculated in the same manner, using a criterion of 0.0005 kg m^−4. The salinity algorithm uses the salinity gradient extreme.
   □ (iii) The algorithm then finds the depth of the TM and the salinity and density minima. For temperature, this is represented as the pressure at which T(p) attains its maximum.
□ (iv)For the fourth possible MLD value, the algorithm searches for a specific feature in the profile. Surface cooling and intense wind events in the winter deepen the mixed layer and erode
the summer thermocline. This process often leaves subsurface anomalies of temperature or salinity at the base of the mixed layer. An example of this feature is shown in
Fig. 3a. The algorithm identifies these features in temperature profiles by searching for maxima of the smoothed temperature gradient profiles within a specified distance (the parameter Δ[p]) of subsurface TM; the algorithm takes the shallower of the two as the fourth possible MLD value (TDTM). The fourth possible MLD value is set to zero if the temperature and temperature gradient maxima are separated by more than Δ[p]. Figure 3a provides an example of a subsurface temperature anomaly where TM and the temperature gradient maximum are separated by 50 dbar. Setting Δ[p], the maximum allowable separation between TM and temperature gradient maxima, to 100 dbar allows the algorithm to identify temperature intrusions at the base of the mixed layer. As shown in Fig. 5, this value of Δ[p] encompasses the profusion of deep MLDs scattered around 0.0 dbar separation between TM and the temperature gradient maximum.
   □ (v) The final possible MLD value represents another physical feature in the profile: the depth of MLTFIT. This is designed to capture the MLD in profiles with homogeneous mixed layers near the surface and strong seasonal thermoclines. For temperature, this is represented as the pressure at which the straight-line mixed layer fit intersects the straight-line seasonal-thermocline fit. MLTFIT is set to 0 if the fits do not intersect. The salinity and density algorithms use their respective fits. This MLD measure works especially well in the summer, when the algorithm can easily identify the seasonal thermocline, but occasionally falters in the winter, when the seasonal thermocline is weak.
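Because both fits are straight lines, MLTFIT reduces to a simple line intersection. A sketch, with each fit assumed to be a (slope, intercept) pair in pressure coordinates:

```python
def mltfit(ml_fit, tc_fit):
    """MLTFIT: pressure at which the mixed layer line and the seasonal
    thermocline line intersect; 0 if the lines are parallel (no intersection).

    Each fit is a (slope, intercept) pair for T = slope * p + intercept.
    """
    (a1, b1), (a2, b2) = ml_fit, tc_fit
    if a1 == a2:
        return 0.0  # parallel fits never intersect; mirror the paper's "set to 0"
    return (b2 - b1) / (a1 - a2)

# Example: a flat 15 degC mixed layer (slope 0) over a thermocline cooling
# at 0.2 degC/dbar (line T = -0.2 * p + 19) intersect at 20 dbar.
print(mltfit((0.0, 15.0), (-0.2, 19.0)))
```

A fuller implementation would also discard intersections above the surface or below the deepest measurement, a check this sketch omits.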
b. Selecting the MLD estimate
The algorithm selection process is divided into two parts. In summary, the algorithm first determines whether the profile resembles a summer or winter profile. Simplistically, a summer profile
generally consists of a homogeneous mixed layer near the surface; a seasonal thermocline where the temperature, salinity, and density change abruptly with depth; and a deep-water layer that is
seasonally invariant. Winter profiles lack the strong summer thermocline; the mixed layer visually blends into the underlying waters. The algorithm’s initial MLD selection is dependent on the “type”
of profile. Then, over a series of steps, the algorithm examines the other possible MLD values, looks for clusters of possible MLD values, and either confirms or replaces the initial MLD selection.
The algorithm selects MLDs for each temperature and potential-density profile; the algorithm also selects a salinity MLD, but it only serves to verify the potential-density MLD. The temperature
algorithm’s selection process is outlined in the following steps:
1. Before the algorithm can search for clusters of the possible MLDs, it must first define a depth range over which to search. The possible MLDs are rarely at the same Argo depth levels but might be
within 15 dbar of each other; this range parameter r allows the algorithm to identify clusters of possible MLDs separated by less than r and to accommodate Argo’s sampling scheme. The algorithm
also avoids selecting temperature maxima at the surface by checking whether they are deeper than r. The distribution of the maximum separation between MLTFIT, TTMLD, and DTM (plotted in Fig. 6)
determines the value of r. MLTFIT, TTMLD, and DTM have maximum separations of 5 dbar for 1941 profiles, 5–15 dbar for 4054 profiles, and 15–25 dbar for 1645 profiles. Because there is a falloff
in the number of profiles with maximum separations greater than 25 dbar, r is set to 25 dbar, the approximate equivalent of two Argo depth bins.
2. The algorithm uses the temperature or potential-density changes across the thermocline (ΔT and Δσ[θ]) to estimate whether a profile is summer-like (strong thermocline beneath the mixed layer) or
winter-like (weak thermocline beneath the mixed layer). The temperature change across the thermocline, in terms of Argo depth bins, is defined as T(i[MLTFIT]) − T(i[MLTFIT] + 2), where i[MLTFIT]
is the Argo depth index of MLTFIT; the potential-density change is calculated in the same manner. The algorithm compares this temperature change to a third parameter, ΔT[c], a temperature-change
cutoff, for information about the strength of the seasonal thermocline and to decide if a profile is summer- or winter-like. Figure 7 plots the temperature and potential-density changes across
the thermocline against the MLD as well as the temperature-change and potential-density-change cutoffs. If the temperature change is within the cutoff region (0.5°C > ΔT > −0.25°C), then the
algorithm initially assumes that the profile is winter-like. The potential-density-change cutoff σ[θ[c]] is −0.06 kg m^−3 (Δσ[θ] > −0.06 kg m^−3 for winter-like profiles). In the study region,
83% of profiles with MLDs deeper than 200 dbar are within the temperature-change cutoff range; 90% of the profiles with MLDs deeper than 200 dbar are within the potential-density-change cutoff.
3. If ΔT falls outside of the winter cutoff (ΔT[c]), the algorithm initially assumes that the profile features a strong thermocline. Figure 8 shows the temperature algorithm flow path for these
summer-like profiles. Because summer-like profiles are assumed to feature strong thermoclines, the algorithm first assigns MLTFIT to the final MLD. Steps (i) and (ii) check the other possible
MLDs to ensure that this MLD assignment is reasonable; if not, the MLD is reassigned to one of the other possible MLDs as described.
□ (i)Profiles with multiple temperature inversions, such as polar profiles, often have shallow MLDs but lack identifiable seasonal thermoclines. These profiles can cause the algorithm to
misidentify the thermocline and thus confound MLTFIT. To identify these profiles, in Fig. 8a, the algorithm searches for temperature increases beneath the mixed layer (ΔT < 0) and checks
whether MLTFIT overestimated the MLD relative to TTMLD. If so, the algorithm assigns the MLD to TTMLD.
□ (ii)This step treats TTMLD as an upper bound on the MLD to evaluate MLTFIT and TM. The algorithm first tests the current MLD against TTMLD (Fig. 8b); the final MLD is assigned to the current
MLD if it is shallower than TTMLD. If the current MLD is deeper than TTMLD, the algorithm subsequently examines TM. If TM is beneath the surface and shallower than TTMLD, then the algorithm
assigns the MLD to TM; if not, then it assigns the MLD to TTMLD (Fig. 8c).
4. If ΔT is within the winter cutoff range, the temperature algorithm assumes that the profile is winter-like and follows the flow path shown in Fig. 9. The selection process is conducted in the
following steps:
□ (i)The algorithm first tests whether it identified a seasonal thermocline (and therefore a meaningful MLD value for MLTFIT) by checking if MLTFIT and TTMLD are in close proximity (|MLTFIT −
TTMLD| < r) and by comparing MLTFIT to TDTM. TDTM represents a subsurface temperature anomaly at the base of the mixed layer, if such an anomaly exists. When the algorithm fails to identify a
seasonal thermocline, it often instead identifies the permanent thermocline, producing a very deep estimate for MLTFIT. Therefore, if MLTFIT is shallower than TDTM and if TDTM and TTMLD
differ by more than r, the algorithm has most likely identified the seasonal thermocline, so the MLD is assigned to MLTFIT (Fig. 9d) and the algorithm proceeds to (iv).
□ (ii)If the algorithm did not capture the seasonal thermocline (the MLD was not assigned to MLTFIT), then the algorithm searches for temperature anomalies at the base of the mixed layer; if
TDTM exists and is not at the surface, the algorithm assigns the MLD to TDTM (Fig. 9e). It then checks that TDTM does not greatly differ from the other possible MLDs (Figs. 9f,g). It
accomplishes this by first searching for clusters of three other MLD estimates; it determines if any two sets of MLTFIT, TTMLD, and DTM (|MLTFIT − TTMLD|, |MLTFIT − DTM|, or |DTM − TTMLD|)
are separated by less than r, as they often are for profiles with seasonal thermoclines. If so, the MLD is assigned to MLTFIT (Fig. 9f). As a final check, if the MLD is deeper than TTMLD, the
MLD is reassigned to TTMLD in Fig. 9g. The algorithm then proceeds to (iv).
□ (iii)Convective winter mixing does not necessarily produce temperature anomalies at the base of the mixed layer, so TDTM does not necessarily exist. Figures 9h,i are evaluated if TDTM does
not exist and if the algorithm did not assign the MLD to MLTFIT. The algorithm again considers MLTFIT by comparing MLTFIT to TTMLD; if MLTFIT is not more than r deeper than TTMLD (MLTFIT −
TTMLD < r; Fig. 9h), the MLD is assigned to MLTFIT. If MLTFIT is more than r deeper than TTMLD, the MLD is assigned to the gradient MLD estimate, DTM. To test DTM, the algorithm checks
whether it is deeper than TTMLD (Fig. 9i). If DTM is deeper than TTMLD, the MLD is reassigned to TTMLD.
□ (iv)The algorithm checks for poor thermocline fits by testing whether the final MLD estimate has been assigned to the surface and whether this near-surface MLD is within r of TTMLD (|MLD −
TTMLD| < r). If these conditions are met, the MLD is most likely shallow and the algorithm assigns the MLD to TM (Fig. 9j). In two final checks, the algorithm assigns the MLD to TTMLD if TM
is at the surface or if it is deeper than TTMLD (Figs. 9k,l).
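The selection steps above can be condensed into a sketch. This is an illustration only: the function names and structure are ours, the winter-like flow path (Fig. 9) is omitted because of its branching, and the cutoffs are the values quoted in the text.

```python
from itertools import combinations

R = 25.0                        # cluster range parameter r (dbar), step 1
DT_LOW, DT_HIGH = -0.25, 0.5    # winter cutoff band for the temperature change (°C), step 2

def close_pairs(estimates, r=R):
    """Step 1: pairs of possible MLD estimates within r dbar of each
    other, used to identify clusters of possible MLDs."""
    return [(a, b) for a, b in combinations(estimates, 2) if abs(a - b) < r]

def is_winter_like(temp, i_mltfit):
    """Step 2: temperature change across the thermocline,
    dT = T(i_MLTFIT) - T(i_MLTFIT + 2) (two Argo depth bins);
    winter-like if dT falls inside the cutoff band."""
    d_t = temp[i_mltfit] - temp[i_mltfit + 2]
    return DT_LOW < d_t < DT_HIGH

def summer_mld(mltfit, ttmld, tm, d_t, r=R):
    """Step 3: condensed summer-like flow path (Fig. 8).
    mltfit, ttmld, tm are possible MLDs in dbar; d_t is the
    temperature change across the thermocline."""
    mld = mltfit                    # strong thermocline assumed first
    # (i) temperature inversions beneath the mixed layer (dT < 0)
    # can confound MLTFIT; fall back to the threshold MLD
    if d_t < 0 and mld > ttmld:
        return ttmld
    # (ii) TTMLD acts as an upper bound on the MLD
    if mld <= ttmld:
        return mld
    if r < tm < ttmld:              # TM below the surface, above TTMLD
        return tm
    return ttmld
```

For the summer profile of Fig. 3b, where MLTFIT and TTMLD coincide, this sketch simply returns MLTFIT.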
c. Example selection processes
Figure 3a provides an example of the temperature algorithm’s selection process. The algorithm first compares the temperature change below MLTFIT to the temperature-change cutoff ΔT[c]. For this
profile, ΔT = 0.06°C, so the algorithm considers this a winter-like profile and follows the path in Fig. 9. MLTFIT is deeper than TDTM (Fig. 9d), so the algorithm looks for a subsurface temperature
maximum at the base of the mixed layer (Fig. 9e); TDTM is deeper than r, so the algorithm assigns the MLD to TDTM. In Fig. 9f, the algorithm checks whether there might be a thermocline, but MLTFIT
is 100 dbar shallower than TTMLD. TDTM is also much shallower than TTMLD (Fig. 9g), so the final MLD is assigned to TDTM. From visual inspection of the salinity and potential-density profiles, it is
clear that the temperature algorithm MLD is closer to the actual MLD than the temperature threshold and temperature gradient MLDs.
Figure 3b provides another example. This profile has a strong seasonal thermocline (ΔT = 1.4°C), so the algorithm considers this a summer-like profile and initially assigns the MLD to MLTFIT. MLTFIT
is at the same depth as TTMLD, so the MLD assignment does not change.
For the temperature profiles in this study, the algorithm uses the intersection of the mixed layer and thermocline fits as the MLD for 58% of the profiles in the study region. The threshold MLD is
used for 22% of the profiles and the gradient MLD is used for 9%. Collocated temperature and temperature gradient maxima are used for 7% of the profiles and temperature maxima are used for 4%.
4. Comparison to other methods
The MLDs produced by six different methods are considered here to evaluate the algorithm. The six MLD estimates are 1) the temperature algorithm estimate, 2) the density algorithm estimate, 3) a
temperature threshold estimate (threshold of 0.2°C), 4) a density threshold estimate (threshold of 0.03 kg m^−3), 5) a temperature gradient estimate (criterion of 0.005°C dbar^−1), and 6) a density
gradient estimate (criterion of 0.0005 kg m^−3 dbar^−1). The threshold estimates are from de Boyer Montégut et al. (2004) and the gradient criteria are derived from Dong et al. (2008). We evaluate
the six methods by first examining their MLDs for a single profile and then a single float record. For the float record, the exact MLD was determined by visually identifying the homogeneous mixed
layer and comparing it to the MLD estimates of the six methods. The analysis is then expanded to the distribution of MLDs for all of the profiles in the southeast Pacific and southwest Atlantic. The
algorithm and threshold MLD distributions for the entire Southern Ocean are briefly examined.
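For reference, threshold and gradient estimates of this kind can be sketched as follows. This is a minimal illustration under our own assumptions (a 10-dbar near-surface reference level for the threshold method and unsmoothed gradients), not the implementations of de Boyer Montégut et al. (2004) or Dong et al. (2008).

```python
import numpy as np

def threshold_mld(pres, prop, ref=10.0, thresh=0.2):
    """Threshold method: shallowest depth below a near-surface
    reference level at which the property differs from its reference
    value by more than the threshold (0.2°C for temperature,
    0.03 kg m-3 for density)."""
    i_ref = np.argmin(np.abs(pres - ref))            # reference level
    below = pres > pres[i_ref]
    exceed = np.abs(prop - prop[i_ref]) > thresh
    hits = np.where(below & exceed)[0]
    return pres[hits[0]] if hits.size else pres[-1]  # else: profile bottom

def gradient_mld(pres, prop, crit=0.005):
    """Gradient method: shallowest depth at which |d(prop)/dp|
    exceeds the criterion (0.005°C dbar-1 for temperature,
    0.0005 kg m-3 dbar-1 for density)."""
    grad = np.abs(np.gradient(prop, pres))
    hits = np.where(grad > crit)[0]
    return pres[hits[0]] if hits.size else pres[-1]
```

On a profile with a uniform mixed layer over a sharp thermocline, the two sketches agree to within a few depth bins; it is the weak-gradient winter profiles discussed below that separate them.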
a. Individual profile comparison
The algorithm MLDs are first compared to threshold and gradient MLDs for the two sets of temperature and potential-density profiles in Fig. 3. For the winter profile (Fig. 3a), the algorithm
calculates MLDs of 220 dbar for temperature and 200 dbar for potential density. The temperature threshold (ΔT = 0.2°C) calculates an MLD of 375 dbar; the density threshold (Δσ[θ] = 0.03 kg m^−3)
calculates an MLD of 225 dbar. The temperature gradient method (criterion of 0.005°C dbar^−1) identifies an MLD of 360 dbar, and the density gradient method (criterion of 0.0005 kg m^−3 dbar^−1)
identifies an MLD of 220 dbar. The temperature algorithm seizes upon the close proximity of the temperature maximum and the temperature gradient maximum to identify the MLD [Eq. (9) represents the
MLD]. The temperature threshold and temperature gradient methods both overestimate the MLD by nearly 150 dbar. In general, winter profiles, with no strong, sustained gradients in density or
temperature below the mixed layer, prove difficult for the temperature threshold and gradient methods. The density threshold and gradient methods slightly overestimate the MLD compared to the density algorithm.
For the summer profile (Fig. 3b), the algorithm calculates MLDs of 80 dbar for both temperature and potential density using the intersection between the thermocline and mixed layer fits [represented
by Eq. (11)]. All of the threshold and gradient methods find similar MLDs, though the threshold MLDs are slightly deeper. These MLDs are representative of typical results. In general, the algorithm,
threshold, and gradient methods produce similar summer MLDs; the strong seasonal thermocline and pycnocline prohibit the threshold and gradient methods from advancing very far below the actual MLD
and ensure that the algorithm identifies and fits the thermocline.
b. Individual float comparison
Having examined two profiles, the comparison between the algorithm, threshold, and gradient methods is expanded to an entire float record. Figure 10 presents the track of float 3900082. The float was
deployed in December 2002, off the coast of Chile in the southeast Pacific Ocean. It collected 95 profiles before it ceased transmitting in August 2005. Entwined in numerous eddies, it crossed the
ACC and was carried through the Drake Passage and into the polar ocean surrounding Antarctica. The potential-density time series of float 3900082 is plotted in Fig. 11a, in addition to three MLD time
series. Figure 11b is the temperature time series of the float, again in addition to three MLD time series.
The density algorithm, threshold, and gradient methods generally produce comparable MLDs in summer (Figs. 3b, 11a). Subtle gradients in temperature, salinity, and density that blend mixed layers into
deep waters, as well as a wide variety of subsurface features such as salinity intrusions, often obscure the MLD of winter profiles. The density threshold winter MLDs are generally deeper than the
density algorithm winter MLDs (Fig. 11a). As seen in Fig. 3a, weak density gradients at the base of the mixed layer cause the density threshold method to slightly overestimate the MLD in winter. The
density gradient method is much more erratic than the other methods, as evidenced by its frequent jumps to both shallow and extraordinarily deep MLDs in Fig. 11a. In winter 2004, the density gradient method
estimates the MLD to be over 100 dbar deeper than the other methods. These anomalous density gradient MLDs do not fit with the general trend of the exact MLD.
In summer, the temperature algorithm, temperature threshold, and temperature gradient methods find similar MLDs. The MLD time series in Fig. 11b closely follow each other in the summer because of the
strong temperature gradient beneath the mixed layer. The temperature algorithm is generally much more successful at finding winter MLDs than the temperature threshold and gradient methods. Judging
the actual MLD visually, the temperature threshold method overestimates many MLDs during the winter of 2003 by approximately 200 dbar (Fig. 11b). Likewise, the temperature gradient method
overestimates many MLDs during the winter of 2004. An example of this is given in Fig. 3a, in which the temperature is nearly uniform to a depth of 300 dbar. The MLD and density of this set of
profiles are determined by salinity; the MLD is clearly 200 dbar in the salinity and density profiles. The temperature threshold and gradient methods estimate the MLD to be 375 and 360 dbar,
respectively. The temperature algorithm identifies a small temperature protrusion at the base of the mixed layer and estimates an MLD of 220 dbar. This estimate is tied to a physical feature of the
profile; compared to the temperature threshold and gradient MLDs, it is much closer to the actual MLD. The temperature algorithm’s continued success at finding such features is evident in the
similarity of its MLD to the density algorithm and density threshold MLDs (Figs. 11a,b).
c. Southeast Pacific and southwest Atlantic comparison
An analysis of the MLDs for the six methods from the southeast Pacific and southwest Atlantic Oceans confirms that the temperature algorithm improves on the temperature threshold and gradient methods
and that the density algorithm offers a slight improvement over the other density methods. The MLD distributions of the six methods are plotted in Fig. 12. The temperature threshold method
consistently overestimates deep MLDs relative to the other methods; it finds more MLDs between 250 and 600 dbar than any other method. The temperature and density gradient methods find the deepest
MLDs: the temperature gradient method finds nearly 250 MLDs deeper than 700 dbar and the density gradient method finds 100 MLDs deeper than 700 dbar. None of the other methods finds mixed layers this
deep. Figures 11a,b contain multiple examples of these deep gradient method MLDs. Both gradient methods, particularly density, are also prone to finding anomalously shallow MLDs; in Fig. 12a, both
gradient methods find many more shallow MLDs (50 dbar or less) than the other methods.
In Figs. 13a,c, the scatter of temperature and density algorithm MLDs against the temperature and density threshold MLDs shows that the temperature and density algorithms generally find shallower
MLDs than their threshold counterparts. The temperature threshold method systematically overestimates many MLDs relative to the temperature algorithm, forming a cluster of MLDs highlighted in Fig.
14a. Of these highlighted profiles, the temperature algorithm uses collocated temperature and temperature gradient maxima [Eq. (9)] to find the MLD for 75% of the profiles. This results in an average
temperature algorithm MLD that better approximates the actual MLD than the average temperature threshold method, which overestimates the MLD by nearly 200 dbar (Fig. 14b).
Numerous examples of anomalously shallow and deep MLDs found by the temperature and density gradient methods are shown in Figs. 11a,b; Figs. 13b,d confirm the gradient methods’ tendencies to find
anomalous MLDs. The temperature gradient method finds more than 250 MLDs deeper than 700 dbar; these anomalously deep gradient MLDs correspond to temperature algorithm MLDs ranging from 25 to 600
dbar (Fig. 13b). The temperature gradient method’s proclivity to overestimate the MLD in profiles with weak gradients beneath the mixed layer is also illustrated in Fig. 14b; for the subset of
profiles, the average temperature gradient MLD is 250 dbar deeper than the average temperature algorithm MLD. Figure 13d illustrates the density gradient method’s tendency to find anomalous MLDs;
there is a large cluster of points corresponding to density gradient MLDs from 0 to 100 dbar and density algorithm MLDs varying from 0 to 600 dbar. Likewise, a similar cluster corresponds to density
gradient MLDs deeper than 600 dbar and density algorithm MLDs between 25 and 600 dbar. Brainerd and Gregg (1995) found gradient methods to be less stable than threshold methods, a result mirrored in
these distribution plots.
Figure 15 compares the density algorithm MLDs to the MLDs of the three temperature methods. Table 2 lists the means and standard deviations of the MLDs of the six methods, as well as the mean and
standard deviation of the difference between the temperature methods' MLDs and the density algorithm MLD. Together, these provide a means to evaluate the temperature methods relative to the density algorithm.
The cluster of deep temperature threshold MLDs highlighted in Fig. 14a is reproduced in Fig. 15a. There is no similar cluster in the scatter of density algorithm MLDs against the temperature
algorithm MLDs (Fig. 15c). The temperature threshold method systematically overestimates deep MLDs, producing a mean MLD of 109 dbar. This mean MLD is 19 dbar deeper than the temperature algorithm
mean MLD and 23 dbar deeper than the density algorithm mean MLD.
The temperature gradient method does not systematically overestimate the MLD relative to the density algorithm; rather, it identifies occasional anomalously deep MLDs compared to the density
algorithm (Fig. 15b). These anomalously deep MLDs result in a mean temperature gradient MLD of 110 dbar (24 dbar deeper than the mean density algorithm MLD) and an MLD standard deviation of 161 dbar,
which is much larger than any other method.
The temperature algorithm more closely tracks the density algorithm than the other temperature methods. The standard deviation of the difference between the temperature algorithm and the density
algorithm MLDs is much smaller than the standard deviation of the difference between the density algorithm and the other temperature methods (Table 2). The temperature gradient method produces many
MLDs similar to the temperature algorithm but is hampered by its tendency to find anomalously deep MLDs. The temperature threshold method routinely overestimates the depth of deep mixed layers.
d. Southern Ocean comparison
Expanding our analysis to algorithm and threshold MLD distributions for the entire Southern Ocean produces a more complex distribution of MLDs, though the general pattern is similar to the MLD
distributions from the study region. Dong et al. (2008) produced a Southern Ocean MLD climatology from Argo float data and provided us with plots of the scatter of MLDs found by the temperature and
density algorithms and threshold methods for the entire Southern Ocean (Fig. 16). As in the study region, the density methods produce very similar MLDs, though the threshold method tends to
overestimate deep MLDs relative to the algorithm. The temperature methods exhibit much more scatter than the density methods. In general, the temperature algorithm estimates shallower MLDs than the
temperature threshold method; a cluster of MLDs similar to the cluster highlighted in Fig. 14a is also visible in the Southern Ocean distribution.
The algorithm’s ability to identify physical features in the profiles allows it to track and identify the MLD more accurately than a traditional threshold method. Likewise, it is more stable than
gradient methods. This accuracy makes the algorithm useful for identifying density-compensating and barrier layers. An accurate estimation of the mixed layer depth is important for ocean models that
tune their turbulent mixing parameters to match observed ocean mixed layer depths (Noh et al. 2002). Because of its complexity, the algorithm is slower than threshold and gradient methods and it,
like any MLD-finding method, is liable to be stumped by unusual profiles.
5. Application to SAMW mixed layers
One region of the ocean known for persistent deep winter mixed layers and water mass formation is immediately north of the ACC. The ACC encircles Antarctica as it flows eastward through the southern
Pacific, Indian, and Atlantic Oceans. There are three fronts in the ACC associated with zonal jets in the current (Orsi et al. 1995). The deepest mixed layers in the Southern Ocean are associated
with the northern side of the northernmost front, the Subantarctic Front (SAF). The waters defined and enclosed by these deep mixed layers were termed SAMW by McCartney (1977). AAIW, characterized by
relatively low salinity, high oxygen, and low potential vorticity, is the densest, deepest, and freshest SAMW and is thought to form in the southeast Pacific just before the ACC enters the Drake
Passage (McCartney 1977; England et al. 1993; Talley 1996; Hanawa and Talley 2001).
AAIW can be traced as a relatively low-salinity (34.4 psu) tongue throughout almost all of the Southern Hemisphere and the tropical oceans at about 1000 m depth (Deacon 1937). The global-scale heat
and freshwater transports associated with AAIW’s movement into the world’s oceans reflect its relevance to studies of the earth’s climate and of the ocean’s global overturning circulation (Keeling
and Stephens 2001; Pahnke and Zahn 2005). The SAMW and AAIW formation region is an ideal location to test methods for finding MLDs. The mixed layer exhibits great variability; in winter, the mixed
layers north of the SAF can reach depths of 500 m and blend into deeper waters and remnant mixed layers. This makes determining the exact mixed layer difficult for many MLD-finding methods. Likewise,
polar waters and summer stratification test methods’ abilities to detect shallow mixed layers.
The algorithm identifies deep mixed layers, providing the locations; time of year; and temperature, salinity, and density characteristics of this oceanic process that has historically proven
difficult to observe. The locations of SAMW formation, identified by deep mixed layers, are found by mapping all of the MLDs found by the density algorithm (Fig. 17). The deepest mixed layers are
found in the southeast Pacific Ocean, immediately north of Orsi et al.’s (1995) climatological Subantarctic Front. The deepest MLDs are about 650 dbar, with numerous MLDs reaching 500 dbar. No
regions of similarly deep mixed layers are found in the South Atlantic.
The density algorithm MLD map (Fig. 17) generally features a broader region of deep mixed layers compared to four MLD climatologies (not shown). The 95% oxygen saturation depth has been used by
Talley (1999) as a proxy for the MLD. Using Antonov et al.’s (2006) 95% oxygen saturation depth as an MLD proxy produces MLDs of roughly the same depth range as the algorithm, but the climatology’s
region of deep MLDs is more localized and centered at 53°S and 92°W. Levitus and Boyer’s (1994) MLD climatology shows MLDs of 1000 m, far deeper than anything found by Argo in the study region. Their
deepest MLDs are also localized and centered at 52°S and 87°W. The deepest MLDs of de Boyer Montégut et al.’s (2004) climatology reach 450 m at 90°W and do not extend farther west. Kara et al. (2003)
used Levitus and Boyer’s (1994) density in constructing their climatology. The spatial distribution of their MLDs is similar to the density algorithm, but their MLDs reach 800 m, considerably deeper
than any mixed layers found by the density algorithm.
The temperature, salinity, and potential-density characteristics of the deep mixed layers are identified with a T–S diagram (Fig. 17). The deepest mixed layers have average potential densities of
approximately 27 kg m^−3, salinities of 34.1–34.2 psu, and temperatures of 4°–5°C.
Figure 18 plots the MLD time series of the floats in the area of the Pacific with deep mixed layers. This region, from 50° to 62°S and from 110° to 68°W, is boxed in Fig. 17. The deepest MLDs occur
in August and September. The temporal extent of the deep MLDs was greater for the 2003 winter than for any other. The mixed layers gradually deepen over the course of six months leading up to August
and September, after which they quickly restratify. The average MLD reached in winter is approximately 300 dbar, though individual floats record MLDs exceeding 650 dbar. As shown in section 4, the
threshold methods overestimate the MLD relative to the algorithms. In particular, the temperature threshold method produces winter periods of deep mixed layers that are of earlier onset, greater
duration, and greater depth than the temperature algorithm (Fig. 18a).
To examine how the deep mixed layers relate to AAIW, the zonal average salinity for the Pacific study region during winter is plotted in Fig. 19. From this average section, the low-salinity water
mass at middepth (500–600 dbar in Fig. 19, between the 27.0 and 27.1 kg m^−3 isopycnals) can be traced to a surface density outcropping between 58° and 60°S. The region of deep mixed layers
corresponds to a sea surface salinity maximum between 54° and 57°S. On average, these deep winter mixed layers in the southeast Pacific Ocean appear to penetrate into the low-salinity layer at 56°S
and inject low-salinity water of the correct salinity, density class, and depth as AAIW into the ocean interior.
6. Summary
A new algorithm was developed to find the MLD of individual Argo ocean profiles. The algorithm fits straight lines to the mixed layer and thermocline, searches for subsurface property anomalies, and
incorporates threshold and gradient methods to find the MLD. The temperature and density algorithms tend to find shallower MLDs than their threshold counterparts. The temperature algorithm MLD nearly
matches the density algorithm MLD. In the study region, the temperature algorithm offers a marked improvement over a temperature threshold method using the criterion of de Boyer Montégut et al.
(2004); the temperature threshold method frequently overestimates winter MLDs by nearly 200 dbar for profiles in which the temperature algorithm successfully identifies temperature anomalies at the
base of the mixed layer. The temperature algorithm is preferred over the temperature gradient method because of the gradient method’s tendency to find anomalously deep MLDs. The density gradient
method also produces many anomalous MLDs. The algorithm was used to investigate the formation of SAMW and AAIW in the southeast Pacific and southwest Atlantic Oceans. We find that the deepest MLDs
routinely reach 500 dbar and occur north of the Orsi et al. (1995) mean SAF in the southeastern Pacific Ocean. Within the Pacific study region, the deepest winter mixed layers occur in August and
September at 57°S and are concurrent with the subsurface salinity minimum, a signature of AAIW.
Acknowledgments
NSF Ocean Sciences Division Grant OCE-0327544 supported this work. Harold Freeland of IOS, Sidney, BC, pioneered the deployment of Argo floats in the southeast Pacific. Shenfu Dong kindly provided us
with plots of algorithm and threshold MLDs for the entire Southern Ocean. Sharon Escher helped refine the algorithm’s coding. Three anonymous reviewers provided many useful comments that greatly
improved the manuscript.
References
• Antonov, J. A., Locarnini R. A. , Boyer T. P. , Garcia H. E. , and Mishonov A. , 2006: Salinity. Vol. 2, World Ocean Atlas 2005, NOAA Atlas NESDIS 62, 50 pp.
• Brainerd, K. E., and Gregg M. C. , 1995: Surface mixed and mixing layer depths. Deep-Sea Res. I, 42 , 1521–1543.
• Chen, D., Busalacchi A. J. , and Rothstein L. M. , 1994: The roles of vertical mixing, solar radiation, and wind stress in a model simulation of the sea surface temperature seasonal cycle in the
tropical Pacific Ocean. J. Geophys. Res., 99 , 20345–20359.
• Chereskin, T. K., and Roemmich D. , 1991: A comparison of measured and wind-derived Ekman transport at 11°N in the Atlantic Ocean. J. Phys. Oceanogr., 21 , 869–878.
• Chu, P. C., Wang Q. , and Bourke R. H. , 1999: A geometric model for the Beaufort/Chukchi Sea thermohaline structure. J. Atmos. Oceanic Technol., 16 , 613–632.
• Deacon, G. E. R., 1937: The hydrology of the Southern Ocean. Discovery Rep., 15 , 1–124.
• de Boyer Montégut, C., Madec G. , Fischer A. S. , Lazar A. , and Iudicone D. , 2004: Mixed layer depth over the global ocean: An examination of profile data and a profile-based climatology. J.
Geophys. Res., 109 , C12003. doi:10.1029/2004JC002378.
• Dong, S., Sprintall, J., Gille, S. T., and Talley, L., 2008: Southern Ocean mixed-layer depth from Argo float profiles. J. Geophys. Res., 113, C06013, doi:10.1029/2006JC004051.
• England, M. H., Godfrey, J. S., Hirst, A. C., and Tomczak, M., 1993: The mechanism for Antarctic Intermediate Water renewal in a world ocean model. J. Phys. Oceanogr., 23, 1553–1560.
• Hanawa, K., and Talley, L. D., 2001: Mode waters. Ocean Circulation and Climate, G. Siedler, J. Church, and J. Gould, Eds., Academic Press, 373–386.
• Kara, A. B., Rochford, P. A., and Hurlburt, H. E., 2000: An optimal definition for ocean mixed layer depth. J. Geophys. Res., 105, 16803–16821.
• Kara, A. B., Rochford, P. A., and Hurlburt, H. E., 2003: Mixed layer depth variability over the global ocean. J. Geophys. Res., 108, 3079, doi:10.1029/2000JC000736.
• Keeling, R. F., and Stephens, B. B., 2001: Antarctic sea ice and the control of Pleistocene climate instability. Paleoceanography, 16, 112–131, doi:10.1029/2000PA000529.
• Lavender, K. L., Davis, R. E., and Owens, W. B., 2002: Observations of open-ocean deep convection in the Labrador Sea from subsurface floats. J. Phys. Oceanogr., 32, 511–526.
• Levitus, S., Burgett, R., and Boyer, T. P., 1994: Salinity. Vol. 3, World Ocean Atlas 1994, NOAA Atlas NESDIS 3, 99 pp.
• Lorbacher, K., Dommenget, D., Niiler, P. P., and Köhl, A., 2006: Ocean mixed layer depth: A subsurface proxy of ocean-atmosphere variability. J. Geophys. Res., 111, C07010, doi:10.1029/
• Lukas, R., and Lindstrom, E., 1991: The mixed layer of the western equatorial Pacific Ocean. J. Geophys. Res., 96, 3343–3358.
• Marshall, J., and Schott, F., 1999: Open-ocean convection: Observations, theory, and models. Rev. Geophys., 37, 1–64.
• McCartney, M. S., 1977: Subantarctic Mode Water. A Voyage of Discovery: George Deacon 70th Anniversary Volume, M. V. Angel, Ed., Pergamon, 103–119.
• Monterey, G., and Levitus, S., 1997: Seasonal Variability of Mixed Layer Depth for the World Ocean. NOAA Atlas NESDIS 14, 96 pp.
• Noh, Y., Jang, C. J., Yamagata, T., Chu, P. C., and Kim, C. H., 2002: Simulation of more realistic upper-ocean processes from an OGCM with a new ocean mixed layer model. J. Phys. Oceanogr., 32,
• Ohlmann, J. C., Siegel, D. A., and Gautier, C., 1996: Ocean mixed layer radiant heating and solar penetration: A global analysis. J. Climate, 9, 2265–2280.
• Oka, E., Talley, L. D., and Suga, T., 2007: Temporal variability of winter mixed layer in the mid- to high-latitude North Pacific. J. Oceanogr., 63, 293–307.
• Orsi, A. H., Whitworth, T., and Nowlin, W. D., 1995: On the meridional extent and fronts of the Antarctic Circumpolar Current. Deep-Sea Res. I, 42, 641–673.
• Pahnke, K., and Zahn, R., 2005: Millennial-scale Antarctic Intermediate Water variability over the past 340,000 years as recorded by benthic foraminiferal δ13C in the mid-depth southwest Pacific. Extended Abstracts, Spring Meeting, New Orleans, LA, Amer. Geophys. Union, A4+.
• Roemmich, D., and Coauthors, 2001: Argo: The global array of profiling floats. Observing the Oceans in the 21st Century, C. J. Koblinsky and N. R. Smith, Eds., Bureau of Meteorology, 604 pp.
• Sprintall, J., and Tomczak, M., 1992: Evidence of the barrier layer in the surface layer of the tropics. J. Geophys. Res., 97, 7305–7316.
• Sprintall, J., and Roemmich, D., 1999: Characterizing the structure of the surface layer in the Pacific Ocean. J. Geophys. Res., 104, 23297–23311.
• Talley, L. D., 1996: Antarctic Intermediate Water in the South Atlantic. The South Atlantic: Present and Past Circulation, G. Wefer et al., Eds., Springer-Verlag, 219–238.
• Talley, L. D., 1999: Some aspects of ocean heat transport by the shallow, intermediate and deep overturning circulations. Mechanisms of Global Climate Change at Millennial Time Scales, Geophys. Monogr., Vol. 112, Amer. Geophys. Union, 1–22.
• Thomson, R. E., and Fine, I. V., 2003: Estimating mixed layer depth from oceanic profile data. J. Atmos. Oceanic Technol., 20, 319–329.
• Tsuchiya, M., and Talley, L. D., 1998: A Pacific hydrographic section at 88°W: Water-property distribution. J. Geophys. Res., 103, 12899–12918.
Fig. 1.
Profile locations of 277 Argo floats in the South Pacific and Atlantic Oceans. The profiling floats collected 13 601 temperature and salinity profiles between February 2002 and November 2008. The
time separation between each float’s subsequent profile is 10 days. The study region extends from 40° to 66°S and from 110° to 35°W. The climatological SAF and Polar Front are represented by white
and solid lines, respectively (Orsi et al. 1995). The track of float 3900082 is in gray. The bathymetry is contoured at 1000-m intervals.
Citation: Journal of Atmospheric and Oceanic Technology 26, 9; 10.1175/2009JTECHO543.1
Fig. 2.
Example Argo profiles of temperature, potential density, and salinity to 2000 m from float 3900082 on 29 Jan 2003 at 52.7°S, 89.7°W. To 400 m, the float’s sampling interval is 10 m, after which it
increases to 50 m. This profile is from a Canadian Argo float.
Fig. 3.
Temperature, salinity, and potential-density profiles (black dots) collected by float 3900085 in (a) winter and (b) summer. The winter profile was collected on 12 Jul 2003 in the South Pacific Ocean
at 54.3°S, 87.8°W. The summer profile was collected on 12 Feb 2003 at 52.6°S, 89.4°W. The algorithm identifies a unique MLD for each temperature, salinity, and density profile (horizontal bold solid
lines). The temperature algorithm does not use the density threshold MLD. The algorithm mixed layer (thin solid line) and thermocline (dashed line) fits are also plotted. The five mixed layer
estimates used in the temperature algorithm’s selection process are the MTLFIT (orange circle), TM (light blue circle), DTM (green triangle; criterion of 0.005°C dbar^−1), collocated TDTM (located at
the temperature gradient maxima; light blue square), and TTMLD [green square; de Boyer Montégut et al.’s (2004) threshold of 0.2°C]. For the density and salinity profiles, the threshold density MLD
[red square; de Boyer Montégut et al.’s (2004) threshold of 0.03 kg m^−3] and the gradient density MLD (red triangle; criterion of 0.0005 kg m^−3 dbar^−1) are plotted for each profile. The light blue
circles correspond to profile minima and the yellow circle corresponds to the salinity gradient extreme.
Fig. 4.
Distribution of MLDs for the temperature algorithm with varying error tolerances of 10^−6 (dashed gray), 10^−8 (light gray), 10^−10 (dark gray), and 10^−12 (black). An error tolerance of 10^−10 was
chosen for the algorithm.
Fig. 5.
Pressure difference between the depth of the temperature maxima and the temperature gradient maxima (p(TM) − p[(∂T/∂p)[max]]) plotted against the MLD as determined by the density threshold method.
Only values for subsurface temperature maxima or temperature gradient maxima are plotted. The vertical lines denote pressure differences of ∓100 dbar.
Fig. 6.
Distribution of the maximum separation between MLTFIT, TTMLD, and DTM [(|TTMLD − DTM|, |MLTFIT − DTM|, and |TTMLD − MLTFIT|)[max]]. The bin width is 10 dbar, so the first bin, centered at 0-dbar
separation, includes all profiles in which these MLD estimates are separated by a maximum of 5 dbar.
Fig. 7.
(a) The temperature change across the thermocline, ΔT, is plotted against MLD. (b) Same as in (a), but for potential density. In both cases the MLD was found using the density threshold method. The
vertical lines correspond to the temperature-change and potential-density-change cutoffs. The temperature-change cutoffs are 0.5° and −0.25°C. The potential-density-change cutoff is −0.06 kg m^−3.
Temperature changes within the temperature cutoffs and potential-density changes greater than the potential-density cutoff are treated as winter-like profiles by the algorithm.
Fig. 8.
The temperature algorithm’s summer flow diagram.
Fig. 9.
The temperature algorithm’s winter flow diagram.
Fig. 10.
Track of Argo float 3900082. The float was deployed in the Pacific Ocean off Chile at 53.5°S, 91.7°W in December 2002 and has since passed through the Drake Passage. Profiles with mixed layers deeper
than 200 dbar are represented by open squares (MLD calculated by the density algorithm). The first period of deep mixed layers lasted from 18 Jul to 26 Oct 2003; the second lasted from 11 Aug to 20
Sep 2004. The SAF and Polar Front are represented by the dashed and solid lines (Orsi et al. 1995).
Fig. 11.
Time series of (a) potential density and (b) temperature for float 3900082. The profiles extend to 2000 dbar but are only shown to 700 dbar. The time series runs from December 2002 to August 2005;
the tick marks along the time axis denote 10-day profile separation. In (a), the contour intervals are 0.02 kg m^−3 and three MLD time series are plotted: the density threshold calculation, using de
Boyer Montégut et al.’s (2004) criterion (red circle); the density gradient calculation (white circle), using a criterion of 0.0005 kg m^−3 dbar^−1; and the density algorithm’s result (light blue
circle). Solid lines connect each MLD time series. In (b), the contour interval is 0.2°C, the temperature threshold is plotted in green (criterion of de Boyer Montégut et al. 2004), the temperature
gradient result is plotted in white (criterion of 0.005°C dbar^−1), and the temperature algorithm result is plotted in light blue.
Fig. 12.
Distribution of (a) all MLDs and (b) MLDs between 200 and 700 dbar found in the entire study region by six methods: temperature algorithm (solid black), density algorithm (dashed black), temperature
threshold using de Boyer Montégut et al.’s (2004) criterion (solid dark gray), density threshold using de Boyer Montégut et al.’s (2004) criterion (dashed dark gray), temperature gradient (solid
light gray; criterion of 0.005°C dbar^−1), and density gradient (dashed light gray; criterion of 0.0005 kg m^−3 dbar^−1). The sawtooth pattern of the distribution in (b) is due to the depth sampling
scheme of the Argo floats.
Fig. 13.
Comparison of algorithm, threshold, and gradient MLD estimates from the study region: (a) temperature algorithm and temperature threshold, (b) temperature algorithm and temperature gradient, (c)
density algorithm and density threshold, and (d) density algorithm and density gradient. The thin black line has a slope of 1.
Fig. 14.
(a) Comparison of temperature threshold and temperature algorithm MLDs, with a subset of MLDs highlighted by black dots, and (b) the average temperature profile for the subset of profiles. Three
average MLD estimates are plotted in (b): the average temperature algorithm MLD (black circle), the average temperature gradient MLD (white triangle), and the average temperature threshold MLD (gray
square). The thin black line in (a) has a slope of 1.
Fig. 15.
Comparison of temperature (a) threshold, (b) gradient, and (c) algorithm MLDs to the density algorithm MLD. The thin black line has a slope of 1.
Fig. 16.
Algorithm and threshold MLD estimates for the entire Southern Ocean for (a) the temperature algorithm and threshold and (b) the density algorithm and threshold. The thin black line has a slope of 1.
These plots were provided by S. Dong (2006, personal communication).
Fig. 17.
The (a) MLD map and (b) mixed layer T–S diagram for the MLDs calculated by the density algorithm. The MLDs range from 0 dbar (blue) to 650 dbar (red). In (a), the SAF and Polar Front are represented
by the solid lines (Orsi et al. 1995). Profiles from the boxed region (50°–62°S, 110°–68°W) are used in the MLD time series in Fig. 18 and the zonally averaged salinity section in Fig. 19. Each mixed
layer's average temperature and salinity are plotted in (b); the color of each point corresponds to the MLD.
Fig. 18.
Time series of MLDs (thin colored lines) derived by the (a) temperature and (b) density algorithms for floats within the region of deep mixed layers in the southeastern Pacific (50°–62°S, 110°–68°W).
In (a), the average temperature algorithm MLD is plotted in black and the average temperature threshold MLD in red; (b) plots the average potential-density algorithm MLD in black and the average
potential density threshold MLD in red.
Fig. 19.
Zonally averaged salinity of all profiles collected in the southeastern Pacific region (Fig. 17) during winter. The study period covers six winters, identified as intervals of deep mixed layers
running from mid-June to early November for 2003–08. An Akima spline was used to interpolate each profile to uniform vertical spacing, at which point they were averaged into 1° latitude bins. The
salinity is contoured at 0.025-psu intervals. The bold black line is the average winter MLD based on the density algorithm MLD of each profile. The thin black line, representing the maximum winter
MLD in the 1° bin, is the average of the five deepest MLDs, again determined by the density algorithm. Three potential-density contours (σ[θ] = 27.0, 27.1, and 27.2 kg m^−3) are plotted in gray.
Table 1.
Acronyms for the temperature algorithm’s five possible MLDs.
Table 2.
MLD means and std devs (dbar) for the six methods: TA, TG, TT, DA, DG, and DT, as well as the means and std devs for the differences between DA and the three temperature methods. | {"url":"https://journals.ametsoc.org:443/view/journals/atot/26/9/2009jtecho543_1.xml","timestamp":"2024-11-07T16:18:21Z","content_type":"text/html","content_length":"1049715","record_id":"<urn:uuid:36ddaa04-af8b-4acd-9762-4ef91af9e562>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00648.warc.gz"} |
SciDAC-5 FASTMath Institute
Frameworks, Algorithms and Scalable Technologies
for Mathematics (FASTMath) SciDAC-5 Institute
The FASTMath SciDAC-5 Institute develops and deploys scalable mathematical algorithms and software tools for reliable simulation of complex physical phenomena and collaborates with application
scientists to ensure the usefulness and applicability of FASTMath technologies.
Objectives of the SciDAC-5 FASTMath Institute
• developing robust mathematical techniques and numerical algorithms to reliably address the challenges of large-scale simulation of complex physical phenomena;
• delivering highly performant software with strong software engineering to run efficiently and scalably on current and next-generation advanced computer architectures at the DOE Office of
Science’s three major computing facilities;
• working closely with domain scientists to leverage our mathematical and machine learning expertise and deploy our software in large-scale modeling and simulation codes; and
• building and supporting the broader computational mathematics and computational science communities across the DOE complex.
The SciDAC-5 Partnerships Calls for Proposals are being released by DOE. A summary of FASTMath resources available to proposal teams can be found at this page, where meetings with the FASTMath Institute can be requested.
The FASTMath Institute is funded by the Scientific Discovery through Advanced Computing (SciDAC) Program under the Office of Science at the U.S. Department of Energy. | {"url":"https://scidac5-fastmath.lbl.gov/","timestamp":"2024-11-06T10:16:40Z","content_type":"text/html","content_length":"81386","record_id":"<urn:uuid:9f79c274-a727-4e96-bbf9-62b1d223bd9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00150.warc.gz"} |
WBA: Walgreens | Logical Invest
What do these metrics mean?
'The total return on a portfolio of investments takes into account not only the capital appreciation on the portfolio, but also the income received on the portfolio. The income typically consists of
interest, dividends, and securities lending fees. This contrasts with the price return, which takes into account only the capital gain on an investment.'
Using this definition on our asset we see for example:
• Looking at the total return of -78.8% in the last 5 years of Walgreens, we see it is relatively lower, thus worse in comparison to the benchmark SPY (101.5%)
• Compared with SPY (29.7%) in the period of the last 3 years, the total return of -76.2% is lower, thus worse.
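The price-return versus total-return distinction can be sketched in a few lines of Python. The prices and income figure below are illustrative only, not data from this page:

```python
def price_return(start_price, end_price):
    # Capital gain only, as a fraction of the starting price.
    return (end_price - start_price) / start_price

def total_return(start_price, end_price, income):
    # Capital gain plus income received (dividends, interest, lending fees).
    return (end_price - start_price + income) / start_price

# A share bought at 100 and sold at 110 after paying 5 in dividends:
# price return is 10%, while total return is 15%.
```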
'Compound annual growth rate (CAGR) is a business and investing specific term for the geometric progression ratio that provides a constant rate of return over the time period. CAGR is not an
accounting term, but it is often used to describe some element of the business, for example revenue, units delivered, registered users, etc. CAGR dampens the effect of volatility of periodic returns
that can render arithmetic means irrelevant. It is particularly useful to compare growth rates from various data sets of common domain such as revenue growth of companies in the same industry.'
Applying this definition to our asset in some examples:
• Looking at the annual performance (CAGR) of -26.7% in the last 5 years of Walgreens, we see it is relatively smaller, thus worse in comparison to the benchmark SPY (15.1%)
• Compared with SPY (9.1%) in the period of the last 3 years, the compounded annual growth rate (CAGR) of -38.1% is smaller, thus worse.
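As a minimal sketch, the CAGR implied by a cumulative return over a number of years is the geometric mean annual growth rate:

```python
def cagr(cumulative_return, years):
    # Geometric mean annual growth rate implied by a cumulative return,
    # expressed as fractions: 0.21 over 2 years gives 0.10 (10% per year).
    return (1.0 + cumulative_return) ** (1.0 / years) - 1.0
```

Plugging in the SPY figures quoted above, a 101.5% cumulative return over 5 years implies a CAGR of roughly 15%, consistent with the 15.1% shown (small differences come from the exact compounding dates).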
'In finance, volatility (symbol σ) is the degree of variation of a trading price series over time as measured by the standard deviation of logarithmic returns. Historic volatility measures a time
series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option). Commonly, the higher the
volatility, the riskier the security.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (20.9%) in the period of the last 5 years, the 30-day standard deviation of 38.3% of Walgreens is higher, thus worse.
• Looking at the 30-day standard deviation of 37.5% in the period of the last 3 years, we see it is relatively larger, thus worse in comparison to SPY (17.6%).
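A rough sketch of annualized volatility from a price series, using the sample standard deviation of logarithmic returns scaled by 252 trading days per year (conventions vary; providers may use different windows and scaling):

```python
import math

def annualized_volatility(prices, periods_per_year=252):
    # Sample standard deviation of logarithmic returns, annualized.
    logs = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    mean = sum(logs) / len(logs)
    var = sum((r - mean) ** 2 for r in logs) / (len(logs) - 1)
    return math.sqrt(var * periods_per_year)
```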
'The downside volatility is similar to the volatility, or standard deviation, but only takes losing/negative periods into account.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (14.9%) in the period of the last 5 years, the downside deviation of 28.3% of Walgreens is larger, thus worse.
• Compared with SPY (12.3%) in the period of the last 3 years, the downside risk of 29% is higher, thus worse.
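A minimal sketch of downside deviation over a series of periodic returns. The threshold is zero here; some definitions use a minimum acceptable return instead:

```python
def downside_deviation(returns, threshold=0.0):
    # Root-mean-square of shortfalls below the threshold; gains count as zero.
    shortfalls = [min(r - threshold, 0.0) ** 2 for r in returns]
    return (sum(shortfalls) / len(returns)) ** 0.5
```

Note that the divisor is the total number of periods, not just the number of losing periods; that choice also varies between providers.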
'The Sharpe ratio is the measure of risk-adjusted return of a financial portfolio. Sharpe ratio is a measure of excess portfolio return over the risk-free rate relative to its standard deviation.
Normally, the 90-day Treasury bill rate is taken as the proxy for risk-free rate. A portfolio with a higher Sharpe ratio is considered superior relative to its peers. The measure was named after
William F Sharpe, a Nobel laureate and professor of finance, emeritus at Stanford University.'
Applying this definition to our asset in some examples:
• Looking at the Sharpe Ratio of -0.76 in the last 5 years of Walgreens, we see it is relatively lower, thus worse in comparison to the benchmark SPY (0.6)
• Compared with SPY (0.37) in the period of the last 3 years, the Sharpe Ratio of -1.08 is smaller, thus worse.
'The Sortino ratio, a variation of the Sharpe ratio only factors in the downside, or negative volatility, rather than the total volatility used in calculating the Sharpe ratio. The theory behind the
Sortino variation is that upside volatility is a plus for the investment, and it, therefore, should not be included in the risk calculation. Therefore, the Sortino ratio takes upside volatility out
of the equation and uses only the downside standard deviation in its calculation instead of the total standard deviation that is used in calculating the Sharpe ratio.'
Using this definition on our asset we see for example:
• Looking at the ratio of excess return to downside risk of -1.03 in the last 5 years of Walgreens, we see it is relatively lower, thus worse in comparison to the benchmark SPY (0.84)
• During the last 3 years, the excess return divided by the downside deviation is -1.4, which is smaller, thus worse than the value of 0.53 from the benchmark.
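To make the Sharpe/Sortino contrast concrete, here is a sketch computing both ratios from the same return series. It uses population deviations and a zero risk-free rate for simplicity; real providers annualize the figures and use a T-bill proxy for the risk-free rate:

```python
def sharpe_and_sortino(returns, risk_free=0.0):
    # Sharpe divides excess return by total deviation; Sortino divides the
    # same excess return by the downside deviation only.
    n = len(returns)
    mean = sum(returns) / n
    excess = mean - risk_free
    total_dev = (sum((r - mean) ** 2 for r in returns) / n) ** 0.5
    down_dev = (sum(min(r - risk_free, 0.0) ** 2 for r in returns) / n) ** 0.5
    return excess / total_dev, excess / down_dev
```

Because upside swings inflate total deviation but not downside deviation, the Sortino ratio of a series with large gains and modest losses exceeds its Sharpe ratio.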
'Ulcer Index is a method for measuring investment risk that addresses the real concerns of investors, unlike the widely used standard deviation of return. UI is a measure of the depth and duration of
drawdowns in prices from earlier highs. Using Ulcer Index instead of standard deviation can lead to very different conclusions about investment risk and risk-adjusted return, especially when
evaluating strategies that seek to avoid major declines in portfolio value (market timing, dynamic asset allocation, hedge funds, etc.). The Ulcer Index was originally developed in 1987. Since then,
it has been widely recognized and adopted by the investment community. According to Nelson Freeburg, editor of Formula Research, Ulcer Index is “perhaps the most fully realized statistical portrait
of risk there is.'
Using this definition on our asset we see for example:
• Looking at the Ulcer Index of 40 in the last 5 years of Walgreens, we see it is relatively higher, thus worse in comparison to the benchmark SPY (9.32)
• Compared with SPY (10) in the period of the last 3 years, the Ulcer Index of 46 is higher, thus worse.
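A sketch of the Ulcer Index over a price series: the root-mean-square of percentage drawdowns from the running high. Published formulations differ in minor details such as the averaging window:

```python
def ulcer_index(prices):
    # Root-mean-square of percentage drawdowns from the running peak.
    peak = prices[0]
    squared_drawdowns = []
    for p in prices:
        peak = max(peak, p)
        squared_drawdowns.append((100.0 * (p - peak) / peak) ** 2)
    return (sum(squared_drawdowns) / len(squared_drawdowns)) ** 0.5
```

Because drawdowns are squared, deep and long declines dominate the index, which is why it captures the "depth and duration" of losses better than a symmetric standard deviation.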
'Maximum drawdown measures the loss in any losing period during a fund’s investment record. It is defined as the percent retrenchment from a fund’s peak value to the fund’s valley value. The drawdown
is in effect from the time the fund’s retrenchment begins until a new fund high is reached. The maximum drawdown encompasses both the period from the fund’s peak to the fund’s valley (length), and
the time from the fund’s valley to a new fund high (recovery). It measures the largest percentage drawdown that has occurred in any fund’s data record.'
Which means for our asset as example:
• The maximum drop from peak to valley over 5 years of Walgreens is -83.1%, which is lower, thus worse compared to the benchmark SPY (-33.7%) in the same period.
• Looking at the maximum drop from peak to valley of -82.1% in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (-24.5%).
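The peak-to-valley computation can be sketched in a few lines; the function returns a fraction (e.g. -0.5 for a 50% fall), and the prices are illustrative:

```python
def max_drawdown(prices):
    # Largest fractional drop from a running peak to a later trough.
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = min(worst, (p - peak) / peak)
    return worst
```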
'The Maximum Drawdown Duration is an extension of the Maximum Drawdown. However, this metric does not explain the drawdown in dollars or percentages, rather in days, weeks, or months. It is the
length of time the account was in the Max Drawdown. A Max Drawdown measures a retrenchment from when an equity curve reaches a new high. It’s the maximum an account lost during that retrenchment.
This method is applied because a valley can’t be measured until a new high occurs. Once the new high is reached, the percentage change from the old high to the bottom of the largest trough is calculated.
Applying this definition to our asset in some examples:
• Looking at the maximum time in days below previous high water mark of 1251 days in the last 5 years of Walgreens, we see it is relatively higher, thus worse in comparison to the benchmark SPY
(488 days)
• Compared with SPY (488 days) in the period of the last 3 years, the maximum days below previous high of 706 days is higher, thus worse.
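A sketch of the duration calculation: count the longest run of consecutive periods spent below a prior high-water mark. This counts sampling periods rather than calendar days, and is illustrative only:

```python
def max_drawdown_duration(prices):
    # Longest run of consecutive periods below the running high-water mark.
    peak = prices[0]
    current = longest = 0
    for p in prices[1:]:
        if p >= peak:
            peak = p          # new high reached: the drawdown has ended
            current = 0
        else:
            current += 1      # still under water
            longest = max(longest, current)
    return longest
```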
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks
(equity highs), or in other terms the average time under water across all drawdowns. So in contrast to the maximum duration it does not measure only one drawdown event but calculates the average of all of them.
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (123 days) in the period of the last 5 years, the average days under water of 625 days of Walgreens is larger, thus worse.
• During the last 3 years, the average time in days below previous high water mark is 337 days, which is greater, thus worse than the value of 177 days from the benchmark. | {"url":"https://logical-invest.com/app/stock/wba/walgreens","timestamp":"2024-11-04T11:51:09Z","content_type":"text/html","content_length":"62081","record_id":"<urn:uuid:64d28bf9-47fe-476c-a77c-1cbda27be031>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00655.warc.gz"} |
HyPhy message board - different rate matrix for every codon

YuanlingRui: different rate matrix for every codon
Sep 3rd, 2013 at 1:44am

Dear HyPhy users,

I am doing some work on coding-sequence evolution in which a different rate matrix (Q matrix) is assigned to every codon. I want to know: 1) whether I have to use the function LikelihoodFunction id = (datafilterid_0, tree_0, ..., <datafilterid_n>, <tree_n>), in which n is the number of codons; and 2) if I do have to use that function, how time-consuming it will be.

Thank you very much!
Regards,
Yuanling Xia

Sergei (Administrator): Re: different rate matrix for every codon
Reply #1 - Sep 5th, 2013 at 1:59pm

Hi Yuanling,

In the most general case, you do need to use the form you describe. You probably also want to constrain branch lengths between trees on each codon; otherwise you'll have a separate set of branch lengths (e.g. the t parameter) for every codon. Do you have prototype code? I can look at it and help you ensure that everything works smoothly and correctly.

Sergei
Associate Professor
Division of Infectious Diseases
Division of Biomedical Informatics
School of Medicine
University of California San Diego

YuanlingRui: Re: different rate matrix for every codon
Reply #2 - Sep 9th, 2013 at 1:39am

Hi Sergei,

Thank you for the answer. When I have written out the prototype code, I will ask for your help again.

Regards,
Yuanling

YuanlingRui: Re: different rate matrix for every codon
Reply #3 - Oct 26th, 2013 at 7:44pm

Hi Sergei,

I have sent the prototype code to you. The time consumed increases greatly when I specify many partitions using the LikelihoodFunction. Thank you very much!

Regards,
Yuanling
Back to top | {"url":"http://www.hyphy.org/cgi-bin/hyphy_forums/YaBB.pl?num=1378197893/0","timestamp":"2024-11-08T12:31:56Z","content_type":"application/xhtml+xml","content_length":"40181","record_id":"<urn:uuid:2295a1c4-33db-4168-a607-36a9bbcb2bce>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00528.warc.gz"} |
Mathematics, Models, And Modality: Selected Philosophical Essays [PDF] [5ohvqj1c0750]
E-Book Overview
John Burgess is the author of a rich and creative body of work which seeks to defend classical logic and mathematics through counter-criticism of their nominalist, intuitionist, relevantist, and
other critics. This selection of his essays, which spans twenty-five years, addresses key topics including nominalism, neo-logicism, intuitionism, modal logic, analyticity, and translation. An
introduction sets the essays in context and offers a retrospective appraisal of their aims. The volume will be of interest to a wide range of readers across philosophy of mathematics, logic, and
philosophy of language.
E-Book Content
This page intentionally left blank
John Burgess is the author of a rich and creative body of work which seeks to defend classical logic and mathematics through countercriticism of their nominalist, intuitionist, relevantist, and other
critics. This selection of his essays, which spans twenty-five years, addresses key topics including nominalism, neo-logicism, intuitionism, modal logic, analyticity, and translation. An introduction
sets the essays in context and offers a retrospective appraisal of their aims. The volume will be of interest to a wide range of readers across philosophy of mathematics, logic, and philosophy of
language. JOHN P. BURGESS
is Professor in the Department of Philosophy, Princeton University. He is co-author of A Subject With No Object with Gideon Rosen (1997) and Computability and Logic, 5th edn with George S. Boolos and
Richard C. Jeffrey (2007), and author of Fixing Frege (2005).
MATHEMATICS, MODELS, AND MODALITY Selected Philosophical Essays
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
Cambridge University Press, The Edinburgh Building, Cambridge CB2 8RU, UK
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521880343
© John P. Burgess 2008
This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.
First published in print format
ISBN-13 978-0-511-38618-3 eBook (EBL)
Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any
content on such websites is, or will remain, accurate or appropriate.
Dedicated to the memory of my sister Barbara Kathryn Burgess
Contents

Preface
Source notes
Introduction

PART I
1 Numbers and ideas
2 Why I am not a nominalist
3 Mathematics and Bleak House
4 Quine, analyticity, and philosophy of mathematics
5 Being explained away
6 E pluribus unum: plural logic and set theory
7 Logicism: a new look

PART II
8 Tarski’s tort
9 Which modal logic is the right one?
10 Can truth out?
11 Quinus ab omni naevo vindicatus
12 Translating names
13 Relevance: a fallacy?
14 Dummett’s case for intuitionism

Annotated bibliography
References
Index
The present volume contains a selection of my published philosophical papers, plus two items that have not previously appeared in print. Excluded are technical articles, co-authored works, juvenilia,
items superseded by my published books, purely expository material, and reviews. (An annotated partial bibliography at the end of the volume briefly indicates the contents of such of my omitted
technical papers as it seemed to me might interest some readers.) The collection has been divided into two parts, with papers on philosophy of mathematics in the first, and on other topics in the
second; references in the individual papers have been combined in a single list at the end of the volume. Bibliographic data for the original publication of each item reproduced here are given source
notes on pp. xi–xiii, to which the notes of personal acknowledgment, dedications, and epigraphs that accompanied some items in their original form have been transferred; abstracts that accompanied
some items have been omitted. It has become customary in volumes of this kind for the author to provide an introduction, relating the various items included to each other, as an editor would in an
anthology of contributions by different writers. I have fallen in with this custom. The remarks on the individual papers in the introduction are offered primarily in the hope that they may help
direct readers with varying interests to the various papers in the collection that should interest them most. But such introductions also serve another purpose: they provide an opportunity for an
author to note any changes of view since the original publication of the various items, thus reducing any temptation to tamper with the text of the papers themselves on reprinting. I have made only
partial use of the opportunity to note changes in view, but nonetheless I have felt no temptation to make substantial changes in the papers, since my own occasional historical research has convinced
me of the badness of the practice of revising papers on reprinting. ix
I have tried to acknowledge in each individual piece those to whom I have been most indebted in connection with that item, though I am sure there are some I have unintentionally neglected, whose
pardon I must beg. Here I would like to acknowledge those who have been helpful specifically with the preparation of the present collection: Hilary Gaskin, who first suggested such a volume, and
Joanna Breeze, along with Gillian Dadd and the rest of the staff who saw the work through publication.
Source notes
‘‘Numbers and ideas’’ was first delivered orally as part of a public debate at the University of Richmond (Virginia), 1999. Ruben Hersh argued for the thesis ‘‘Resolved: that mathematical entities
and objects exist within the world of shared human thoughts and concepts.’’ I argued against. It was first published in a journal for undergraduates edited at the University of Richmond (England),
the Richmond Journal of Philosophy, volume 1 (2003), pp. 12–17. (There is no institutional connection between the universities of the two Richmonds, and my involvement with both is sheer
coincidence.) ‘‘Why I am not a nominalist’’ was first delivered orally under the title ‘‘The nominalist’s dilemma,’’ to the Logic Club, Catholic University of Nijmegen, 1981. It was first published
in the Notre Dame Journal of Formal Logic, volume 24 (1983), pp. 93–105. ‘‘Mathematics and Bleak House’’ was first delivered orally at a symposium ‘‘Realism and anti-realism’’ at the Association for
Symbolic Logic meeting, University of California at San Diego, 1999. The other symposiast was my former student Penelope Maddy, and the Dickensian title of my paper is intended to recall the
Dickensian title of her earlier review, ‘‘Mathematics and Oliver Twist’’ (Maddy 1990). First published in Philosophia Mathematica, volume 12 (2004), pp. 18–36. ‘‘Quine, analyticity, and philosophy of
mathematics’’ was first delivered orally at the conference ‘‘Does Mathematics Require a Foundation?,’’ Arché Institute, University of St. Andrews, 2002. Identified in its text as a sequel to the
preceding item, this paper circulated in pre-publication draft under the title ‘‘Mathematics and Bleak House, II.’’ First published in the Philosophical Quarterly, volume 54 (2004), pp. 38–55.
‘‘Being explained away’’ is a shortened version (omitting digressions on technical matters) of a paper delivered orally to the Department of Philosophy, University of Southern California, 2004. (I
wish not only to thank that department for the invitation to speak, but especially to thank
Stephen Finlay, Jeff King, Zlatan Damnjanovic, and above all Scott Soames for their comments and questions, as well as for their hospitality during my visit.) It was first published in the Harvard
Review of Philosophy, volume 13 (2005), pp. 41–56. ‘‘E pluribus unum’’ evolved from a paper ‘‘From Frege to Friedman’’ delivered orally at the Logic Colloquium of the University of Pennsylvania and
the Department of Logic and Philosophy of Science at the University of California at Irvine. It was first published in Philosophia Mathematica, volume 12 (2004), pp. 193–221. (I am grateful to Harvey
Friedman for introducing me to his recent work on reflection principles, to Kai Wehmeier and Sol Feferman for drawing my attention to the earlier work of Bernays on that topic, and to Penelope Maddy
for pressing the question of the proper model theory for plural logic, which led me back to the writings of George Boolos on this issue. From Feferman I also received valuable comments leading to
what I hope is an improved exposition.) ‘‘Logicism: a new look’’ was first delivered orally at the conference marking the inauguration of the UCLA Logic Center, and later (under a different title) as
part of the annual lecture series of the Center for Philosophy of Science, University of Pittsburgh, both in 2003. It has not previously been published. ‘‘Tarski’s tort’’ was first delivered orally
at Timothy Bays’ seminar on truth, Notre Dame University, Saint Patrick’s Day, 2005. It was previously unpublished. The paper should be understood as dedicated to my teacher Arnold E. Ross, mentioned
in its opening paragraphs. ‘‘Which modal logic is the right one?’’ was first delivered orally at the George Boolos Memorial Conference, University of Notre Dame, 1998. It was first published in the
Notre Dame Journal of Formal Logic, volume 40 (1999), pp. 81–93, as part of a special issue devoted to the proceedings of that conference. Like all the conference papers, mine was dedicated to the
memory of George Boolos. ‘‘Can truth out?’’ was first delivered orally under the title ‘‘Fitch’s paradox of knowability’’ as a keynote talk at the annual Princeton–Rutgers Graduate Student Conference
in Philosophy, 2003. It was first published in Joseph Salerno, ed., New Essays on Knowability, Oxford: Oxford University Press (2007). The paper originally bore the epigraph ‘‘Truth will come to
light; murder cannot be hid long; a man’s son may, but at the length truth will out’’ (Merchant of Venice II: 2). Thanks are due to Michael Fara, Helge Rückert, and Timothy Williamson for perceptive
comments on earlier drafts of this note.
‘‘Quinus ab omni naevo vindicatus’’ was first delivered orally to the Department of Philosophy, MIT, 1997. It was first published in Ali Kazmi, ed., Meaning and Reference, Canadian Journal of
Philosophy Supplement, volume 23 (1998), pp. 25–65. (The present paper is a completely rewritten version of an unpublished paper, ‘‘The varied sorrows of modality, part II.’’ I am indebted to several
colleagues for information used in writing that paper, and for advice given on it once written, and I would like to thank them all – Gil Harman, Dick Jeffrey, David Lewis – even if the portions of
the paper with which some of them were most helpful have disappeared from the final version. But I would especially like to thank Scott Soames, who was most helpful with the portions that have not
disappeared.) ‘‘Translating names’’ was first published in Analysis, volume 65 (2005), pp. 96–204. I am grateful to Pierre Bouchard and Paul Égré for linguistic information and advice. ‘‘Relevance:
a fallacy?’’ was first published in the Notre Dame Journal of Formal Logic, volume 22 (1981), pp. 76–84. Its sequels were Burgess (1983c) and Burgess (1984b). ‘‘Dummett’s case for intuitionism’’ was
first published in History and Philosophy of Logic, volume 5 (1984), pp. 177–194. The paper originally bore the epigraph from Chairman Mao ‘‘Combat Revisionism!’’ I am indebted to several colleagues
and students for comments, and especially to Gil Harman, who made an earlier draft of this paper the topic for discussion at one session of his summer seminar. Comments by editors and referees led to
what it is hoped are clearer formulations of many points.
‘‘REALISM’’
A word on terminology may be useful at the outset, since it is pertinent to many of the papers in this collection, beginning with the very first. The label ‘‘realism’’ is used in two very different
ways in two very different debates in contemporary philosophy of mathematics. For nominalists, ‘‘realism’’ means acceptance that there exist entities, for instance natural or rational or real
numbers, that lack spatiotemporal location and do not causally interact with us. For neo-intuitionists, ‘‘realism’’ means acceptance that statements such as the twin primes conjecture may be true
independently of any human ability to verify them. For the former the question of ‘‘realism’’ is ontological, for the latter it is semantico-epistemological. Since the concerns of nominalists and of
neo-intuitionists are orthogonal, the double usage of ‘‘realism’’ affords ample opportunity for confusion. The arch-nominalists Charles Chihara and Hartry Field, for instance, are anti-intuitionists
and ‘‘realists’’ in the neo-intuitionists’ sense. They do not believe there are any unverifiable truths about numbers, since they do not believe there are any numbers for unverifiable truths to be
about. But they do believe that the facts about the possible production of linguistic expressions, or about proportionalities among physical quantities, which in their reconstructions replace facts
about numbers, can obtain independently of any ability of ours to verify that they do so. Michael Dummett, the founder of neo-intuitionism, was an early and forceful anti-nominalist, and though he
calls his position ‘‘anti-realism,’’ he and his followers are ‘‘realists’’ in the nominalists’ sense, accepting some though not all classical existence theorems, namely those that have constructive
proofs, and agreeing that it is a category mistake to apply spatiotemporal or causal predicates to mathematical subjects. On top of all this, even among those of us who are ‘‘realists’’ in both
senses there are important differences. Metaphysical realists suppose, like
Mathematics, Models, and Modality
Galileo and Kepler and Descartes and other seventeenth-century worthies, that it is possible to get behind all human representations to a God’s-eye view of ultimate reality as it is in itself. When
they affirm that mathematical objects transcending space and time and causality exist, and mathematical truths transcending human verification obtain, they are affirming that such objects exist and
such truths obtain as part of ultimate metaphysical reality (whatever that means). Naturalist realists, by contrast, affirm only (what even some self-described anti-realists concede) that the
existence of such objects and obtaining of such truths is an implication or presupposition of science and scientifically informed common sense, while denying that philosophy has any access to
exterior, ulterior, and superior sources of knowledge from which to ‘‘correct’’ science and scientifically informed common sense. The naturalized philosopher, in contrast to the alienated
philosopher, is one who takes a stand as a citizen of the scientific community, and not a foreigner to it, and hence is prepared to reaffirm while doing philosophy whatever was affirmed while doing
science, and to acknowledge its evident implications and presuppositions; but only the metaphysical philosopher takes the status of what is affirmed while doing philosophy to be a revelation of an
ultimate metaphysical reality, rather than a human representation that is the way it is in part because a reality outside us is the way it is, and in part because we are the way we are. My preferred
label for my own position would now be ‘‘naturalism,’’ but in the papers in this collection, beginning with the first, ‘‘realism’’ often appears. Were I rewriting, I might erase the R-word wherever
it occurs; but as I said in the preface above, I do not believe in rewriting when reprinting, so while in date of composition the papers reproduced here span more than twenty years, still I have left
even the oldest, apart from the correction of typographical errors, just as I wrote them. Quod scripsi, scripsi. This collection begins with five items each pertinent in one way or another to
nominalism and the problem of the existence of abstract entities. The term ‘‘realism’’ is used in an ontological sense in the first of these, ‘‘Numbers and ideas’’ (2003). This paper is a
curtain-raiser, a lighter piece responding to certain professional mathematicians turned amateur philosophers who propose cheap and easy solutions to the problem. According to their proposed
compromise, numbers exist, but only ‘‘in the world of ideas.’’ Since acceptance of this position would render most of the professional literature on the topic irrelevant, and since the amateurs often
offer unflattering accounts of what they imagine to be the reasons why professionals do not accept their simple proposal, I thought it worthwhile to accept an invitation to try to state, for a
general audience, our real reasons,
which go back to Frege. The distinction insisted upon in this paper, between the kind of thing it makes sense to say about a number and the kind of thing it makes sense to say about a mental
representation of a number (and the distinction, which exactly parallels that between the two senses of ‘‘history,’’ between mathematics, the science, and mathematics, its subject matter) is
presupposed throughout the papers to follow. Some may wonder where my emphatic rejection of ‘‘idealism or conceptualism’’ in this paper leaves intuitionism. The short answer is that I leave
intuitionism entirely out of account: I am concerned in this paper with descriptions of the mathematics we have, not prescriptions to replace it with something else. Intuitionism is orthogonal to
nominalism, as I have said, and issues about it are set aside in the first part of this collection. I will add that, though I do not address the matter in the works reprinted here, my opinion is that
Frege’s anti-psychologistic and anti-mentalistic points raise some serious difficulties for Brouwer’s original version of intuitionism, but no difficulties at all for Dummett’s revised version.
Neither opinion should be controversial. Dummett’s producing a version immune to Fregean criticism can hardly surprise, given that the founder of neo-intuitionism is also the dean of contemporary
Frege studies. That Brouwer’s version, by contrast, faces serious problems was conceded even by so loyal a disciple as Heyting, and all the more so by contemporary neo-intuitionists. AGAINST
‘‘Why I am not a nominalist’’ (1983) represents my first attempt to articulate a certain complaint about nominalists, namely, their unclarity about the distinction between is and ought. It was this
paper that first introduced a distinction between hermeneutic and revolutionary nominalism. The formulations a decade and a half later in A Subject With No Object (Burgess and Rosen, 1997) are,
largely owing to my co-author Gideon Rosen, who among other things elaborated and refined the hermeneutic/revolutionary distinction, more careful on many points than those in this early paper. This
piece, however, seemed to me to have the advantage of providing a more concise, if less precise, expression of key thoughts underlying that later book than can be found in any one place in the book
itself. Inevitably I have over the years not merely elaborated but also revised (often under Rosen’s influence) some of the views expressed in this early article. First, the brief sketches of
projects of Charles Chihara and Hartry Field in the appendix to the paper (which I include on the recommendation of an
anonymous referee, having initially proposed dropping it in the reprinting) are in my present opinion more accurate as descriptions of aspirations than of achievements, and even then as descriptions
only to a first approximation; moreover the later approach of Geoffrey Hellman is not discussed at all. My ultimate view of the technical side of the issue is given in full detail in the middle
portions of A Subject, superseding several earlier technical papers. Further, though I still see no serious linguistic evidence in favor of any hermeneutic nominalist conjectures, I no longer see the
absence of such evidence as the main objection to them. For reasons that in essence go back to William Alston, such conjectures lack relevance even if correct. Even if we grant that ‘‘There are prime
numbers greater than a million’’ does just mean, say, ‘‘There could have existed prime numerals greater than a million,’’ the conclusion that should be drawn is that ‘‘Numbers exist’’ means
‘‘Numerals could have existed,’’ and is therefore true, as anti-nominalists have always maintained, and not false, as nominalists have claimed. There is no threat at all to a naturalist version of
anti-nominalism in such translations, though there might be to a metaphysical version. This line I first developed in a very belatedly published paper (Burgess 2002a) of which a condensed version was
incorporated into A Subject. Finally, I now recognize that there is a good deal more to be said for the position I labeled ‘‘instrumentalism’’ than I or almost anyone active in the field was prepared
to grant back in the early 1980s when I wrote ‘‘Why I am not,’’ or even in the middle 1990s, when I wrote my contributions to A Subject. The position in question is that of those philosophers who
speak with the vulgar in everyday and scientific contexts, only to deny on entering the philosophy room that they meant what they said seriously. This view is now commonly labeled ‘‘fictionalism,’’
and it deserves more discussion than it gets in either ‘‘Why I am not’’ or A Subject. It should be noted that while I originally opposed fictionalism (or instrumentalism) to both the revolutionary
and hermeneutic positions, Rosen has correctly pointed out that fictionalism itself comes in a revolutionary version (this is the attitude philosophers ought to adopt) and a hermeneutic version (this
is the attitude commonsense and scientific thinkers already do adopt). What I originally called the ‘‘hermeneutic’’ position should be called the ‘‘content-hermeneutic’’ position, and the hermeneutic
version of fictionalism the ‘‘attitude-hermeneutic’’ position, in Rosen’s refined terminology. On two points my view has not changed at all over the past years. First, while nominalists would wish to
blur what for Rosen and myself is a key distinction, and avoid taking a stand on whether they are giving a
description of the mathematics we already have (hermeneutic) or a prescription for a new mathematics to replace it (revolutionary), gesturing towards a notion of ‘‘rational reconstruction’’ that
would somehow manage to be neither the one nor the other, I did not think this notion had been adequately articulated when I first took up the issue of nominalism, and I have not found it adequately
articulated in nominalist literature of the succeeding decades. Second, as to the popular epistemological arguments to the effect that even if numbers or other objects ‘‘causally isolated’’ from us
do exist, we cannot know that they do, I have not altered the opinions that I expressed in my papers Burgess (1989) and the belatedly published Burgess (1998b), and that Rosen expressed in his
dissertation, and that the two of us jointly expressed in A Subject. The epistemological argument, according to which belief in abstract objects, even if conceded to be implicit in scientific and
commonsense thought, and even if perhaps true – for the aim of going epistemological is precisely to avoid direct confrontation over the question of the truth of anti-nominalist existence claims –
cannot constitute knowledge, surely is not intended as a Gettierological observation about the gap between justified true belief and what may properly be called knowledge. It follows that it must be
an issue about justification; and here to the naturalized anti-nominalist the nominalist appears simply to be substituting some extra-, supra-, praeter-scientific philosophical standard of
justification for the ordinary standards of justification employed by science and common sense: the naturalist anti-nominalist’s answer to nominalist skepticism about mathematics is skepticism about
philosophy’s supposed access to such non-, un-, and anti-scientific standards of justification.
AGAINST FICTIONALIST NOMINALISM
Returning to the issue of fictionalism, in our subsequent work Rosen and I have generally dealt with it separately and in our own ways. A chapter bearing the names of Rosen and myself, ‘‘Nominalism
reconsidered,’’ does appear in Stuart Shapiro’s Handbook of Philosophy of Mathematics and Logic (2005), and it is a sequel to our book adding coverage of fictionalist nominalism, with special
reference to the version vigorously advocated over the past several years by Steve Yablo; but this chapter is substantially Rosen’s work, my contributions being mainly editorial. My own efforts to
address a fictionalist position are to be found rather in ‘‘Mathematics and Bleak House,’’ which revisits, in a sympathetic spirit, Rudolf Carnap’s ideas on the status of ontological questions and
theses. Neo-Carnapianism is on the rise, and I am happy to be associated with it, though like any other neo-Carnapian I have my differences with my fellow neo-Carnapians. ‘‘Quine, analyticity, and
philosophy of mathematics’’ can be read as a sequel to the Bleak House paper (it was written much later, though owing to various accidents both came out in the same year, 2004). It revisits the
famous exchange between Carnap and Quine on ontology, again in a spirit sympathetic to Carnap. Carnap thought there was a separation to be made between analytic questions about what is the content of
a concept such as that of number, and pragmatic questions about why we accept such a concept for use in scientific theorizing and commonsense thought. Quine denied there was in theory any sharp
separation to be made. I argue that there is in practice at least a fuzzy one. I also argue that Quine had better acknowledge as much if he is to be able to make any reply to a serious criticism of
Charles Parsons. The criticism is that Quine’s holist conception of the justification of mathematics – it counts as a branch of science rather than imaginative literature because of its contribution
to other sciences – cannot do justice to the obviousness of elementary arithmetic. Though placed in the first half of this volume along with papers about nominalism, the Quine paper can equally well
be read more or less independently as a paper in philosophy of language and theory of knowledge about the notion of analyticity, one that just happens to use mathematics and logic as sources of
examples. The placement of this paper, and more generally the division of the collection into two parts, should not be taken too seriously. As any neo-Carnapian will tell you, though Carnap was
certainly an anti-nominalist, his position is perhaps better characterized as generally anti-ontological rather than specifically anti-nominalist. My own general anti-ontologism became finally,
fully, and emphatically explicit in ‘‘Being explained away’’ (2005), my farewell to the issue of nominalism. In this retrospective (written for an audience of undergraduate philosophy concentrators)
I distinguish what I call scientific ontics, a glorified taxonomy of the entities recognized by science, from what I call philosophical ontosophy, an impossible attempt to get behind scientific
representations to a God’s-eye view, and catalogue the metaphysically ultimate furniture of the universe. The error of the nominalists consists, in my opinion, not in ontosophical anti-realism about
the abstract, but in ontosophical realism about the concrete – more briefly, the error is simply going in for ontosophy and not resting content with ontics. In taking leave of the issue of
nominalism, I should reiterate the point made briefly at the end of A Subject, that from a naturalist point of view
there is a great deal to be learned from the projects of Field, Chihara, Hellman, and others. Naturalists, I have said, hold that there is no possibility of separating completely the contributions
from the world and the contributions from us in shaping our theories of the world. At most we can get a hint by considering how the theories of creatures like us in a world unlike ours, or the
theories of creatures unlike us in a world like ours, might differ from our own theories. The nominalist reconstruals or reconstructions, though implausible when read as hermeneutic, as accounts of
the meaning of our theories, and unattractive when read as revolutionary, as rivals competing for our acceptance with those theories, do give a hint of what the theories of creatures unlike us might
be like. Another hint is provided by those monist philosophers who have reconstrued what appear to be predicates applying to various objects as predicates applying to a single subject, the Absolute,
with the phrases that seem to refer to the various objects being reconstrued as various adverbial modifiers. Thus ‘‘Jack sings and Jill dances’’ becomes ‘‘The Absolute sings jackishly and dances
jillishly,’’ while ‘‘Someone sings and someone else dances’’ becomes ‘‘The Absolute sings somehow and dances otherhow.’’ What is specifically sketched in ‘‘Being explained away’’ is how this kind of
reconstrual can be systematically extended, at least as far as first-order regimentation of discourse can be extended. Of course it is not to be expected that we can fully imagine what it would be
like to be an intelligent creature who habitually thought in such alien terms, any more than we can fully imagine what it would be like to be a bat. Nor insofar as we are capable of partially
imagining what is not wholly imaginable are formal studies the only aid to imagination. The kind of fiction that stands to metaphysics as science fiction stands to physics – the example I cite in the
paper is Borges – may give greater assistance.
FOUNDATIONS OF MATHEMATICS: SET THEORY
As long as mathematicians adhere to the ideal of rigorous proof from explicit axioms, they will face decisions as to which proposed axioms to start from, and which methods of proof to admit. What is
conventionally known as ‘‘foundations of mathematics’’ is simply the technical study, using the tools of modern logic, of the effects of different choices. Work in foundations emphatically does not
imply commitment to a ‘‘foundationalist’’ philosophical position, or for that matter to any philosophical position. In Burgess (1993) I nevertheless argued that work in foundations can be relevant to
philosophy, and tried to explain how. I will not attempt to summarize the
explanation here, except to give this hint: most of the interesting choices of axioms, especially those that are more restrictive than the orthodox choice of something like the axioms of
Zermelo–Fraenkel set theory, were originally inspired by positions in the philosophy of mathematics (finitism, constructivism, predicativism, and others). Foundational work helps us appreciate what is
at stake in the choice among those restrictive philosophies, and between them and classical orthodoxy. While the early papers in the first part of this collection are predominantly though not
exclusively critical, and the middle papers a mix of critical and positive – I would say ‘‘constructive,’’ except that this word has a special meaning in philosophy of mathematics – the last two are,
like the bulk of my more technical work, predominantly though not exclusively positive. Though they do not endorse as ultimately correct, they present as deserving of serious and sustained attention
three novel approaches to foundations of mathematics, very different in appearance from each other, but not necessarily incompatible. To the extent that there is an agreed foundation or framework for
contemporary pure mathematics, it is provided by something like the Zermelo–Fraenkel system of axiomatic set theory, in the version including the axiom of choice (ZFC). ‘‘E pluribus unum’’ (2004)
attempts to combine two insights, one due to Boolos, the other to Paul Bernays, to achieve an improved framework. The idea taken from Boolos is that plural quantification on the order of ‘‘there are
some things, the u’s, such that . . .’’ is a more primitive notion than singular quantification of the type ‘‘there is a set or class U of things such that . . .’’ and that Cantor’s transition from
the former to the latter was a genuine conceptual innovation, not a mere uncovering of a commitment to set- or class-like entities that had been implicit in ordinary plural talk all along. Boolos
himself had applied this idea to set theory, to suggest, not improved axioms, but an improved formulation of the existing axioms. For there is a well-known awkwardness in the formulation of ZFC, in
that two of its most important principles appear not as axioms but as schemes, or rules to the effect that all sentences of a certain form are to count as axioms. For instance, separation takes the
form
∀x ∃y ∀z (z ∈ y ↔ z ∈ x ∧ φ(z))
wherein φ may be any formula. Needless to say, no one becomes convinced of the correctness of ZFC by becoming convinced separately of
each of infinitely many instances of the separation scheme. But the language of ZFC provides no means of formulating the underlying single, unified principle. One proposed solution to this difficulty
has been to recognize collections of a kind called classes that are set-like while somehow failing to be sets. With capital letters ranging over such entities, and with ‘‘z ∈ U’’ written ‘‘Uz’’ to
emphasize that the relation of class membership is a kind of belonging that is like set-elementhood and yet somehow fails to be set-elementhood, the separation scheme can be reduced to a single axiom,
thus:
∀U ∀x ∃y ∀z (z ∈ y ↔ z ∈ x ∧ Uz).
But the notion of class brings with it difficulties of its own, leaving many hesitant to admit these alleged entities. The suggestion of Boolos (in my own notation) was to replace singular quantification
∀U or ‘‘for any class U of sets . . .’’ over classes by plural quantification ∀uu or ‘‘for any sets, the u’s . . .’’ and Uz or ‘‘z is a member of U’’ by z ≺ uu or ‘‘z is one of the u’s,’’ thus
yielding a formulation in which the only objects quantified over are sets:
∀uu ∀x ∃y ∀z (z ∈ y ↔ z ∈ x ∧ z ≺ uu).
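To make vivid the point, noted later in this discussion, that plural quantification behaves like second-order quantification, here is a hypothetical sketch in Lean 4 (my own illustration, with invented names V, mem, and separation), rendering the plural separation axiom as quantification over predicates:

```lean
-- A toy signature: a type of "sets" with a membership relation.
axiom V : Type
axiom mem : V → V → Prop
infix:50 " ∈ₛ " => mem

-- Boolos-style plural separation: "for any sets, the uu, and any x,
-- there is a y whose elements are exactly the elements of x that are
-- among the uu."  Pluralities are modeled here as predicates on V.
axiom separation :
  ∀ (uu : V → Prop) (x : V), ∃ y : V, ∀ z : V,
    z ∈ₛ y ↔ (z ∈ₛ x ∧ uu z)
```

Modeling a plurality as a predicate uu : V → Prop is a deliberate simplification; part of Boolos's philosophical point is precisely that plural talk does not commit one to such predicate-like entities.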
One may even take a further step and make the notion ‘‘x is the set of the u’s’’ primitive, with the notion y ∈ x or ‘‘y is an element of x’’ being defined in terms of it, as ‘‘there are some things that x is the set of, and y is one of them.’’ Such a step was actually taken in a paper by Stephen Pollard (1996) some years before my own, of which I only belatedly
became aware, along with Shapiro (1987) and Rayo and Uzquiano (1999). The idea taken from Bernays was that an approach incorporating a so-called reflection principle can provide a simpler
axiomatization than the standard approach to motivating the axioms of ZFC, and permit the derivation of some further so-called large-cardinal principles that are widely accepted by set theorists,
though they go beyond ZFC. The original Bernays approach had the disadvantage of involving ‘‘classes’’ over and above sets, and of requiring a somewhat artificial technical condition in the
formulation of the reflection principle. Boolos’s plural logic was subject to the objection that, like any version or variant of second-order logic, it lacks a complete axiomatization. I aim to show
how the combination of Boolos with Bernays neutralizes these objections.
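For readers unfamiliar with reflection principles, the simplest version is the Lévy reflection schema, provable in ZF; the following is a standard textbook statement, given here for orientation only, not Bernays’s stronger formulation:

```latex
% For each formula \varphi(x_1,\dots,x_n) of the language of set theory,
% ZF proves:
\forall x_1 \cdots \forall x_n \,\exists \alpha\,
  \bigl( x_1,\dots,x_n \in V_\alpha \;\wedge\;
         (\varphi(x_1,\dots,x_n) \leftrightarrow \varphi^{V_\alpha}(x_1,\dots,x_n)) \bigr)
```

Here φ^{V_α} is φ with its quantifiers relativized to the stage V_α of the cumulative hierarchy. Bernays’s principle strengthens this by reflecting formulas with class (or, in the combined approach, plural) variables, and it is that strengthening which yields large-cardinal consequences.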
FOUNDATIONS OF MATHEMATICS: LOGICISM
‘‘Logicism: a new look’’ (previously unpublished) provides a concise, semipopular introduction to two alternative approaches to foundations each of which I have examined more fully and technically
elsewhere. Each represents a version of the old idea of logicism, according to which mathematics is ultimately but a branch of logic. Computational facts such as 2 + 2 = 4, on this view, become
abbreviations for logical facts; in this case, the fact that if there exists an F and another and no more, and a G and another and no more, and nothing is both an F and a G, and something is an H if
and only if it is either an F or a G, then there exists an H and another and yet another and still yet another, but no more. One new idea derives from Richard Heck. Frege, the founder of modern logic
and modern logicism, proposed to develop arithmetic in a grand system of logic of his devising. That system is, in modern notation and to a first approximation, a form of second-order logic, with
axioms of comprehension and extensionality,
∃X ∀x (Xx ↔ φ(x))
∀X ∀Y (∀z (Xz ↔ Yz) → (φ(X) ↔ φ(Y)))
supplemented by an axiom to the effect that to each second-order entity X there is associated a first-order entity ext(X) in such a way that we have
∀X ∀Y (ext(X) = ext(Y) ↔ ∀z (Xz ↔ Yz)).
Russell showed that a paradox arises in this system, and also introduced the idea of imposing a restriction of predicativity on the comprehension axiom, assuming it only for formulas f(x) without
bound class variables. Russell proposed a great many other changes, and his overall system diverged greatly from Frege’s. Heck was the first to consider closely what would happen if one made only the
one change just described, and he showed that the resulting system, though weak (and in particular consistent), is strong enough for the minimal arithmetic embodied in the system known in the
literature as Q to be developed in it. So a bare minimum of mathematics can be developed on a predicative logicist basis in the manner of Frege. (More technical details as to what can be accomplished
along these lines are provided in my book Fixing Frege (Burgess 2005b). Since the book and the paper were written there have been important advances by Mihai Ganea and Albert Visser.)
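The logicist paraphrase of 2 + 2 = 4 sketched above can be written out in first-order notation. The following rendering is my own transcription of the prose, with F, G, and H as in the text (exactly two Fs, exactly two Gs, disjoint, and H their union, whence exactly four Hs):

```latex
\[
\begin{aligned}
&\exists x\,\exists y\,\bigl(Fx \land Fy \land x \neq y \land \forall z\,(Fz \to z = x \lor z = y)\bigr) \\
{}\land{}\; &\exists x\,\exists y\,\bigl(Gx \land Gy \land x \neq y \land \forall z\,(Gz \to z = x \lor z = y)\bigr) \\
{}\land{}\; &\neg\exists x\,(Fx \land Gx) \;\land\; \forall x\,\bigl(Hx \leftrightarrow (Fx \lor Gx)\bigr) \\
{}\to{}\; &\exists x\,\exists y\,\exists z\,\exists w\,\bigl(Hx \land Hy \land Hz \land Hw \land {} \\
&\quad x \neq y \land x \neq z \land x \neq w \land y \neq z \land y \neq w \land z \neq w \land {} \\
&\quad \forall v\,(Hv \to v = x \lor v = y \lor v = z \lor v = w)\bigr)
\end{aligned}
\]
```

The final conjunct is the "but no more" clause: the four Hs exhibited exhaust H.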
Russell’s version of logicism was opposed by Brouwer’s intuitionism and also by Hilbert’s formalism. The latter was consciously modeled on instrumentalist philosophies of physics, according to which
physical theory is a giant instrument for deriving empirical predictions, though theoretical terms and laws in physics in general do not admit direct empirical definitions or meaning. Hilbert’s
philosophy of mathematics can be represented by a simple proportion: computational : mathematics :: empirical : physics.
For Hilbert, ‘‘real’’ mathematics consists of basic computational facts like 2 + 2 = 4, and the rest of mathematics is merely ‘‘ideal,’’ with an instrumental value for deriving computational results,
but no direct computational meaning. My late colleague Dick Jeffrey proposed instead that one should think of mathematics as being logical in the same sense in which physics is empirical: the data of
mathematics are logical, as those of physics are empirical, though there can be no question of defining all mathematical notions in strictly logical terms, or all physical notions in strictly
empirical ones. Mathematics becomes, on this view, a giant engine for generating logical results, as physics is a giant engine for generating empirical results. Hilbert’s proportion is modified by
Jeffrey so that it reads thus: logical : mathematics :: empirical : physics.
The connection between the Jeffrey idea and the Heck idea is that predicative logicism provides enough mathematics to connect the basic computational facts that figure in the Hilbert proportion with
the logical facts that figure in the Jeffrey proportion. So, though predicativist logicism falls far short of the whole of mathematics it would be possible to regard it as providing the data for
mathematics. The question I raise about all this is whether the engine is doing any real work: do sophisticated mathematical theories (such as standard Zermelo–Fraenkel set theory or the proposed
Boolos–Bernays set theory) actually make available any more logical ‘‘predictions’’ in the way sophisticated theories in physics make available more empirical predictions? I show how (and in what
sense) certain results of Julia Robinson and Yuri Matiyasevich in mathematical logic yield an affirmative answer to this question. (More technical details are provided in a forthcoming paper
‘‘Protocol Sentences for Lite Logicism.’’)
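For readers who have not met it, the system Q mentioned above (Robinson arithmetic) is standardly axiomatized as follows; this summary is taken from the general logic literature, not from the paper under discussion:

```latex
\[
\begin{aligned}
&\forall x\,(Sx \neq 0) \qquad
\forall x\,\forall y\,(Sx = Sy \to x = y) \qquad
\forall x\,\bigl(x \neq 0 \to \exists y\,(x = Sy)\bigr) \\
&\forall x\,(x + 0 = x) \qquad
\forall x\,\forall y\,\bigl(x + Sy = S(x + y)\bigr) \\
&\forall x\,(x \cdot 0 = 0) \qquad
\forall x\,\forall y\,(x \cdot Sy = x \cdot y + x)
\end{aligned}
\]
```

Q is finitely axiomatized and has no induction scheme, which is why it counts as a bare minimum of arithmetic.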
Mathematics, Models, and Modality
MODELS AND MEANING
‘‘Tarski’s tort’’ (the other previously unpublished item in the collection) is a sermon on the evils of confusing, under the label ‘‘semantics,’’ a formal or mathematical theory of models with a
linguistic or philosophical theory of meaning. Tarski’s infringing on the linguists’ trademark ‘‘semantics,’’ and transferring it from the theory of meaning to the theory of models, encourages such
confusion, which has several potentially bad consequences. Such a confusion may lead, on the one hand, to erroneous suspicions that many ordinary locutions involve covert existential assumptions
about dubious entities. (For example, one may fall into a fallacy of equivocation and argue that since possible worlds are present in the model theory of modal logic, they are therefore present in
the semantics of modality, and are therefore present in the meaning of modal locutions.) Such a confusion may lead, on the other hand, to unwarranted complacency about the meaningfulness of dubious
notions. (For example, one may fall into a different fallacy of equivocation and argue that since quantified modal logic has a rigorous model theory, it therefore has a rigorous semantics, and
therefore has a rigorous meaning.) Confusion of models and meaning under the label ‘‘semantics’’ may also give undeserved initial credibility to truth-conditional theories of meaning, through their
mistaken association with the prestigious name of Tarski. One point on which I think Dummett is entirely right is rejection of the truth-conditional theory of meaning, and insistence that meaning
must be explained in terms of rules of use, though my reasons for holding this view are rather different from Dummett’s. (For me, perhaps the most incredible feature of the truth-conditional theory
is the assumption that truth is an innate idea, possession of which is a prerequisite for all language-learning. I find much more plausible the suggestion that the idea of truth is acquired at the
time the word ‘‘true’’ is acquired, and that acquisition of the idea consists in internalizing certain rules for the use of the word.) Such a view leads naturally to the ‘‘inconsistency theory’’ of
truth advocated in different forms by my teacher Charles Chihara, my student John Barker, and (not under any influence of mine) my son Alexi Burgess. There being by Church’s theorem no effective test
for the inconsistency of rules, it would be a miracle if all the rules we ever internalized were consistent. The simplest and most natural rules for the use of ‘‘true,’’ permitting inference back and
forth between p and it is true that p, are inconsistent. Acceptance that these inconsistent rules of use are the ones we internalize when we acquire the word ‘‘true’’ and therewith the idea of truth
provides the simplest and
most natural explanation of the intractability of the liar and related paradoxes, an explanation favored not only by the persons already mentioned, but by Tarski himself. All these issues are touched
on in ‘‘Tarski’s tort,’’ though none is argued in all the detail it deserves. (Some of the issues were previously aired in Burgess (2002b) and elsewhere.) The warnings about fallacies of equivocation
are pertinent to other papers in the second part of the collection, which is one reason for placing this paper first in that part.
MODELS AND MODALITY
Nothing is more important in approaching modal logic than to bear constantly in mind the distinction between two kinds of necessity: metaphysical necessity – what could not have been otherwise – and
logical necessity – what it would be self-contradictory to deny. Modal logic has been characterized from early on by a great proliferation of systems, even at the level of sentential logic. In a way
this is all to the good, since different conceptions of modality, once one has learned to distinguish them, may call for different formal systems. But we do need to distinguish the different notions
before we can meaningfully ask which system is right for which notion. And though philosophers and logicians nowadays are more aware of the distinctions among various kinds of modality than they were
formerly, the problem of determining which formal system is appropriate for which conception of modality is one that still has received surprisingly little attention. In ‘‘Which modal logic is the
right one?’’ (1999) I take up this question for the case of the original conception of necessity of C. I. Lewis, the founder of modern modal logic, for whom necessity was logical. And the first thing
that needs to be said about logical modality is that it comes in two distinguishable kinds, a ‘‘semantic’’ or model-theoretic notion of validity and a ‘‘syntactic’’ or proof-theoretic notion of
demonstrability. (For first-order logic demonstrability and validity coincide in extension, by the Gödel completeness theorem, but they are still conceptually distinct; for other logics they need
not coincide even in extension.) The former makes necessity a matter of truth by virtue of logical form alone, the latter a matter of verifiability by means of logical methods alone. The common
conjecture is that the system known as S5 is the correct logic for the former, and that known as S4 for the latter. The well-known Kripke model theory for modal logic is a useful tool, but hardly in
itself provides a complete proof of either conjecture. (As the originator of this model theory said, ‘‘There is no mathematical substitute for philosophy.’’) As it happens, the conjecture about S5
admits a fairly easy proof,
which I expound, while for the conjecture about S4 only partial results are available, which I explore. Turning from logical to metaphysical necessity, no single tool is more useful for understanding
the logical aspects of the latter than the analogy between mood and modal logic on the one hand, and tense and temporal logic on the other. One of the older puzzles about metaphysical modality is
Frederic Fitch’s paradox of knowability, which purports to demonstrate the incoherence of the view that anything that is true could be known. ‘‘Can truth out?’’ (2006) first considers what light can
be shed on this puzzle by looking at a temporal analogue, and then by applying Arthur Prior’s branching-futures logic, in which modal and temporal elements are combined. As with the previous paper, a
certain amount of progress is possible, but a complete solution remains elusive.
MODALITY AND REFERENCE
Logical necessity was originally what modal logicians had generally meant the box symbol to represent, and I believe that Quine was entirely correct in asserting that with that understanding of the
symbol, quantifying into modal contexts makes no sense. Quine’s complaint was that to make nontrivial sense of ∃x □Fx one must make sense of so-called de re modality, of the notion of an open sentence
Fx being necessarily true of a thing, and this is impossible (or at any rate, has not been done by the proponents of quantified modal logic) if ‘‘necessarily true’’ is to mean ‘‘true by virtue of
meaning.’’ For a thing, as opposed to an expression denoting a thing, has not got a meaning for anything to be true by virtue of. Truth by virtue of meaning is an inherently de dicto notion,
applicable to closed sentences. Quine underscored his point by illustrating the difficulty of reducing de re to de dicto modality. One can’t say □Fx is true of the object b if and only if □Ft is
true, where t is a term denoting b, because □Ft may be true for some terms denoting the object and false for other terms denoting the same object. The early response of modal logicians to Quine’s
critique, by which I mean the responses prior to Kripke’s ‘‘Naming and necessity’’ (1972), involved an appeal, not to a distinction between metaphysical and logical modality, but rather to (purely
formal ‘‘semantics’’ and/or to) the magical properties of Russellian logically proper names. In simplest terms, the ‘‘solution’’ would be that □Fx is true of b if and only if □Fn is true, where n is
a ‘‘name’’ of b, it being assumed that if □Fn is true for one name it will be true for all. I believe this line of response is a total failure,
and anyone acquainted with the views of Mill ought to have been aware that it must fail. For it will be recalled that Mill, before Russell, held that a name has only a denotation, not a connotation.
He also held, like the modal logicians who identified □ with truth by virtue of meaning, that all necessity is verbal necessity, deriving from relations among the connotations of words.
responding to Quine ought to have remembered is that, having committed himself to those two views, he inevitably found himself committed to a third view, that there are no individual essences: ‘‘all
Gs are Fs’’ may be necessarily true because being an F may be part of the connotation of G, but ‘‘n is an F ’’ cannot be necessarily true, because n has no connotation for being an F to be part of.
Positing Millian/Russellian ‘‘names’’ may permit the reduction of de re to de dicto modality, where the relevant dicta involve such names, but only at the cost of depriving de dicto modality, where
again the dicta involve such names, of any sense – so long as one continues to read □ as truth by virtue of meaning. What is now the fashionable view evaluates the early responses to Quine much more
positively than I do, when it does not outright read Kripke’s ideas back into earlier texts. That Quine was right as against his early critics is the view that I, going against fashion, defend in
‘‘Quinus ab omni naevo vindicatus’’ (1998). The origin of the curious title is explained in the article. Closely linked with the issue of metaphysical versus logical necessity is the question of the
status of identities linking proper names, as in ‘‘Hesperus is Phosphorus.’’ This question has been the topic of an immense body of literature in philosophy of language. Like Kripke, I on the one
hand reject descriptivist theories of proper names, but on the other hand equally reject ‘‘direct reference’’ theories. I am attracted to a third view based on distinguishing two senses of ‘‘sense,’’
mode of presentation versus descriptive content, a view rather tentatively (and certainly non-polemically) put forward in connection with Kripke’s Puzzling Pierre problem in ‘‘Translating names’’
(2005). But returning for a moment to ‘‘Quinus,’’ since some readers like nothing better than polemics between academics, and others like nothing less, all potential readers should be informed in
advance that ‘‘Quinus’’ is as polemical as anything I have ever written (though much of the polemic is relegated to footnotes), and several degrees more so than anything else in this collection. Even
the explanation why the paper is polemical must itself inevitably be somewhat polemical, and so I will relegate it to a parenthetical paragraph which those averse to polemics may skip, along with the
paper itself. (My paper is explicitly a response to a paper by Ruth Barcan Marcus from the early 1960s, but it is also implicitly a response to a widely
circulated letter by the same writer from the middle 1980s, discussed in the editorial introduction to Humphreys and Fetzer (1998). This letter is the original source for the claim that Kripke’s
ideas were taken without acknowledgment from the early Marcus paper. The response to such allegations is that Kripke could not have stolen his ideas from the indicated source, since neither those
important and original contributions nor any others were present there to be plagiarized. In my paper I do not mince words in presenting this defense. Marcus’s insinuations are more directly
addressed in my paper (Burgess 1996). The contents of her letter, with elaborations but without acknowledgment of that letter as a source, reappear in the work of one of its many recipients, Quentin
Smith. His version is addressed in my paper (Burgess 1998a).)
HERMENEUTIC CRITICISM OF CLASSICAL LOGIC: RELEVANTISM
In Burgess (1992) I offered a qualified defense of classical logic, leaving plenty of room for additions and amendments once one moves beyond the realm of mathematics. I introduced in the paper a
distinction that seems to me crucial in evaluating certain criticisms of classical logic, namely, the distinction between prescriptive criticism, according to which classical logicians have correctly
described the incorrect logical practices of classical mathematicians, and descriptive criticism, which maintains that classical logicians have incorrectly described the correct logical practices of
classical mathematicians. The distinction parallels that between revolutionary and hermeneutic nominalism. The remaining two items in the collection discuss examples of the two types of
anti-classical logics. (Both have modal aspects or admit modal interpretations, and certainly can be studied by some of the methods, notably Kripke models, developed for modal logic, and insofar as
this is so may be squeezed under the ‘‘models and modality’’ heading for the second part of the collection.) Much descriptive criticism of classical logic, especially that from the old ordinary
language school, was essentially anti-formal. The best example of descriptive criticism in the service of a rival formal logic was provided by the ‘‘relevance’’ logic of A. R. Anderson and Nuel D.
Belnap, Jr. in its original form, back when it was a more or less unified philosophical school of thought, denouncing and deriding classical logic, and recommending one or the other of two specific
candidates, the systems E and R, as replacement. The enterprise has since been renamed ‘‘relevant’’ logic and devolved into the
study of a loose collection of formal systems linked by family resemblances, with quite varied intended or suggested applications. ‘‘Relevance: a fallacy?’’ (1992) was devoted to presenting
counterexamples to one key early claim of relevance logic, never up to the time I was writing explicitly retracted in print. Relevantists rejected, as ‘‘a simple inferential mistake,’’ the inference
from A ∨ B and ¬A to B. But inference from ‘‘A or B’’ and ‘‘not A’’ to B is common in classical mathematics and elsewhere. Anderson gave little attention to the resulting tension, but Belnap attempted
to resolve it by claiming that ‘‘or’’ in ordinary language generally means not the extensional ∨ but some intensional +. It was this specific claim I challenged. I do not have a globally negative
view of the relevantist enterprise. Not only has there been some impressive technical work by Saul Kripke, Kit Fine, Alasdair Urquhart, Harvey Friedman, and others, but the ‘‘first degree’’ and
‘‘pure implicational’’ fragments of several relevantist systems do have coherent motivations, as do various related logics, such as Neil Tennant’s idiosyncratic version of relevantism or ‘‘logical
perfectionism.’’ I do not, however, think either of the main relevantist systems E or R as a whole has any coherent motivation, and more importantly, I do not think there was any merit in the
original relevantist criticism and caricature of classical logic. My paper prompted replies in the same journal by Chris Mortensen and Stephen Read, to which I in turn responded, again in that
journal. Ironically, though they belong to a group that prides itself on its sense of ‘‘relevance,’’ the two writers who directly replied to my paper simply could not confine themselves to addressing
the specific issue I had treated, but insisted on offering, as if this somehow refuted my claims, expositions of motivations for relevantism quite different from those of Anderson and Belnap (and
quite different from each other). This led me to write, ‘‘The champion of classical logic faces in relevantism not a dragon but a hydra.’’ It also led me to reiterate my original point and present a
variety of further counterexamples, drawn from several sources. The polemics do not seem to me worth reprinting, but I do refer any reader not convinced by the two examples in my original paper to
the first list of examples (borrowed from authors ranging from E. M. Curley to Saul Kripke) in §2 of the first of my replies to critics (Burgess 1983c). My second reply (Burgess 1984b) contained one
more example: by the regulations of a certain government agency, a citizen C is entitled to a pension if and only if C either satisfies certain age requirements or satisfies certain disability
requirements. An employee Z of the agency is presented with documents establishing that C is disabled. Z transmits to fellow
employee Y the information that C is entitled to a pension (i.e. is either aged or disabled). Y subsequently receives from another source the information that C is not aged, and concludes that C must
be disabled.
REVOLUTIONARY CRITICISM OF CLASSICAL LOGIC: INTUITIONISM
While it may be contentious whether relevantism really does provide an example of descriptive criticism of classical logic, it is beyond controversy that intuitionism provides an example of
prescriptive criticism. Adherence to intuitionistic logic would certainly require major reforms in mathematics. Very likely acceptance of the verificationist concerns that motivate contemporary
intuitionism would require still more dramatic reforms of a nature we cannot yet quite take in, in empirical science. For in the empirical realm we have to contend with two phenomena that do not
arise in mathematics: first, we generally have to deal not with apodictic proof but with defeasible presumption; second, it may happen that though each of two assertions may be potentially
empirically testable, performing the operations needed to test one may preclude performing the operations needed to test the other. (A DNA sample, for instance, may be entirely used up by whichever
of the two tests we choose to perform first; nothing analogous ever happens with operations on numbers.) The ultimate verificationist logic may well have to combine features of intuitionistic,
nonmonotonic, and quantum logics. Michael Dummett is perhaps the single most influential representative of anti-naturalism in contemporary philosophy, though his contributions extend far beyond that
role. (He is, among many other things, the leader of Frege studies, and in that capacity motivated, among many other things, the exploration of the predicativist variant of Frege’s theory discussed
in the last paper of Part I.) His paper (Dummett 1973a) inaugurated a new era in the classical-intuitionist debate over logic and mathematics, and was the font from which what is now a vast stream of
‘‘anti-realist’’ literature first sprang. A noteworthy feature of Dummett’s approach is that, like Brouwer but unlike almost every writer on intuitionism in-between, he takes the considerations that
motivate intuitionism to apply not just to mathematics but to all areas of discourse. Mathematics is special only in that we have a better idea of what a revision of present practice would amount to
in that area than in any other. For that reason I have placed ‘‘Dummett’s case for intuitionism’’ in this part of the collection, rather than the part on philosophy of mathematics
specifically. But to repeat what I said in the introduction to Part I, the division of the collection into parts is not to be taken too seriously. ‘‘Dummett’s case’’ advances two kinds of
countercriticisms of the criticism of classical logic that appears in Dummett (1973a). In the first part of the paper, making an explicit mention of Noam Chomsky and an implicit allusion to John
Searle, I object to Dummett’s unargued behaviorist assumptions. In this part of the paper I am arguing as devil’s advocate, since I do in the end agree with Dummett in rejecting truth-conditional
theories of meaning. Some followers of Dummett have objected to the label ‘‘behaviorist’’; but I think this is largely a terminological issue. If I say that Sextus, Cicero, Montaigne, Bayle, and Hume
were all skeptics, I do not imply that their views were identical; likewise if I say that Watson, Skinner, Quine, Ryle, and Dummett are all behaviorists. It is clear that Dummett’s position, however
one labels it, remains light-years away from Searle’s, let alone Chomsky’s. What I object to in the second half of the paper is the lack of explicitness about how the transition from is to ought,
from the premise that a truth-conditional theory of meaning is incorrect for classical mathematics to the conclusion that classical mathematics ought to be revised, is supposed to be made. Hints are
thrown out, to be sure, which have been developed in different ways by Dummett himself in later works (especially The Logical Basis of Metaphysics) and in more formal terms by Dag Prawitz, Neil
Tennant, and others. The most explicit proposals suggest that the meanings of logical operators should be thought of as constituted by something like the introduction and/or elimination rules for
those operators in a natural deduction system. Then intuitionistic logic is claimed to be better than classical logic because there is a better ‘‘balance’’ between its introduction and elimination
rules. In ‘‘Dummett’s case’’ I largely confine myself to saying that this aesthetic benefit of ‘‘balance’’ could hardly outweigh considerations related to the needs of applications, objecting to
Dummett’s apparent indifference to the latter in advocating revision of mathematics. How much of the standard mathematical curriculum for physicists or engineers can be developed within this or that
restrictive finitist, constructivist, predicativist, or whatever framework is a topic that has been fairly intensively investigated by logicians (though to this day I know of no serious study of how
a finitist, constructivist, or predicativist should interpret mixed contexts involving both mathematical and physical objects and properties, something nominalists by contrast take to be of central
concern). Dummett simply omits to address this issue at all in his key paper, and that omission would, I think, raise the eyebrows of any logician.
I have recently written a sequel to ‘‘Dummett’s case’’ (Burgess 2005e), but on the advice of an anonymous referee have decided not to include it here, as its tone may in places offend Dummettians
(even more so than that of my older paper, if that is possible). The main point of the paper still seems to me worth making: neither the view that the meaning of each logical particle is constituted
by its introduction rule (for example, the rule ‘‘from A and B to infer A & B’’), nor the view that it is constituted by the introduction rule together with the corresponding elimination rule (for
example, ‘‘from A & B to infer A and to infer B’’) is tenable if ‘‘meaning’’ is supposed to be what guides use. For we constantly ‘‘introduce’’ and ‘‘eliminate’’ conjunctions, say, by quite other
means than conjunction introduction and elimination (for instance, we may arrive at a conjunction by universal instantiation and by modus ponens, which is to say, by the elimination rules for the
universal quantifier and for the conditional). And these steps cannot be justified from the point of view of one who really, truly, and sincerely takes the sole direct guide to the use of ‘‘&’’ to be
the introduction and/or elimination rules for that connective. Nor – and this is the crucial point of the paper – can any metatheorem provide an indirect justification, since the proof of the
metatheorem inevitably involves introducing and eliminating conjunctions according to rules that, at least until the proof of the metatheorem is complete, have not yet been justified from the point
of view in question. (Moreover, the application of any metatheorem to any particular case would anyhow require universal instantiation and modus ponens.) More generally, my paper points out how
frequently writers (including not only neo-intuitionists, but relevantists and nominalists) who profess to reject certain principles of classical logic (or mathematics), and appeal to metatheorems
supposed to show that nothing much is lost thereby, can be caught using the supposedly rejected principles in the proofs of those very metatheorems. To my mind, this is one striking illustration of
how difficult it is to be a genuine dissenter from classical logic and mathematics.
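The point about conjunctions being ‘‘introduced’’ by quite other means than conjunction introduction can be made concrete in a proof assistant. The following Lean 4 sketch is my own illustration, not drawn from the papers discussed:

```lean
-- The official introduction and elimination rules for ∧:
example (A B : Prop) (ha : A) (hb : B) : A ∧ B := And.intro ha hb
example (A B : Prop) (h : A ∧ B) : A := h.left
example (A B : Prop) (h : A ∧ B) : B := h.right

-- A conjunction obtained instead by ∀-elimination and modus ponens,
-- i.e. by the elimination rules for the quantifier and the conditional,
-- with ∧-introduction nowhere in sight:
example (P : Nat → Prop) (A B : Prop)
    (h : ∀ n, P n → A ∧ B) (n : Nat) (hn : P n) : A ∧ B :=
  (h n) hn
```

The last example is exactly the pattern mentioned above: the conjunction A ∧ B is reached without ever invoking the rule that is supposed to constitute its meaning.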
Numbers and ideas
Philosophy is a subject in which there is very little agreement. This is so almost by definition, for if it happens that in some area of philosophy inquirers begin to achieve stable agreement about
some substantial range of issues, straightaway one ceases to think of that area as part of ‘‘philosophy,’’ and begins to call it something else. This happened with physics or ‘‘natural philosophy’’
in the seventeenth century, and has happened with any number of other disciplines in the centuries since. Philosophy is left with whatever remains a matter of doubt and dispute. Philosophy of
mathematics, in particular, is an area where there are very profound disagreements. In this respect philosophy of mathematics is radically unlike mathematics itself, where there are today scarcely
ever any controversies over the correctness of important results, once published in refereed journals. Some professional mathematicians are also amateur philosophers, and the best way for an observer
to guess whether such persons are talking mathematics or philosophy on a given occasion is to look whether they are agreeing or disagreeing. One major issue dividing philosophers of mathematics is
that of the nature and existence of mathematical objects and entities, such as numbers, by which I will always mean positive integers 1, 2, 3, and so on. The problem arises because, though it is
common to contrast matter and mind as if the two exhausted the possibilities, numbers do not fit comfortably into either the material or the mental category. Clearly numbers are not material bodies.
The so-called numbers on the front of a house, marking its street address, may indeed be made of brass or wood or plastic. But these ‘‘numbers’’ are not the numbers we speak of when we say that two
is an even number, or that three is an odd number, or that both are prime numbers. Rather, they are numerals, or names of numbers. Almost equally clearly, numbers are not mental in the way that, say,
dreams or headaches are. They are not private to an individual. One does
not speak of my number two and your number two, his number two and her number two, but simply of the number two. The individual, say a school child doing a simple sum, experiences the numbers as
something external, about which he or she is not free to think whatever he or she wants. But if numbers are not material bodies or private experiences, what (if anything) are they? Among professional
academic philosophers, which is to say university professors of the subject, the most commonly held views are two, for want of better terms called realism and nominalism. Realism maintains that
numbers exist, and are of a very different nature from human ideas: indeed, they differ quite as much from human ideas as they do from material bodies. They are abstract entities, to which it makes
no sense to ascribe a position in space or date in time, and which are not causally active or acted upon. There is nowhere to go to look for a number, and you cannot do anything to a number, any more
than a number can do anything to you. Nominalism maintains that numbers do not exist, and that theorems of mathematics asserting the existence of numbers are untrue, just like fairy tales asserting
the existence of gnomes. To be sure, much of mathematics is applicable in science and everyday life in a way that fairy tales generally are not, but that, according to nominalists, only shows it is a
useful fiction, not that it is non-fiction. There are problems for both opposing philosophical views, and the problems of each are cited by the adherents of the other as reasons for embracing it
instead. Formerly there were also many philosophers who maintained a third view, conceptualism or idealism, according to which numbers exist, but only as shared human concepts or ideas. The
view has traditionally been popular among anthropologists and other social scientists, whose special subject matter is precisely the shared ideas of a culture. They point out that taking numbers to
be such shared or communal ideas sufficiently explains why the school child doing a simple sum does not feel free to make up an answer at will. If numbers are ideas shared by a culture, no one member
of that culture has the authority to change the rules of addition, any more than to change the rules of grammar of the culture’s language. The anthropological view has also found adherents among
mathematics educators. Rather more surprisingly, the same view has won adherents among the minority of professional mathematicians who are also amateur philosophers.1
1 The classical expression of the anthropological view is that of White (1947). For a recent endorsement by a mathematician, see Hersh (1997), a book that makes a professional philosopher’s hair stand on end.
Conceptualist and idealist views, however, were subjected along with other nineteenth-century views to a scathing critique by the late nineteenth-century German mathematician and philosopher Gottlob
Frege.2 Largely as a result of that critique, the anthropological view today has virtually no adherents among professional academic philosophers. Its rejection is one of the rare cases of general
agreement and consensus on an issue in philosophy. Precisely because there is such general agreement, philosophers seldom stop to explain, in language more modern than Frege’s, just what is wrong
with the view that so many anthropologists, sociologists, psychologists, mathematics educators, and even mathematicians have found attractive. It is this task of explanation that I will be
undertaking in the present essay, using an example of a kind that definitely would not have been used by Frege.

2
Let us begin by considering the proposition that Bigfoot, also known as the Sasquatch – a cousin of the Abominable Snowman or Yeti – exists in the realm of shared human ideas and concepts. Now
certainly there is something in the neighborhood that exists in the realm of shared human ideas and concepts, namely, the shared human idea or concept of Bigfoot. This is the idea of a large, hairy,
humanoid creature inhabiting the wilder parts of the Pacific Northwest, from northern California to British Columbia. There are even people who claim to have sighted individual Bigfeet, and to have
formed ideas of these individuals, even to the point of giving them names like ‘‘Harry’’ or ‘‘Harriet.’’ The idea of an individual Bigfoot includes the traits that are common to all Bigfeet according
to the general idea of Bigfoot, but also more specific elements: for instance, Harry is male and Harriet is female. These ideas of individual Bigfeet are less widely shared than the idea of the
species, but we may suppose they are at least shared among members of the International Society for Cryptozoology, who take a special interest in such things. The majority view among zoologists is
that there do not, in fact, exist any large, hairy, humanoid creatures, and that the alleged sightings of Harry, Harriet, and other individual Bigfeet were either illusions or hoaxes. But I ask you
to join me in assuming, just for the moment, that the majority is wrong, and that creatures of the kind indicated, including Harry and Harriet, do exist. On this assumption, I will argue, two things
should be clear.

[Footnote 2: See Frege (1884), English translation by Austin (1960). The critical portions (the part of the book relevant to the present essay) are reprinted in Benacerraf and Putnam (1983).]
Mathematics, Models, and Modality
The first is that Harry, Harriet, and other large, hairy, humanoid creatures inhabiting the wilder parts of the Pacific Northwest are very different sorts of things from shared human ideas and
concepts, and in particular are very different sorts of things from the ideas and concepts of Harry, of Harriet, and of Bigfoot in general. They differ in absolutely fundamental respects, for
instance, in their location in space and time. Let us consider space, for instance. (Similar considerations would apply to time.) It is not clear whether or where a shared human idea or concept
should be thought of as located in space, but presumably if it is located anywhere, it is located where the human beings who share it are located. Thus if the International Society for Cryptozoology
holds its annual convention on the banks of Loch Ness, the idea of Bigfoot in general, and the ideas of Harry and Harriet in particular, are located mainly in Scotland. Harry, Harriet, and the rest
of their kind, however, are still located in Washington or Oregon or thereabouts. The creatures cannot be the ideas, because the two are located in different places. The creatures differ from the
ideas also in respect of how many of them there are. People have ideas of Harry, Harriet, and several more Bigfeet that have allegedly come into contact with human beings; but there are supposed to
be, according to the minority view I have asked you to assume for the moment, more Bigfeet than just these: more individuals like Harry and Harriet than there are shared human ideas of individual
Bigfeet. So again the creatures cannot be the ideas, since there are more of the former than of the latter. A second point I hope will be clear is that it is the flesh-and-blood creatures, not the
ideas, that are the Bigfeet. The term ‘‘Bigfoot’’ refers to the inhabitants of the wilds of Washington and Oregon, not to the contents of the minds or brains of the cryptozoologists assembled in
Scotland. If we wish to refer to the latter, we must use some other expression than the word ‘‘Bigfoot,’’ such as the phrase ‘‘the idea of Bigfoot.’’ In short, on the minority view, according to
which the flesh-and-blood creatures do exist, the following is the case: Bigfeet, being flesh-and-blood creatures, are not ideas, and are more numerous than the ideas of them and located in a
different place from those ideas.

3

Are things any different on the majority view? It is when one assumes that there are no such flesh-and-blood creatures that some are tempted to say that Bigfoot in general, or Harry and Harriet
in particular, are human ideas. I think this temptation should be rejected.
Let me say straightaway that it would be pointless to object to someone expressing disbelief in Bigfoot by saying, ‘‘Bigfoot exists only in the imagination of the credulous,’’ or something of the
sort. Someone might well say this – I might well say it myself, for that matter, when not talking philosophy – and mean it only as a manner of speaking, as a way of saying ‘‘Bigfoot doesn’t exist at
all, though some credulous persons imagine that it does.’’ The proposition I want to consider, however, is that Bigfoot literally does exist, but only in the realm of shared human ideas and concepts,
where, according to the anthropological view, numbers also have their being. To indicate the reasons why I reject this proposition, suppose the population of some endangered forest or swamp species
falls until there is only one left. So long as this one surviving flesh-and-blood or wood-and-sap organism lives, considerations of the kind already adduced in the case of Bigfoot indicate that it is
the only member of the species, and it is not an idea, from which it follows that the members of the species are not ideas. Now suppose this last survivor also perishes. Are we now to say that the
species still has members, but that the members of the species are now ideas? Should we say that the species has not become extinct but rather has undergone a metamorphosis, transcending its former
carnal or xyline nature, and taken on a conceptual essence: that its members have cast aside their fleshly or wooden bodies, and are now made of whatever ideas are made of? Should we say that the
species has undertaken a migration, abandoning the woods or marshes that were once its home, and occupying now instead a niche in the minds or brains of human subjects? It seems to me about as plain
as anything can be in philosophy – where admittedly things are never as plain as they are in some other disciplines – that this is not what we should say, and that the correct way to describe the
situation is by saying that creatures of this animal or plant species simply no longer exist at all, though of course human ideas about them do exist, and may perhaps continue to exist as long as the
human species does. Likewise in the case of Bigfoot. If the forest creature exists, then Bigfoot is that forest creature, and is something very different from an idea. If the forest creature does not
exist, then Bigfoot is, so to speak, even more different from an idea: for in that case Bigfoot is nothing, while the idea is at least something, and what could be more different than something and
nothing? The case is the same, I maintain, with our shared human ideas and concept of number in general, and of individual numbers such as one or two or three. (Again the individual ideas contain
whatever is contained in the general idea, plus additional distinguishing elements. We no longer
imagine, as did the Pythagoreans, that two is female and three is male, but, for instance, two is even and three is odd.) These ideas are clear enough, I maintain, to indicate that one, two, three,
and the other numbers, if they exist at all, do not have the same sort of spatial or temporal features as human ideas, and above all are more numerous than human ideas could possibly be. Taking first
issues of time and place, mathematics is used throughout science, and mathematical objects and entities are referred to in all its branches, including those like cosmology that deal with times and
places very remote from any inhabited by human beings. Are we to say that a cosmologist’s estimates of the relative numbers of heavy and light elements at a certain stage in the early evolution of
the universe must be wrong, because there were no numbers at all back then, no human beings having yet evolved to create them? Surely not. And then there is the matter of infinity. It is a crucial
feature of the concept of the number system that it has infinitely many elements, that there are infinitely many numbers. But surely human beings have formed ideas or concepts of only finitely many
of them. There simply are not enough human ideas and concepts for each number to be one. Some numbers at least must therefore either enjoy a mode of existence different from that of any human idea,
as realists maintain, or else must simply fail to exist, as nominalists hold. And is it not preposterous to maintain that while one of the pair realism or nominalism gives the correct account of
mathematical existence in the case of some numbers, conceptualism is correct for the rest? Surely the question of the existence and nature of numbers has a uniform answer, and if conceptualism fails
in any case, then it must fail in all.

4
Such, then, are some of the principal reasons why I and almost all professional philosophers of mathematics reject conceptualism, and consider the only real issue to be that between nominalism and
realism. This last issue is far too large to be thrashed out here, but I do wish to say a word about it, and in particular about the character of the realist position, which very often tends to be
misrepresented. Nominalists do not believe in numbers because they cannot see them (or see any visible effects caused by them), and tend to represent their opponents as claiming that they can see
them. According to an old story, Plato was once lecturing in his Academy on his Forms, and was speaking of the forms of ‘‘tableness’’ and ‘‘cupness.’’ Diogenes the Cynic interrupted and said, ‘‘O
Plato, I see the table and the
cup, but the tableness and the cupness I do not see.’’ To this Plato replied, ‘‘Very naturally, Diogenes, since you have eyes, by which material things are perceived, but lack Intellect, by which the
forms are seen.’’3 Nominalists tend to represent their opponents as Platonists, maintaining that if numbers do not emit electromagnetic radiation to which the eye is sensitive, then they must be
emitting something else, perhaps noetic rays, which can be sensed by some other organ, perhaps the pineal gland. This, however, is a misrepresentation of realism. Or at least, I have never known a
single realist who was in any meaningful sense a Platonist. What is actually the case is that anti-nominalists take much more seriously than nominalists the thought that mathematics is a human
creation, since mathematics is a body of theory expressed in language, and language is a human creation. Now creating a language involves creating certain rules for its use. Among these is, I
believe, a rule to the effect that tense and date are not to be applied to mathematical existence assertions. One can say ‘‘There exist infinitely many prime numbers,’’ but to ask ‘‘How many of them
already existed in 1000 BCE, or during the Cenozoic Era?’’ is to commit a kind of grammatical solecism. Nominalists say they are opposed to the view that numbers are ‘‘eternal,’’ existing
‘‘outside of time.’’ But to say that numbers are ‘‘eternal’’ is a misleadingly Platonistic way of putting the simple negative grammatical fact of the inapplicability of tense distinctions in
mathematical contexts. That simple grammatical point is all the realist really believes about the ‘‘timelessness’’ of number. (By contrast with the case of the numbers themselves, it makes perfect
sense to ask whether the idea or concept of prime number had emerged by 1000 BCE – the issue involved would be that of the interpretation of certain Babylonian tablets and Egyptian papyri – and it
makes perfect sense to assert that it had not emerged in the Age of the Dinosaurs. This difference between the ‘‘timeless’’ numbers proper and datable ideas of them was one of the points I was
arguing in rejecting conceptualism.) Likewise, there are certain rules or standards as to what counts as adequate or sufficient to establish or prove a mathematical existence theorem, and by these
rules Euclid’s theorem on the existence of infinitely many prime numbers is as well-established as anything can be. The nominalists assume that they have an understanding of what it would be for a
mathematical object or entity to exist that is independent of ordinary mathematical standards of sufficient proof, by reference to which understanding they can criticize the ordinary mathematical standards.

[Footnote 3: See the life of Diogenes the Cynic in Diogenes Laertius (1925).]

So-called realism is really just skepticism about
the existence of any understanding of what ‘‘existence’’ means in mathematics that is independent of ordinary mathematical standards for evaluating existence proofs. The nominalist denies the
existence of numbers, while the realist denies that the nominalist understands what is meant by ‘‘existence’’ as applied to numbers. Thus the realists think the nominalists are confused. But realists
and nominalists agree that the conceptualists are confused, and while I cannot hope to have convinced anyone by the foregoing very brief remarks that the realists are right as against the
nominalists, I hope I have convinced some of you that realists and nominalists are right in their common opposition to conceptualism.
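Euclid’s theorem that there are infinitely many primes, treated above as the paradigm of a well-established existence theorem, rests on a concrete observation: given any finite list of primes, every prime factor of their product plus one lies outside the list, since dividing by any listed prime leaves remainder 1. A minimal Python sketch of that observation (the function name is my own):

```python
from math import prod

def prime_outside(primes):
    """Given a list of primes, return a prime not in the list.

    Any prime factor of prod(primes) + 1 works, because that number
    leaves remainder 1 on division by each listed prime."""
    n = prod(primes) + 1
    d = 2
    while n % d != 0:  # the smallest divisor greater than 1 is prime
        d += 1
    return d

print(prime_outside([2, 3, 5]))  # 31, since 2*3*5 + 1 = 31 is itself prime
```

Repeating the step starting from [2] alone generates prime after prime without end, which is exactly the content of the existence claim under discussion.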
Why I am not a nominalist
The sum of the proper divisors of 220 is 284, and the sum of the proper divisors of 284 is 220. The Pythagoreans spoke of numbers so related as being amicable. I do not know how this ancient teaching should be
taken, but surely nobody nowadays, except perhaps a stray numerologist or two, would imagine that numbers are literally capable of forming friendships. A number is just not the sort of thing that can
enjoy a social life. And this is but the least of a number’s lacks. A number lacks a position in space, such as tables, chairs, and other material bodies possess. It lacks dates in time, such as
dreams, headaches, and other contents of minds possess. It lacks all visible, tangible, audible properties. In a word, it is abstract. Disbelievers in numbers and other abstract entities or
‘‘universals’’ have come to be called nominalists. Nominalism has always attracted philosophers of the hard-headed, no-nonsense type. But does it not conflict with modern science, which speaks the
language of abstract mathematics?

1
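As an aside, the amicable pair in this essay’s opening lines can be checked by direct computation. A minimal Python sketch (the helper name aliquot_sum is my own choice):

```python
def aliquot_sum(n):
    """Sum of the proper divisors of n, i.e. the divisors less than n itself."""
    return sum(d for d in range(1, n) if n % d == 0)

# 220 and 284 form the classic amicable pair:
# each is the sum of the other's proper divisors.
print(aliquot_sum(220))  # 284
print(aliquot_sum(284))  # 220
```

Perfect numbers are the fixed points of the same function: aliquot_sum(6) returns 6.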
Some nominalists concede that their philosophy of mathematics conflicts with science by implying that science, when it speaks the language of mathematics, is not speaking truly. These nominalists
adopt an instrumentalist philosophy of science, according to which science is just a useful mythology, and no sort of approximation to or idealization of the truth. Truth is to be sought, rather, in
a philosophy prior and superior to science. The position of the well-known nominalist Nelson Goodman is best understood as a subtle and sophisticated variation on instrumentalism. For Goodman,
science is less a useful fiction than useful nonsense. But whereas a
straightforward, simple-minded instrumentalist would be willing to label science as untrue and let it go at that, Goodman holds that the philosopher ought at least to attempt to give some sense to
the scientist’s otherwise senseless productions by reconstructing them nominalistically:

The nominalist does not presume to restrict the scientist. The scientist may use Platonistic class
constructions, complex numbers, divination by inspection of entrails, or any claptrappery that he thinks may help him get the results he wants. But what he produces then becomes raw material for the
philosopher, whose task is to make sense of all this: to clarify, simplify, explain, interpret in understandable terms . . . Nominalism is a restraint the philosopher imposes on himself, just because
he feels he cannot otherwise make real sense of what is put before him. (Goodman 1956, objection vii)
Goodman’s own steps towards nominalistic reconstruction of science (taken jointly with Quine in Goodman and Quine (1947)) never led very far. So presumably for Goodman the bulk of science remains
nonsensical. Most recent philosophers of science, even those nominalistically inclined, have been hostile toward instrumentalist philosophies like Goodman’s for a couple of good reasons. For one
thing, since science is just an outgrowth of common sense, there can be no sharp dividing line between them. The most abstruse theoretical physics is connected in a thousand ways through experimental
and applied science, through engineering and technology, to everyday belief. And much of everyday belief is couched in the vocabulary of mathematics, albeit of a sort more elementary than that which
figures in general relativity theory or quantum mechanics. The philosopher who begins by rejecting theoretical physics as fiction will find no logical place to stop, and in the end will be unable,
without inconsistency and self-contradiction, to accept commonsense belief as fact. For another thing, the behavior of instrumentalists when not consciously philosophizing strongly suggests that their
professed disbelief in science is a sham. Catch them off guard, and you are likely to find them classing the Steady State theory as false, and the Big Bang theory as true, just like the rest of us.
The instrumentalist seems to be ‘‘engaging in intellectual doublethink: taking back in [his] scientific moments what [he] asserts in doing science’’ (Field 1980, p. 2). He seems to be ‘‘an irrational
person . . . who is unwilling to accept the consequences of his own theories’’ (Chihara 1973, p. 63). It is on account of such slippery slope and insincerity objections that instrumentalism is not a
live option for most contemporary nominalists; and it is certainly not a live option for me.
2
Some anti-nominalists have argued that the conflict between nominalism and science is so strong that nothing like modern science as we know it could survive if the nominalist ban on mathematical
abstractions were accepted. Such a position has been reluctantly maintained by the ex-nominalist Quine ever since the failure of his joint attempt with Goodman at nominalistic reconstruction. Such a
position was also maintained, under Quine’s influence, by Hilary Putnam, during his phase of enthusiastic realism.

I have explained early and late that I see no way of meeting the needs of scientific theory . . . without admitting universals irreducibly into our ontology . . . Nominalism . . . is evidently inadequate to a modern scientific system of the world. (Quine 1981, pp. 182–3)

It has been repeatedly pointed out that such a [nominalistic] language is inadequate for the purposes of science . . . The restrictions of nominalism are devastating . . . It is not just ‘‘mathematics’’ but physics as well that we would have to give up. (Putnam 1971, p. 35)
In short, Quine and Putnam have maintained that mathematical objects are scientifically indispensable. The refutation of this thesis has been the first aim of the most prominent recent nominalist
writers, Charles Chihara and Hartry Field. The programs of nominalistic reconstruction developed in their books (Chihara 1973; Field 1980) are reviewed in outline in the Appendix to the present
chapter. Suffice it to say here that Chihara and Field draw on results from advanced research in the foundations of mathematics (predicative analysis, measurement theory, proof theory), and that
Chihara assigns the work normally done by mathematical abstractions to certain modal notions (including that of the possibility-in-principle of inscribing tokens of symbols of a certain formal
language), while Field assigns it to certain spatiotemporal objects (admitting as concrete entities regions of space-time that are irregular, disconnected, and of heterogeneous material content).
Their books cast considerable doubt on the thesis of the scientific indispensability of mathematical objects. Does this suffice to establish nominalism? Chihara and Field seem to think so. While for
many readers the most valuable parts of Chihara’s book will be the chapters on Russell and Poincaré, for the author himself, to judge by his Introduction, what is most important is the attempt to
refute the anti-nominalist arguments of Quine, and some not dissimilar
arguments of Kurt Gödel. Chihara implicitly presumes that a refutation of these arguments is tantamount to a proof of nominalism. As for Field, his book bears the subtitle ‘‘A Defense of
Nominalism,’’ but includes (p. 4) the disclaimer that ‘‘nothing in this monograph purports to be a positive argument for nominalism.’’ The resolution of the paradox lies in Field’s presumption that
nominalism does not need to be defended by positive arguments. He explicitly says that if he can accomplish the negative aim of undercutting the arguments of Quine and Putnam, then he will have
reduced belief in mathematical objects to the status of ‘‘unjustifiable dogma.’’ Thus Field, like Chihara, presumes the burden of proof to be on his lotus-eating ‘‘Platonist’’ opponent. I disagree.
Chihara and Field may have gone a long way toward showing that science could be done without numbers. I maintain, however, that science at present is done with numbers, and that there is no
scientific reason why in future science should be done without them. And thus it is not the (continued) acceptance of mathematical objects, but rather the nominalist’s insistence on their rejection,
that constitutes an unjustified and anti-scientific philosophical dogmatism. Quine and Putnam have been false friends of numbers in making the case for their acceptance seem to depend on a claim of
indispensability. Actually, the burden of proof is on such enemies of numbers as Chihara and Field, to show either: (a) that science, properly interpreted, already does dispense with mathematical
objects, or (b) that there are scientific reasons why current scientific theories should be replaced by alternatives dispensing with mathematical objects. I will call the claim (a) about the proper
interpretation of current science hermeneutic nominalism, and the proposal (b) to replace current science by an alternative, revolutionary nominalism. I have argued that any anti-instrumentalist
nominalism must be either hermeneutic or revolutionary. I will argue that hermeneutic nominalism, judged by the standards of linguistics, is an implausible hypothesis thus far unsupported by
evidence; and that revolutionary nominalism, judged by the standards of physics, is a costly proposal thus far without scientific motivation.

3
If we take everyday beliefs at face value, then we must conclude that natural numbers are posits of common sense dating from prehistoric times. If we take physics even halfway literally, then we must
conclude that science has been committed to complex numbers for well over a century. According to
hermeneutic nominalism, this is all illusion. General relativity theory may seem to make statements about vector-valued functions. Quantum mechanics may seem to make statements about linear
operators. But, in fact, no physical theory asserts or presupposes the existence of such mathematical objects; no branch of science actually posits or commits itself to the existence of abstract
entities. Hermeneutic nominalism is thus a thesis of a type that has recently been described by Saul Kripke:

The philosopher advocates a view in patent contradiction to common sense. Rather than repudiating common sense, he asserts that the conflict comes from a philosophical misinterpretation of common language – sometimes he adds that the misinterpretation is encouraged by the ‘‘superficial form’’ of ordinary speech. He offers his own analysis of the relevant common assertions, one that shows that they do not really say what they seem to say. (Kripke 1976, p. 269)
Let us imagine a laboratory assistant to Lord Kelvin reporting the data in some experiments on the conversion of mechanical to thermal energy. It sounds as if he is speaking of energy-in-joules and
temperature-in-degrees-Kelvin and other such numerical and abstract entities. According to hermeneutic nominalism, he is actually speaking of something completely different: perhaps of possible chalk
marks on possible blackboards (following Chihara). Maybe of so-called basic regions scattered through the vastness of space-time (following Field). Or perhaps of something still less expected and
still more surprising (following some yet unwritten rival to Chihara (1973) and Field (1980)). Now this claim is in itself not very plausible, and it becomes even less so when we reflect that to take
anything like what we find in Chihara’s book or Field’s as an account of what the laboratory technician is saying is to attribute to that technician a tacit knowledge of such topics in foundations of
mathematics as predicative analysis and measurement theory. These subjects did not even exist in Lord Kelvin’s day, and even now they are studied by few pure mathematicians, let alone working
physical scientists and their technical assistants. Kripke’s words (not specifically directed against nominalism by their author) seem appropriate here:

Personally I think such philosophical claims are almost always suspect. What the claimant calls a ‘‘misleading philosophical misconstrual’’ of the ordinary statement is probably the natural and correct understanding. The real misconstrual comes when the claimant continues, ‘‘All the ordinary man really means is . . .’’ and gives a sophisticated analysis compatible with his philosophy.
Certainly the burden of proof is on the proponents of hermeneutic nominalism, who claim to have discovered a radical difference between appearance and reality in scientific discourse. As a thesis
about the language of science, hermeneutic nominalism is, I presume, subject to evaluation by the science of language, linguistics. For I am prepared to dismiss those who write as if,

in addition to . . . everyday or ‘‘garden variety’’ rules of English, capable of being discovered by responsible linguistic investigation carried on by trained students of language, there were also . . . ‘‘rules’’ capable of being discovered only by philosophers. (Putnam 1971, p. 5)
In the current technical jargon of linguistics, the hermeneutic nominalist’s thesis that scientific statements do not really say what they appear to say becomes the hypothesis that their deep
structure differs from their surface structure, while the thesis that such statements are not really about what they appear to be about becomes the hypothesis that certain noun phrases in the surface
structure are without counterpart in the deep structure. Now readers of professional linguistics journals will recognize that hypotheses of this general type (though normally less radical than those
of hermeneutic nominalism) are not seldom entertained by trained students of language. Such readers will also be familiar with the kinds of evidence cited in responsible linguistic investigations to
support such hypotheses. Until some evidence of this kind can be adduced in support of its implausible hypotheses, I for one will be prepared to dismiss hermeneuticism as a desperate device of
‘‘ostrich nominalism.’’

4
It is one thing to observe that matters could equally well have been arranged otherwise than they currently are. It is quite another thing to urge that a rearrangement would constitute an
improvement. To say that the British convention of driving on the left-hand side of the road is no worse than our own convention of driving on the right-hand side is not to advance a criticism of our
traffic laws. ‘‘Science,’’ Putnam tells us, lives ‘‘extremely happily on the rich diet of impredicative sets’’ (1971, p. 56). The work of Chihara and Field suggests that science could survive on more
meager fare, on a diet of inscriptionpossibilities or of spatiotemporal regions. But would science be healthier after such a change of menu?
When scientists abandoned caloric fluid and luminiferous ether, it was because they had discovered alternative theories that were empirically superior, of wider scope and greater accuracy in
predicting the results of observations and experiments. Now the alternative theories concocted by Chihara and Field cannot be claimed to be empirically superior to current theories, for they have
been designed to be empirically equivalent. Will it be urged that those alternatives are somehow pragmatically superior? Their awkward and ungainly character makes it difficult to claim that they are
more convenient and efficient as systematizations of the data of experience. Will it be urged that, despite their unnatural and artificial character, they somehow contribute to clarity, simplicity,
intelligibility, and the like, in ways that matter to working scientists? Something of the sort must be urged if a nominalistic revolution in science is to be motivated. The proviso, ‘‘in ways that
matter to working scientists,’’ is crucial, if a mere instrumentalist opposition to science is to be avoided. It is pointless and futile to urge a revolution in the practice of physicists motivated
only by considerations appealing only to philosophers of a certain type. Physicists are too well aware of the dismal historical record of philosophical interference in science to accept such
dictation from outsiders. Now the avoidance of ontological commitments to abstract entities does not seem to have won recognition in the scientific community as being in itself a goal of the
scientific enterprise on a par with scope and accuracy, and convenience and efficiency, in the prediction and control of experience. It seems, on the contrary, a matter to which most working
scientists attach no importance whatsoever. It seems distinctively and exclusively a preoccupation of philosophers of a certain type. Thus Goodman is able to cite only a few linguists who are
nominalistically inclined, and not one physicist:

Paucity of means often conduces to clarity and progress in science as well as philosophy. Some scientists indeed – for example, certain workers in structural linguistics – have even imposed the full restriction of nominalism upon themselves in order to avoid confusion and self-deception.
One would search the physics journals in vain for any expression of nominalistic qualms and scruples, of reluctance and hesitancy to use mathematical apparatus, of suspicion that such ‘‘Platonistic
claptrappery’’ as complex numbers may be a source of ‘‘confusion and self-deception.’’ The proposed nominalistic revolution in physics can be scientifically motivated only by showing that the
avoidance of ontological commitments
to abstract entities would somehow serve indirectly to advance us toward some more recognizably scientific goals. For my own part, I cannot discern any such scientific benefits to be expected from
the proposed revolution, while I do discern a couple of non-negligible costs. First, any major revolution involves transition costs: the rewriting of textbooks, redesign of programs of instruction,
and so forth. A reform along the lines of Chihara (1973) would involve reworking the mathematics curriculum for science and engineering students, avoiding impredicative methods in favor of
predicative parodies that are harder to learn and not so easy to apply. A reform along the lines of Field (1980) would involve reworking the physics curriculum, so that each basic theory would
initially be presented in qualitative rather than quantitative form. A course on measurement theory would have to be crammed into the already crowded study plan, to explain and justify the use of the
usual numerical apparatus. This is educational reform in precisely the wrong direction: away from applications, toward entanglement in logical subtleties. Second, the physicist who puts on
nominalistic blinders may be unable to see certain potentially important paths for the development of science. I have in mind here not an inevitable logical consequence of nominalistic revolution,
but a likely psychological consequence. Chihara (1973, p. 209) promises that he will recant his nominalism should some future physical theory turn out to require mathematical objects indispensably.
But the danger I have in mind is that if science goes nominalist today, that future theory may simply never be discovered. Yuri Manin has noted this point in connection with intuitionism:
Unfortunately, it seems that it is these ‘‘extremes’’ – bold extrapolations, abstractions which are infinite and do not lend themselves to a constructivist interpretation – which make classical
mathematics effective. One should try to imagine how much help mathematics could have provided twentieth century quantum physics if for the past hundred years it had been developed using only
abstractions from ‘‘constructive objects.’’ Most likely, the standard calculations with infinite dimensional representations of Lie groups which today play an important role in understanding the
microworld, would simply never have occurred to anyone. (Manin 1977, pp. 172–3)
(Mention of quantum mechanics should remind us that it is unclear whether the methods of Chihara and Field are adequate even for present-day science in its entirety. For Chihara the problem is a minor
one, and could probably be solved by adopting a somewhat stronger system of predicative analysis than the particular weak system Rx he favors. For Field, the problem is a major one, for he has given
us no idea how he
Why I am not a nominalist
proposes to treat quantum theory, which differs radically (owing to its use of infinite-dimensional apparatus and to its statistical character) from the one theory he does treat in detail, Newtonian
gravitational theory.) But I need not enlarge on the costs for present-day and future physics of a nominalistic revolution. Surely the burden of proof is on the revolutionary, who proposes a drastic
departure from our thus far eminently successful policy of ontological tolerance in common sense and scientific theory construction. Until it is shown that nominalism offers physical science some
substantive advantages, I for one am prepared to dismiss its revolutionary proposals as motivated only by medieval superstition (‘‘Ockham’s razor’’) and fastidious bigotry (cf. Goodman 1964,
objection viii). Chihara and Field have gone a long way toward constructing nominalistic alternatives empirically equivalent and pragmatically only slightly inferior to our current scientific
theories. Their work suggests that an ontology of abstracta may be one feature of those current theories that is merely conventional, in the best sense of the word (that of David Lewis 1969). This
suffices to cast considerable doubt on some more extreme versions of realism. It does not suffice to cast doubt on moderate versions of realism, which merely observe that our current theories seem to
invoke abstracta and that we do not yet have reasons to abandon those theories. For to characterize some feature of our present ways of doing things (in scientific theorizing or in driving) as
conventional is not in itself to criticize that feature. And Chihara and Field have not come close to constructing nominalistic alternatives that are manifestly superior (empirically or
pragmatically) to our current scientific theories.

5
I have rejected nominalism in its traditional ontological form, as the doctrine that there exist no abstract entities. I equally reject it in its currently fashionable epistemological form, as the
thesis that even if there exist any abstract entities, still we could never come to know about their existence. Epistemological nominalism is usually supported by an argument of the following form:
all entities of which we can have knowledge are causally connected with our organism; no abstract entities are causally connected with our organism; ergo, no abstract entities are entities of which
we can have knowledge. The argument is, of course, valid, a syllogism in Camestres. But the premises are dubious and debatable. As for the minor premise, of course a cyclic group does not act on our
organs of sight, touch, and hearing in the
same way as an alarm clock. And nobody nowadays, except perhaps a stray numerologist or two, would imagine that mathematical objects act on us through some mysterious sixth sense or ESP unknown to
orthodox physiology. Nonetheless, as Maddy (1980) skillfully argues, there is a good deal of research in developmental psychology and neurophysiology that can be read as showing that we do, in a
sense, have causal contact with certain abstracta. As for the major premise, it rests on a causal theory of knowledge. That theory has many opponents, who regard it as a half-truth arrived at by
overhasty generalization from too narrow a range of cases, to which the cases of knowledge of mathematical objects, ethical values, other minds, and so forth are just so many counterexamples.
Significantly, that theory has also a good many half-hearted sympathizers, who do not regard it as wrongheaded or misguided, but merely as in need of amendment. In many amended versions, the notion
of causality disappears, to be replaced by that of reliability or explanation or something of the sort, and with it disappears the major premise of the epistemological nominalist’s syllogism. Again
Maddy (1984) provides a useful survey of the issues. The more cautious sympathizers with the causal approach to the theory of knowledge now maintain only that the abstractness and consequent causal
inertness and isolation of mathematical objects create difficulties for the epistemologist in trying to account for mathematical knowledge. I am surprised to find Field citing these epistemological
difficulties as if they in themselves constituted some sort of grounds for nominalism:

[Nominalism] saves us from having to believe in a large realm of . . . entities which are very unlike the other entities we believe in (due for instance to their causal isolation from us and from everything that we experience) and which give rise to substantial philosophical perplexities because of those differences. (Field 1980, p. 98)
A footnote to this passage makes it plain that Field’s ‘‘philosophical perplexities’’ are precisely the epistemological difficulties just alluded to. (Incidentally, the same footnote provides a good
bibliography of works arguing for epistemological nominalism.) To bring out just how odd this argument is, I want to consider a parallel: suppose that Burrhus Skinner were to confess that after all
those years of work with his rats and pigeons he is still ‘‘substantially perplexed’’ by the ability of freshman students to master calculus and mechanics. Now what mathematician or physicist would
take that as motivation for rewriting the textbooks in those subjects? What linguist would take it as evidence that the sentences in those textbooks have some bizarre and outré depth grammar?
No one would take it as an indication of anything but the inadequacies of behaviorist learning theory. Likewise, a philosopher’s confession that knowledge in pure and applied mathematics perplexes
him constitutes no sort of argument for nominalism, but merely an indication that the philosopher’s approach to cognition is, like Skinner’s, inadequate.

CONCLUSION
Unless he is content to lapse into a mere instrumentalist or ‘‘as if’’ philosophy of science, the philosopher who wishes to argue for nominalism faces a dilemma: he must search either for evidence
for an implausible hypothesis in linguistics, or else for motivation for a costly revolution in physics. Neither horn seems very promising, and that is why I am not a nominalist.

APPENDIX
For the reader’s convenience, I here outline the constructions of Chihara and Field, and the claims which those authors make for their constructions. I will not advance any technical objections
against those constructions (though in fact I have one small reservation about Chihara’s approach, and share with Kripke several large reservations about Field’s), since my aim has been to argue that
even if the constructions are technically flawless, they do not suffice to establish nominalism.

A Chihara’s modal nominalism

I here outline the construction of Chihara (1973), Chapter V and
Appendix. Chihara’s strategy is to reinterpret in a nominalistically acceptable fashion a portion of pure mathematics: arithmetic first, then so-called predicative analysis. He then argues that the
portion of mathematics so reinterpreted suffices for scientific applications, and dismisses the rest of mathematics (the impredicative part) as mythology. To illustrate Chihara’s approach to
arithmetic, consider Euclid’s famous theorem:

(0) (∀ number m)(∃ number n)(m < n & n is prime)

As a first attempt to avoid mathematical objects, let us rewrite this as:

(1) (∀ numeral a)(∃ numeral b) . . .
(I will indicate only the transformation of the prefix of (0); this is not to say that the transformation of the matrix does not require some caution.) Now if numerals are taken as types (patterns of
inscription), then they are themselves abstract entities akin to shapes, and (1) is not much of an improvement on (0). But if numerals are taken as tokens (individual inscriptions), then they are
concrete entities, made of chalk or ink, but there may not be (indeed, almost certainly are not) enough of them around to make (1) true. To get a version of (0) that is both true and committed only
to concrete entities, we must introduce the modal notions of necessity (□) and possibility (◇). Then, taking numerals as tokens, our final reinterpretation of (0) is:

(2) □(∀ numeral a)◇(∃ numeral b) . . .
Informally this says: however long a tally you could ever write down, I could write down a still longer one such that . . . Here we have the idea behind the approach to arithmetic in Chihara (1973).
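To make the modal reading concrete, here is a small, invented illustration (mine, not Chihara’s): read Euclid’s theorem as a claim about producible tally inscriptions, so that for any tally token one *could* produce a strictly longer token of prime length. The function names are my own.

```python
# A toy concretization (my own, not Chihara's) of reinterpretation (2):
# numerals are taken as concrete tally tokens (strings of strokes), and
# Euclid's theorem becomes: for any tally one could produce, a strictly
# longer tally of prime length could also be produced.

def is_prime(n):
    """Primality by trial division; adequate for small illustrative cases."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def longer_prime_tally(tally):
    """Given a tally token, produce a strictly longer one of prime length."""
    n = len(tally) + 1
    while not is_prime(n):
        n += 1
    return "|" * n

ten = "|" * 10
witness = longer_prime_tally(ten)
assert len(witness) > len(ten) and is_prime(len(witness))  # 11 strokes
```

Bertrand’s postulate guarantees the search halts quickly, so the ‘‘possibility’’ claim is never vacuous for tokens we could actually write down.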
Chihara’s approach builds on the work of predicativists (specifically, Hao Wang), mathematical constructivists somewhat more liberal than intuitionists. Predicativists accept uncritically classical
arithmetic (theory of natural, or equivalently rational, numbers) but in analysis (theory of real numbers, or equivalently of sets of natural numbers) they accept only what is definable. To begin
with, they accept those sets of natural numbers that are definable by purely arithmetical conditions, conditions quantifying only over natural numbers. These are the order zero sets. Next they accept
those sets of natural numbers that are definable by conditions quantifying over natural numbers and order zero sets. These are the order one sets. And so on, through higher and higher orders. (Just
how high to go is a delicate question.) A surprisingly large portion of classical mathematics can be ‘‘parodied’’ within this framework, as the survey by Feferman (1977) shows. Intuitively, it is
plausible that a theory of definable sets should be reducible to arithmetic plus truth-predicates, with quantification over definable sets being replaced by quantification over the code numbers of
their defining conditions, and the membership relation replaced by the relation ‘‘n is the code number of a formula with one free variable that is true of m.’’ The details can be worked out, and we
get a reduction of predicative analysis to something that has already been shown to be nominalistically reinterpretable. Chihara’s account of the application of mathematics in science is illustrated
by Figure 2.1.

[Figure 2.1: empirical conditions define a FUZZY FUNCTION; the basic scientific theory is formulated for a SHARP FUNCTION; theorems of analysis yield the consequences of the scientific theory, i.e. information about the SHARP FUNCTION; approximation then gives information about the FUZZY FUNCTION, hence information about the empirical situation.]

While scientific theories are formulated mathematically in terms of sharply defined functions, at least in the overwhelming majority of applications empirical conditions define only fuzzy functions. For instance, the condition:

f(t) = x iff the projectile is x meters above the floor of the chamber at t seconds after firing
cannot define a sharp function because of the fuzziness of the projectile and the chamber (viewed on a scale of micrometers) and of the firing event (viewed on a scale of nanoseconds). Thus the
application of a scientific theory to empirical conditions typically involves an element of idealization. Now predicative mathematics provides sufficiently many sharp functions to serve as
idealizations for empirically defined fuzzy functions, because any classical function can be approximated as closely as desired by a predicative function. Moreover, the theorems of analysis used in
deriving consequences from basic scientific theories can all be parodied predicatively: this can be verified by comparing the mathematics curriculum for science and engineering students as listed in
any college catalogue with the survey by Feferman (1977) of predicative mathematics. Thus predicative mathematics, which we have already seen to be nominalistically reinterpretable, suffices for
scientific applications.

B Field’s spatiotemporal nominalism

I here outline the construction of Field (1980). Field’s strategy is to reformulate basic scientific theories and their consequences (the
special information they entail about special situations) in a way that avoids all
mathematical vocabulary, and then to argue that the consequences can be deduced from the basic theories without introducing any mathematics. To illustrate Field’s approach to the formulation of
science, let us consider, as he does, thermodynamics. Here a typical qualitative, math-free, nominalistically acceptable notion would be the comparative relation R between point-events of space-time
given by ‘‘x is cooler than y.’’ Here a typical quantitative, mathematical, ‘‘Platonistic’’ notion would be that of temperature on a given scale, conceived as a real-valued function r on point-events
in space-time. Measurement theory, as surveyed in the compendium Krantz et al. (1971), is a corpus of theorems to the effect that suitable assumptions on qualitative relations entail the existence
(and uniqueness up to stated changes of scale) of quantitative functions appropriately representing them. In thermodynamics, suitable assumptions would include that R is irreflexive and transitive,
appropriate representation would include that xRy if and only if r(x) < r(y), and stated changes of scale would be like those used in passing between Fahrenheit and Celsius. Once we have such a basic
representation theorem, it becomes possible to reformulate any scale-invariant assumption on the quantitative functions as an assumption on the qualitative relations. In thermodynamics, continuity
for the temperature function r can be reformulated in terms of a notion of temperature-basic region, itself defined in terms of the cooler relation R. In this way the nominalist can reformulate the
whole of science, both basic, general theoretical principles, and particular consequences for practical applications. However, the only route we have seen so far from the qualitatively formulated
version of a basic theory to the qualitatively formulated versions of its consequences involves a ‘‘Platonistic’’ detour: from qualitative basic theory by measurement theory to quantitative basic theory, thence by theorems of analysis to quantitative consequences, and thence by measurement theory again to qualitative consequences, as in Figure 2.2.

[Figure 2.2: the QUALITATIVE formulation of the BASIC THEORY is linked by representation theorems of measurement theory to the QUANTITATIVE formulation of the BASIC THEORY; theorems of analysis lead from there to the QUANTITATIVE formulation of the CONSEQUENCES of the basic theory; representation theorems of measurement theory link that in turn to the QUALITATIVE formulation of the CONSEQUENCES of the basic theory; conservation theorems relate the two qualitative formulations directly.]

It is, however, theoretically possible, though practically inconvenient, to avoid the introduction
of mathematics, to avoid the detour through the quantitative and abstract:

the conclusions we arrive at by these means are not genuinely new, they are already derivable in a more long-winded fashion . . . without recourse to the mathematical entities. (Field 1980, pp. 10–11)

for these purposes [‘‘problem solving’’] the usual numerical apparatus is a practical necessity. But it is a necessity that the nominalist has no need to forgo: he can treat the apparatus . . . as a useful instrument for making deductions from the nominalistic system that is ultimately of interest; an instrument which yields no conclusions not obtainable without it, but which yields them more easily. (Field 1980, p. 91)
These claims are supported by appeal to conservation theorems from proof theory (the most important being perhaps one due to Scott Weinstein).
Mathematics and Bleak House
1 ‘‘NOMINALISM’’ AND ‘‘REALISM’’
Nominalism is a large subject. In our book (Burgess and Rosen 1997) my colleague Gideon Rosen and I distinguished a negative or destructive side of nominalism, which tells us not to believe what
mathematics appears to say, from a positive or reconstructive side, which aims to give us something else to believe instead. We noted that there were a few nominalists who contented themselves with
the negative side, conceding that mathematics is useful, insisting that what it appears to say is not true, and letting it go at that, without attempting any reconstrual or reconstruction of
mathematics. We expressed some surprise that there were not more such destructive nominalists, since as compared with reconstructive nominalism, destructive nominalism has what Russell in another
context called ‘‘the advantages of theft over honest toil’’; and if nothing else was clear from the work of Hartry Field, Charles Chihara, Geoffrey Hellman, and other reconstructive nominalists whose
work we surveyed, it was clear that the amount of honest toil that would be required for a nominalistic reconstrual or reconstruction of mathematics would be quite considerable. Today, a couple of
years after publication, it is beginning to seem that the main achievement of our book will have been to provide a decent burial for the hard-working, laborious variety of nominalism. For almost
everything that has come forth since from the nominalist camp has represented a light-fingered, larcenous variety, which helps itself to the utility of mathematics, while refusing to pay the price
either of acknowledging that what mathematics appears to say is true, or of providing any reconstrual or reconstruction that would make it true. The usual label for this variety of
Footnote: I have decided to keep this paper in the form in which it was originally written for oral delivery, adding footnotes to supply citations of the literature, and for clarification at a few points where experience has shown misunderstanding of my intended sense may be likely.
nominalism is ‘‘[mathematical] fictionalism.’’2 Not only has fictionalism become the most widely pursued form of nominalism, but even some former anti-nominalists have been wavering or drifting in
its direction, including even the author of one of the most eloquent early criticisms, ‘‘Mathematics and Oliver Twist.’’ I will take the liberty, therefore, of substituting ‘‘fictionalism’’ for the
‘‘nominalism’’ label in the official title of our symposium. While I am at it, I had better say something about the ‘‘realism’’ as well. It is an even larger subject, since there is hardly any bit of
philosophical terminology more diversely used and overused and misused than the R-word. There seems to be a systematic difference between the way in which the word is understood by many of those who
describe themselves as ‘‘realists’’ and the way in which it is understood by most of those who describe themselves as ‘‘anti-realists.’’ For many professed ‘‘realists,’’ realism amounts to little
more than a willingness to repeat in one’s philosophical moments what one says in one’s scientific moments, not taking it back, explaining it away, or otherwise apologizing for it: what we say in our
scientific moments is all right, though no claim is made that it is uniquely right, or that other intelligent beings who conceptualized the world differently from us would necessarily be getting
something wrong. For many professed ‘‘anti-realists,’’ realism seems rather to amount to a claim that what one says to oneself in scientific moments when one tries to understand the universe
corresponds to Ultimate Metaphysical Reality; that it is, so to speak, a repetition of just what God was saying to Himself when He was creating the universe. The weaker position might be called
anti-anti-realism, and the stronger position capital-R Realism. Quine says somewhere that his ‘‘realism’’ and his ‘‘pragmatism’’ are reconciled by his ‘‘naturalism.’’ This is a hard saying, but I
think it can be explained along the following lines. First, ‘‘realism’’ here means anti-anti-realism: the refusal to apologize while doing philosophy for what is said while doing mathematics or
science. Second, ‘‘pragmatism’’ 2
In earlier publications I called this stance ‘‘instrumentalism,’’ while Rosen called it ‘‘constructive nominalism,’’ by analogy with van Fraassen’s ‘‘constructive empiricism.’’ (Incidentally, though
van Fraassen is best known for his fictionalism about unobservable physical entities, he is also a fictionalist about abstract mathematical entities.) However, the analogous positions in other areas
of philosophy (e.g. modality) are today generally called ‘‘fictionalist,’’ and on this ground Rosen adopted the ‘‘fictionalist’’ label, and I now follow him. Unfortunately, there has been another use
of ‘‘fictionalist’’ in philosophy of mathematics, by Hartry Field and his students, for whom it includes all rejectionist views, all views that hold that standard mathematics is false, whether
fictionalist in our sense or reconstructive. This usage seems to be out of alignment with the usage of ‘‘fictionalism’’ in other areas of philosophy, and for that reason to be avoided.
means rejection of capital-R Realism, rejection as unjustified of any claim that mathematics and science give us a God’s-eye view of capital-R Reality. Third, ‘‘naturalism’’ means adherence to a
conception of epistemology as an inquiry conducted by citizens of the scientific community examining science from the inside, rather than an inquisition conducted by philosophers foreign to science
judging science from the outside. And the reconciliation? Naturalism teaches us to look at our scientific, philosophical, and other forms of intellectual endeavor as activities of biological
organisms with cognitive capacities that, though extensive, stop well short of omniscience. As such, none of these endeavors can succeed in achieving a God’s-eye view of Reality. And therefore there
is no reason to apologize for one of them, science, failing to achieve such a view, and every reason not to suppose that another of them, philosophy, could do better. Should I, therefore, replace
‘‘realism’’ by ‘‘naturalism’’ in the title of our symposium? No, for unfortunately ‘‘naturalism’’ too has been used and overused and misused in multiple senses. In particular, while in Quine’s sense
naturalism abstains from imposing philosophical constraints on science, there is another equally widespread but diametrically opposed conception in which ‘‘naturalism’’ consists precisely in imposing
such a philosophical constraint, namely, the constraint that no entities are to be assumed that do not stand in natural cause-and-effect relations with us. So I will just replace ‘‘realism’’ by
‘‘anti-fictionalism,’’ making the title of our symposium come down to this: ‘‘fictionalism in philosophy of mathematics: pro and con.’’

2
To begin with the pro side, it is impossible to quarrel with the proposition that mathematics is in some respects like fiction. For indeed, anything is like anything else, in some respect. I even
think the comparison may be illuminating, as to the nature of fiction, or rather, as to some of the questions philosophers raise about fiction, and in particular about the status of fictional
characters. The hard-headed view is that they just do not exist. Another view is that they are abstract entities of some sort or other. Yet another view is that they are mental entities. Now the view
that mathematical entities are mental entities, though a perennial favorite of amateur philosophers, has been in disrepute among professional philosophers since Frege’s trenchant critique of
psychologistic philosophies of mathematics more than a hundred years ago. I think that fiction is enough like mathematics to suggest that the view that fictional entities are mental entities is
equally dubious. For example, one problem for mentalistic theories of mathematical entities is that there are too many such entities for minds to have created each one; and a little reflection shows
that the situation is similar with fiction: a novelist may write of the doings of a vast army with thousands of officers and myriads of soldiers, while in fact describing only a couple of dozen of
them individually. But I am straying from our topic, which was supposed to be what, if anything, comparison with fiction can show us about mathematics, and not the reverse. Reverting to that topic,
then, I have said on the one hand that it can hardly be denied that mathematics is like fiction in some respects, most obviously in consisting of large bodies of manuscript and printed and now
electronic writing. On the other hand, it must be said that there is a sense in which mathematics is clearly non-fiction: there is a well-established practice of classifying writing as ‘‘fiction’’
and ‘‘non-fiction,’’ and setting aside attempted theoretical definitions and analyses, attending rather to the criteria by which in actual practice such classifications are made, clearly mathematics
counts as non-fiction. The compilers of the New York Times best-seller list will never put any mathematical work, however wonderful, at the top of the fiction column, and not just because nothing
even by Andrew Wiles will ever sell like Stephen King. Nor will any librarian catalogue, say, the Proceedings of the Cabal Seminar as an ‘‘anthology of short stories based on the characters created
by Georg Cantor.’’ Now of course misclassifications are sometimes made, and mischievous hoaxes and pious frauds sometimes succeed, but these represent mistakes in applying general criteria to
specific cases. I do not think it even makes sense to suggest that the general criteria are themselves mistaken, and it seems unquestionable that by those criteria mathematical writing is not
literally fictional writing: mathematics is not in all respects the same as fiction. So the question is: in which respects is mathematics like, and in which respects is it unlike, fiction? That in
part depends on the species of the genus fiction one considers. The first species one thinks of will probably be the novel, and comparison with novels is in fact common among professed fictionalists
and their critics, as with Oliver Twist, mentioned earlier. There has been, however, a minority – including especially the late Leslie Tharp3 – who have preferred a comparison with mythology. In a
slightly different vein, Steve Yablo, in an interesting paper,4 has suggested a comparison with metaphor.

Footnote 3: Rescued from oblivion in Chihara (1989).
Footnote 4: The reference was to an on-line version, but the paper has since appeared in print as Yablo (2000).

Metaphor of course is not a genre of fiction but a figure of
speech, and I think that to speak as Yablo does of a metaphor running on for volumes and volumes and volumes is to stretch the concept of ‘‘metaphor’’ well beyond breaking point. But perhaps much of
the content of Yablo’s suggestion could be preserved if we took the comparison to be between mathematics and parables or fables. Be that as it may, I believe the comparison with fables is the most
apt of the candidates I have considered, and comparison with novels the least so. Novels almost always are attributable to identifiable individual authors: Proust or Flaubert, Trollope or Dickens.
Some fables are attributable to such authors, Lafontaine for instance; others are traditional. Mathematics also consists of both traditional elements and elements with identifiable authors. Novels
are almost always unique. Fables tend to be retold over and over in variant versions by different writers, so that we have Aesop’s version, Lafontaine’s version, and many latter-day retellings of the
fox and the crow, for instance. Mathematics likewise gets retold by textbook writer after textbook writer. The characters in one novel seldom reappear in another, and even those who do reappear, like
Swann or Palliser, do so only in comparatively few stories, all by the same author. This is so with some characters of fable, but many, like the clever fox, reappear in whole cycles of tales. The
same mathematicalia, π and e, the sine and cosine functions, 0 and 1 and 2 and so on, reappear through whole libraries of mathematical works. Again, characters encountered in novels are generally of
the same species as those encountered in daily life, while those in fables are, as one dictionary definition reminds us, beings of a different order: ‘‘animals that talk and behave like human
beings.’’ Mathematics, too, has objects even more unlike those of any other subject, and it is for precisely that reason that there is thought to be a philosophical problem about them. Yet more
important is the matter of application, which in literature typically takes the form of a ‘‘message.’’ The fable typically though not invariably has a ‘‘moral,’’ while to demand one of the novel is
virtually the definition of Philistinism. I am reminded in this connection of what Nabokov says in his posthumously published lectures about Bleak House and its supposed concern with reform of the
Court of Chancery. At first blush it might seem that Bleak House is a satire. Let us see. If a satire is of little aesthetic value, it does not attain its object, however worthy that object may be.
On the other hand, if a satire is permeated by artistic genius, then its object is of little importance and vanishes with its times while the dazzling satire remains, for all time, as a work of art.
So why speak of satire at all? . . . Such cases as Jarndyce did occur now and then in the middle of the last century although, as legal historians have shown, the bulk of our author’s information on
legal matters
goes back to the 1820s and 1830s so that many of his targets had ceased to exist by the time Bleak House was written. But if the target is gone, let us enjoy the carved beauty of the weapon. (Nabokov
1980, p. 64)
The question of applications is crucial in the case of mathematics, because though it would be a kind of Philistinism to demand that every piece of mathematics have one, many do; and it is precisely
because many do that many philosophers have opposed nominalism, this being the least common denominator of all ‘‘indispensability arguments.’’ Still more important, however, is a feature common to
all genres of fiction. The most important single respect in which fictionalists hold mathematics to be like novels or fables or whatever is in being a body of falsehoods. In particular the existence
theorems of mathematics are supposed to be untrue: these say there exist, for instance, prime numbers greater than 10^10^10, whereas according to mathematical fictionalists, and indeed all nominalists, there are no such things as numbers at all.[5]
3
Nominalism, to repeat, is a large subject. In our book, Rosen and I distinguished two varieties of reconstructive nominalists. Those of one variety, the hermeneutic, insist that their reconstruals of
mathematics reveal what, contrary to superficial appearances, deep down mathematical language has meant all along: mathematical theorems are true, but while what mathematical theorems appear to mean
implies the existence of mathematical entities, what they really mean does not. This position might be summed up in the formula, ''There are no numbers, and some of them are primes greater than 10^10^10.'' Reconstructive nominalists of the other variety, the revolutionary, concede that their reconstructions of mathematics are not analyses of current mathematics, but amendments to it; not exegeses, but emendations.[6]
[5] I have been concerned here with comparison between mathematics and fiction less for its own sake than for its bearing on the issue of nominalism. The comparison deserves a much fuller examination. For an extended discussion of analogies and disanalogies, containing extensive references to the further literature, see Thomas (2000, 2002).
[6] This is not the place to discuss reviews of Burgess and Rosen (1997) at any length, but it may be mentioned that many adherents of and sympathizers with reconstructive nominalism have wished to claim that there is some third alternative. For instance, it is sometimes said that a nominalist interpretation represents ''the best way to make sense of'' what mathematicians say. I see in this formulation not a third alternative, but simply an equivocation, between ''the empirical hypothesis about what mathematicians mean that best agrees with the evidence'' (hermeneutic) and ''the construction that could be put on mathematicians' words that would best reconcile them with certain philosophical principles or prejudices'' (revolutionary).
Mathematics, Models, and Modality
We did not in the book discuss a division between hermeneuticists and revolutionaries among the fictionalists, but such a distinction can be made.[7] The hermeneutic fictionalist maintains that the
mathematicians’ own understanding of their talk of mathematical entities is that it is a form of fiction, or akin to fiction: mathematics is like novels, fables, and so on in being a body of
falsehoods not intended to be taken for true. According to the hermeneutic fictionalist, the anti-nominalist philosopher is being more royalist than the queen (of the sciences); is being a kind of
fundamentalist, taking literally what was never so meant. Consider, for instance, the question when the ‘‘reification’’ of numbers first occurred historically, in the main line of development leading
up to modern mathematics. What I take to be a fairly conventional view dates the reification of natural numbers to about the time of Archytas of Tarentum and other Pythagoreans, among whom we begin
to see the transition from the use of numerals as adjectives, as in ‘‘six oxen were sacrificed to Zeus,’’ to their use as nouns, as in ‘‘six is a perfect number’’; and it dates the reification of
real numbers from about the time of Omar Khayyam and other medieval Islamic and Hindu mathematicians, who in contrast to Greek mathematicians treated ratios of geometric magnitudes as numbers, as
things that could be added and multiplied. According to Chihara (1990), however, these dates are all wrong. The reification of number actually occurred not circa 500 BCE or 1000 CE, among
mathematicians, but some time in the 1960s and 1970s, and among philosophers: it is a rash and recent innovation of Quine and other ‘‘literalist’’ philosophers, myself and my fellow symposiast
included. Chihara’s argument for this claim is essentially that the mathematicians he has questioned have either expressed puzzlement when asked about their ontological commitments, or have
repudiated suggestions that they are committed to any ontology at all. Yablo’s argument is on the face of it different, but I believe at bottom similar. He begins by calling attention to a very
interesting phenomenon, the ''paradox'' of his title. Let me attempt my own presentation of it.
[7] The term ''hermeneutic fictionalism'' was taken as the title for a large-scale study by Jason Stanley (2001), which pursues at length a number of the points to be made below, and many more also, through several different areas of philosophy where fictionalism has become fashionable. In an alternative terminology, hermeneutic reconstructive nominalism and hermeneutic non-reconstructive nominalism are called content hermeneuticism and attitude hermeneuticism respectively. This terminology is used in Rosen and Burgess (2005).
According to an old story, when Lindemann settled the ancient problem of squaring the circle, his colleague Kronecker reduced him to tears by
asking, ‘‘What is the value of your investigation of p, since irrational numbers do not exist?’’ Suppose a nominalist philosopher today were to say to Wiles, ‘‘What’s all this fuss about your proving
there are no natural numbers x, y, z > 0 and n > 2 such that xn þ yn ¼ zn? Since there are no numbers at all, a fortiori there are none satisfying that equation.’’8 I imagine most mathematicians
would be contemptuous of this speech and most philosophers – even most nominalist philosophers – embarrassed by it. According to another story, G. E. Moore famously once argued along something like
the following lines: he held up his hands and said, ‘‘As you can see, here are two human hands. Since human hands are material bodies in the external world, there exists an external world of material
bodies.’’ Suppose an anti-nominalist philosopher today were to hold up his hands and say, ‘‘As you can see, the number of my hands is two. Since the number two is an abstract entity, abstract
entities exist.’’ Would not most mathematicians be baffled and bewildered by this argument, and would not most philosophers – even anti-nominalist philosophers – balk and boggle at it? On the one
hand, objections to what mathematicians and others would ordinarily say about mathematical entities on the grounds of their alleged non-existence are regarded with scorn, while on the other hand,
purported proofs of their existence are viewed with suspicion. Yablo argues that this double attitude is explicable if we assume that mathematical assertions are meant non-literally, as fiction, or
rather, in this preferred terminology, as metaphor. For that would make both objections on the grounds that they are not literally true, and attempts to prove that they are literally true, equally
off the mark in opposite directions. As I said earlier, I think Yablo’s argument ultimately relies on the same kind of consideration as Chihara’s, namely, mathematicians’ puzzlement at or repudiation
of philosophical theses and arguments about the existence of the entities they study. I also think the positions of Chihara and Yablo ultimately involve similar mistakes about the nature and meaning
of ‘‘commitment’’ and ‘‘literalness.’’ Let me begin with commitment. The reason Quine and others have spoken of the ‘‘commitment’’ of mathematical, scientific, and everyday thought to mathematical
and other abstract entities, is precisely because they did not want to speak of mathematicians’ or scientists’ or lay-persons’ ‘‘assertions’’ of or ‘‘beliefs’’ in the existence of such entities.
[8] This example is from a talk by the late George Boolos.
Mathematicians qua mathematicians do address questions about whether there are prime numbers greater than 10^10^10, but they generally do not spend much time talking, and presumably do not spend much time thinking, about the
question whether there are any such things as numbers at all: hence the inappropriateness of speaking of their ‘‘assertions’’ or ‘‘beliefs’’ about such questions. Quine’s claim was that they are
committed to an affirmative answer to this question, because what they do assert and believe, that there are prime numbers greater than 10^10^10, implies that there are prime numbers and therefore
that there are numbers, and it has this implication whether or not they ever acknowledge it, and indeed even if they repudiate it, when talking philosophy rather than mathematics. Now for
literalness. Quine does allow that someone might say, ''There are prime numbers greater than 10^10^10'' and yet not be committed to its implications. For one could say it and not really mean it, or
not really believe it, whether or not one were capable of articulating what it is that one does mean and believe when saying it. One could say it and not intend it to be understood ‘‘literally.’’ But
what does this mean? If the function of the word ''literal'' were to indicate the presence of something positive, some extra enthusiasm perhaps, then it would indeed be very doubtful whether mathematicians who assert that there are prime numbers greater than 10^10^10 mean it ''literally.'' But I suggest that the function of the word is in actual fact less to indicate the presence of something
positive than to indicate the absence of something negative: roughly what in the legal phrase is called ‘‘mental reservation and purpose of evasion.’’ If this is so, then what is in actual fact very
doubtful is whether mathematicians who assert that there are prime numbers greater than 10^10^10 intend their assertion only as something ''non-literal.'' To do that, they would have to be more philosophically self-conscious than they appear to be. To put the matter another way, the ''literal'' interpretation is not just one interpretation among others. It is the default interpretation.
There is a presumption that people mean and believe what they say. It is, to be sure, a defeasible presumption, but some evidence is needed to defeat it. The burden of proof is on those who would
suggest that people intend what they say only as a good yarn, to produce some actual evidence that this is indeed their intention.[9]
[9] To mean what one says literally is simply to mean what one says, just as to be a genuine antique is simply to be an antique. The force of ''literally'' is not to assert that one is doing something
more besides, but to deny that one is doing something else instead: meaning something other than what one says, as when one speaks metaphorically, hyperbolically, elliptically, or otherwise
figuratively. One does not have to think anything extra in order to speak literally: one has to think something extra in order to speak non-literally. Such, at any rate, is the not uncommon view of
the meaning of ‘‘literal’’ to which I subscribe. For more on this point, see Searle (1979). Assuming this point, the hypothesis that someone is writing or speaking literally is the hypothesis that
nothing more is going on than meets the eye or the ear: the null hypothesis about covert intentions. In the text I call it the default hypothesis, since I take it there is a defeasible presumption in
favor of null hypotheses in general. In the specific
And I submit that the fact that mathematicians tend to be perplexed by and dubious about philosophical argumentation over the existence of mathematical entities – the ‘‘frog and mouse battle’’ as
Einstein once called a specific instance of such debate – and even the fact that mathematicians who are badgered with skeptical questions by skeptical philosophers can be got to make skeptical
noises, are not very good evidence. In this connection, I remember reading an interview in Scientific American some years back with Murray Gell-Mann, who explained that in his original paper on the quark hypothesis he avoided claiming that quarks were real in order to avoid trouble with ''philosophers.'' This self-censorship perhaps does not say much for Gell-Mann's courage – Galileo, after all,
did not recant until he was ‘‘shown the instruments’’ – but it is rather revealing as to how we philosophers are viewed by some of our scientific colleagues, and suggests that there may be serious
difficulties with the methodology of pestering scientists for opinions on philosophical issues to which they may have given little or no thought, and accepting their answers as indicative of their
intentions in putting forward the affirmations that they do put forward when philosophers leave the scene and let them get back to work.
4
To underscore these points, I would like to contrast the comparative lack of evidence for attributing a global ‘‘non-literalist’’ intention to mathematicians generally with two cases where we do have
good evidence of ‘‘nonliteralist’’ intent. One case is that of certain local usages and idioms, the other of certain specific mathematicians. To take the latter case first, among professional
mathematicians there are a tiny minority who have given serious, sustained thought to philosophical questions. Notoriously they tend to disagree with each other quite as much as professional
philosophers do – think of Hilbert and Brouwer – so that the quickest way to tell whether they are talking mathematics or talking philosophy is to listen whether they are agreeing or disagreeing.
[9, continued] case of the null hypothesis about meaning, there is the additional consideration, not mentioned in the text, that word-meaning and speaker-meaning, though distinct, are not independent. It would be impossible for words to mean what they do if everyone always used them to mean something else, and difficult for them to mean what they do unless most people most of the time use them to mean that, so that a randomly chosen person at a randomly chosen time probably means what he or she says.
Probably among this very small and sharply divided group of mathematician-philosophers can be found adherents of virtually every position found
among philosophers proper, including positions that regard all of mathematics as just a good yarn, or merely a great game. Certainly the bulk of Hilbert’s own results, beginning with his basis
theorem, were meaningless or ‘‘ideal’’ statements according to his official philosophy. The topological work, and notably the fixed-point theorem, which established Brouwer’s reputation among
mathematicians, is largely false or meaningless according to his intuitionist principles. There is of course always a question, whenever someone says something in one context and takes it back in
another, whether we should regard the original affirmation as merely pretense and the subsequent denial as revealing the person’s real opinion, or whether contrariwise we should regard the ceremony
of recantation as play-acting, and the real opinion the one expressed originally. But I think that in the case of the field marshal of the batrachians and the generalissimo of the rodents we may
conclude on the basis of their extensive philosophical writings that they did indeed mean much of what they said in their mathematical work as ‘‘non-literal’’ or ‘‘fictitious’’ in some sense. My
earlier point was that in the case of the overwhelming majority of mathematicians there is no such evidence that unspoken philosophical caveats accompany their mathematical assertions. The other case
I wish to mention is that of the mathematician who, for instance, asserts on one page that there is only one non-cyclic group of order four, and on the next page that among the subgroups of some
larger group there are three non-cyclic groups of order four. How can there be three of them if there is only one of them? That is a mystery whose solution is that the assertion of uniqueness was not
meant literally, but rather involved a figure of speech, namely ellipsis: ''unique'' was elliptical for ''unique up to isomorphism,'' and what is really meant is that all non-cyclic groups of order
four, including the three that turn up as subgroups of the larger group alluded to, are isomorphic to each other. This isomorphism is, of course, what the proof of ‘‘uniqueness’’ proves. The
difference between such cases of mathematicians locally meaning this or that form of expression as one or another kind of figure of speech, and the hermeneutic fictionalist’s claim that
mathematicians globally mean none of what they say literally, is that there is evidence within what mathematicians say while engaged in mathematical research or teaching, to indicate, for instance,
that the claim of uniqueness of the Klein four-group is intended to be understood as pertaining not literally to uniqueness, but to uniqueness up to isomorphism. For one thing, if a student who had
not yet learned all the relevant idioms and usages were to raise a question, the mathematician would explain. Owning up to the non-literal character of
the uniqueness assertion is something the mathematician does on the job, not just when interrupted and pestered by skeptical philosophers, or after hours pursuing some philosophical hobby. To be
sure, it is also possible in this particular example that the mathematician just does not know what ‘‘unique’’ means, like the merchant whose advertisement I once saw, who claimed that ‘‘Every item
in the store is unique, and many are one-of-a-kind.’’ But there are other classes of examples of local non-literalness. Indeed, my discussion of the case of uniqueness assertions is largely inspired
by Stewart Shapiro’s discussion of one extensive class of such cases,[10] mathematicians’ frequent use of dynamic language, as if the functions, for instance, were moving mathematical objects around,
changing one mathematical object into another, and so on – a manner of speaking to which, as Shapiro notes, Plato already objected. I think again, with Shapiro, that there is good evidence within
what mathematicians say while engaged in mathematical research or teaching for taking this particular kind of language to be meant non-literally. But the presence of evidence in this case again
contrasts with the comparative absence of evidence for the hermeneutic fictionalist’s global claim that all kinds of mathematical language are meant ''non-literally.''
5
My conclusion, then, is that hermeneutic fictionalism is implausible, and that if one is going to be a fictionalist, one had better be a revolutionary fictionalist, denying while doing philosophy
what is asserted while doing mathematics, but not pretending that it never was asserted, or pretending that it was only asserted but was not really meant or believed. And there are indeed
similarities between mathematics and fiction that can be cited by the revolutionaries that would seem to give no support at all to hermeneutics. One such is the apparent incompleteness of
mathematics, suggested by the Gödel theorems. (I trust that in speaking to an audience of logicians I need hardly add that the question of the philosophical bearing of these theorems is by no means an
easy one, and that such theorems are, in the words of my fellow symposiast, the ‘‘beginning of the story’’ and not ‘‘the end of the story.’’) This is often compared to the incompleteness of fictional
tales, which leave many questions, beginning with the proverbial length of the protagonist’s nose, undecided and undecidable.
[10] In public talks since the later 1970s and in Shapiro (1997).
This similarity would seem to provide less than no support to the hermeneutic view that mathematicians do intend their mathematical assertions as fictions. After all, mathematicians were making
mathematical assertions for centuries before Gödel, in blissful ignorance of any incompleteness or incompletability phenomena, and even while confidently asserting that ''in mathematics there is no ignorabimus'' and ''wir müssen wissen, wir werden wissen'' (''we must know, we will know''). So the fact that fiction rather obviously leaves many questions undecided would seem to be a powerful argument for the conclusion that
mathematicians were not thinking of their science as a branch of creative writing. By contrast, the observation that mathematics resembles fiction in respect of incompleteness might well be thought
to provide support for the revolutionary view that philosophers should regard mathematical assertions as fictions. At any rate, the observation often is cited in support of such a position. To this
sort of observation there are two kinds of response: a more specific and a more general. The more specific notes that, even if we accept that Gödelian theorems do show that there are fundamental
reasons of mathematical principle why some mathematical questions must remain forever undecided, this by no means establishes a dividing line with mathematics and novels, fables, and so on, on the
one side, and scientific and commonsense thought on the other side. For are there not, to begin with, also fundamental reasons of physical principle why some physical questions must remain forever
undecided? The second law of thermodynamics suggests the growth of entropy will blur the historical record to the point where many facts about the past will become irrecoverable. General relativity
suggests information may be swallowed up by black holes and hidden from us forever beyond their horizon. And then there is quantum mechanics. On top of all this, there are the apparently undecidable
questions that arise wherever there is vagueness, which is just about everywhere except mathematics. But these considerations are doubtless familiar, and I need not enlarge on them here. What I want
to consider instead is another line of response, suggesting that even if it were distinctive of mathematics among what pass for sciences to present us with unanswerable questions, it would be
doubtful that this feature should be taken as a criterion for distinguishing fiction from fact. More generally, the other line of response I wish to consider – one vaguely analogous to Hume on
miracles – suggests that it is virtually always more doubtful that philosophy has arrived at the correct understanding of ‘‘truth’’ or ‘‘existence’’ or what have you, than that what well-established
mathematical, scientific, and commonsense principles tell us are facts are fantasies, or that what they tell us exists is a phantasm.
Revolutionary mathematical fictionalism is an ‘‘error theory,’’ offering a ‘‘correction’’ to mathematics, and especially to mathematical existence theorems, much as Brouwer once offered an
intuitionistic ‘‘correction’’ to his own Brouwer Fixed Point Theorem.11 At the same time, it is offering a ‘‘correction’’ to science insofar as it is formulated mathematically. The line of response
against such philosophical ‘‘corrections’’ to mathematics and science that I want to consider is the one expressed in the Credo of my colleague David Lewis. Rosen and I quoted it in our book, and I
will not quote it again here; but for those who have not read it, its point is that, given the comparative historical records of success and failure of philosophy on the one hand, and of mathematics
on the other, to propose philosophical ‘‘corrections’’ to mathematics is comically immodest. And indeed, the historical record of philosophical ‘‘corrections’’ to mathematics and science from
Bellarmine’s ‘‘correction’’ of Galileo – an early form of fictionalism or ‘‘constructive empiricism’’ – onwards, has been pretty dismal. One argument against revolutionary fictionalism is thus just
that, given the historical record, on simple inductive grounds it seems extremely unlikely that philosophy can do better than mathematics in determining what mathematical entities exist, or what
mathematical theorems are true, and much more likely that for the (n + 1)st time, philosophy has got the nature of truth and existence wrong.
6
Supplementing this Ludovician line of thought, there is another, deriving ultimately from Rudolf Carnap’s ‘‘Empiricism, semantics, and ontology’’ (Carnap 1950), that deserves attention. It is a line
of thought admittedly in considerable disrepute today. For instance, the late George Boolos in a passage in Boolos (1997b) characterizes the Carnapian argument I have in mind as ‘‘rubbish.’’ And so
far as I know, when that paper was delivered orally this remark raised no substantial murmur from the audience. Any attempt to rehabilitate an argument that has been thus consigned to the rubbish bin
of philosophical history by so prominent a figure and with so little protest would be a difficult undertaking, and what I will attempt here will be something much less ambitious than a full
rehabilitation. I will merely attempt to restate the essence of the argument in a way that strips it of most of its dated formulations and presuppositions.
[11] The theorem can be found in Brouwer (1976) and the ''correction'' in Brouwer (1975).
To begin with, Carnap notes, as so many others have since, and as I did above, that the question whether there exist any such things as numbers at all is one that is never raised by the
mathematicians themselves (while doing mathematics), but only by philosophers (including such few professional mathematicians as are also amateur philosophers, when doing philosophy rather than
mathematics). And the most salient fact Carnap notes about the debate among philosophers is that it goes on and on and on without ever being settled. The philosophical issue of nominalism and realism
is in this respect like Jarndyce and Jarndyce. There is an apparent difference in that in Dickens’ novel the parties to the lawsuit were finally driven mad by the interminable legal wrangling,
whereas few philosophers have been driven over the edge by the issue of nominalism and realism. But then philosophers are not really in the position of the suitors. Rather, as Rosen put it when we
were once discussing the matter, ‘‘We’re the lawyers.’’ Of course, in Dickens’ novel, even the lawyers eventually stopped arguing the case: the case ended when the entire value of the estate had been
eaten up by legal costs, and there was no money left to pay any more lawyers. So far we philosophers are still being paid, and even to some slight degree paid attention, and perhaps there is no
immediate danger of the value of the estate running out in our case. Still, Carnap thought that the endless litigation with no sign of settlement approaching was an indication that something was
badly wrong. His analysis of the situation was rather complex, and had three strands: there is a positive phase with two aspects, plus a negative phase. Let me take the elements one at a time. The
two aspects of the positive phase can be brought out by comparing Carnap’s position on the one hand with that of one of Yablo’s targets, Crispin Wright, and on the other hand with that of all
nominalists’ target, Quine. Wright once propounded an argument against nominalism roughly as follows: I have as many fingers as toes; but as everyone who understands the concept of ‘‘number’’ knows,
to say that I have as many fingers as toes is equivalent to saying that the number of my fingers equals the number of my toes; but to say this presupposes that there is such a thing as the number of
my fingers or toes; hence the number ten exists. What the Carnapian agrees with in this argument is the recognition that concepts come with rules for their employment, some of which entail
affirmative answers to certain existence questions, so that one has only two choices: either one rejects the concept, in which case the existence questions cannot even be asked; or else one accepts
the concept, in which case one immediately gets
affirmative answers to those existence questions. One cannot ask the question and answer it in the negative. One is compelled to accept certain existence assertions, if one accepts the concept, or
the ‘‘framework’’ to substitute Carnap’s term for Wright’s. One is not, however, compelled to accept every concept that might be proposed. In response to criticism by Hartry Field, Wright
acknowledged this point, and in subsequent rounds of debate he and his colleague Bob Hale have attempted to formulate a general principle that would enforce acceptance of the concept involved in many
though not all cases, including the particular case relevant to Wright’s original argument, that of the concept of ''number.''[12] So far they have not come up with a general principle that has
commanded any very widespread support, and the Carnapian view would be that it is a mistake to attempt to enforce the acceptance of concepts by a priori arguments. Quine, and along with him Hilary
Putnam at one stage in the evolution of his views, urged a very different sort of reason for accepting the existence of numbers (or other abstract mathematical entities to which numbers could be
‘‘reduced’’). According to Quine, we must (alas, with the greatest reluctance!) resign ourselves (ah, that it should have come to this!) to accepting (unbidden and unwelcome!) mathematical entities,
because (most regrettably and unfortunately!) mention of them seems (would that it were not so!) to be an unavoidable requirement (how cruel a necessity!) in formulating scientific theories. What
Carnap agrees with in this argument is that the ultimate grounds for accepting the concept ‘‘number’’ is the role it plays in formulating scientific theories, together with the role scientific
theories in turn play in much of our life. Quine’s argument invites two kinds of objections. On the one hand, there are the reconstructive nominalists who question that mention of mathematical
entities really is an unavoidable necessity after all. It is the large concessions to nominalism made in the anti-nominalist argument of the ex-nominalist Quine that invite this sort of objection. On
the Carnapian view, by contrast, a ‘‘framework’’ of mathematical entities is not something to which we grudgingly resign ourselves because it is a necessary evil, but something we gratefully adopt
because it is an enormous convenience; and there is no suggestion that if mathematical entities by hook or crook or somehow could be eliminated, then they should, and hence no invitation to
nominalists to go looking for hooks and crooks.
[12] See Hale and Wright (2001), especially ''Responses to Critics.''
On the other hand, there are objections of the kind especially voiced by Charles Parsons (1980, p. 150), roughly to the effect that Quine’s argument does not do justice to the seeming obviousness of
elementary arithmetic, and makes acceptance of ‘‘two plus two is four’’ or ‘‘the number of my hands is two’’ depend on recondite and abstruse considerations about whether it is possible to formulate
general relativity without referring to tensors, or quantum mechanics without reference to linear operators. Such objections are invited by Quine’s refusal to acknowledge any ‘‘conceptual’’ truths,
even conditional ones, and his insistence that all knowledge ‘‘faces the tribunal of experience’’ as a whole. By contrast, on the Carnapian view, the seemingly obvious simple statements about the
number two just mentioned are taken to be immediate, given the relevant concept or framework, and considerations of the larger role of that framework in science are only relevant to explaining why
the framework, as well as whatever comes thus immediately with it, is classified as non-fiction rather than fiction. Moreover, to repeat, what is important about the larger role in science is merely
that it would be very inconvenient in practice not to use mathematical language, and not that detailed scrutiny of the most sophisticated theories shows it to be wholly impossible in principle to do
without such language. It is against the background of this two-sided positive account, according to which there are both a local a priori element, the rules constitutive of the concept, and also a
global pragmatic element, motivating acceptance of the concept, that Carnap attempts to account for the interminability of philosophical debate between nominalists and anti-nominalists. What is
missing, according to Carnap, when philosophers debate whether numbers exist – not whether they exist according to standard mathematics, or whether it is convenient to speak and think as if they
existed, but whether they really exist – is any framework providing rules for assessing assertions of ontological metaphysics about what really or capital-R Really exists, in the way
that the framework of number-language determines that ‘‘I have as many hands as feet’’ is a sufficient condition for ‘‘The number of my hands equals the number of my feet.’’ Or to put the matter
another way, the ontologists are treating the practical question whether to accept the framework of number-language, to which only considerations of convenience and the like are genuinely germane, as
if it were a theoretical question of a kind that can only be asked within a preexisting conceptual framework. Or to put the matter yet another way, and as briefly as possible, the debate never ends
because there is not only a lack of agreement between the two sides as to what the right answer to the question is, but also a lack of agreement between the two sides, and for that matter even among
those on
Mathematics and Bleak House
the same side, about what counts as a relevant consideration for or against one or another answer to the question.13
7
I suspect the reason Carnap’s presentation of the case failed to convince was largely that he was too much identified with the infamous ‘‘empiricist criterion of meaningfulness,’’ which certainly has
by now been consigned, if not to the rubbish bin, then at least to archives, where it may be studied by historians of philosophy, but where it no longer influences current philosophical debate.
According to this criterion – which among other things sits rather poorly with a recognition of Gödelian phenomena – the absence of agreed empirical and/or logical criteria for what counts for or
against one or another ontological hypothesis renders argumentation for these theses meaningless, a kind of nonsense poetry without the poetry. Instead of a comparison of mathematics with the work of
Charles Dickens or George Eliot, we have here a comparison of philosophy of mathematics with the work of Lewis Carroll or Edward Lear: it makes the same amount of sense, though it is less
entertaining. Carnap was notorious for the provocative claim that the issue of nominalism and realism, the ‘‘problem of universals,’’ is a ‘‘pseudo-problem.’’ This Carnapian thesis is much stronger
than, for instance, the Ludovician claim that induction suggests it is less likely that philosophy has now solved the problem of universals in a way that shows mathematics to be in error, than that
philosophy has once again failed to solve the problem. Moreover, the Carnapian thesis implies or presupposes other very large and controversial claims. Let me elaborate. One very traditional sort of
way to try to make sense of the question of the ultimate metaphysical existence of numbers would be to turn the ontological question into a theological question: did it or did it not happen, on one
of the days of creation, that God said, ‘‘Let there be numbers!’’ and there were numbers, and God saw the numbers, that they were good? According to Dummett, and according to Nietzsche – or my
13 As mentioned in the review (Burgess 2001), there is a superficial appearance of similarity between Carnap’s view and that of Mark Balaguer, especially as in Balaguer (1998, pp. 158–79), who maintains
that the question of the existence of numbers is ‘‘factually empty.’’ Really, there is a deep difference between the two positions, since Balaguer does not distinguish two questions as Carnap does.
Like someone who thought that the plain, literal meaning of ‘‘Prime numbers greater than 10^10 exist’’ was ‘‘Prime numbers greater than 10^10 are ingredients of ultimate metaphysical Reality,’’ Balaguer declines to join Carnap in answering, ‘‘Yes, of course,’’ to the question, ‘‘Do numbers exist?’’ If doubters as well as deniers of numbers are counted as nominalists, then his refusal to return an affirmative answer to the question whether numbers exist makes Balaguer a kind of nominalist.
perspective on Nietzsche – this is the only way to make sense of questions of ontological metaphysics. The Carnapian claim that ontological metaphysics is meaningless is roughly equivalent to the
conjunction of this Nietzsche–Dummett thesis, ‘‘realism makes sense only on a theistic basis,’’ with analytic atheism, the thesis that theological language is meaningless. Both these theses are
highly controversial: analytic atheism was explicitly rejected even by outspoken agnostics like Russell, and the Nietzsche–Dummett thesis is rejected by many philosophers in Australia who regard
themselves as simultaneously ‘‘realists’’ in some strong sense, and ‘‘physicalists’’ in some sense equally strong. I myself believe, like Russell, that analytic atheism is false, and suspect,
contrary to the Australians, that the Nietzsche–Dummett thesis is true. If as I believe the theological question does make sense, and if as I suspect it is the only sensible question about the
real or capital-R Real existence of numbers, then I would answer that question in the negative; but then I would equally answer in the negative the question of the Real existence of
just about anything. For as has been said elsewhere, everything we have learned about our processes of cognition points in the direction of the conclusion that even other intelligent creatures, to
say nothing of an Omniscient Creator, would or might well have patterns of language and thought very different from ours, recognizing categories of objects very different from those we recognize, or
perhaps not even having a category of ‘‘objects’’ at all, if they used a language without a category of nouns, as well they might. Since I do not wish to claim that the absence of empirical meaning
is tantamount to the absence of all meaning, where Carnap would put forward a categorical negative, ‘‘These questions are meaningless,’’ I only put forward a rhetorical question, ‘‘What are these
questions supposed to mean?’’ But I do agree with Carnap that the question of the Real existence of mathematical entities does lack empirical meaning, and while I do not think this settles the
question of nominalism, I think it does have an important bearing on the question of how much mathematics is like novels, fables, and other forms of fiction. For consider the question whether, say,
the works of Carlos Castañeda or Rigoberta Menchú are non-fiction or fiction: are they eye-witness reportage of magic or tragic occurrences, or merely novels masquerading as anthropology or autobiography? In these cases, we know very well what it would have looked like for the events in Menchú’s book to have occurred, and in this age of cinematic special effects, we can even say the same for Castañeda’s books. By contrast, as regards the question of the ultimate
metaphysical Reality of numbers, we have absolutely no idea of what difference it would make to how things look; or rather, we have a very strong suspicion that it would make no difference at all.
This is what is meant by having an empirically meaningful question in one case, and an empirically meaningless question in the other.14 I think that in view of this radical difference between
mathematics and novels, fables, or other literary genres, the slogan ‘‘mathematics is a fiction’’ is not very appropriate, and the comparison of mathematics to fiction not very apt. My conclusion is
that, whatever may remain to be said for or against nominalism, about whether we should or should not call ourselves ‘‘nominalists,’’ we should not call ourselves ‘‘fictionalists.’’
14 In other words, I mean no more by saying that the choice between two views is ‘‘empirically meaningless’’ than that the two views themselves are empirically equivalent. Thus understood, it is a
trivial truism that there is no empirically meaningful difference between any given theory T, and the fictionalist alternative T * according to which everything observable happens as if theory T were
true, though it is not. What would be highly non-trivial would be the claim that the difference between T and T * is meaningless tout court et sans phrase. But that would only follow from empirical
meaninglessness assuming a discredited empiricist criterion of meaningfulness.
Quine, analyticity, and philosophy of mathematics
‘‘FOUNDATIONS OF MATHEMATICS’’
Does mathematics require a foundation?1 The first thing that must be said about the question is that the expression ‘‘foundations of mathematics’’ is ambiguous. Let me explain. Modern mathematicians
inherited from antiquity an ideal of rigor, according to which each mathematical theorem should be deduced from previously admitted results, and ultimately from an explicit list of postulates. They also inherited a further ideal according to which the postulates should be self-evidently true. During the great creative period of early modern mathematics, there were and probably had to be many
departures from both ideals. But during the century before last, as mathematicians were driven or drawn to consider less familiar mathematical structures, from hyperbolic spaces to hypercomplex
numbers, the need for rigor was increasingly felt, and higher standards were eventually instituted. But while the ideal of rigor may be claimed to have been realized, the ideal of self-evidence was
not. Considering only the ideal of rigor, the working mathematician’s understanding of its requirements, of what is permissible in the way of modes of definition and modes of deduction of new
mathematical notions and results from old, is largely implicit. Logic, which investigates such matters, and fixes explicit canons, is a subject in which the algebraist, analyst, or geometer need
never take a formal course. Nor are mathematicians in practice much concerned with tracing back the chain of definitions and deductions beyond the recent literature in their fields to the ultimate
1 This question was the title of the Arché Institute conference, August 12–15, 2002, where a preliminary version of this paper was first delivered. I would like to thank the leadership of the institute and local organizers of the affair, and especially Crispin Wright and Fraser MacBride, for making my participation both possible and pleasurable, and my fellow participants for valuable feedback on the preliminary draft.
primitives and postulates. But there is a place, and one even may go so far as to say a need, for someone to investigate the choice of postulates, and what differences a different choice would make.
It is these kinds of investigations, when carried out by mathematical means, that in standard classifications of the branches of mathematics are called ‘‘logic and foundations.’’ Or rather, that
label is applied to the kind of investigation just mentioned plus any other research that can fruitfully apply the same methods. This, then, is the first and weaker of two senses in which the term
‘‘foundations’’ is used. There will be a need for ‘‘foundations of mathematics’’ in this first sense so long as mathematicians continue to adhere to an ideal of rigor, and – despite hype from some
popularizers about ‘‘the death of proof’’ – that would mean for the foreseeable future. But there is a second and stronger sense, in which one would speak of a ‘‘foundation’’ for mathematics only
if, in addition to the ideal of rigor, the ideal of self-evidence or something like it were realized. Though I have listed this sense second, it is presumably older, since it is only if something
like the ideal of self-evidence is realized that the metaphor implicit in the word ‘‘foundations’’ is appropriate. Postulates with something like self-evidence would provide the firm foundations of
the edifice of mathematics, and this firmness together with the firmness of the rigor by which new results are built upon old would guarantee the firmness of the higher stories. This picture is
merely the application to mathematics of the picture offered by epistemological ‘‘foundationalism,’’ according to which the edifice of knowledge is to be built up from a secure and privileged basis
by secure and privileged means. Mathematicians have learned to live with the absence of self-evident postulates, of ‘‘foundations,’’ and in this sense passively acquiesce in the proposition that
foundations (in the foundationalist sense) are not required. Many philosophers remain more troubled by the situation, and in consequence either lapse into nominalist, constructivist, or other
heresies, rejecting orthodox mathematics, or involve themselves in programs to provide the missing foundations. For a very familiar specific instance of the distinction between two senses of
‘‘foundations,’’ we may consider the result we have been taught to call Frege’s theorem, namely, the deducibility of the basic laws of arithmetic from the postulate we have been taught to call Hume’s
principle, according to which, if there are as many of these as of those, then the number of the former is equal to the number of the latter. Uncontroversially Frege’s theorem is a major contribution
to foundations of mathematics in the first and weaker sense, which is concerned with logical relationships between
postulates and theorems, without too much concern over the status of the postulates. But there has been a controversy, involving the late George Boolos and the nominalists on one side, and some of
the St Andrews school on the other, over the status of Hume’s principle, and over whether Frege’s theorem can provide a foundation for mathematics in the second and stronger sense. On one view,
Hume’s principle is analytic, Frege’s theorem does provide such a foundation for arithmetic, and the challenge is to find a way of providing a similar foundation for more of mathematics. On the other
view, Frege’s theorem does not provide a foundation, and Hume’s principle is either a synthetic truth, or an untruth. What I want to do here is to elaborate a third or intermediate position,
according to which Hume’s principle is analytic, but still does not provide a foundation for arithmetic in the sense of foundationalist epistemology. Naturally this presupposes a notion of
analyticity in which a statement may be analytic but nonetheless need not be self-evident or a logical consequence of self-evident statements, or anything of the sort. In sketching the intermediate
position I will be mainly concerned to sketch a conception of analyticity with this feature. My starting point will be, as my title suggests, the thought of the late W. V. Quine. His work, by the
way, provides another illustration of the distinction between the two senses of foundations. Quine was, in his generation, a significant contributor to ‘‘logic and foundations’’ in the first sense.
(I heard it said, by one of my fellow speakers at a memorial meeting, that asked about his standing on one occasion, he described himself as ‘‘captain of the B-team’’; and this seems quite a just
estimate.) But Quine was also famously a paradigmatic opponent of epistemological foundationalism, and the author of the best-known rival to the architectural metaphor. According to Quine, knowledge
is not a building but a web, more or less fixed at the edges by the attachment of observation sentences to sensory evidence, but underdetermined as to how we spiders should spin the middle portions,
including mathematics, which lies somewhere very near the center. My reexamination of Quine will be a sequel to an earlier reexamination of Carnap, entitled ‘‘Mathematics and Bleak House’’ (Burgess,
2004b). I even thought of entitling the present paper ‘‘Mathematics and Bleak House II,’’ though in the end I was deterred from doing so when I found myself having only one occasion to refer to the
Dickens novel, mentioning the police investigator in it, one Bucket, who was for a generation2 the
2 Until the appearance of Sherlock Holmes.
canonical fictional detective in the English-speaking world. The concern of my earlier paper with the Dickensian title was to re-evaluate Carnap’s classic ‘‘Empiricism, semantics, and ontology’’
(Carnap 1950). Here I wish to consider Quine’s reply, ‘‘Carnap’s views on ontology’’ (Quine 1951a) and more particularly the famous paper of a few months earlier on which that reply was based, ‘‘Two
dogmas of empiricism’’ (Quine, 1951b). To put the matter very roughly, Quine argued in replying to Carnap that the position of Carnap presupposed the analytic-synthetic distinction, the first of the
two dogmas Quine took himself to have refuted. Like some other recent commentators,3 I dissent from the common view that Quine clearly vanquished Carnap in their exchange. To put matters very roughly
again, my claim will be that Quine almost needs to recognize a notion of analyticity – and also that he can recognize such a notion, without betraying his core philosophical principles.
2
Before I examine the differences between Quine and Carnap, I wish to consider what divides them both from the nominalists. And before I consider what divides them from the nominalists, I wish to
consider what Quine, Carnap, and many nominalists have in common that divides them from anyone who would really deserve to be called a ‘‘Platonist’’ in anything like a traditional sense. I will begin
with a sympathetic description of – I do not pretend it is anything like an argument for – a Quinian world-view. It was the ambition of Galileo, Kepler, and other worthies of their era, by close
reading of the book of nature, to discover the very intentions of its great Author in writing it; or, to vary the metaphor, to produce a plan of the universe faithfully reproducing the blueprint used
by its great Architect in constructing it. For Quine, this is a hopeless ambition: no science produced by human beings can provide a God’s-eye view of the universe, and this should be evident almost
as soon as one begins to view the human knower as an object of scientific study. When human knowers are so viewed, human knowledge, including especially scientific theorizing, is seen as the product
of certain organisms3
3 Let me in particular cite two useful unpublished works, the Princeton senior thesis of Tom Dixon ‘‘Separating Semantics from Empiricism and Ontology,’’ and the doctoral dissertation of Inga Nayding,
‘‘Positing Existence.’’ Both see less difference between Quine and Carnap than perhaps the two saw between themselves, and both attempt, each in his or her own way, to narrow the difference still
further. I derived encouragement from their example, even though my own way of attempting to narrow the difference is not quite theirs.
in a certain environment, seeking to fulfill certain needs. Beavers build dams; people first construct hydrological theories, and only then construct dams (unless, having also constructed ecological
theories, they decide not to construct the dams after all). Scientific theories make intelligible patterns in the environment as we experience it, and in favorable cases make it possible to influence
(or warn us not to try to influence) the course of that experience. But what science can make intelligible in our experience, and how it can make it so, inevitably depends on the nature of our
intellects, and what kinds of experience we are capable of. Thus scientific theory, product of a certain organism in a certain environment, will inevitably be the way it is in part because that
environment, the universe, is the way it is, and in part because the organisms, ourselves, are the way we are: there is no reason to suppose intelligent extraterrestrials, with very different kinds
of sensory experience and very different intellects, would produce the same scientific theories we have. For that matter, there is not much reason to believe that even if we ourselves had it to do
all over again we would come up with the very same theories we have. Thus scientific theories are the way they are partly because the universe is the way it is, partly because we are the way we are,
and partly because of a third factor: partly because the interaction between the universe and ourselves has gone the way it has. But if our theories as they are thus differ from what they might
equally well have been had history gone slightly differently, and differ even more from what the theories of alien creatures might be expected to be like, then a fortiori they must differ greatly
from the ‘‘theories’’ of an omniscient Creator, and the ambition of gaining a God’s-eye view of the universe must be unrealizable. Such is the Quinean picture at the highest level of generality. To
descend to a level of slightly greater specificity, one feature of the way our intellects work is that language is crucial to scientific thought, and our language exhibits a comparatively limited
range of grammatical forms. In particular, our scientific theories run very much to sentences of the noun–verb, subject–predicate, object–property type. As we employ sentences of this type over and
over in different contexts and with different functions within scientific theorizing, our scientific theories will be positing over and over again different kinds of objects with different kinds of
properties. To be a little more specific still, all this applies to the mathematical apparatus deployed in our scientific theories. Starting with the use of numerals as adjectives, we have found that to
bring out certain patterns a shift to using them as nouns is required, and so natural numbers have been
posited. After long speaking only of relations of proportions among geometric magnitudes, we have found it immensely convenient, if not practically indispensable, to shift to speaking of ratios of
magnitudes as objects, and as objects that can be added and multiplied, and so real numbers have been posited. Later we have found useful a transition from speaking of real numbers (plural) with
certain properties to speaking of the set (singular) of such real numbers, thus positing sets as single objects constituted by pluralities of objects. We thus end up speaking of different kinds of
numbers, sets, functions, and so forth in sentences of the same grammatical form as ones about medium-sized dry goods, even though these sentences occupy very different positions and roles in the
body of our knowledge. ‘‘Septimus is a prig’’ and ‘‘Seventeen is a prime,’’ for instance, have similar grammatical or logical forms, but very different epistemological positions. The best way to
verify the former would be to locate Septimus in space and time and interact with him causally: he may sit near us and speak, sending soundwaves to our ears, by which we detect his priggishness. The
number seventeen does not do anything analogous. It does not sit in Cantor’s paradise and shine N-rays on our pineal glands, by which we detect its primality. That arithmetical property is checked by
quite different means. This feature of our actual scientific theories is perhaps one they need not have had. Whether or not we could have managed to do without any nonspatial, non-temporal, causally
inactive, causally impassive mathematical objects at all – the partial success of programs of nominalistic reconstruction of mathematics suggests that in principle this might have been possible,
though examination of the details suggests that in practice it might not have been feasible – there is no strong reason to suppose that if we had it to do all over again we would end up with the very
same kinds of mathematical objects. As for the scientific theories of space aliens, not only is there no strong reason to suppose they would involve distinctive mathematical objects, but what is
more, there is no strong reason to suppose they would involve objects at all, since there is no strong reason to suppose space aliens would have a language that involved nouns. The Chomskians
maintain that universal grammar is species specific, and in combinatory and predicate-functor logics we get at least a vague and dim image of what a language with a grammar radically unlike ours
might be like. And as for the ‘‘theories’’ of a Deity, ‘‘we see but through a glass, darkly.’’ Thus Quine – and in this Carnap would surely join him – can concede to the nominalist that the
(particular kinds of) mathematical objects that figure in our current scientific theories are there largely because of the way
we are (and the way our interaction with the universe has gone) rather than because of the way the universe is. The positing of numbers may be extremely convenient and in practice even indispensably
necessary for us, but theories that involve such posits cannot be claimed to give a God’s-eye view of the universe, to reflect the ultimate nature of metaphysical reality, or anything of the sort.
3
‘‘FICTIONALISM’’
What Quine – and again Carnap would surely be with him – will not concede to the nominalist is that any of this gives us any reason at all to reject current science and mathematics. It gives us no
reason why we should perform, on entering the philosophy seminar room, a ceremony of abjuration, recanting what we habitually say outside that room, when engaged in scientific work or in daily life.
No theory of ours can give a God’s-eye view of the universe: ‘‘the trail of the human serpent’’ will be over all. But the fact that any particular theory fails to deliver a reflection of the ultimate
nature of metaphysical reality, uncontaminated by any contribution from us, is no special grounds for complaint against that particular theory. If there are grounds for complaint, it is against the
human condition. And thus Quine is willing to speak inside the philosophy room in the same terms in which he speaks outside it, neither taking back, nor explaining away, nor apologizing for, the use
of number-laden or set-laden language. Rather he is willing to reiterate in his capacity as philosopher the theorems of mathematics and theories of science. To apply the words used by the great
Scottish philosopher in a somewhat different context, ‘‘Nothing else can be appealed to in the field, or in the senate. Nothing else ought ever to be heard of in the school, or in the closet.’’4 Here
we have direct opposition to the most common kind of nominalist today. They do take back in their philosophical moments what they assert in their scientific moments. And for most of them, that is
about all they do by way of expression of their nominalist allegiances: few nominalists any more are involved in active programs for reconstructing science on a number- and set-free basis. This kind
of inactive nominalism is generally called ‘‘fictionalism’’5 and in my earlier paper on Carnap it was the contrast between his
4 David Hume, ‘‘On the practical consequences of natural religion, or, Of a particular providence and a future State,’’ in Enquiry Concerning the Human Understanding, Part XI.
5 The label is also used by some activists like Hartry Field, whose views I am leaving to one side in the present discussion.
views and fictionalism that was my topic. Much that I said there about Carnapian anti-fictionalism applies also to Quinian anti-fictionalism, and I will recapitulate briefly. The first thing to be
said against ‘‘fictionalism’’ is that the label is remarkably ill chosen. To say, for instance, that Victorians regarded Bleak House as fiction, is among other things to say that when Victorians were
in need of the services of a good detective, they did not waste any time looking for Bucket. They did not use the contents of the novel as a guide to practice.6 The label ‘‘fictionalist’’ for the
dominant contemporary variety of nominalism is ill chosen because those nominalists who apply the label to themselves do not in fact regard current mathematically formulated ordinary and scientific
lore and theory in the same way they regard the productions of Dickens or other novelists. They do use such lore and such theory as a guide to practice. The analogy ‘‘fictionalists’’ cite ought to
be, not with works of imaginative literature, but with scientific theories that are regarded as no more than useful approximations to more complex but truer theories, known or remaining to be
developed. An architect, for instance, designing a modest private residence, will obviously have to take into account the topography of the site, the fact that some points on the site are at a higher
elevation above sea level than others, but generally will not take into account the curvature of the earth, or the fact that points at the same elevation do not lie on a perfect plane. To this
extent, the architect is using as a guide to practice the primeval theory that the earth is flat, though no architect today believes any such thing. Here is a case where a theory is rejected, a
theory is not believed, and yet that theory is not regarded as fiction, as a work of creative writing, but rather is used as a guide to practice, is employed for practical purposes. The analogy the
mislabeled ‘‘fictionalists’’ ought to be citing is with such cases, with uses of flat-earth geography when we know the earth is round, or of Newtonian gravitational theory when we know it is only an
approximation to Einsteinian general relativity, or indeed of this last when we know it needs a quantum correction, though we do not as yet have a developed theory of quantum gravity. But the
citation of such examples soon makes evident a serious disanalogy between the attitude of the nominalist and that of the scientist or engineer who makes use of a theory known to be only a simplified
approximation.
6 At least not in the way they would have used it as a guide had they thought it non-fiction: for all I know, some readers may have been stirred by it to work for reforms in the Court of Chancery.
This can be brought out by considering an architectural firm suddenly given a commission for a much larger project than the private homes that are all they had built before. They would need to take
into account the fact they had been ignoring, that the earth is round, and recalculate whether it is safe to ignore its curvature on the new and larger scale on which they would be working. If the
project were as vast as the Very Large Array,7 clearly they could not get away with treating the earth’s surface as flat, the way they can in building a house on a half-acre lot. For projects of
intermediate size, they would have to think at least for a moment about the question, or ask some consulting engineer whether flat-earth geography can still be trusted. And the engineer would base
the calculation on round-earth geography: just how far flat-earth theory can be trusted for architectural purposes is something that is estimated in terms of round-earth theory. The situation is
similar in all cases of technical and everyday applications of theories that are not really accepted in the sense of being not really believed. The scientific and technical application of an
approximation is always framed by some kind of estimate of how good the approximation is, obtained from some more accurate and credible theory, whether fully developed or still a work in progress.
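The engineer’s estimate can be made concrete. The sketch below is illustrative only (the function name and the chosen spans are my own, not from the text): treating the ground as flat over a chord of length L on a sphere of radius R incurs a maximum deviation, the sagitta, of roughly L²/8R, and it is round-earth geometry that tells us when that error is safely negligible.

```python
import math

R_EARTH_M = 6_371_000  # mean Earth radius in meters (round-earth theory)

def flat_earth_error(span_m, radius_m=R_EARTH_M):
    """Sagitta: how far the curved surface drops below a flat chord
    of the given span, i.e. the error incurred by treating the ground
    as flat over that span."""
    half = span_m / 2
    return radius_m - math.sqrt(radius_m**2 - half**2)

# A house on a half-acre lot (~40 m across): error well under a millimeter.
print(f"{flat_earth_error(40):.6f} m")
# A project on the scale of the Very Large Array (~36 km): error of roughly 25 m.
print(f"{flat_earth_error(36_000):.1f} m")
```

The point of the calculation is exactly the one made in the text: the flat-earth approximation is trusted only because the more credible round-earth theory certifies, scale by scale, how far it can be trusted.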
The situation is quite dissimilar in the case of the nominalists’ attitude towards mathematics, pure and applied. Here there is no question of the untruth of ordinary and scientific theories ever
being relevant to practical questions. Nor is there any nominalistic alternative theory present in the background, or being sought. Rather, ‘‘fictionalism’’ became the dominant form of nominalism in
the 1990s largely as a result of disappointment with the search for nominalist alternatives to standard mathematical formulations of scientific theories in the 1980s. The dissimilarity and disanalogy
I have been describing marks nominalism as a non-, un-, or anti-scientific, and distinctively philosophical concern. For a Quinean, this feature – a willingness to say that some formulation is
acceptable in everyday and all scientific contexts, however theoretical, but still not acceptable in the philosophy seminar room – would mark nominalism of the contemporary kind as involving a
species of old-fashioned alienated epistemology as opposed to the ‘‘naturalized’’ epistemology Quine promoted. Now, though I cannot discuss Carnap at length here, I believe that the position he held
by the 1950s was not fundamentally different in this respect
[7] An arrangement of radio dish-antennas spread out over twenty miles or more, used for radio astronomy.
Quine, analyticity, and philosophy of mathematics
from that I have just associated with the name of Quine. Thus, just as Quine and Carnap and ‘‘fictionalists’’ are all agreed that our current science, partly owing to its being mathematically
formulated, does not present a God’s-eye view of the universe, so also Quine and Carnap agree that the nominalist suggestion that current science should therefore be regarded as only a ‘‘useful
fiction’’ is to be rejected. To put the matter another way, the ‘‘fictionalist’’ nominalist considers that, even when we have answered in the affirmative whether an apparatus of numbers and/or other
mathematical objects does or will figure in the best physical theory, there remains a further question, whether numbers really exist, which they answer in the negative. Quine and Carnap agree in
doubting that there is any intelligible question of this form.

4
Thus the issue between Quine and Carnap on one side, and ‘‘fictionalist’’ nominalism on the other side, is over the intelligibility or appropriateness of the question, ‘‘Science and common sense
aside, are there really numbers?’’ The issue between Quine on one side and Carnap on the other can also be represented as a difference over whether or not a certain question arises, or more
precisely, over whether such a question as, say, ‘‘If I have as many fingers as toes, is the number of my fingers equal to the number of my toes?’’ arises in more than one sense. Carnap famously
thought there were two senses to such a question, internal and external to the ‘‘linguistic framework’’ of number – or, as I will say, to the ‘‘concept’’ of number – where Quine took there to be only
one question. Taking the concept of number for granted – as the Carnapian is justified in doing, since the questioner has used the term ‘‘number’’ in framing the question – the question admits an
immediate positive answer: that if one has as many fingers as toes, then the number of one’s fingers is the same as the number of one’s toes, is something that ‘‘comes with’’ or ‘‘is part of ’’ the
concept, and in this sense is analytic. This is the first, ‘‘internal’’ way of taking the question. But there is another, ‘‘external’’ way of taking it. Perhaps in asking the question what the
questioner really means to do is to raise the issue whether we should accept the concept of number. The ‘‘material mode’’ formulation, which has the appearance of presupposing the concept of number,
may be misleading in this respect. Perhaps what is really being questioned is, put in the less misleading ‘‘formal mode,’’ simply this: is the concept of ‘‘number’’ to be accepted?
This question is certainly one the Carnapian is willing to discuss, and the answer to this external question will take longer to give than did the answer to the internal question. What the Carnapian
will insist is that in discussing why we accept the concept of ‘‘number,’’ questions about the correspondence of that concept to a feature of ultimate metaphysical reality are out of order. The
considerations the Carnapian advances will rather be broadly speaking ‘‘pragmatic.’’ Thus, for Carnap, the immediate affirmative answer is justified by appeal to linguistic considerations (by the
consideration that Hume’s principle or something like it is analytic); by contrast, the further question why to accept that concept takes longer to answer, and the ultimate affirmative answer is
justified largely by appeal to pragmatic considerations (by the consideration of the utility of mathematical formulations in scientific theory). Quine, rejecting the analytic–synthetic distinction,
cannot recognize that there are two questions here. I have said that I belong to the minority who are not so sure Quine scored a victory in his debate with Carnap; but what I want now to say is that
if he did score a victory, it was a pyrrhic one. For in rejecting the distinction between the two ways of taking the question, Quine seems to deprive himself of any justification for giving it an
immediate affirmative answer. For Quine the answer is ultimately affirmative, to be sure, but his right to give this answer seems to depend on the whole long story, involving pragmatic
considerations, that has to be told to answer Carnap’s second question. And this lays Quine open to the objection raised especially by Charles Parsons, that his account of matters cannot do justice
to the felt obviousness of elementary mathematics. (He acknowledges the existence of the feeling, but has no apparatus with which to explain it.) I myself consider this type of objection so serious
that a Quinian ought to want to be able to endorse some notion of analyticity that would allow an immediate affirmative answer, ‘‘Yes, that’s analytic,’’ or ‘‘Yes, that’s just part of the concept.’’
It may be an exaggeration to say Quine needs a notion of analyticity if his position is to be at all plausible, since other means by which an immediate affirmative answer could be justified have not
been explored; but certainly the most obvious means would be to accept some notion of analyticity. (Note that ‘‘fictionalists’’ have no trouble returning, with their fingers crossed, an immediate
affirmative answer to the question, which they will retract upon entering the philosophy seminar room. Then they will claim that all they meant was, ‘‘Yes, that’s part of the usual fairy tale,’’ or
‘‘Yes, that’s how the traditional legend goes.’’)
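For reference, Hume’s principle, invoked throughout this discussion, is standardly formulated as a second-order biconditional (the notation below is the usual one in the neo-Fregean literature, supplied here rather than drawn from the text):

```latex
\[
\#F = \#G \;\longleftrightarrow\; F \approx G
\]
```

where $\#F$ is the number of $F$s and $F \approx G$ (equinumerosity) abbreviates the second-order claim that there is a one-to-one correspondence between the $F$s and the $G$s. The fingers-and-toes example is just the left-to-right direction read off from an instance: given a correspondence between fingers and toes, the numbers are identical.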
5
Quine, I have said, almost needs to accept a notion of analyticity. The question is, can he? To answer this question we need to look at Quine’s arguments against the analytic–synthetic distinction in
‘‘Two dogmas.’’ And what are these two dogmas? The analytic–synthetic distinction is one. The other is the kind of logical empiricist theory of meaning according to which for each meaningful
statement there must be a more or less definite range of verifying and falsifying (or, at least, of confirming and disconfirming) potential observations. The latter dogma implies the former or, at
least, a theory of meaning of the kind indicated gives one way of making sense of an analytic–synthetic distinction: the analytic is the limiting or degenerate case in which every potential
observation counts in favor of the statement, and none against it. So for Quine, rejection of the second doctrine is a corollary to rejection of the first. But Quine’s writings contain also other,
more independent, arguments against the second dogma, based on considerations of underdetermination of theory by evidence, the lack of crucial experiments, and the like. And whether on account of
these observations or others, by 1950 many veteran logical empiricists were in the process of giving up their older theories of meaning. Difficulties with the theory had become notorious by the time
of the Carnap–Quine exchange.[8] It is therefore of some interest to consider what kind of notion of analyticity might survive rejection of the old logical empiricist theory of meaning. Perhaps the
most obvious fall-back theory of meaning would be of the type philosophers of science have called ‘‘cluster concept’’ theories, of which Carnap’s later ‘‘Ramsey sentence’’ view can be construed as a
variant version. Let me, avoiding detailed comparisons of different views of the same general type, merely describe the kind of theory I have in mind at the highest level of generality. The
background assumption is that there must be something surveyable and graspable associated with words to guide our usage of them. On almost any conception, and certainly on Quine’s, we are supposed to
be able to grasp the logical form of a statement, and to grasp the basic laws of logic; for that is how we grasp the logical implications among statements.[9] On almost
[8] ‘‘Problems and changes in the empiricist criterion of meaning’’ (Hempel 1950) immediately followed Carnap (1950), ‘‘Empiricism, semantics, and ontology,’’ in the same issue of the same journal.
[9] I intend ‘‘laws of logic’’ to be neutral here as between what would correspond in a formal system to axioms and what would correspond to rules. That there would have to be at least one rule is the lesson of Quine (1936).
any conception, and again on Quine’s, we are supposed to be able to grasp the connection between observational terms and predicates and observable objects and properties. What remains to be
considered are theoretical terms and predicates that are non-logical but also non-observational. Here the obvious candidate for a surveyable, graspable something associated with an item of vocabulary
would be some core theory, some basic laws. For a theoretical term generally is learnt along with a batch of related theoretical terms, and along with a batch of basic laws involving the given term
and those other terms. The surveyable, graspable, body of basic laws does in at least one obvious sense guide subsequent usage of the term, and if one calls this surveyable, graspable, usage-guiding
body a meaning, then it can be said that the basic laws that are members of that body are part of the meaning of the theoretical term, or part of the concept expressed by that term, and in that sense
analytic. Now already this fall-back notion of analyticity differs appreciably from more traditional notions of analyticity, and cannot do all the same work. Notably, to call something ‘‘analytic’’
in this sense is not at all to call it unproblematic. What is analytic must be accepted if the concept is accepted, but then perhaps the concept should not be accepted![10] The basic laws may, after
all, be internally inconsistent, either obviously so, as with Prior’s ‘‘tonk,’’ or less obviously so, as with Frege’s infamous law V.[11] Or the basic laws may have implications clashing with the
results of observation. Or the term and concept may simply be otiose, creating clutter and doing no useful work. For a Quinian, the fact that the analytic need not be unproblematic would not be an
unwelcome feature. For certainly Quine wants to be able to say that there is a question whether one should accept a given concept or not, though indeed such questions are to be resolved by pragmatic
considerations, and not on the basis of whether or not the concept corresponds to an ingredient of ultimate metaphysical reality. To concede that, say, Hume’s principle is analytic in the indicated
sense would permit Quine to join Carnap in giving an immediate affirmative answer to the question whether the number of my fingers is the same as the number of my toes,
[10] An alternative closer to the traditional notion would be to count as part of the meaning of the term only such laws as have the form of an equation or a biconditional and the status of an abbreviatory definition. But as Quine correctly points out, the same equation or biconditional may have the status of a definition in one exposition and lack it in another.
[11] In such a case, where there is a word, a more or less definite list of basic laws, and an inconsistency in that list, should we say that there is a concept, but it is an inconsistent and therefore unacceptable one, or that there simply is no concept? I follow the former usage, but have found some other philosophers very strongly attached to the latter. I consider this a purely verbal issue in the sense of ‘‘purely verbal issue’’ to be discussed below.
while still doing justice to the thought that somehow its pragmatic role in scientific theorizing is relevant to the question why we regard Riemann’s On the Hypotheses Which Lie at the Foundations of
Geometry as a contribution to science, and Dickens’ Bleak House (finished just the year before) as a contribution to art. This rather untraditional notion of analyticity would in fact seem to be just
what Quine should want, if he is to be able both to remain faithful to his core philosophical principle that the ultimate grounds for regarding mathematics or anything else as non-fiction rather than
fiction are pragmatic, and also to explain the felt obviousness of elementary arithmetic. And yet Quine will accept neither this nor any other analytic–synthetic distinction. And why not? Why does
Quine reject the kind of theory of meaning I have just been very abstractly and very vaguely describing? Well, he does not discuss that particular kind of theory specifically, but he thinks he has
reasons for rejecting any theory of meaning at all. He allows that if the notion of synonymy, or sameness of meaning, could be made sense of, then meanings could be admitted, being identifiable, if
all else fails, with equivalence classes of expressions under the equivalence relation of synonymy. He allows that synonymy can be made sense of in terms of analyticity, and vice versa. But he
famously denies that either of the two can be made sense of in a scientifically acceptable way. But as the earliest replies to ‘‘Two dogmas’’ recognized, Quine in making his case is taking for
granted some not-uncontroversial assumptions about what is scientifically acceptable in a linguistic or other psychological inquiry. As Grice and Strawson (1956) put it, he is assuming that only some
kind of ‘‘combinatorial’’ or ‘‘behavioral’’ account could make a linguistic or psychological posit respectable. Quine’s general complaint about analyticity, as applied to the particular kind of
picture of analyticity vaguely sketched above, amounts to just this: that there is no clear combinatorial or behavioral criterion for distinguishing which laws count as ‘‘basic’’ and therefore
‘‘constitutive of the meaning’’ of a term, and which count as ‘‘non-basic’’ and ‘‘additional to the meaning’’ of the term. The assumption that there would have to be such a criterion before the
notion could be respectable and acceptable is just an instance of the same behavioristic assumptions that notoriously led Quine to write that ‘‘different persons growing up in the same language are
like different bushes trimmed and trained to take the shape of identical elephants. The anatomical details of twigs and branches will fulfill the elephantine form differently from bush to bush, but
the overall outward results are alike’’ (Quine 1960, p. 8). The canonical objection to this assumption is given in
Chomsky’s devastating review of Skinner’s Verbal Behavior (Chomsky, 1959): children’s language grows to resemble that of their parents with strikingly little input. The resemblance between the two
bushes cannot be simply the result of their being trimmed the same way, because the gardeners have done very little trimming. The conclusion is that if one is to get anywhere thinking about language,
one is going to have to learn to think outside the Skinner box. The brain is not an organ for cooling the blood, and one is not going to get anywhere trying to understand the complex relations
between sensory input and behavioral output simply by seeking correlations between the two, treating everything going on in-between in the brain and elsewhere as a black box. Nor can one wait until
neuroscience finds the relevant physical structures inside the skull before bringing them into psychological theorizing. Without some theorizing in advance, one would not even know what to look for
inside the skull, any more than, without Mendel’s positing ‘‘factors’’ and formulating laws of heredity in terms of these, would one have known what to look for in seeking the physical basis of
heredity. Theoretical posits, including meanings of a kind that bring with them a distinction between analytic and synthetic, must be allowed even if they are not directly manifested in observable
behavior – provided that they play a useful role in explaining the phenomena of language and thought. This, no doubt, most philosophers today will grant, and Quine’s failure to grant it dates his
classic paper more than any other feature. And yet, even granting that behaviorism of any kind, logical or substantial or methodological, is to be rejected, and that non-behaviorist programs positing
meanings are to be not just tolerated but even encouraged, there is still this much to be said for Quine’s skepticism about meanings: no stable consensus in favor of any one such program has as yet
emerged among either linguists or philosophers. There is nothing that could be called a body of accepted scientific conclusions about meaning or analyticity that workers in other areas, such as
philosophy of mathematics, can draw upon and apply to their concerns. And this being so, the question retains some interest whether a notion of analyticity can be developed without introducing
unobservable theoretical posits, and without stepping outside the boundaries within which Quine confined himself.

6
Quine’s worry about analyticity, even about the notion of analyticity sketched earlier that would seem to give him just what he should want
and no more, is that there is no clear, principled way to draw the boundary between laws that are so ‘‘basic’’ to a term that inquirers who differ over them would be correctly described as
‘‘attaching a different meaning or concept to the term’’ or ‘‘speaking of different things,’’ and laws that are not that ‘‘basic,’’ so that inquirers who differ over them would be correctly described
as ‘‘attaching the same meaning or concept to the term’’ or ‘‘speaking of the same things but saying different things about them.’’ Towards indicating a way to quiet this worry, let me briefly
examine some cases of apparent disagreement. I will consider cases of disagreement in logic, though examples could also be drawn from theoretical disagreements in empirical science. At one extreme,
let us consider an Australian logician who tells us that, unlike so many of his or her compatriots who merely claim that there are some statements of the form ‘‘P and not P’’ that are true, he or she
maintains that all such statements are true. This sounds very radical. But suppose that in further conversation we find him or her frequently arguing from P or from Q to ‘‘P and Q,’’ but never to ‘‘P
or Q,’’ and from ‘‘P or Q,’’ but never from ‘‘P and Q’’ to P or to Q. This is a case where one will feel inclined to say that the radical wannabes are simply attaching a different concept to the same
logical particles, meaning ‘‘and’’ by ‘‘or’’ and vice versa. And it is a case where a change of terminology would be helpful. Indeed it would actually put an end to the appearance of disagreement
between Bruce or Sheila and us. That a terminological switch would thus terminate debate is the mark of a purely verbal dispute. At the other extreme, consider two logicians Y and X who both accept
all the simple laws of classical logic in textbooks, but who disagree over whether in a certain complicated case certain premises do or do not imply a certain conclusion. Y claims he has found a
deduction showing they do. X claims she has found a counter-model showing they do not. This is a case where one will not feel inclined to say that the two are attaching slightly and subtly different
meanings or concepts to the same words. That kind of suggestion would, in fact, sound like a bad joke. The obvious way for the two to settle their differences, given how much they have in common,
would be for each to go carefully over both his or her own and the other’s work, looking for a flaw in the deduction or in the model, or if necessary to bring in a third party who shares their
classical orientation as referee. For an intermediate case, consider a group of Dutch logicians who reject, if not all, then very many, instances of ‘‘P or not P,’’ if not in the sense of denying
them, then in the sense of refusing to affirm them. We may find that they persistently argue in accordance with the canons of Brouwer’s or Heyting’s intuitionistic logic rather than of Frege’s or
Russell’s or Hilbert’s classical logic. In this case, sophisticated work by Gödel and others shows that for substantial parts of discourse, our way of speaking can be translated into something
acceptable to them, and vice versa.[12] But there are definite limits to how far one can go, and there can be no question of translations putting a complete end to the dispute, which is therefore not
purely verbal. Nonetheless, one can see that, as a practical, utilitarian matter, it would be helpful for the sides to distinguish their ‘‘or’’s. And in fact this is commonly done, the two ‘‘or’’s
being called ‘‘classical disjunction’’ and ‘‘intuitionistic disjunction,’’ and similarly for negation and so forth. That there is something in common is indicated by ‘‘disjunction’’ appearing in both
labels, but that there are significant differences is indicated by the different qualifying adjectives. The terminological distinction at least discourages partisans of either side from engaging in
question-begging argument – ‘‘we must have P or not P, else we would have not P and not not P, which is a contradiction’’ – and tends to deflect controversy in the direction of meta-level
discussions, which may not lead to their being very quickly settled,[13] but will at least help clarify what is at stake. Logicians less wary of ‘‘meanings’’ than Quine seem spontaneously to say about
this case that the meaning of ‘‘or’’ as used by Brouwer is different from the meaning of ‘‘or’’ as used by Hilbert. And as a matter of fact, Quine himself says that when the deviant logician tries to
deny a classical doctrine, ‘‘he only changes the subject.’’ The appearance of this assertion in Quine’s Philosophy of Logic,[14] however, startled readers of his earlier works, since it seems to go
clean against his whole official doctrine of repudiating meanings. What I would now like to propose is that Quinians can, without betraying the overarching Quinian principles, incorporate a notion of
meaning that would make what Quine says about the deviationist not a startling lapse, but exactly right. My proposal is that the law should be regarded as ‘‘basic,’’ as ‘‘part of the meaning or
concept attached to the term,’’ when in case of disagreement
[12] The allusion is to the double negation interpretation, by which classical ‘‘either / or’’ becomes intuitionistic ‘‘not neither / nor,’’ and to the modal interpretations, by which intuitionistic ‘‘either / or’’ becomes classical ‘‘it is constructively provable that either / or.’’
[13] Witness the immense literature spawned by Michael Dummett’s neo-intuitionism, for instance.
[14] Quine (1970, p. 81). Two pages later it says that ‘‘he may not be wrong in doing so,’’ meaning that the reasons or motives for deviations from classical logic must be examined, though Quine in fact ultimately rejects them on pragmatic grounds. Quine’s position in these passages seems not far from conceding that the intuitionistically unacceptable classical laws, say, may be called ‘‘analytic’’ provided that term is not taken to imply ‘‘unproblematic’’ or ‘‘incorrigible.’’ This contrasts with his official view in Chapter 7 of the same work.
over the law, it would be helpful for the minority or perhaps even both sides to stop using the term or at least to attach some distinguishing modifier to it. Such basic statements would then count
as analytic.[15] This proposal makes the notion of analyticity vague, a matter of degree, and relative to interests and purposes: just as vague, just as much a matter of degree, and just as relative to
interests and purposes, as ‘‘helpful.’’ But the notion, if vague, and a matter of degree, and relative, is also pragmatic, and certainly involves no positing of unobservable psycholinguistic
entities, and for these reasons seems within the bounds of what a Quinian could accept. There is no denying that the utility of the notion is limited by its vagueness, and yet I think there are some
non-trivial cases that are comparatively if not absolutely clear. The intuitionist case just discussed is one of them, and I think that the case of greatest interest in the present context, Hume’s
principle, is another. That is to say, I think Hume’s principle can be called analytic in the sense that it would be helpful if ‘‘fictionalists’’ would stop saying things like ‘‘I grant that if
numbers exist then Hume’s principle is true of them, but I don’t grant that numbers exist,’’ and instead just abandoned (inside the philosophy seminar room) the use of the term ‘‘number’’ (with the
usual exception of use in negative existentials and in indirect discourse), and said instead, ‘‘I don’t grant that use of ‘number’ is accepted, though I grant that if it is accepted, it should be
used in accordance with Hume’s principle.’’ What is the difference here? Well, the first formulation tends to turn discussion in the direction of the question, ‘‘Do numbers exist?’’ And this is a
question that cannot usefully be debated between anti-nominalists and nominalists, since there is simply no agreement at all between them as to what would constitute a relevant consideration in favor
of or against the statement that numbers exist. (It was in connection with this point that in my earlier paper I alluded to the interminable lawsuit Jarndyce and Jarndyce in Dickens.) By contrast,
much as in the intuitionist case, the second formulation tends to turn discussion in the direction of issues about what makes it appropriate or inappropriate to accept a given concept (in a
philosophical as opposed to a scientific context). Though, again as in the intuitionist case, there is no guarantee that thus turning from the object level to the metalevel of discourse will result
in convergence of opinion, an airing of differences at the meta-level can at least somewhat clarify why there seems to be
[15] As would their logical consequences, at least in contexts where, in contrast to the examples above, there is no disagreement over logic.
so little chance of achieving agreement at the object level, and debate over the criteria of acceptability of concepts does not seem as wholly futile as does debate at the object level, where it
seems impossible for either side to do more than argue in a (vicious or virtuous) circle. I will not press the point, however, but will leave it to the reader to ponder. What I want to do instead,
before closing, is to consider one all-too-obvious complaint about the notion of analyticity proposed. The proposed notion of ‘‘analyticity’’ seems, in its relativity and vagueness, as well as in its
failure to imply unproblematicity, so different from traditional ones as to make it unhelpful to use the traditional term for it, or at least unhelpful to use that term without some distinguishing
modifier. Hence by my own principles I ought at least to add a qualifying adjective. Let me therefore do so, and close by commending to Quinians and non-Quinians alike a notion that may be called
that of the pragmatic analytic.
Being explained away
When I first began to take an interest in the debate over nominalism in philosophy of mathematics, some twenty-odd years ago, the issue had already been under discussion for about a half-century. The
terms of the debate had been set: W. V. Quine and others had given ‘‘abstract’’ and ‘‘nominalism’’ and ‘‘ontology’’ and ‘‘Platonism’’ their modern meanings. Nelson Goodman had launched the project of
nominalistic reconstruction of science, or of the mathematics used in science, in which Quine for a time had joined him before turning against him. William Alston and Rudolf Carnap and Michael
Dummett had raised doubts about what the point of Goodman’s exercise could be; and though they had unfortunately been largely ignored, Quine’s contention that the exercise cannot be successfully
completed had gained wide publicity as the so-called indispensability argument against nominalism. By contrast, two subtle discussions of Paul Benacerraf had been appropriated by nominalists and
turned into the so-called multiple reductions and epistemological arguments for nominalism. While such arguments, if sound, would suffice to establish the nominalist position even if Quine were right
that mathematical entities cannot be eliminated from science, nonetheless a number of nominalists were just then setting out to prove Quine wrong. Reviving Goodman’s project, but by allowing
themselves means Goodman had not been willing to allow himself, they hoped to succeed where Goodman had failed: they hoped to find a way of interpreting standardly formulated scientific theories,
which at least appear to imply or presuppose the existence of such things as numbers and functions and sets, in alternative theories that would not even appear to do so. Now a lot of the work of
logicians since the time of Kurt Gödel has consisted in finding interpretations of one theory in another of a superficially quite different appearance. So an experienced logician should be in
a good position to give advice as to how what the nominalists were trying to do should be done. When I entered the field, I attempted just this: I undertook to tidy up the ongoing work of nominalists
of the period, by indicating the optimal method of interpreting away numbers and sets in favor of points and regions of space-time, or interpreting away claims about the actual existence of abstract
numbers into claims about the possible existence of concrete numerals. But while I was thus not impressed by claims that the nominalist project was infeasible, I was concerned over the question ‘‘Why
bother?’’ What I doubted was not whether what the nominalists were trying to do could be done, but whether it was worth doing. For logicians are used to thinking of the differences between theories
that can be interpreted in each other as less important than the difference that exists when there is only an interpretation of a first theory in a second, and not the other way around. In the latter
case, the second theory is overall stronger than the first, as logicians measure strength of theories, while in the former case the two theories are of equal strength. To me it was a bit surprising
that so many philosophers seemed to attach great importance to differences between theories whose assumptions were of the same overall strength, simply because the interpretation of the one in the
other involves switching what Quine called ‘‘ontological’’ commitments (namely, implications about what sorts of objects exist) for what he called ‘‘ideological’’ commitments (namely, assumptions
about what sorts of predicates and what sorts of logical operators make sense). It seemed to me that this kind of difference cannot make much of a difference, because it is simply too easy to
interpret and reinterpret and, like a ‘‘creative’’ accountant, move costs and benefits back and forth between the ontological and the ideological columns. I found philosophers mostly dismissive of
this attitude. They typically would suggest that a logician has the impression that it is easy to reinterpret theories to change their ontologies only because the logician has been working with
theories about abstract entities, and that creative accounting is much more difficult when the entities with which one is concerned are concrete. (Indeed, this supposed difference between abstract
and concrete is behind one of the standard nominalist arguments, the ‘‘multiple reductions’’ argument appropriated from the discussion of John von Neumann’s and Ernst Zermelo’s rival set-theoretic
definitions of numbers in Benacerraf ’s ‘‘What numbers could not be.’’) My own impression, by contrast, was that the reason not much had been accomplished in interpreting away apparent implications
or presuppositions as to the existence of concrete entities of one
Being explained away
sort or another was that very little effort had been put into trying to do so. At the time I did not, however, attempt to show how it could be done – an omission I will begin to rectify later on in
the present note. I did attempt to express my reservations and explain why I am not a nominalist, but found myself even more completely ignored than Alston and Carnap and Dummett had been.

2
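As an aside, the "multiple reductions" observation invoked above, concerning von Neumann's and Zermelo's rival set-theoretic definitions of the numbers, can be made vivid in code. What follows is an illustrative sketch of my own, not anything from the text: both encodings support the same arithmetic once a decoding is fixed, yet they disagree on "ontological" questions such as whether 1 is a member of 3.

```python
# Illustrative sketch (not from the text): von Neumann's and Zermelo's
# rival set-theoretic encodings of the natural numbers, with Python
# frozensets standing in for pure sets.

def von_neumann(n):
    """von Neumann: 0 = {}, succ(n) = n U {n}; each numeral has n members."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

def zermelo(n):
    """Zermelo: 0 = {}, succ(n) = {n}; each successor is a singleton."""
    s = frozenset()
    for _ in range(n):
        s = frozenset([s])
    return s

def decode_vn(s):
    """Recover n from a von Neumann numeral: count its members."""
    return len(s)

def decode_z(s):
    """Recover n from a Zermelo numeral: count the layers of nesting."""
    n = 0
    while s:
        (s,) = s  # unwrap the single member
        n += 1
    return n

# Relative to its own decoding, each encoding makes the same arithmetic
# come out true...
assert decode_vn(von_neumann(2)) + decode_vn(von_neumann(3)) == 5
assert decode_z(zermelo(2)) + decode_z(zermelo(3)) == 5

# ...yet the two disagree about membership facts such as "is 1 in 3?",
# which is Benacerraf's point: neither reduction is *the* numbers.
assert von_neumann(1) in von_neumann(3)   # vN 3 = {0, 1, 2}
assert zermelo(1) not in zermelo(3)       # Z  3 = {{{0}}}, whose only member is 2
```

Frozensets are used because Python set members must be hashable and immutable, which conveniently mirrors the behavior of pure sets of sets.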
Meanwhile, however, Gideon Rosen was independently arriving at overlapping ideas, and a more persuasive way of presenting them; and I had the good luck that he preferred to join forces with me on a
book, rather than publish his dissertation as a book on its own. Our approach in A Subject With No Object (Burgess and Rosen, 1997) was two-pronged.Footnote^1 We began by distinguishing, in a terminology
carried over from my earlier work, two spirits in which a nominalist reinterpretation of a scientific theory might be put forward: the hermeneutic, according to which the nominalist version is a
revelation of what the current version really meant all along; and the revolutionary, according to which the nominalist version is a rival to the current version intended to replace it henceforth.
Rosen has since elaborated this distinction, making some subdivisions, and his elaborated form appears in a chapter in Stewart Shapiro’s Handbook of Philosophy of Mathematics and Logic (2005). I do
not want to go into great detail here, but let me review some of the key points. Revolutionaries claim their nominalist theories are distinct from and better than current theories. But better by what
standards? Revolutionaries may be subdivided into the naturalized (citizens of the scientific community) and the alienated (foreigners to the scientific community) according to whether their appeal
is to scientific standards or to some supposed suprascientific philosophical standards. About the latter little need be said here, since few contemporary nominalists wish to put themselves in a
position like that of Cardinal Bellarmine ‘‘correcting’’ scientific theories of planets or of Norman Malcolm ‘‘correcting’’ scientific theories of dreaming by appeal to the higher authority of Aristotle or Wittgenstein (as interpreted by themselves).

Footnote^1: Lest footnotes and bibliography become longer than the paper proper, I will refer the reader to the long list of references at the end of Burgess and Rosen (1997) for the full titles and bibliographical data of the relevant works of all the various authors alluded to in passing here, from Alston to Zermelo. Exceptions must be made in the case of two more recent authors whom Rosen and I did not discuss in our book. One is Yablo, whom Rosen does discuss in Rosen and Burgess (2005). (My role in producing the chapter involved little more than some rewriting of Rosen’s work to make it fit within the editor’s word limit.) But I advise the reader to visit Yablo’s website (www.mit.edu/yablo/home.html) for the most up-to-date listing of his works, since he is actively engaged in producing new material, all of it intriguing whether one agrees with it or not. The other exception is Azzouni (2004). My review has appeared as (Burgess 2004d).

Generally, revolutionaries profess to be naturalized. But if they are, if they think their versions of gravitational theory or whatever are superior scientifically to
standard versions, then one might expect them to publish their work in theoretical physics journals, or at least to attempt to do so. If ‘‘ontological economy’’ of mathematical apparatus really is as
important to scientists as it is to certain philosophers – something I myself very much doubt, since it is very difficult to find any clear historical instance of such a preference – such
contributions ought to be welcome. Yet the experiment of submitting a write-up of a nominalist project to a theoretical physics journal has never been tried, so far as I know, and candid
revolutionaries of a professedly naturalist stamp would probably concede that if undertaken, the test would very likely be failed: the papers would not make it through peer review. But if this is
admitted, how can a revolutionary profess to be a naturalist, adhering to scientific standards in judging theories? One common line is to claim that though nominalist physics is perhaps not superior
to mathematical physics by the standards of physicists, what really need to be compared are not just the two versions of physics, but rather two packages of combined physics and epistemology. Somehow
nominalist revision, which may make the job of the physicist more difficult, is supposed to make the job of the naturalized, scientific epistemologist easier. In what way? It is at this point that
nominalists of the school I have been alluding to bring forward their appropriation of Benacerraf’s discussion of knowledge in his famous paper ‘‘Mathematical truth.’’ The so-called Benacerraf
problem is the puzzle, ‘‘How could we come to know anything asserting or implying or presupposing that there are numbers or functions or sets, given that it does not make sense to ascribe
spatiotemporal location or causal powers to such mathematical entities?’’ Nominalism provides a very easy answer to this ‘‘How can we?’’ question – namely, the answer ‘‘We can’t!’’ – which otherwise
would be a difficult one, it is said. This line of thought involves a serious confusion, which can be brought out by considering what properties a belief must have in order to rank as knowledge.
These are three: justification, truth, and whatever it takes to bridge the gap between justified true belief and knowledge that was discovered by Edmund Gettier. But the epistemological argument for
nominalism is not about Gettierology. Nor is it really about truth. (The nominalist argues that standard mathematical existence theorems cannot be known to be true as a way of avoiding directly
confronting the question
of whether they are true.) So the issue is one of justification. Once this is appreciated, it can be seen that the whole idea of trading costs to physics against benefits to epistemology is a muddle.
For providing an explanation of the historical fact that current mathematical and scientific theories have come to be believed will be an important task for scientific, naturalized epistemology
regardless of whether or not one takes belief in those theories to be justified. This task is in no way made easier by the assumption that the belief in question is unjustified. The obvious
anti-nominalist solution to the Benacerraf puzzle is to suggest that if you cannot think how we could justifiably come to believe anything implying, say, ‘‘There are functions,’’ then just look at
how mathematicians come to believe, say, Gödel’s result, ‘‘There are solutions to the field equations of general relativity with closed time-like paths.’’ That’s how one can justifiably come to believe something implying ‘‘There are functions.’’ The revolutionary nominalist who rejects this answer must think the actual historical process leading to belief in this theorem of Gödel’s is
somehow not a justifiable process of belief-formation. But it is virtually a tautology that the belief arrived at is justifiable by mathematico-scientific standards. And hence the revolutionary
nominalist’s position, according to which it is unjustifiable, must involve covert appeal to suprascientific philosophical standards of justification – must be alienated – after all. To vary the
example, let us consider the claim of nominalists who maintain that what they are appealing to is just ‘‘what science teaches us about how we humans obtain knowledge,’’ and see how this applies to,
say, the belief that more than a half-dozen books advocating nominalism have been published in the past three decades or so. By ‘‘books’’ here I clearly do not mean concrete book tokens, since there
are not just ‘‘more than a half-dozen’’ but hundreds or thousands of such tokens, scattered through various institutional and personal libraries. So the belief in question is one about abstract book
types, and hence according to the nominalist must be something ‘‘science teaches us’’ we cannot know. But is this a teaching of science, or of some Procrustean epistemological theory? If you asked me
for evidence to justify the belief that more than a half-dozen books advocating nominalism have been published in the past three decades or so, I could point to various book tokens on the shelves of
my office, with titles like Ontology and the Vicious Circle Principle and Science without Numbers and Mathematics without Numbers, and names like ‘‘Hartry H. Field’’ and ‘‘Charles S. Chihara’’ and
‘‘Geoffrey Hellman’’ on the title page, along with dates like 1973 and 1980 and 1989. Can anyone seriously maintain that science teaches us that this is insufficient evidence?
3
Turning to hermeneutic nominalism, the most obvious objection to its claim that what appear to be statements about numbers and sets are really statements about something quite different, is the
simple lack of evidence for it. But there is another problem, which can be illustrated by the case of the proposal to paraphrase away apparent talk of the existence of abstract numbers as really
being talk of the possible existence of concrete numeral-tokens. The hermeneutic nominalist who resorts to this kind of paraphrase – and similar remarks would apply to those who favor other kinds –
will want to say, as the revolutionary nominalist does not, that ‘‘There are prime numbers greater than 10^10’’ is true, justifiably believed, and so on, because ‘‘deep down all it really means’’ is something like ‘‘There could have existed prime numeral-tokens greater than 10^10.’’ The trouble is that parity of reasoning suggests then that ‘‘There are numbers’’ must equally be true, justifiably
believed, and so on, because ‘‘deep down all it really means’’ is something like ‘‘There could have existed numeral tokens.’’ But whether ‘‘There are numbers’’ is true, justifiably believed, and so
on, was the whole original issue. Certainly this was the question Goodman and Quine were asking when they first agitated the issue of nominalism (and not, for instance, some question about
hypothetical ‘‘deep structures,’’ in which neither Goodman nor Quine ever believed). To concede that ‘‘There are numbers’’ is true, justifiably believed, and so on, is to concede all the
antinominalist maintains. (Perhaps anyone who really deserved to be called a ‘‘Platonist’’ in any historically serious sense would want to claim more; but I doubt that there are any living Platonists
in any such sense of ‘‘Platonist.’’) Such, in brief, were the kinds of arguments Rosen and I put forward in A Subject With No Object. Since the appearance of that book it has become apparent,
however, that hermeneuticists, like revolutionaries, are divisible into two subcategories, which Rosen has called content hermeneuticism and attitude hermeneuticism. The former is the kind of view I
have been discussing so far, about what mathematically formulated statements ‘‘deep down really mean.’’ The latter is not a view about meaning in this sense, but about the attitude of mathematicians
and scientists and the lay public towards scientific and mathematical and commonsense theories apparently involving abstract entities. Attitude hermeneuticism is the view that, contrary to the common
assumption of the anti-nominalist, revolutionary nominalist, and content-hermeneutic nominalist, such theories are not really believed. As developed by Steve Yablo and others, the attitude-hermeneutic
view has been the dominant version of nominalism over the
better part of the past decade, though the attitude-hermeneuticist line has zigged and zagged a bit over the course of that period. First we were told mathematics is like fiction. Well, it is not,
and this in two crucial respects. For one thing, our attitudes towards mathematics and towards fiction are totally different: we rely on mathematics in important practical applications, as we do not
rely on novels, short stories, and the like. (If we need the services of a good detective, we do not go to Baker Street.) For another thing, when there is a dispute about whether some particular text
is fiction or non-fiction – the Don Juan books of the elusive Carlos Castañeda may serve as an example – we at least have a pretty clear idea what it would be like for it to be true or not true. For
instance, we have a pretty clear idea of what it would be like for a man to smoke a concoction of dried mushrooms, turn into a bird, and fly off a cliff. By contrast, once we join the nominalist in
abandoning ordinary mathematical standards for judging when mathematical existence claims are true, or have been adequately proved, we are left with no other agreed standards. Then we began to be
told that mathematics is like metaphor or some related figure of speech. Well, again it is not. For one thing, as Rosen argues in our chapter in Shapiro (2005), metaphorical usages can almost always
be instantly recognized by the speaker as having been meant non-literally, as soon as anyone raises the issue, whether or not one is able to say in literal terms what was meant; and again that is far
from being true in the mathematical case. The ‘‘figuralist’’ or ‘‘figurative’’ interpretation seems to be attributing to mathematicians and scientists and lay people too philosophically sophisticated
an attitude. I do not know what we will be told next. Fictionalism and figurativism do not exhaust the options for the attitude hermeneuticist – in ongoing work Yablo has some very interesting things
to say about ‘‘non-catastrophic presupposition failure’’ – though I think it is clear by now that attitude hermeneuticism is not something arrived at by first studying some linguistic phenomenon,
then noticing that the conclusions one draws about it have nominalistic implications. Rather, a commitment to nominalism seems to be there first, and to be what is driving the search for some
linguistic phenomenon or other whose analysis could somehow or other be applied to support the nominalist position. Setting Yablo’s developing views aside, let me turn to the latest book-length work
on the issue of nominalism, Jodi Azzouni’s Deflating Existential Consequence (2004). This work makes the mind-boggling claim that one can sincerely assert ‘‘There are such things as numbers’’ and
even ‘‘‘There are such things as numbers’ is literally true’’ and still not be ‘‘ontologically
committed’’ to numbers. Azzouni’s position may perhaps be classed as an extreme version of attitude hermeneuticism, but owing to its extremism it is perhaps best put in a class by itself. However it
ends up being classified, there is clearly something radically wrong with it. In the first place, to repeat an earlier observation, whether it is true that there are numbers was the whole issue, and
in conceding that it is true, the would-be nominalist of this new style is conceding everything the antinominalist maintains. In the second place, the claim about ‘‘ontological commitment’’ that the
new-style would-be nominalist is making is self-contradictory. For ‘‘ontological commitment’’ was a phrase without use and therefore without meaning until Quine gave it a meaning by stipulative
definition; and that stipulative definition makes sincere assertion that there are numbers, or that ‘‘There are numbers’’ is literally true, a more than sufficient condition for ‘‘ontological
commitment’’ to numbers. In an effort to make some kind of sense of Azzouni’s nonsensical claim I was led to speculate that what he has done has been to take Quine’s phrase ‘‘ontological commitment’’
and substitute for Quine’s understanding of ‘‘ontological,’’ on which the word is merely a fancy synonym for ‘‘existential,’’ some other understanding of ‘‘ontological,’’ presumably adopted from some
pre-Quinean tradition. So let me, without making any strong exegetical claim about Azzouni, examine the contrast between pre-Quinean and post-Quinean senses of ‘‘ontology’’ and its derivatives. For
this purpose we must turn back to a time before the beginning of the debate on modern nominalism, and I think we do best to turn quite a way back, right back to the beginning of the modern era.

4
READING GOD’S MIND OR IMPOSING A SCHEME ON THE WORLD?
My account of the history will be condensed to the point of being a cartoon, but nonetheless I hope it may help the woods stand out from the trees. I begin with a much-quoted passage in William
James, describing the attitude of the heroes of the Scientific Revolution, who hoped for a science that would be nothing less than a reproduction in our minds of the blueprint for the universe used
by the Great Architect: When the first mathematical, logical, and natural uniformities, the first laws, were discovered, men were so carried away by the clearness, beauty and simplification that
resulted, that they believed themselves to have deciphered authentically the eternal thoughts of the Almighty. His mind also thundered and reverberated in syllogisms. He also thought in conic
sections, squares and roots and ratios, and
geometrized like Euclid. He made Kepler’s laws for the planets to follow: he made velocity increase proportionally to the time in falling bodies; he made the law of the sines for light to obey when
refracted; he established the classes, orders, families and the genera of plants and animals, and fixed the distances between them. He thought the archetypes of all things, and devised their
variations; and when we rediscover any one of these his wondrous institutions, we seize his mind in its very literal intention. (James 2000, p. 29)
To show James is not just making this up, I could reproduce much-quoted passages from Galileo’s Assayer and Kepler’s Astronomia Nova, but let me forbear. The goal for those who accepted this picture
was to produce a description of reality ‘‘just as it is in itself,’’ or equivalently a description of the universe as God sees it, and not as we see it. (Take the invocation of the Deity literally or
metaphorically as you choose.) Such a description would necessarily be very different from the description of our environment which we use in everyday life. (According to the seventeenth-century
worthies I have been alluding to, a chief difference would be that the colors and sounds and odors we see and hear and smell would be gone, and only size and shape and position, along with speed and
direction of motion, would be left.) But as David Hume already saw, if one makes one’s standard for ‘‘knowledge’’ the possession of a representation of reality that describes it ‘‘just as it is in
itself,’’ then the consequence will be ‘‘an universal skepticism’’ and the conclusion that ‘‘knowledge’’ is impossible. Hence Immanuel Kant’s Copernican revolution. For Kant, the aim is still to
separate out, in our ordinary and scientific accounts of the world, what is contributed by the world and what by us; but instead of attempting to do this by producing an account with nothing
contributed by us, Kant proposed to proceed the other way around, by producing an account with nothing contributed by the world, an account of the pure forms of sensibility and categories of the
understanding supplied by us, into which the world pours empirical content. In the century and a half between Kant and Carnap, which I will leap over in a single bound, there was really surprisingly
little change in the nature of the project. With Carnap there is more talk of ‘‘linguistic frameworks’’ and less of ‘‘pure forms of sensibility’’ or ‘‘categories of the understanding,’’ and there is
a shift from claims about what we inevitably must impose to claims about what we conventionally do impose on the world. But even if for Carnap there is no one conceptual scheme we must impose on the
world, yet still we must impose some conceptual scheme or other, and
there can be no question of getting behind any and every conceptual scheme to the world ‘‘just as it is in itself.’’ Alongside this agreement of substance between Kant and Carnap, there is a
disagreement over terminology, and in particular over the role of the term ‘‘metaphysics.’’ Originally this term applied to the attempt to get behind our conceptual schemes to a God’s-eye view of
reality, something Kant and Carnap agree is impossible. Kant proposes to use it instead for his own project of articulating just what the scheme our intuition and understanding impose on the world
amounts to. Carnap, by contrast, proposes simply to retire the term. Thus for Kant ‘‘the future of metaphysics is critique,’’ while for Carnap metaphysics has no future. Against Carnap, Quine claimed
that while the fabric of our theory of the world is ‘‘white with convention’’ and ‘‘black with fact,’’ there are no purely black threads and no purely white threads in it. The point about black had,
in effect, already been conceded, or rather insisted upon, by Carnap when he argued, contra Moritz Schlick, that the evidence in science consists of corrigible reports of observations about the
furniture and implements of the laboratory, and not incorrigible reports about sense-data. The point about white, about the existence of a purely conventional element, was the issue between Carnap
and Quine. Quine’s contention was just this. Suppose, as Carnap maintains, we generally favor one linguistic framework or conceptual scheme over another on grounds of convenience: in attempting to
describe the world, we find it better suits our purposes to do so in using this framework or scheme rather than that. Well, what sort of fact is this fact that one scheme is more convenient than
another for us to use in attempting to deal with the world? It would seem to be a fact not just about us, but also about the world: we are such and it is such that we can more successfully deal with
it in this way rather than that. So the scheme is not, after all, something contributed purely by us, since part of the reason we choose it is that the world lends itself to description in terms of
these conceptual resources rather than others. Rightly viewed, the difference between Quine and Carnap here is one of detail: much more unites than divides them. In particular, Quine has no more use
than Carnap for the kind of pre-Kantian project of attempting to describe reality ‘‘just as it is in itself.’’ And yet there is a terminological difference between the two over the term ‘‘ontology,’’
traditionally a near-synonym for ‘‘metaphysics.’’ Quine agreed with Carnap that ontology in this traditional, pre-Kantian sense is meaningless. Quine, however, differed from Carnap in what he called
the ‘‘ethics of terminology,’’ insisting that if a word was meaningless, he had the right to give it a meaning by stipulative definition,
and choosing to exercise this alleged right in the case of the word ‘‘ontology.’’ The new enterprise of ‘‘ontology’’ in the post-Quinean sense is simply a glorified taxonomy, an attempt to catalogue
what sorts of objects there are in reality, not ‘‘just as it is in itself ’’ but as apprehended by us through our everyday and technical language, our commonsense and scientific theories. This
untraditional use of ‘‘ontology’’ is of a piece with the historically dubious use of ‘‘nominalism’’ and the historically absurd use of ‘‘Platonism.’’ (In any traditional sense, it is people James is
talking about, people like Galileo and Kepler, who are the Platonists, while an anti-metaphysical pragmatist like Quine is no more a Platonist than was James.) Why Quine chose to apply an old label
to a new project is to me something of a mystery. It is clear that having a synonym, ‘‘ontological’’ or ‘‘ontic,’’ for ‘‘existential’’ must have been useful during the heyday of Jean-Paul Sartre.
Readers would have winced if the section of Word and Object entitled ‘‘ontic decision’’ had instead been entitled ‘‘existential choice.’’ I fear, however, that Quine may have chosen to use
‘‘ontology’’ mainly to needle Carnap, who seems to have more than just disliked the word (perhaps because it was a favorite of Heidegger, who incidentally also used it in a radically untraditional
sense). The danger posed by Quine’s transferring ‘‘ontology’’ from the old project to the new – rather than coining contrasting labels – is that some may be led to confuse the two homonymous
enterprises. And just this is what I suspect may have happened in the case of those recent nominalists who say in one breath, ‘‘I sincerely believe that it is literally true that there are such
things as numbers,’’ and in the next, ‘‘I am in no way ontologically committed to numbers.’’ This otherwise nonsensical double-talk becomes less nonsensical if one takes ‘‘ontology’’ in the second
assertion to be meant in a pre-Kantian rather than a post-Quinean sense. Indeed, while I myself sincerely believe that it is literally true that there are such things as numbers, I do not believe that
the aim of traditional, pre-Kantian ontology (namely, the aim of getting behind all conceptual schemes to reality ‘‘just as it is in itself ’’ and cataloguing what sorts of objects it contains) is a
feasible one. Of course, this being my attitude, I wish to make ‘‘ontological’’ claims, in a traditional, pre-Kantian sense, neither for abstract objects nor for concrete ones. It is here that I
differ from what seems to be the attitude of the double-talking nominalists, who go on to say in their third breath, ‘‘But I am ontologically committed to this table and these chairs, and to the moon
and the stars.’’ What I see wrong in this kind of nominalism is not its ‘‘anti-realism’’ about the abstract, but what appears to be its ‘‘realism’’ (in a traditional, pre-Kantian sense) about the concrete.

5
What I am inclined to conclude from the tendency observable over these last decades for nominalism to morph from one form to another is that nominalism can never be defeated by arguments solely about
the abstract, since what feeds it is an underlying naïveté about the concrete and our knowledge thereof. It is for this reason that I welcome recent epistemological arguments for what I will call –
from the Greek word for ‘‘simple’’ – the haplist position. As the nominalist holds that everything there is is concrete, and hence that there are no numbers, no books (in the sense of types rather
than tokens), and so on, so the haplist holds that everything there is is simple, not extended and composite, and hence that there are no chairs and tables, and no moon and stars – and no people, and
in particular no haplist philosophers! Though the haplist conclusion is absurd, attention to what haplists have to say may at least help show that the explanation of our knowledge of the concrete is
not so straightforward as nominalists seem to suppose. This is especially so since the form of the epistemological argument for haplism is so similar to that of the epistemological argument for
nominalism. The nominalist’s skeptical argument goes something like this: I look at my hand and see that (counting the thumb as a finger) there is a finger and another and another and another and
another and no more, and conclude that the number of fingers on my hand is five. But if we look at what fundamental physics tells us is really going on here, what we find is just this: light coming
from an external source is reflected off my fingers over there to my eye over here, beginning a process in my body that ends with my forming the belief that the number of fingers on my hand is five.
But assume what you will about whether, in addition to the concrete fingers, such an abstract entity as the number five exists or not, no such alleged thing plays any role in this explanation. If I
end up speaking as if there were such a thing, there actually being such a thing plays no role in explaining why I do: the explanation why I do must be sought quite elsewhere, in the convenience of
positing such ‘‘useful fictions’’ as numbers and sets for purposes of getting on in the world. The haplist’s skeptical argument goes rather like this: I look over there and see something brown and
chair-shaped, and conclude that there is a chair over there. But if we look at what fundamental physics, as in Richard Feynman’s Q.E.D., tells us is really going on here, what we find is just this:
photons coming from an external source are absorbed by the electrons among the myriad fundamental particles swarming in chair formation over there, some of which electrons quickly emit other photons
directed over here, initiating a process – and so on. But assume what you will about
whether, in addition to the simple fundamental particles, such an extended, composite entity as the chair exists or not, no such alleged thing plays any role in this explanation, in which the
electrons and quarks do all the work. If I end up speaking as if there were such a thing as the chair, there actually being such a thing plays no role in explaining why I do: the explanation why I do
must be sought quite elsewhere, in the infeasibility of my keeping track of the complex motions of the myriad tiny fundamental particles, and consequent convenience of positing such ‘‘useful
fictions’’ as chairs and tables for purposes of getting on in the world. Pointing to the parallelism between the two forms of skepticism, I submit that if the haplist’s is nothing more than a clever
sophism (as I imagine most nominalists would agree it is), then the nominalist’s is no better. Still, I would like to make a stronger case against the claim that while ultimate metaphysical reality
‘‘as it is in itself ’’ does not contain numbers or books, by contrast it does contain tables and chairs, or the moon and the stars. This brings me at long last to the topic my title was intended to
herald: the reasons for doubting that ultimate metaphysical reality ‘‘as it is in itself ’’ contains objects of any sort. These reasons were adumbrated in a section of A Subject With No Object that
has been little read, but I think it is time to refresh and elaborate upon the suggestion made there, in the hopes of moving the never-ending but ever-changing debate over nominalism in a new
direction. 6
We speak of the world as containing objects with properties inasmuch and insofar as we speak a language with nouns and verbs, and sentences with subjects and predicates. The position to which I
subscribe and that I wish to defend here is that there is no reason to suppose, just because we speak to each other in such a language, that God speaks to himself in such a language, or that the
object-property structure is a feature not merely of reality as apprehended by us, but of reality as apprehended by God, or equivalently, ‘‘as it is in itself.’’ There is nothing wrong with our
speaking as we do, and certainly I do not myself propose to speak otherwise, but there is nothing uniquely right about it either, and if other intelligent creatures do not do so, they are not
necessarily making a mistake. Now as a matter of fact, though I have said that we speak to each other in a language with certain grammatical features, it is not beyond controversy that all human
languages do in fact share these features. Some linguists have claimed otherwise, as in the following passage from Whorf:
Mathematics, Models, and Modality
[I]n Nootka, a language of Vancouver Island, all words seem to us to be verbs, but really there are no classes [of nouns and verbs]; we have, as it were, a monistic view of nature that gives us only
one class for all kinds of events. ‘‘A house occurs’’ or ‘‘it houses’’ is the way of saying ‘‘house,’’ exactly like ‘‘a flame occurs’’ or ‘‘it burns.’’ These terms seem to us like verbs because they
are inflected for durational and temporal nuances, so that the suffixes of the word for house event make it mean long-lasting house, temporary house, future house, house that used to be, what started
out to be a house, and so on. (Whorf 1956, pp. 215–16)
And some literary writers have imagined a whole world of speakers of such languages, as in the following passage from Borges: Hume noted once for all time that Berkeley’s arguments did not admit the
slightest refutation nor did they cause the slightest conviction. This dictum is entirely correct in its application to earth, but entirely false of Tlön. The nations of this planet are congenitally
idealist. Their language and the derivations of their language – religion, letters, metaphysics – all presuppose idealism. The world for them is not a concourse of objects in space; it is a
heterogeneous series of independent acts. It is successive, not spatial. There are no nouns in Tlön’s conjectural Ursprache, from which the ‘‘present’’ languages and dialects are derived: there are
impersonal verbs, modified by monosyllabic suffixes (or prefixes) with an adverbial value. For example: there is no word corresponding to the word ‘‘moon,’’ but there is a verb which in English would
be ‘‘to moon’’ or ‘‘to moonate.’’ ‘‘The moon rose above the river’’ is hlör u fang axaxaxas mlö, or literally: ‘‘upward behind the on-streaming it moon[at]ed.’’ (Borges 1962, p. 23)
(In what follows let me use ‘‘moonate’’ – or perhaps better, ‘‘lunate’’ – since the other verb suggested already exists in English in a vulgar sense.) Whorf is speaking about an actual language, and
if he is right, then a noun-free language is not only possible but actual. Unfortunately, however, though Whorf is speaking of real people, it has been disputed whether what he is saying about them
is really true. Borges, of course, is only describing a fictional planet. That does not matter for us philosophers, since all we are interested in is the possibility of speaking a language without
nouns. But Borges does not really show this, since his description of the language of Tlön does not go into enough detail to convince one that the bulk of the things we might like to say could be
replaced by saying things using only verbs and adverbial modifiers. What I wish to review here is a different approach to showing how a language like English could be translated into a language with
only those grammatical categories. Of course, if one’s only understanding of this new language were by way of explanations of how to translate it into English or English into it, no conclusions about
‘‘ontology’’ would follow; so an effort of the imagination is still required to convince oneself that children could
grow up being spoken to and speaking such a language and no other. I trust this will not be too difficult, but ultimately the reader must judge. To begin with, we need to imagine English translated
or, as Quine called it, ‘‘regimented’’ into what logicians call a first-order language (with predicates only and no singular terms). The possibility of such regimentation is what lies behind Quine’s
slogan ‘‘to be is to be the value of a variable.’’ Here I must assume familiarity, from Quine’s writings or elsewhere, with how such regimentation might be attempted. To give at least one example,
consider the following truth:

(1) Whatever lives, changes.

Now (1) can be regimented as follows:

(2) ∀x(x lives → x changes)

Also (2) admits several equivalents, including one involving only negation, conjunction, and existential quantification:

(3) ¬∃x(x lives & ¬(x changes))

In his paper ‘‘Variables explained away,’’ Quine shows how we can eliminate variables like the x in (3). We first enrich our language with new operators, the so-called predicate functors, operators that attach to predicates to form new predicates, defined thus:

(μF)x1 . . . xm ↔ ¬Fx1 . . . xm
(ϕFG)x1 . . . xmy1 . . . yn ↔ Fx1 . . . xm & Gy1 . . . yn
(ρF)x1 . . . xm–1 ↔ ∃xm(Fx1 . . . xm–1xm)
(θF)x1 . . . xm–1 ↔ Fx1 . . . xm–1xm–1
(φF)x1 . . . xm–1xm ↔ Fx1 . . . xmxm–1
(ψF)x1 . . . xm–1xm ↔ Fxm . . . xm–1x1

Going from English to Quinese, each expression on the right may be abbreviated by the corresponding expression on the left. Writing for short ‘‘F’’ and ‘‘G’’ for ‘‘lives’’ and ‘‘changes,’’ (3) becomes:

(4) ¬∃x(Fx & ¬Gx)

It can then be reduced to an equivalent without variables in the following steps:

(5) ¬∃x(Fx & (μG)x)
(6) ¬∃x((ϕF(μG))xx)
(7) ¬∃x((θ(ϕF(μG)))x)
(8) ¬(ρ(θ(ϕF(μG))))
(9) μ(ρ(θ(ϕF(μG))))

Going back and restoring ‘‘lives’’ and ‘‘changes’’ for ‘‘F’’ and ‘‘G’’ in (9) we have

(10) μ(ρ(θ(ϕ lives (μ changes))))
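Since the functor definitions are purely extensional, the elimination can be checked mechanically over a small finite domain. The following Python sketch is illustrative only: the toy domain, the sample extensions, and the ASCII names mu, phi, rho, theta for the negation, conjunction, derelativization, and reflection functors are all invented here, not taken from the text. It evaluates the variable-free form (10) and compares it with a direct evaluation of (3):

```python
# Semantic sketch of Quine's predicate functors over a finite domain.
# A predicate of arity m is modeled as a pair (m, extension), where the
# extension is a set of m-tuples drawn from the domain D.
from itertools import product

D = {"amoeba", "rock", "river"}

def mu(F):        # negation: (mu F)x1..xm holds iff F x1..xm does not
    m, ext = F
    return (m, set(product(D, repeat=m)) - ext)

def phi(F, G):    # conjunction: (phi F G)x..y.. iff F x.. and G y..
    m, extF = F
    n, extG = G
    return (m + n, {f + g for f in extF for g in extG})

def rho(F):       # derelativization: (rho F)x1..xm-1 iff some xm: F x1..xm
    m, ext = F
    return (m - 1, {t[:-1] for t in ext})

def theta(F):     # reflection: (theta F)x1..xm-1 iff F x1..xm-1 xm-1
    m, ext = F
    return (m - 1, {t[:-1] for t in ext if t[-1] == t[-2]})

# "lives" and "changes" as unary predicates on the toy domain.
lives   = (1, {("amoeba",), ("river",)})
changes = (1, {("amoeba",), ("river",), ("rock",)})

# The variable-free sentence (10) is a 0-ary predicate: it is true iff
# its extension contains the empty tuple.
sentence = mu(rho(theta(phi(lives, mu(changes)))))
holds = () in sentence[1]

# Direct evaluation of (3): not some x (x lives and not x changes).
direct = not any(t in lives[1] and t not in changes[1]
                 for t in ((x,) for x in D))
print(holds, direct)
```

On any choice of extensions the two evaluations agree, which is the point of Quine's elimination: the functor expression and the quantified original are equivalent.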
My personal contributions in this area have been two. First, I thought of combining Quine’s slogan ‘‘to be . . .’’ with his paper title ‘‘. . . explained away’’ to produce the title of the present paper. Second, I suggested a way of pronouncing the predicate functors:

(μ talks)x: x doesn’t talk
(ϕ walks runs)xy: x and y respectively walk and run
(ρ stares at)x: x (just) stares
(θ destroys)x: x self-destructs
(φ eats)xy or (ψ eats)xy: x suffers or undergoes eating by y

Applying this suggestion to (10), we can go back from symbols to words in the following steps:

(11) μ(ρ(θ(ϕ lives (doesn’t change))))
(12) μ(ρ(θ(respectively live and don’t change)))
(13) μ(ρ(self-respectively lives and doesn’t change))
(14) μ(just self-respectively lives and doesn’t change)
(15) doesn’t just self-respectively live and not change
Here we have a noun-free verb phrase. We may, if we wish, supply a subject, ‘‘The Absolute,’’ or we may indicate that the verb phrase is a complete sentence in itself by using the obsolete
third-person singular verbal ending -th, as when one eliminates the pleonastic subject pronoun in ‘‘it rains’’ by writing, as Quine somewhere suggests, ‘‘raineth.’’ We thus have the choice between two options:

(16a) Monist: The Absolute doesn’t just self-respectively live and not change.
(16b) Nihilist: Doth not just self-respectively live and not change.

Two subsidiary points should be emphasized. First, one needs some way not merely of making assertions, but also of carrying out arguments in the new kind of language – some way other than translating back into first-order terms and applying text-book rules. In fact, John Bacon and others have supplied proof procedures, which, however, cannot be gone into here. Second, the point noted by Johan van Benthem should be emphasized,
that if one starts with a many-sorted first-order language, one can apply the
tricks I have been describing to some of the sorts and not the rest of them, retaining whatever sorts of objects one likes, and eliminating whatever sorts of objects one does not. 7
Thus whether one speaks overtly of abstract objects or concrete objects, of simple objects or compound objects, or indeed of any objects at all, is optional. My claim is that if children grew up speaking and arguing in Monist or Nihilist or some Benthemite hybrid between one or the other of these and English, it would be gratuitous to assume that covertly they are ‘‘committed’’ to a full
range of sorts of objects, just as if they spoke a language like ours, with a full range of sorts of nouns. And any assumption that the divine Logos has a grammar more like ours and less like theirs
would be equally unfounded, I submit. It is in this sense that I claim any assumption as to whether ultimate metaphysical reality ‘‘as it is in itself ’’ contains abstract objects or concrete
objects, or simple objects or compound objects, or again any objects at all, would be gratuitous and unfounded. This kind of anti-metaphysical claim, if not quite the kind of reason for it that I
have offered, has been characteristic of pragmatism from James onward. I have mentioned one recent pragmatist, Quine, whom I believe to hold essentially this sort of view, despite his very
regrettable coquetting with the modes of expression peculiar to early modern metaphysicians. I need now to say something about another recent pragmatist, Hilary Putnam. Putnam is often cited
alongside Quine as the second author of the indispensability argument against nominalism, but as explained in A Subject With No Object, there is an important difference between Putnam and Quine here.
This is because when Putnam put forward his indispensability argument he had already committed himself to a doctrine of ‘‘equivalent descriptions,’’ according to which there is nothing to choose
between a conventional formulation of mathematics in settheoretic terms and an alternative formulation in modal-logical terms. Thus what he was really claiming to be indispensable for science was
something of the overall strength of classical mathematics, as opposed to constructive mathematics of one kind or another. He was not making any claim about the indispensability of ontological
commitments (to sets) specifically, since he thought these could always be traded for ideological commitments (to modality). From a logician’s point of view, Putnam’s
claim is a good deal more interesting than Quine’s, but this is not the place to go into that aspect of Putnam’s views. An aspect that does require discussion is Putnam’s very regrettable coquetting,
like James before him, with the modes of expression peculiar to traditional idealist metaphysicians, rather as another recent pragmatist, Richard Rorty, coquettes with the modes of expression
peculiar to contemporary post-modernes. I am alluding to the tendency to say that the moon and the stars are ‘‘mind-dependent,’’ or worse, ‘‘socially constructed.’’ There is something right in what
Putnam maintains, and even in what Rorty maintains, and my hope is that my sketch of an alternative kind of language can help us separate this correct element from pernicious nonsense about mind
dependence and social construction – really amounting to little more than what Quine called a ‘‘use-mention confusion,’’ with or without the added twist of a confusion of academic radical skepticism
with genuine political radicalism – emanating from the idealist or po-mo Dark Side. The view from the Bright Side is that if we do choose the conventional option, and follow the conventional rules
for making and evaluating claims about objects, we must conclude that the moon and the stars long antedate human mentality and society, and therefore cannot be dependent on the former and cannot have
been constructed by the latter. On the other hand, if we choose an alternative option, then we will not be speaking about objects at all, and among the objects of which we will not be speaking or
saying anything will be the moon and the stars, and among the things we will not be saying about them is that they are mind-dependent or socially constructed. One either plays the language game by
the rules, or does not play it all, and in neither case is saying ‘‘The moon is mind-dependent’’ or ‘‘The stars are socially constructed’’ a legitimate move. We may choose between ‘‘moon’’ and
‘‘stars’’ on the one hand and ‘‘lunate’’ and ‘‘stellate’’ on the other, but if we take the first option, we must say that the moon and the stars were there long before there were astronomers or human
beings or primates or mammals or animals or life, while if we take the latter option, we must say that the Absolute was lunating and stellating long before it began to astronomize or humanize or
primatize or mammalize or animalize or vitalize. All this is merely by way of avoiding certain objections to pragmatism resulting from injudicious diction on the part of some of its most
distinguished advocates. On issues of substance rather than style, I stand with the pragmatist tradition: I agree with James that ‘‘the trail of the human serpent is over all,’’ and think that this –
and not some lesson about the
impossibility of mathematical knowledge – really is something ‘‘science teaches us about how we humans obtain knowledge.’’ From the pragmatist thesis that it is impossible for human beings to obtain
a God’s-eye view of the world, I infer the anti-nominalist corollary that it is pointless to complain that, for all we can know, mathematical objects may not be part of the world as God sees it. To
be sure, I am well aware that the considerations I have presented above are very far from constituting a knock-down argument for that anti-nominalist conclusion. But what the course of the debate
over nominalism seems to me to reveal is that arguments are not what are needed at this point: nominalists are in the grip of a picture, and until that grip is shaken, no argument, however
cogent, can hope to accomplish more than to cause nominalism to morph once again into some new form: knock down one form of nominalism, and another will pop up in its place. What is really needed is
a Gestalt switch, and this the above sketch of another way of speaking may perhaps help to induce.
E pluribus unum: plural logic and set theory
If one is interested in how best to formulate and motivate axioms for set theory, it is worthwhile to take another look at the early history of the subject, right back to the work of its founder,
Cantor. Cantor’s definition of a set was ‘‘any collection into a whole’’ of ‘‘determinate, well-distinguished’’ sensible or intelligible objects. According to a well-known quip of van Heijenoort, this
definition has had as much to do with the subsequent development of set theory as Euclid’s definition of point – ‘‘that which hath no part’’ – had to do with the subsequent development of geometry.
But in fact the notion of a many made into a one, which is what Cantor’s definition makes a set to be, will repay some study. In order to give concrete meaning to Cantor’s abstract definition, our
study should begin with a look at the context in which Cantor first felt it desirable or necessary to introduce the notion of set. As is well known, Cantor’s general theory of arbitrary sets of
arbitrary elements emerged from a previous theory of sets of points on the line or real numbers. This itself emerged from work on Fourier series. The technical details of Cantor’s theorems on this
topic are irrelevant for present purposes, but the general form of his results should be noted. Cantor’s first result was a certain (uniqueness) theorem for a given series, which depended on the
assumption that at every point or number x without exception, a certain (convergence) condition holds. He then was able to relax this assumption, and show that the theorem still holds if there is
only one exceptional point, or only two, or only three, and so on. A more substantial generalization was that the theorem still holds even if there are infinitely many exceptional points, provided
they are isolated from each other, in the sense that for every exceptional point there are two points such that it is the only exceptional point lying between them.
In connection with this last result and further generalizations, Cantor introduced a new notion. If X is a set of points, let ∂X be the set of points in X that are not isolated from other points in X. Then writing E for the set of exceptional points, and writing ∅ for the empty set, the last result mentioned may be restated as saying that the theorem still holds if the following condition is met:

(1) ∂E = ∅.

Cantor was then able to prove that the theorem still holds if ∂E has only one element, or only two, or three, and so on, or even if there are infinitely many points in ∂E, provided they are isolated from each other. Writing ∂²X for ∂(∂X), this last result says the theorem still holds if the following condition is met:

(2) ∂²E = ∅.

Similarly, a sufficient condition would be ∂³E = ∅, or ∂⁴E = ∅, and so on. Writing ∂^ωE for the intersection of all the ∂ⁿE for n = 1, 2, 3, and so on, the following is also a sufficient condition:

(3) ∂^ωE = ∅.
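For concreteness, here is a standard illustration (not Cantor's own examples), writing ∂X for the set of points of X that are not isolated from the other points of X:

```latex
\[
E = \{\, 1/n : n \ge 1 \,\}
  \;\Longrightarrow\; \partial E = \emptyset ,
\]
\[
E = \{0\} \cup \{\, 1/n : n \ge 1 \,\}
  \;\Longrightarrow\; \partial E = \{0\} \neq \emptyset
  \quad\text{and}\quad \partial^{2} E = \emptyset .
\]
```

In the first set every point 1/n is isolated, and the sole limit point 0 lies outside the set, so its derived part is empty. Adjoining 0 produces exactly one non-isolated point; and since 0 is trivially isolated within the singleton ∂E, applying the operation once more gives the empty set.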
Cantor went on to iterate the operation ∂ beyond ∂^ω, introducing in the process the transfinite ordinal numbers ω + 1, ω + 2, and so on, but we need not follow him further. At what stage in this process of generalization does it become indispensable to think of the infinitely many exceptional points as together forming a single object, a set E to which an operation ∂ can be applied and reapplied? Close examination of the form of the theorems shows that for any fixed finite n the condition ∂ⁿE = ∅ can be expressed as a condition mentioning only the exceptional points, and not the set E or the operation ∂, while beyond this stage, restatement in terms of just the points and not the point-set is impossible. Consider, for instance, the hypothesis that (1) fails but (2) holds, or in other words the following:

(4) ∂E ≠ ∅ but ∂²E = ∅.

Without mentioning E or ∂, this can be expressed as follows:

(5) Not every exceptional point is isolated from all other exceptional points, but every exceptional point that is not isolated from all other exceptional points is isolated from all other exceptional points that are not isolated from all other exceptional points.
If we introduce a predicate Ex for ‘‘x is exceptional,’’ and use the usual symbol < for order, and if we spell out explicitly the definition of isolation, (5) can even be restated in the formalism of first-order logic, with variables ranging only over points, as follows:

(6) ∃u(Eu & ¬I(u)) & ∀u(Eu & ¬I(u) → ∃v₁∃v₂∀v((v₁ < v & v < v₂ & Ev & ¬I(v)) ↔ v = u))

wherein I(u) is an abbreviation:

I(u) ↔ ∃u₁∃u₂∀u′((u₁ < u′ & u′ < u₂ & Eu′) ↔ u′ = u).
This is none too perspicuous, and psychologically it is doubtful that Cantor could have discovered even those of his results that can be thus reformulated without set language, had he not introduced
set language at the stage he did. But logically the results prior to (3) can be reformulated without set language, both in words, as in (5), and in first-order logic, as in (6). From a logical point
of view, the existential generalization of (4), which tells us we really are getting a significantly stronger theorem when the hypothesis is weakened from (1) to (2), is especially interesting. It
reads as follows:

(7) There is a set X of points such that ∂X ≠ ∅ but ∂²X = ∅.
This, too, can be restated in terms of points rather than in terms of a point-set, along the lines of (5), as follows:

(8) There are some points such that not every point among them is isolated from the other points among them, but every point among them that is not isolated from all other points among them is isolated from all other points among them that are not isolated from all other points among them.
In this case, however, formalization in first-order logic, along the lines of (6), is impossible. The initial quantification, ‘‘there are some points . . .’’ is irreducibly plural, and cannot be
rendered in terms of singular quantifiers ‘‘there is a point . . .’’ or ‘‘for every point . . .’’ The status of (7) is thus somewhere between that of (1) or (2) or (4) on the one hand, which can be
given first-order formalizations quantifying only over points, and that of (3) on the other hand which cannot be stated without treating sets as objects. The kind of irreducibly plural quantification
exemplified by (8) was made an object of special study by the late George Boolos, who gave simpler examples of the phenomenon, and developed an extension of first-order logic to handle them.
Analogous to (5) and its existential generalization (8) are the following:
(9) The critics who write for Exceptional magazine admire only each other.
(10) There are some critics who admire only each other.
Here (9) can be given a first-order formalization, while (10), attributed to Peter Geach and David Kaplan, cannot. Irreducibly plural quantification amounts to the very last stop before the
introduction of sets. If we are to understand what is most distinctive about set theory, we must understand what it adds to and how it goes beyond the mere plural; and to understand that we must
first understand the plural and its logic.
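Over a finite domain the difference can be made vivid computationally: evaluating the Geach–Kaplan sentence requires a search over pluralities (subsets of the domain), not merely over individual critics. The following Python sketch uses an invented roster of critics and an invented admiration relation; note too that formalizations of the sentence differ over whether two or more critics, or at least one act of admiration, should be required (simple nonemptiness is used here):

```python
# Sketch: "There are some critics who admire only each other" checked
# by searching over pluralities (subsets) of a toy domain. The critics
# and the admiration relation are invented for illustration.
from itertools import combinations

critics = ["a", "b", "c", "d"]
admires = {("a", "b"), ("b", "a"), ("c", "c"), ("d", "a")}

def witnesses(xs):
    """The xs are nonempty, and no one among them admires himself
    or anyone not among them."""
    return (len(xs) > 0 and
            all(y in xs and y != x
                for (x, y) in admires if x in xs))

found = [set(c)
         for r in range(1, len(critics) + 1)
         for c in combinations(critics, r)
         if witnesses(c)]
print(found)
```

Here the singleton {c} fails because c admires himself, while {a, b} succeeds: the search quantifies over which critics are among the xs, which is exactly the irreducibly plural (monadic second-order) quantification that no first-order paraphrase over individual critics can reproduce.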
Unfortunately notation and terminology connected with plural quantification have not been standardized. Boolos (1984), Lewis (1991), and Burgess and Rosen (1997) all differ. The first item of
business must be to indicate the notation and terminology to be used here. To begin with we have all the apparatus of singular logic. Thus we have ¬, &, ∨, →, ↔, ∀, ∃, and the logical predicate of identity =, along with appropriate logical axioms and rules of implication. In particular we have the usual laws of identity, namely, self-identity and indiscernibility of identicals:

(1) u = u
(2) u = v → (Φ(u) → Φ(v)).

Note the conventions used in displaying these laws. In both (1) and (2) initial universal quantifiers have been suppressed. In (2), what we really have is a scheme or rule according to which, for any formula Φ(t), writing Φ(u) and Φ(v) for the results of substituting u and of substituting v for each free occurrence of t therein, (2) is to count as an axiom. In (2) again, the technical proviso is left tacit that the variables u and v are free for t in Φ(t), which is to say that no free occurrence of t in Φ(t) occurs within the scope of a quantifier ∀u or ∃u or ∀v or ∃v. In (2) yet again, it is to be understood that there may be parameters, or free variables not displayed, so that what is really to count as an axiom is something like:

(3) ∀w₁ . . . ∀wₙ∀u∀v(u = v → (Φ(u, w₁, . . ., wₙ) → Φ(v, w₁, . . ., wₙ)))

The derivability of the law of symmetry

(4) u = v → v = u
depends on allowing parameters, the formula Φ(t) to which (2) is applied being t = u. In terms of identity one may define certain other notions, notably distinctness and unique existence:

(5) u ≠ v ↔ ¬(u = v)
(6) ∃!uΦ(u) ↔ ∃uΦ(u) & ¬∃u₁∃u₂(Φ(u₁) & Φ(u₂) & u₁ ≠ u₂).
Here the distinctness predicate ≠ and the unique existence quantifier ∃! are to be treated not as primitive but as defined, which is to say that they are not part of the official notation, but
rather are unofficial abbreviations. The definitions (5) and (6) do not count as substantive assumptions or axioms, but merely as abbreviations for tautologies, for biconditionals of the form A ↔ A.
So much for singular or first-order logic. The principles concerning the formulation of axioms and the status of definitions that have been set out at some length above in connection with identity
are to be tacitly understood as still applying as we now move on to plural logic, and when we later move on to set theory. Turning then to plural logic, to begin with we need to add three items of
notation. First, there are plural variables, xx, yy, zz, and so on. Second, there are plural quantifiers, written ∃∃ and ∀∀. If ∃u and ∀u are read ‘‘there is an object, u’’ and ‘‘for any object, u,’’ then ∃∃xx and ∀∀xx may be read ‘‘there are some objects, the xs’’ and ‘‘for any objects, the xs.’’ Third, there is a logical predicate of two places, with singular variables going in the first place and plural variables in the second, u α xx, which may be read ‘‘u is one of the xs’’ or ‘‘u is among the xs.’’ Much as the symbol used in set theory for ‘‘element’’ is a stylized epsilon, the symbol
used here for ‘‘among’’ is a stylized alpha. A question immediately arises about the understanding of the plural quantifier. In a language with a threefold distinction among singular, dual, and
plural, it would be natural to take ‘‘there are some objects . . .’’ to mean ‘‘there are three or more objects . . .’’ In a language like English, where we have only the distinction between singular
and plural, it is natural to take it to mean ‘‘there are two or more objects . . .’’ For instance, in the Geach–Kaplan example (1.10) – that is, displayed item (10) of §1 above – it is natural to
understand ‘‘some critics’’ as meaning two or more critics. But Lewis (1991) takes ‘‘there are some objects . . .’’ to mean ‘‘there are one or more objects . . .,’’ while Burgess and Rosen (1997) go
further and take it to mean ‘‘there are zero or more objects . . .’’ The last-named authors, however, show that taking any one reading as official, the other readings can be defined in terms of it,
so in one important sense it does
E pluribus unum: plural logic and set theory
not matter which reading we take. The ‘‘zero or more’’ reading will be adopted here. To frame a basic deductive system for plural quantification, one adds to a complete such system for singular quantification the following:

(7) Axiom of Comprehension: ∃∃xx∀u(u α xx ↔ Φ(u))
(8) Axiom of Indiscernibility: ∀u(u α xx ↔ u α yy) → (Φ(xx) → Φ(yy)).

Axiom (7) is a plural analogue of existential generalization, allowing us, for instance, to infer (1.8) from (1.5). It says that for any condition there are some objects such that an object is among them if and only if the condition holds of it. Which objects are these? The objects of which the condition holds, of course! (If we adopted the ‘‘one or more’’ reading, comprehension would have to be formulated as the conditional with antecedent ∃uΦ(u) and consequent (7).) Axiom (8) is a plural analogue of the indiscernibility of identicals (2). It says that if exactly the same objects are among these objects as are among those objects, so that these and those are the very same objects, then any condition that holds of these holds of those. One can introduce an additional notion and notation, not as primitive, but as defined, namely, ‘‘the xs are the same as the ys,’’ the definition being as follows:

(9) xx == yy ↔ ∀u(u α xx ↔ u α yy).

Then (8) can be abbreviated in the following form, to make clear the analogy with (2):

(10) xx == yy → (Φ(xx) → Φ(yy)).
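On the ‘‘zero or more’’ reading, a toy model of comprehension is easy to give: take pluralities to be arbitrary subsets of a finite domain. The following Python sketch (the domain and the sample condition are invented for illustration) exhibits the unique comprehension witness for a sample condition:

```python
# Toy model of the plural Axiom of Comprehension: pluralities are
# modeled as subsets of a finite domain, including the empty one
# (the "zero or more" reading). Domain and condition are illustrative.
from itertools import combinations

domain = [1, 2, 3, 4, 5, 6]
pluralities = [frozenset(c)
               for r in range(len(domain) + 1)
               for c in combinations(domain, r)]

def among(u, xx):          # u is one of the xs
    return u in xx

def phi(u):                # a sample condition: "u is even"
    return u % 2 == 0

# Comprehension: some plurality collects exactly the objects of which
# the condition holds -- namely, the objects of which it holds.
witnesses = [xx for xx in pluralities
             if all(among(u, xx) == phi(u) for u in domain)]
print(witnesses)           # [frozenset({2, 4, 6})]
```

In this model every condition, including the unsatisfiable ones, has a witness: the empty plurality collects the objects of which an unsatisfiable condition holds, which is why the ‘‘one or more’’ reading would instead require a conditional formulation.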
No claim is made that the addition of the basic axioms (7) and (8) to a complete system of first-order logic produces a complete system of plural logic. For the moment, however, we have all the
logical axioms and rules we will be needing.
Two additional, non-logical primitives will be introduced in order to allow us to express the basic notions of set theory: ßu will be used to express ‘‘u is a set,’’ and u ∝ xx to express ‘‘u is the set of (all and only) the xs.’’ (The symbol ß seems appropriate since as a ligature it is itself a one made out of a many, or at least out of a two.) Thus the primitive predicates of the language of plural set theory, to be used in an axiomatization here, are those of plural logic, = and α, together with ß and ∝; and the formulas of this language are built up from atomic formulas involving these predicates by means of ¬, &, ∨, →, ↔ and the singular and plural quantifiers ∀, ∃, ∀∀, ∃∃. By contrast, the primitive predicates of the language of singular set theory are that of singular logic, =, together with ∈ and ß; and the formulas of this language are built up from atomic formulas involving these predicates by means of ¬, &, ∨, →, ↔ and the singular quantifiers ∀, ∃ only. The first axiom to be assumed reads as follows:

(1) Axiom of Heredity: ßu ↔ ∃xx(u ∝ xx).
The right-to-left direction of (1) expresses only the triviality that if an object is the set of some objects, then it is a set. The left-to-right direction would seem to express only the equally
trivial converse, that if an object is a set, then it is the set of some objects; and this indeed is all it expresses if it is understood that the range of the quantifiers or universe of discourse
includes all objects. However, what a sentence expresses changes if the universe of discourse is restricted, and if it is understood that the range of the quantifiers may be something less than all
objects, what the left-to-right direction of (1) also expresses is that whenever a set is included in the universe of discourse, its elements are to be included along with it. It follows that if any
of these elements are sets, their elements also are included, and if any of those elements are sets, their elements are included as well, and so on for generation after generation. Hence the name
proposed for the axiom. The second axiom involves a biconditional, and expresses in the two directions of the double arrow the features that distinguish the concept of a set made of elements from, on
the one hand, the concept of a whole made of parts in mereology, and on the other hand, the concept of a property instantiated by objects in theories of universals. The axiom reads as follows:

(2) Axiom of Extensionality: u ∝ xx & v ∝ yy → (u = v ↔ xx == yy).
The two features of the concept of set this axiom expresses are simply that, given a set, it is uniquely determined what elements it comprises, while given some elements, it is uniquely determined
what set they compose. By contrast, a whole may be decomposable into parts in several different ways, while two properties may be instantiated by exactly the same objects and yet be distinct. While
extensionality may not be an utter triviality, it is still in a sense less a substantive assumption than a partial explication of the concept of set. It would
be inappropriate to use the word ‘‘set,’’ rather than ‘‘whole’’ or ‘‘property’’ or whatever, for a concept of which extensionality was not a feature. The relation of element to set, symbolized ∈, which in conventional axiomatizations is taken as primitive, may now be introduced instead as an abbreviation. Either of the following will do as a definition:

(3) v ∈ u ↔ ßu & ∃xx(u ∝ xx & v α xx)
(4) v ∈ u ↔ ßu & ∀xx(u ∝ xx → v α xx).

The equivalence of the right sides of (3) and (4) follows from (1) and the left-to-right direction of (2). It is also useful to introduce the plural version of ∈, symbolized ∈∈, by the following definition:

(5) yy ∈∈ u ↔ ∀v(v α yy → v ∈ u).

In terms of the defined notion ∈, the right-to-left direction of (2) yields the following:

(6) ßu & ßv → (∀w(w ∈ u ↔ w ∈ v) → u = v).
The last conditional could be strengthened to a biconditional, since the converse implication is immediate from the indiscernibility of identicals. In conventional axiomatizations it is (6) or some variant version thereof that is usually called the axiom of extensionality. The feature that the elements determine the set is explicit in this formulation, while the intention to include within the range of the quantifiers or universe of discourse all the elements of any set that is itself included, though it must be assumed since otherwise distinct sets might have the same elements within the restricted range or universe, is left implicit. The introduction of plurals, and the definition of ∈ in terms of ∝, have, so to speak, permitted what lies beneath extensionality in its conventional formulation to be exhibited explicitly in the form of the axioms (1) and (2). Making the assumption of heredity explicit will prove to be surprisingly important in §7 below. Before leaving the topic
of extensionality, it should be mentioned that the formulation of extensionality most commonly used in conventional axiomatizations is not (6) itself but a variant. Discussion of this issue requires
some machinery we do not yet have, so it will be necessary to return to the issue of extensionality later on. For the moment we are done with this axiom. The remaining axioms will be more substantive
and will be existence assumptions. Before indicating what they are, let me recall that one tempting existence assumption cannot be made. One cannot assume that for any objects there is a set of just
those objects:
(7) ∃u(u η xx).
Mathematics, Models, and Modality
For comprehension tells us the following:
(8) ∃∃xx ∀v(v ≺ xx ↔ v ∉ v).
Then (7) and (8) imply the following:
(9) ∃u ∀v(v ∈ u ↔ v ∉ v).
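Spelling out why (9) is contradictory may help; the following sketch (in the notation used above) makes the instantiation explicit:

```latex
% From (7) and (8) to a contradiction:
\begin{align*}
&\text{(8) yields } xx \text{ with } \forall v\,(v \prec xx \leftrightarrow v \notin v).\\
&\text{(7) applied to these } xx \text{ yields } u \text{ with } u \mathrel{\eta} xx,\\
&\text{so by the definition of } \in:\ \forall v\,(v \in u \leftrightarrow v \prec xx),\\
&\text{whence (9): } \forall v\,(v \in u \leftrightarrow v \notin v).\\
&\text{Taking } v := u \text{ gives } u \in u \leftrightarrow u \notin u.
\end{align*}
```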
But (9) is a logical contradiction. All this is just the Russell paradox of the set of all sets that are not elements of themselves, adapted to the context of plural quantification. This may be the
best place to interject a word on the role of alleged entities called "classes" in the axiomatization of set theory. The word "class" rather than "set" was originally used by Frege for the extensions of concepts, but it has since come to be used for set-like entities that in some mysterious way fail to be sets. Usually though not invariably classes are spoken of as having "members," while sets have "elements." Thus membership is an elementhood-like relationship that in some mysterious way fails to be elementhood. There is a mystifying distinction between two fundamentally different kinds of collections, sets and classes, and two fundamentally different kinds of belonging, elementhood and membership. Such, at any rate, is the situation described in the language of splitters, for whom no set is a class, though every set is coextensive with some class in the sense that the elements of the set are all and only the members of the class. On a different usage, favored by lumpers, one has not just coextensiveness but outright identity. Sets simply are some but not all of the classes, the others being called proper classes; the sets are usually distinguished
from the proper classes in that each set is assumed to be a member of some class, while no proper class is a member of any class. The mysterious distinction between two fundamentally different
kinds of collections, sets and classes, is replaced by the mysterious distinction between two fundamentally different kinds of classes: those that can and those that cannot be members of other
classes. If one is willing to accept the mystery, then classes have certain uses in set theory. In conventional axiomatizations, they permit the reduction of certain schemes (separation and
replacement) to single axioms. But as Boolos urged, in this use plural quantification over sets can be employed to replace singular quantification over classes, as will be done in the next section.
Classes are also appealed to in heuristic motivations for certain extensions of the conventional axioms, so-called large cardinal axioms, and the fact that they are thus appealed to has been argued
by Schindler (1994) and others to weaken the case for large cardinals. Recently Uzquiano (2003)
has pointed out that at least in most cases of such use, plural quantification over sets can again be employed to replace singular quantification over classes, as will be done in one important case
in the section after next. The advantage of employing plural quantification in this way is that it leaves us able to maintain that there is just one kind of collection, and that set theory is the
most general theory of collections. It might be thought that, conversely, anything that can be accomplished with plurals can be accomplished with classes, enabling us to avoid appeal to an
extra-classical logic. But plural quantification over sets cannot, in fact, always be replaced by singular quantification over classes, a point emphasized by Boolos and worth reemphasizing here. To
begin by restating in words what has already been stated in symbols in (8) and (9) above:
(10) It is true that there are some objects such that all and only those objects that are sets and not elements of themselves are among them.
(11) It is false that there is a set of objects such that all and only those objects that are sets and not elements of themselves are elements of it.
So "there are some objects . . ." cannot in general be replaced by "there is a set of objects . . ." Class theory indeed tells us that
(12) There is a class of objects such that all and only those objects that are sets and not elements of themselves are members of it.
But still "there are some objects" cannot in general be replaced by "there is a class of objects . . . ," and one need only replace each occurrence of "set" or "element" in (10) and (11) by
"class" or "member," respectively, to see why not. Another limitation of class-and-set theory is that, unlike plural-and-singular set theory, it is in a sense incapable of making explicit the
assumption of heredity underlying extensionality. Technically, it would be possible to replace the plural-logical (3.1), asserting that every set is the set of some elements, by a class-theoretic
version, asserting that every set is coextensive with some class. But to state this last while keeping elementhood primitive, and coextensiveness as defined, would be merely to restate one specific
instance of the general scheme of comprehension for classes. For a genuine statement of the heredity assumption, one would need to take the notion of a set’s being coextensive with a class to be
primitive, and the notion of an object’s being an element of a set to be defined in terms of it, namely, defined as the object’s being a member of the class with which the set is coextensive. And
that choice of primitives is extremely unnatural from a class-theoretic point of view, and to my knowledge no class theorist
has ever made that choice. Of course, the importance of this limitation cannot be clear until it is seen what use is made of the heredity assumption in the further development of set theory.

4
Let us return to that development. The name "extensionality" is a reminder of Frege's notion of the "extension of a concept." Frege really did assume that every condition determines a set, or more precisely, determines a "concept" and thereby an "extension," and so fell into paradox. Even before the discovery of any such paradoxes, however, Cantor (1885) had rejected Frege's assumptions in a review of Frege's Grundlagen:

The author's own attempt to give a strict foundation to the number-concept seems to me less successful. Specifically, the author has the unfortunate idea . . . to take as the foundation for the number-concept what in Scholastic logic is called the "extension of a concept." He completely overlooks the fact that in general the "extension of a concept" is quantitatively wholly indeterminate. Only in certain cases is the extension quantitatively determinate, in which cases it can then of course be assigned a definite number, if it is finite, or power, in case it is infinite. But for this sort of quantitative determination we must already possess the concepts of "number" and "power," and it is getting things backwards to try to found these latter concepts on the concept "extension of a concept."
Exactly what Cantor meant by "quantitatively indeterminate" (quantitativ unbestimmt) is not entirely clear, but he seems to be alluding to the kind of distinction within the "actually infinite" that he makes elsewhere, between merely "transfinite" or "consistent multiplicities," which do form sets, and the "absolutely infinite" or "inconsistent multiplicities," which do not. The Russell paradox does not arise for Cantor because he never assumes that every condition determines a set, but only those conditions that do not hold of too many objects, an assumption known as the principle of limitation of size. The principle is subject to differing interpretations, based on differing understandings of what it is to be "too many" to form a set. In advance of any deep analysis of how to measure "size," however, it is already clear that the limitation of size principle motivates one important axiom, namely, Zermelo's axiom of separation, according to which if given things are not too many to form a set, then any things among them are also not too many to form a set; or to put the matter another way, given any things that are among the elements of some set, they also may be "separated out" from the other elements of the set to form a subset.
Formally, one way to state the axiom is as follows:
(1) Set(u) → ∃v(Set(v) & ∀w(w ∈ v ↔ w ∈ u & w ≺ zz)).
It is not hard to see that (1) is equivalent to the following:
(2) Set(u) & ∀w(w ≺ yy → w ∈ u) → ∃v(v η yy).
And to eliminate the use of the defined symbol ∈, it is also not hard to see that (2) is equivalent to the following:
(3) Axiom of Separation: ∀w(w ≺ yy → w ≺ xx) → (∃u(u η xx) → ∃v(v η yy)).
Separation could equivalently be formulated as a scheme:
(4) Set(u) → ∃v(Set(v) & ∀w(w ∈ v ↔ w ∈ u & φ(w))).
For (1) is implied by (4), being simply the instance with w ≺ zz as φ(w); while conversely comprehension, which for any φ(w) gives us zz such that w ≺ zz if and only if φ(w), together with (1) implies
(4). But clearly no one becomes convinced of a scheme by becoming convinced separately of each of its instances, one by one; there must be some single underlying principle, and much as (3.1) made
explicit an assumption underlying conventional formulations of extensionality, so (3) above makes explicit the single assumption underlying schematic formulations of separation. Hence (3) will be
taken as our official version. Formulating separation as a single axiom rather than a scheme will prove to be important in §7 below. However formulated, the assumption of separation is so fundamental to Cantorian thought that it is arguably inappropriate to apply Cantor's word "set" (Menge) to theories (such as Quine's NF and ML) that do not accept it. In other words, separation may be regarded
as a partial explication of the concept of set, indicating what sets are supposed to be like if they exist. It is also, however, a kind of existence assumption. Specifically, it is a relative
existence assumption that, given one set, provides us with others. But it is not a positive existence assumption. The axioms adopted so far, taken together, do not yet imply the existence of a single
set. Now as has been said, on virtually any understanding of "many" and related notions, separation follows from the principle that objects form a set unless they are too many, because if all of these are among those, then there are no more of these than of those. Likewise, if there are just as many of these as of those, and if those are not too many, then these are not too many either. To extract an axiom from this latter thought, however, we need to adopt some specific understanding of "just as many."
But here Cantor's analysis in terms of a correspondence between these and those is the only live option. Understanding "just as many" according to this analysis, we get Fraenkel's axiom of replacement, according to which if some things form a set, and some other things are in correspondence with them, then those other things also form a set; or to put the matter another way, if to each element of a set there corresponds some one other object, then the elements of the set may be "replaced by" the corresponding objects, and we will still have a set. The more precise and formal statement need not detain us here. To obtain the further axioms of the usual Zermelo–Fraenkel (ZF) set theory, one needs further specific judgments or interpretations of how many objects are "too many" to form a set. Assuming that two is not too many, we get the axiom of pairing; assuming – contrary to tradition, but following Cantor – that infinitely many is not too many, we get the axiom
of infinity, and so on. Yet further judgments or interpretations of this kind suggest axioms beyond those of ZF, the large cardinal or higher infinity axioms alluded to earlier, in an open-ended
series of increasing strength. But at present such large cardinal assumptions, not included in ZF, generally are not taken for granted as axioms by mathematicians. If a theorem depends on such an
assumption, it must be stated as part of the hypothesis of the theorem. By contrast, the famous or notorious axiom of choice (AC) generally is no longer, as it once was, stated as part of the
hypothesis of a theorem that depends on it. Today it is generally taken for granted as an axiom, and the usual system of axiomatic set theory is ZF + AC = ZFC. The main discussion, and even the
statement, of AC will be postponed to a later section, but it may be mentioned here that von Neumann has argued that even AC can be motivated as an additional axiom by appeal to a certain
interpretation of the principle of limitation of size. According to this interpretation, called the maximality principle, given objects form a set unless there are as many of them as there are
objects altogether. How AC follows from this principle is among the many points explained in the thorough study of the history of the principle of limitation of size from Cantor through Zermelo to
von Neumann and beyond undertaken in Hallett (1984). In the light of this study it can be said that the usual axioms of set theory, and large cardinal axioms also, can be motivated by appeal to a
single principle, that of limitation of size, provided we allow ourselves an open-ended series of increasingly more specific understandings of that principle. But what we do not yet have is a single
axiom formalizing a single understanding of the principle of limitation of size, from which the usual axioms can be formally deduced. If we are to obtain set theory from a single
positive existence axiom in addition to those already adopted, another approach must be tried.
Another approach is available from the work of Paul Bernays (1961), building on the work of Azriel Levy, and this rival approach seems more promising than an appeal to the maximality principle. The
background is as follows. Levy derived from the usual axioms of set theory a result known as the reflection principle, whose precise statement need not detain us. Bernays showed that most of the
usual axioms could be deduced from a strengthened version of the principle, which thus could be adopted as an axiom in their place. He also showed that several large cardinal axioms – for the
cognoscenti, those known as the axioms of inaccessible and of Mahlo cardinals of all orders, an upper bound for the cardinals obtainable in this way being indescribable cardinals – could also be
deduced from his version of the reflection principle. One may attempt to motivate the principle by appeal to the idea of limitation of size. This heuristic motivation is perhaps best presented as a
list of principles, the first of which is Cantor’s vague principle of limitation of size, each succeeding one of which represents a plausible way of making the preceding one more precise, and the
last of which will provide the basis for the formal axiom whose consequences are to be explored. Without further ado, here is the list. All principles have the general form: "The xs form a set unless . . ."
(1) . . . they are indeterminately or indefinitely many.
(2) . . . they are indefinably or indescribably many.
(3) . . . any statement that holds of them fails to describe how many they are.
(4) . . . any statement that holds of them continues to hold if reinterpreted to be not about all of them but just about some of them, fewer than all of them.
(5) . . . any statement that holds of them continues to hold if reinterpreted to be not about all of them but just about some of them, few enough to form a set.
(Note that the transition from (4) to (5) would be automatic assuming the maximality principle.) Our interest will in fact be limited to the case where the xs are all objects. If our general
understanding is that our quantifiers range over all objects, then every statement is about all objects, and the principle thus becomes:
(6) Any statement that holds continues to hold if reinterpreted to be just about the elements of some set t.
With respect to what is expressed by φ, the macrocosm of all objects is "reflected" in the microcosm of elements of t. One must be careful about how one applies this reflection principle. One thing
that holds is that no object is a set that has all objects as elements. Applying reflection might seem to give a set t for which it holds that no object is a set that has all elements of t as
elements; and, of course, there can be no such set. This example, however, is an improper application. Proper application only yields a set t for which it holds that no element of t is a set that has
all elements of t as elements; and this is entirely non-paradoxical. But clearly what is needed at this point is a formal restatement. As a preliminary we need the notion of relativizing or restricting quantifiers to a formula ψ(u). By this we mean replacing statements about some or any object or objects by statements about some or any object or objects of which the predicate holds. Formally, given any formula φ, we form the relativization by replacing each quantification of the kind shown on the left side below by the quantification shown beside it on the right side.

∀u . . .      ∀u(ψ(u) → . . .)
∀∀xx . . .    ∀∀xx(∀u(u ≺ xx → ψ(u)) → . . .)
∃u . . .      ∃u(ψ(u) & . . .)
∃∃xx . . .    ∃∃xx(∀u(u ≺ xx → ψ(u)) & . . .)

Of special interest will be the case where the formula ψ(u) is u ∈ t for some parameter t. In this case one writes φᵗ for the relativization and calls it the relativization of φ to t. In this connection it is often useful to use the abbreviations on the left side below for the expressions on the right side:

∀u ∈ t (. . .)       ∀u(u ∈ t → . . .)
∀∀xx ∈∈ t (. . .)    ∀∀xx(xx ∈∈ t → . . .)
∃u ∈ t (. . .)       ∃u(u ∈ t & . . .)
∃∃xx ∈∈ t (. . .)    ∃∃xx(xx ∈∈ t & . . .)

Then the replacements needed to obtain φᵗ amount to the following:

∀u      ∀u ∈ t (. . .)
∀∀xx    ∀∀xx ∈∈ t (. . .)
∃u      ∃u ∈ t (. . .)
∃∃xx    ∃∃xx ∈∈ t (. . .)
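For readers who like to see the mechanics in miniature, here is a small sketch of my own (not part of the text's formal machinery): over a finite domain, relativizing a statement just means re-evaluating it with every quantifier restricted to a subdomain, and its truth value may change under the restriction. The names `forall`, `exists`, and `phi` are illustrative inventions.

```python
# Toy illustration: relativizing a statement to t means re-evaluating it
# with every quantifier restricted to the members of t.

def forall(domain, pred):
    """Evaluate a universal quantification over a finite domain."""
    return all(pred(u) for u in domain)

def exists(domain, pred):
    """Evaluate an existential quantification over a finite domain."""
    return any(pred(u) for u in domain)

# phi: "for every object there is a distinct object" (at least two things).
def phi(domain):
    return forall(domain, lambda u: exists(domain, lambda v: v != u))

universe = ["a", "b", "c", "d"]
print(phi(universe))   # True: the universe contains at least two objects

# The relativization of phi to t is the same statement with both
# quantifiers restricted to t; restriction can change the truth value.
t = ["a"]
print(phi(t))          # False: within t there is no distinct object
```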
Our principle then becomes the following reflection axiom:
(7) Axiom of Reflection: φ → ∃t φᵗ.
Note that (7) is meant to apply to formulas written out in primitive notation, without abbreviations, or at worst, involving only abbreviations that, like ≠ and unlike ∃!, do not involve any quantifiers in their definitions. An abbreviation involving quantifiers needs to be written out to make the quantifiers explicit, so that they can be relativized.

6
The set theory based on plural logic and the non-logical axioms of heredity (3.1), extensionality (3.2), separation (4.3), and reflection (5.7) will be called BB for Bernays–Boolos. It turns out that
all the usual existence axioms of ZF, as well as the large cardinals obtainable in the manner of Bernays, can be obtained from BB by deduction as logical consequences. Bernays was not especially
concerned with the intuitive or heuristic motivation for reflection, and in fact assumed reflection in an ostensibly stronger version than (5.7), with an additional technical condition on t that it
is not immediately obvious how to motivate by appeal to the idea of limitation of size. Our first task must be to show that this ostensibly stronger version actually follows from the version (5.7)
that has been taken as axiomatic here. The first step in the deduction is to replace the version of reflection in (5.7), which does not even explicitly state that t is a set, by a version that does
state explicitly at least that much. The deduction uses a trick employed over and over by Bernays. The trick consists in noting that any statement of course implies its own conjunction with any axiom
or theorem C. And so (5.7) yields the following:
(1) φ → ∃t(Cᵗ & φᵗ).
Taking as C the trivial truism ∃u(u = u) we get the following:
(2) φ → ∃t(∃u(u ∈ t & u = u) & φᵗ).
The existence of an element u of t implies that t is a set by (3.3), so we have the following:
(3) φ → ∃t(Set(t) & φᵗ).
The derivation of (3) is a trivial illustration of how (5.7) can yield stronger versions of reflection. For a slightly less trivial instance, take as C the axiom (3.1). What we get is the following:
(4) φ → ∃t(Set(t) & ∀u ∈ t (Set(u) ↔ ∃∃xx ∈∈ t (u η xx)) & φᵗ).
What the left-to-right direction of the second conjunct (3.1)ᵗ says is that any set that is an element of t is the set of some objects that are themselves elements of t. This is merely a long-winded way of saying that t is a set such that every element of an element of t is an element of t, a property called the transitivity of t. Thus we have obtained the following strengthening of (5.7):
(5) φ → ∃t(t is transitive & φᵗ).
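The notion of transitivity, and the passage from an arbitrary set to a transitive one, can be illustrated in a toy model of hereditarily finite pure sets, coded as nested frozensets (a sketch of my own; the helper names are invented):

```python
# Hereditarily finite pure sets modeled as nested frozensets.

def is_transitive(t):
    """t is transitive iff every element of an element of t is an element of t."""
    return all(w in t for u in t for w in u)

def transitive_closure(t):
    """Collect t's elements, their elements, and so on."""
    closure, frontier = set(t), set(t)
    while frontier:
        frontier = {w for u in frontier for w in u} - closure
        closure |= frontier
    return frozenset(closure)

empty = frozenset()                 # the empty set
one   = frozenset({empty})          # {∅}
two   = frozenset({empty, one})     # {∅, {∅}}: a von Neumann ordinal
pair  = frozenset({two})            # {{∅, {∅}}}

print(is_transitive(two))    # True: ordinals are transitive
print(is_transitive(pair))   # False: two's elements are not in pair
print(is_transitive(transitive_closure(pair)))   # True
```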
It was, of course, crucial to the derivation of (5) that η has been taken as primitive. For recall that in relativizing a formula, abbreviations defined using quantifiers are supposed to be written out in primitive terms. In principle this includes even the symbol ∈, which here is officially an abbreviation defined by (3.3). What (v ∈ u)ᵗ amounts to is the following:
(6) ∃∃xx ∈∈ t (u η xx & v ≺ xx).
If, however, t is transitive, and u is an element of t, then it will automatically be the case that the objects of which u is the set will all be elements of t, and therefore (6) is equivalent to the simple v ∈ u. In other words, in practice the abbreviation ∈ does not need to be written out in primitive terms after all, so long as the set t can be taken to be transitive, which by (5) it always
can be. The principle (5), with φ allowed to contain ∈, is the version of reflection assumed by Bernays, apart from his using the language of singular quantification over classes rather than that of
plural quantification over sets, which makes no difference to the subsequent deduction of further existence axioms. Thus at this point one could simply cite Bernays for the deduction of further
axioms, and close the present section. It may be instructive, however, to indicate the first few steps in the further deduction. The first step will be yet another application of the trick that took
us from (5.7) to (3) and from (3) to (5). A transitive set t is said to be supertransitive if any subset of any element of t, as well as any element of any element of t, is itself an element of t. By
taking as C in (1) the axiom (4.3), the following further strengthening is also obtainable:
(7) φ → ∃t(t is supertransitive & φᵗ).
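Supertransitivity can likewise be checked in a toy frozenset model of hereditarily finite pure sets (again a sketch of my own, with invented helper names; the subset enumeration uses the standard library's itertools):

```python
from itertools import combinations

def is_transitive(t):
    """Every element of an element of t is an element of t."""
    return all(w in t for u in t for w in u)

def subsets(u):
    """All subsets of a finite set u, as frozensets."""
    return (frozenset(c) for r in range(len(u) + 1)
            for c in combinations(u, r))

def is_supertransitive(t):
    """Transitive, and closed under subsets of its elements."""
    return is_transitive(t) and all(s in t for u in t for s in subsets(u))

empty = frozenset()
one   = frozenset({empty})
two   = frozenset({empty, one})
print(is_supertransitive(two))   # True: all subsets of ∅ and {∅} are in two

# Adding the pair {∅, {∅}} as an element breaks supertransitivity:
# its subset {{∅}} is not an element.
t = frozenset({empty, one, two})
print(is_transitive(t))          # True
print(is_supertransitive(t))     # False
```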
Note that it was crucial for obtaining supertransitivity that separation is formulated as a single axiom (4.3), not as a scheme (4.4). For the single-axiom formulation one needs either to use singular
quantification over classes, as Bernays did, or plural quantification over sets, as was done here. If one attempts to stick with singular quantification over sets, it will be
impossible to obtain supertransitivity, and in consequence impossible to obtain certain existence axioms. But having strengthened (5) to (7), Bernays rapidly obtains the axioms of pairing, union,
power, infinity, and replacement. For instance, following a deduction he attributes to Klaus Gloede, given any two objects a and b, we take as our φ the following logical truth:
(8) ∃u(u = a) & ∃u(u = b).
Reflection according to (3) then gives us the following:
(9) ∃t(Set(t) & ∃u ∈ t (u = a) & ∃u ∈ t (u = b)).
And this in turn implies the following:
(10) ∃t(Set(t) & a ∈ t & b ∈ t).
This already is an alternative formulation of the axiom of pairing. A more usual formulation calls for the existence of a set t whose elements include a and b and nothing else; but this follows immediately from (10) on applying separation (4.4) to the condition u = a ∨ u = b. The most usual formulation calls for the existence of a unique set, denoted {a, b}, whose elements include a and b and
nothing else; but this follows immediately on applying extensionality (3.6). The deductions of union and of power are almost equally easy and would make good exercises for the reader familiar with
the statement of the axioms. As a hint it may be mentioned that for union, transitivity is needed; for power, supertransitivity. For replacement, and for infinity and "higher infinities" or large cardinals, for which last supertransitivity and hence the single-axiom formulation of separation is again needed, as it was for power, the reader is referred to Bernays.

7
To obtain full ZFC we need to obtain three more axioms. The first of these is a stronger version of extensionality that reads as follows:
(1) ∀w(w ∈ u ↔ w ∈ v) → u = v.
This will be forthcoming from (3.6) provided we assume the following:
(2) Axiom of Purity: Set(u).
From one point of view (2) is an utter absurdity, saying that there are no objects but sets. Now the mere absurdity of a proposition is no guarantee
that some philosopher will not endorse it. So perhaps there is a philosopher somewhere who denies that anything but sets exists, and who denies that he himself exists. Or perhaps there is a
philosopher somewhere who denies that anything but sets exists, and maintains that she herself is a set. But this is not what is usually intended by those who adopt (2). From the usual point of view,
(2) merely expresses the intention to exclude anything but sets from the range of the quantifiers. Note, however, that since we are already assuming that whenever a set is included its elements are
included as well, the assumption (2) actually involves a restriction not merely to sets, but to sets all of whose elements are sets, and all elements of whose elements are sets, and so on. Only pure
sets are included, hence the name for (2). Now there are two approaches that might be taken to obtain (2) and thence (1). One would be to take it as an axiom. Another would be to try to find an
interpretation of set theory with the axiom (2) within set theory without the axiom (2). That is, one could try to find a formula π such that each axiom of set theory with axiom (2) becomes a theorem of set theory without axiom (2) when quantifiers are relativized to π. Then for every theorem θ of set theory with axiom (2), one would have a theorem of set theory without axiom (2) saying that θ holds for those sets for which condition π holds. Ideally, the formula π(u) should be one that intuitively expresses that u is a pure set, or in other words, that u, its elements, the elements of its
elements, and so on, are all sets. Now intuitively the set u, its elements, the elements of its elements, and so on, are together some objects such that u is among them and any element of a set among
them is among them. Thus intuitively if u is pure, there will be some objects such that any element of a set among them is among them, u is among them, and every object among them is a set.
Conversely, if there are some objects such that any element of a set among them is among them, and if u is among them, and every object among them is a set, then it follows that u, the elements of u,
the elements of elements of u, and so on, are all sets, and u is pure. Thus a natural candidate for the formula π(u) would be the following:
(3) ∃∃xx(∀v∀w(v ≺ xx & w ∈ v → w ≺ xx) & u ≺ xx & ∀v(v ≺ xx → Set(v))).
Reflection can be used to show (3) is equivalent to the following alternative not involving plural quantifiers:
(4) ∃t(∀v∀w(v ∈ t & w ∈ v → w ∈ t) & u ∈ t & ∀v ∈ t Set(v)).
The first conjunct of (4) says that t is transitive. Other variations in the choice of π are also possible.
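In the same spirit as the earlier toy models, purity is easy to check in a finite model with urelements, here coded as strings alongside frozensets (a sketch of my own; the names are invented):

```python
# Objects are atoms (strings) or sets (frozensets); u is pure when u,
# its elements, the elements of its elements, and so on, are all sets.

def is_pure(u):
    if not isinstance(u, frozenset):
        return False          # an atom is not a set at all
    return all(is_pure(v) for v in u)

empty  = frozenset()
impure = frozenset({empty, "atom"})        # contains an urelement
nested = frozenset({frozenset({empty})})   # {{∅}}: pure all the way down

print(is_pure(empty))    # True
print(is_pure(impure))   # False
print(is_pure(nested))   # True
```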
Whatever variant is chosen, what has to be proved is, first, that for each axiom that is being assumed, one can deduce its relativization from the axioms that are being assumed, and, second, for the axiom (2) that is not being assumed, one can also deduce its relativization from the axioms that are being assumed. It follows that whenever a statement θ follows from the axioms that are being assumed plus (2), then the relativization of θ follows from the axioms that are being assumed. Though the tedious verification of details will not be given here – for they are quite similar to the details involved in the case of the
axiom of foundation, to be discussed below – this is in fact the case whether by ‘‘the axioms that are being assumed’’ one means the other axioms of ZFC or means the axioms of BB. Moreover, it is
also true that for any large cardinal axiom, even if stronger than those provided by BB, that one might decide to assume in the future, the assumption of such an axiom would render its relativization
deducible as well. Thus one may say that, without losing any other axiom that one might want, axiom (2) and hence axiom (1) can be "obtained," not in the sense of being got by deduction as a
logical consequence, but in the sense of being got by relativization or restriction of quantifiers. An arguable advantage of the first approach of assuming purity as an axiom is that it permits a
simpler axiomatization. One can introduce what amounts to a function symbol ee, which applied to a singular variable u yields a term eeu of a kind that can be substituted for plural variables, and
whose intended meaning is "the elements of u." The symbol Set can be dropped, and the symbol η need not be taken as primitive, but rather may be taken as defined by the following:
(5) u η xx ↔ eeu == xx.
In general, the use of function symbols builds existence and uniqueness assumptions into the notation, making it unnecessary to assume them as axioms. In the present case, with ee there is no need
for the axiom of heredity, and no need for one direction of the axiom of extensionality. All that needs to be assumed as an axiom of extensionality is the following:
(6) eeu == eev → u = v.
The converse follows from the indiscernibility of identicals. By contrast, the advantage of the second approach of restricting quantifiers is the philosophical one that it makes explicit what the
first approach leaves implicit, namely, that purity is not a substantive assumption, but a restriction on the universe of discourse. It is the second approach that will
be adopted officially here. Thus the ZFC version of the extensionality axiom will not be added to the axioms of BB, and any theorems of ZFC that depend on it will not be theorems of BB; but for any such theorem θ, the relativization of θ will be a theorem of BB, or in other words, it will be a theorem of BB that θ holds of pure sets. The second of the three axioms of ZFC requiring discussion here is
the axiom of foundation, also known as regularity. As axiom (2) implies that all objects (in the range of the quantifiers) are pure sets, so foundation or regularity asserts that all objects (in the
range of the quantifiers) are well-founded sets. Since all elements of sets included in the universe of discourse are themselves included, it follows that if a set u is included, not only is u
well-founded itself, but so are the elements of u, the elements of elements of u and so on: u is hereditarily well founded. Several equivalent formal versions of foundation are known, but it will not
be necessary to give any formulation here, since the question of the status of this axiom is already well explained – with details of the kind omitted in the discussion of purity above – in
introductory textbooks, such as Hrbacek and Jech (1999). The options are the same as in the case of purity. Foundation may either be taken as an axiom, while insisting that it is not a substantive
assumption, but merely an expression of the intention to limit what is included in the range of the quantifiers; or it may be obtained from the other axioms by relativization. The latter approach has
the advantage of making explicit what the former leaves implicit, and will be adopted officially here.
We now have ‘‘obtained,’’ in one way or another, or one sense or another, all the axioms of ZF, and it ‘‘only’’ remains to consider AC. Having set aside von Neumann’s approach based on his maximality
interpretation of the principle of limitation of size, two approaches remain, and one is exactly similar to the approach to purity and foundation just taken. That is to say, choice can be obtained
from the other axioms, not by deduction as a logical consequence, but by relativization or restriction of quantifiers. For Gödel’s famous proof of the relative consistency of choice – his proof that
if ZF is consistent, then ZFC is consistent – proceeds by defining a technical notion of constructible set, and proving in ZF that all the ZF axioms, and in addition AC, hold when all quantifications
are replaced by quantifications restricted to constructible sets. If large cardinal axioms in the range obtained by Bernays are considered, they also continue
E pluribus unum: plural logic and set theory
to hold when all quantifications are thus restricted, and this is true even for some larger large cardinals, though for still larger large cardinals one needs to look for some modification of
Gödel’s method, which for all but the largest large cardinals has indeed been found. It is, moreover, tedious but routine to adapt this approach to an axiomatization based on plural logic. At a
philosophical level, however, there is a great difference between the case of choice and that of foundation or purity. For it emphatically cannot be claimed that when Zermelo originally affirmed the
choice axiom, or when Gödel himself and most later set theorists reaffirmed that axiom, they merely intended to exclude non-constructible sets from the universe of discourse. On the contrary, Zermelo was unacquainted with the sophisticated notion of the constructible set, while Gödel and many later set theorists have explicitly denied that all sets are constructible. Here one faces a
decision of principle. If the goal is to provide an intuitive motivation for the axioms, the acceptability of relativizing quantifiers will be limited to cases like purity and foundation, where the
relativization arguably represents the intentions of those who adopt the axiom in question. By contrast, if the interest lies elsewhere, in finding a reinterpretation of set theory under which all
its axioms will be derivable from a minimal basis, the method of relativizing quantifiers may be usable without limitation. It should be noted, however, that if one does allow reinterpretation
without limitation, then it is possible to do very much better in the way of obtaining set theory from a minimal basis than has been indicated so far. For recent work of Harvey Friedman shows that
without the apparatus of plural logic, without any axiom of heredity, without any version of an axiom of extensionality, and without a separate axiom of separation, all of set theory can be
reinterpreted in a system whose sole axiom is a suitably formulated reflection principle. Moreover, the formulation of the principle can be modulated so as to obtain, as one wishes, ZFC without the
power axiom, or ZFC exactly, or ZFC with the kind of large cardinals obtained by Bernays, or ZFC with much larger large cardinals. But since our interest here has been in the more direct motivation
of axioms, Friedman’s impressive results will be left aside.¹ Along with them will be left aside Gödel’s approach to obtaining AC using his constructible sets. Having set aside other approaches, one last approach to motivating the axiom of choice remains to be considered. Or rather, I should say, one last approach to obtaining the axiom, leaving it to the reader to judge how far this approach motivates the axiom. But to begin with, the statement of the axiom should be given. It reads as follows:

(1) ∀u(u ∈ a → ∃w(w ∈ u)) ∧ ∀u∀v(u ∈ a ∧ v ∈ a → (∀w(w ∈ u ↔ w ∈ v) ∨ ¬∃w(w ∈ u ∧ w ∈ v))) → ∃v∀u(u ∈ a → ∃!w(w ∈ u ∧ w ∈ v)).

¹ Friedman makes his papers available, between the time of their writing and the time of their publication, on the preprint server: www.mathpreprints.com/math/Preprint/show. Type ‘‘Harvey Friedman’’ into the window for access to these preprints.
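As a concrete illustration of the set-theoretic axiom of choice, its hypotheses and conclusion can be checked mechanically over a small finite family. In the following Python sketch, the family a and the use of min as the ‘‘choosing’’ rule are illustrative assumptions, not anything in the text:

```python
# Finite illustration of the set-theoretic axiom of choice (1):
# for a family a of non-empty, pairwise non-overlapping sets, there is
# a choice set v meeting each member of a in exactly one element.
a = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5, 6})]

# Hypotheses: every distinguished set is non-empty, and no two overlap.
assert all(u for u in a)
assert all(u == w or not (u & w) for u in a for w in a)

# Build a choice set by picking (say) the least element of each member.
v = {min(u) for u in a}

# Conclusion: v has exactly one element in common with each u in a.
assert all(len(u & v) == 1 for u in a)
print(sorted(v))  # -> [1, 3, 4]
```

Of course, for a finite family no choice axiom is needed; the point of the axiom is precisely the infinite case, where no rule like min need be available.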
If we call the sets in a distinguished, then the first clause of the hypothesis says that distinguished sets are non-empty, and the second that distinguished sets are non-overlapping: a distinguished
u and v either have exactly the same elements (and hence by extensionality are exactly the same set) or else have no common elements at all. The conclusion asserts the existence of what is called a
choice set, having exactly one element in common with each distinguished set. Now even before set-theoretic notions are introduced, there is a version of the axiom of choice that can be formulated as
a purely logical assumption. It is a scheme, reading as follows: (2) ∀∀xx(φ(xx) → ∃w(w ≺ xx)) ∧ ∀∀xx∀∀yy(φ(xx) ∧ φ(yy) → (∀w(w ≺ xx ↔ w ≺ yy) ∨ ¬∃w(w ≺ xx ∧ w ≺ yy))) → ∃∃yy∀∀xx(φ(xx) → ∃!w(w ≺ xx ∧ w ≺ yy)).
If we call objects distinctively related to each other, or distinguished for short, when the condition φ holds of them, the two conjuncts of the hypothesis of the axiom are non-emptiness and
non-overlappingness conditions for distinguished objects. The conclusion asserts the existence of some objects, which may be called the chosen objects, such that for any distinguished objects, there
is exactly one chosen object among them. It is an easy exercise to deduce (1) from (2) in our framework, taking as φ(xx) the condition: (3) ∃u ∈ a (u ≡ xx).
The hypothesis of (1) easily gives us the hypothesis of (2) for (3) as φ. What the conclusion of (2) for (3) as φ gives us is some chosen objects, the ys, while what the conclusion of (1) demands is a
choice set. What is left to the reader is to verify that the ys do form a set v as required. It is not quite so easy an exercise to show, though it is also true, that given the set theory BB, the
set-theoretic version of AC in the form (1) implies (each instance of) the logical version of AC in form (2). (This derivation of (2) from (1) is an instance of a more general phenomenon of the
derivability of logical conclusions from set-theoretic assumptions, to be discussed in the next section.) What one proves is the contrapositive, that if (2) fails (in a
particular instance), then (1) fails. From the assumed failure of (2) for a particular φ, one obtains by reflection a supertransitive set t for which (¬(2))^t holds and hence (2)^t fails. One can then take as a the set of all subsets u of t such that (4) ∃∃xx(u ≡ xx ∧ φ^t(xx)).
The non-emptiness and non-overlap clauses in the antecedent of (2)^t will then imply the corresponding clauses in the antecedent of (1) for the set a defined by (4). But if there were, as per the consequent of (1), a choice set v for a, then its elements, call them the ys, would be chosen elements, as per the consequent of (2)^t, contrary to the hypothesis that (2)^t fails. It follows that one
can obtain the usual set-theoretic version of AC without adding any new set-theoretic axioms to BB, by adding a version of AC to the background plural logic. This is the course that will be adopted
here, but the logical version of AC adopted will not be (2) above, but something else, whose introduction requires some background. The most important points about AC in the present context would be
the following. On the one hand, AC is useful for proving many mathematical results in their most general form. This pragmatic consideration has historically and practically been the most widely
persuasive motivation for the axiom. It is for this reason that, as stated earlier, mathematicians have generally acquiesced in its assumption: it is the reason why today only logicians still keep
track of which theorems require AC and which do not. On the other hand, AC is a non-constructive existence assertion, asserting that something exists for which a given condition holds, without
specifying any particular such thing. That is why when it was first introduced mathematicians did not at once embrace it. But AC is not the only or the most basic non-constructive existence assertion
in classical mathematics or logic, since the following basic law of monadic first-order logic is also such an assertion: (5) ∃u(∃vφ(v) → φ(u)).
That is why it is now widely agreed that if one is going to object to nonconstructive existence assertions, one should not wait for AC, but should begin objecting already at the level of classical
logic, if not of classical sentential logic, thus placing one’s objections entirely outside the scope of the present paper. And inversely, those who are determined, in the words of Hilbert, not to be
driven out of ‘‘Cantor’s paradise,’’ must begin their defensive operations no later than the level of monadic first-order logic, or indeed sentential logic – ‘‘Boole’s paradise,’’ as it might be called.
Hilbert, and with him Bernays, starting from this last observation, proposed a way of building AC even into first-order, singular logic, where it cannot otherwise be expressed, even as a scheme after the pattern of (2). They allow the formation of a term εu[φ(u)] which can be substituted for free variables. It may be thought of as a description, ‘‘the chosen u such that φ(u),’’ provided that this is understood in such a way that, when there is in fact no u such that φ(u), what the phrase denotes is some arbitrarily chosen object. Accordingly the following logical axiom for the ε-symbol is assumed: (6) ∃uφ(u) → φ(εu[φ(u)]).
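The behavior of Hilbert’s ε-symbol can be mimicked over a finite domain. In the Python sketch below, the domain, the fixed scan order, and the default object are all illustrative assumptions; the point is only that axiom (6) holds for such an operator:

```python
# A toy Hilbert epsilon-operator over a fixed finite domain: epsilon(phi)
# returns some u satisfying phi if one exists, and an arbitrary (but
# fixed) default object otherwise, so that axiom (6) holds.
DOMAIN = [0, 1, 2, 3, 4]
DEFAULT = DOMAIN[0]  # the "arbitrarily chosen object" for empty conditions

def epsilon(phi):
    for u in DOMAIN:        # scan in a fixed order: a deterministic "choice"
        if phi(u):
            return u
    return DEFAULT

# Axiom (6): whenever something satisfies phi, epsilon picks a witness.
for phi in (lambda u: u > 2, lambda u: u % 2 == 0, lambda u: u > 99):
    if any(phi(u) for u in DOMAIN):
        assert phi(epsilon(phi))

print(epsilon(lambda u: u > 2))   # -> 3
print(epsilon(lambda u: u > 99))  # -> 0 (the default: no witness exists)
```

Because the scan order is fixed, coextensive conditions pick out the same object, so the further axiom (7) discussed below also holds of this toy operator.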
(Actually, a biconditional version of (6) was for Hilbert the very definition of the existential quantifier, though few have followed him in this, and we will not.) The connection with (5) is apparent. The following is also sometimes assumed: (7) ∀u(φ(u) ↔ ψ(u)) → εu[φ(u)] = εu[ψ(u)].
This says that what the chosen object is depends not on the condition φ but on what things it holds of, so that if ψ holds of exactly the same things, the same object will be chosen. If the ε-symbol is added to plural logic, then in terms of it we can define the chosen object among some given objects: (8) αxx = εu[u ≺ xx].
(If we had the ee notation of the preceding section, αeeu would amount to ‘‘the chosen element of u’’ if u is non-empty.) But actually, in this case it is more natural to take α as the primitive notion, and let ε be defined by: (9) v = εu[φ(u)] ↔ ∃∃xx(∀u(u ≺ xx ↔ φ(u)) ∧ v = αxx).
To derive (6) it is enough to assume the following: (10) Axiom of Choice: ∃u(u ≺ xx) → αxx ≺ xx.
It is this version that will be taken officially as an axiom of plural logic here, alongside comprehension (2.7) and indiscernibility (2.8). Now (6) follows from (10) and the definition (9). Inversely, (10) follows from (6) and the definition (8). As for (7), the acceptance of indiscernibility (2.8) as a scheme means the acceptance of all its instances, whatever notations are added to the language. We have now added one new notation, the α-symbol, and we have the following new instance of (2.8) for it: (11) xx ≈≈ yy → αxx = αyy.
From (11) and the definition (9) (together with comprehension), (7) follows without the need to assume anything like it as an additional axiom. The main point is that with an α-symbol and the sole additional axiom (10) for it, the plural-logical axiom of choice (2) can be deduced. The ys required by (2) are given as follows: (12) u ≺ yy ↔ ∃∃xx(φ(xx) ∧ u = αxx).
The verification that (2) does follow from (10) given the definition (12) is left to the reader.
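The chosen-object operator and its role in delivering the chosen objects of plural AC can likewise be illustrated finitely. In the sketch below, pluralities are modeled as non-empty frozensets, alpha stands in for the chosen-object symbol, and min plays the arbitrary-but-fixed choice; all of these are assumptions made only for illustration:

```python
# Toy model of the plural choice axiom: alpha(xx) picks a fixed object
# from among any given objects xx (here, deterministically, the least).
def alpha(xx):
    return min(xx)

# Axiom: if something is among xx, then alpha(xx) is among xx.
for xx in [frozenset({3, 1, 4}), frozenset({7}), frozenset({2, 5})]:
    assert alpha(xx) in xx

# Indiscernibility instance: identical pluralities yield the same choice.
assert alpha(frozenset({1, 3, 4})) == alpha(frozenset({4, 3, 1}))

# The chosen objects yy witnessing plural AC for a condition phi on
# pluralities are exactly the values alpha(xx) for xx satisfying phi.
family = [frozenset({1, 2}), frozenset({3}), frozenset({4, 5})]
phi = lambda xx: xx in family       # a sample non-overlapping condition
yy = {alpha(xx) for xx in family if phi(xx)}
assert all(len(xx & yy) == 1 for xx in family)
print(sorted(yy))  # -> [1, 3, 4]
```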
So much for the axiomatization of set theory (by heredity, extensionality, separation, and reflection), starting from a suitable plural logic (with comprehension, indiscernibility, and choice). We
have seen what the plural-logical perspective of Boolos can contribute to set theory in the style of Bernays: it enables us to do without classes, and naturally suggests a choice of primitives that
allows the reflection principle to be expressed in a particularly simple form that makes it arguably an expression of Cantor’s principle of limitation of size. Before closing, I should mention what,
conversely, Bernays-style set theory can contribute to Boolos’s plural logic. The point to be made is one familiar to specialists, though in the idiom of second-order logic and class-set theories,
rather than of plural logic. It has also been treated briefly in the idiom of plural logic in Boolos (1985), but for the sake of completeness, and because Boolos’s treatment is rather compressed, and
stops short of explicitly endorsing the reflection principle, let me review the matter here. But first, since the point to be made pertains to a model theory for plural logic, a word of caution may
be in order. Throughout philosophical logic, much mischief is caused by a double usage of the word ‘‘semantics.’’ It is used on the one hand for models, like those provided by Tarski for singular or
first-order logic, or by Kripke for modal logic; and it is used on the other hand for a theory of meaning. Confusion between these two usages is manifested in the literature in two different,
complementary ways. On the one hand, if a model theory has not yet been developed for a given logical notion, it may be alleged that the notion is ‘‘meaningless’’ because it lacks a ‘‘semantics.’’ On
the other hand, once a model theory has been developed for a given logical notion, it may be alleged that problematic ‘‘ontological commitments’’ are implicit in use of the notion – for instance,
that ontological commitment to ‘‘unactualized possible worlds’’ is implicit in
the use of modal notions – because something like them appears in its ‘‘semantics.’’ Both types of objections could be raised against plural logic. On the one hand, I have not yet presented a model
theory for plural logic, and when I do it may not immediately be found satisfactory, so it might be claimed that the meaningfulness of plural quantification is in doubt. On the other hand, when I do
present a model theory and an argument that it is satisfactory, if that argument is accepted, then since the model theory will involve an apparatus of sets, it might be claimed that this shows that
an ‘‘ontological commitment’’ to sets is implicit in the use of the plural. Against the first objection I maintain that even if no one ever did present a satisfactory model theory for plural logic,
the plural was in systematic use in natural languages long before model theory for anything had been born or thought of, and such long-standing systematic usages are meaningful if anything is.
Against the second objection I maintain – in addition to Boolos’s point about the Russell paradox in (3.10) and (3.11) and their analogues for classes – that the transition from plural language to
set-theoretic language in the work of Cantor and his followers involved an intellectual struggle more difficult than would have been called for if the task had been merely one of making explicit
something already implicit in ordinary language. So much by way of warning against confusing model theory with ‘‘semantics’’ in the sense of a theory of meaning. Now for the model theory itself.
Familiarity will be assumed with the usual notion, due to Tarski, of a model for first-order logic, though it may be well to begin with a quick outline even of this. To illustrate the principles
involved, it will be enough to consider formulas with just one non-logical symbol, a two-place predicate R. Under Tarski’s definition, a model M would consist of a non-empty set M, the universe of
the model, and a set R^M of ordered pairs of elements of M, the relation of the model. In order to define what it is for a formula without free variables to be true in the model, one must define what
it is for a formula with free variables to be true in the model relative to an assignment of elements of the universe to its free variables. The details of the inductive definition will not be
recalled, except to mention that at the base step, Ruv is true for the assignment of λ and μ to u and v if and only if the ordered pair (λ, μ) belongs to the set R^M; while at the induction step for the universal quantifier, ∀uψ(u) is true if and only if ψ(u) is true whatever element λ of M is assigned to u. In (one version of) the usual symbolism:
(1) M, λ, μ ⊨ Ruv iff (λ, μ) ∈ R^M
(2) M ⊨ ∀uψ(u) iff M, λ ⊨ ψ(u) for all λ ∈ M.
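Tarski’s clauses can be implemented directly for a finite model. In this sketch the universe M and the relation R are made-up examples; only the shape of the satisfaction clauses comes from the text:

```python
# Minimal Tarski-style satisfaction for the one-predicate language:
# a model is a non-empty universe M together with a set R of ordered
# pairs. Base clause: Ruv is true under an assignment of lam, mu to
# u, v iff (lam, mu) is in R; a universal quantifier ranges over M.
M = {0, 1, 2}
R = {(0, 1), (1, 2), (0, 2)}  # a sample relation: "less than" on M

def sat_Ruv(lam, mu):
    return (lam, mu) in R

# Check that M satisfies the sentence: for all u there is v with
# (Ruv or Rvu), i.e. every element is related to something.
print(all(any(sat_Ruv(l, m) or sat_Ruv(m, l) for m in M) for l in M))  # -> True
```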
There is a standard extension of this notion of model to monadic second-order logic, where we have in addition to the variables u, v, . . . a second style of variable X, Y, . . . and an additional two-place logical predicate ∈, and therewith a new kind of atomic formula exemplified by u ∈ X. All that is changed in the definition is that truth is now relative to an assignment not only of an element λ of M to each variable u of the old kind, but also of a subset Λ of M to each variable X of the new kind. At the base step, u ∈ X is true for the assignment of λ to u and Λ to X if and only if λ is an element of Λ. At the induction step for the universal quantifier, ∀Xψ(X) is true if and only if ψ(X) is true whatever subset Λ of M is assigned to X. In symbols:
(3) M, λ, Λ ⊨ u ∈ X iff λ ∈ Λ
(4) M ⊨ ∀Xψ(X) iff M, Λ ⊨ ψ(X) for all Λ ⊆ M.
Turning now to plural logic, it has what I will call an official model theory in which truth is defined relative to an assignment of an element λ of M to each singular free variable u, and a subset Λ of M to each plural free variable xx. At the base step, u ≺ xx is true for the assignment of λ and Λ to u and xx, respectively, if and only if λ is an element of Λ. At the induction step for the universal quantifier, ∀∀xxψ(xx) is true if and only if ψ(xx) is true whatever subset Λ of M is assigned to xx. In symbols:
(5) M, λ, Λ ⊨ u ≺ xx iff λ ∈ Λ
(6) M ⊨ ∀∀xxψ(xx) iff M, Λ ⊨ ψ(xx) for all Λ ⊆ M.
Obviously, the effect is just the same as if each plural variable xx were replaced by a variable X, and each atomic formula u ≺ xx by an atomic formula u ∈ X, and then the standard model theory applied to the resulting second-order sentence. In this official model theory for plural logic validity is defined in the usual way as truth in all models. The result is that the claim that some plural formula is valid will be expressible in the language of singular set theory, since plurals are not used in the metalanguage in (5) and (6); indeed, such a claim will be equivalent to the claim that a corresponding second-order formula is valid in the standard model theory for second-order logic, making all the many known results about that standard model theory applicable to plural logic.
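Over a finite universe, the official model theory’s plural quantifiers, which range over all subsets of M, can be enumerated outright. The universe and the sample condition below are illustrative assumptions:

```python
# In the official model theory a plural quantifier ranges over all
# subsets of the universe M - exactly like a monadic second-order
# quantifier. Over a finite M this range can be enumerated directly.
from itertools import chain, combinations

M = {0, 1, 2}

def all_subsets(s):
    s = list(s)
    return map(frozenset,
               chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

# Plural comprehension holds in every such model: for any condition phi
# on elements there is a subset containing exactly those satisfying phi.
phi = lambda u: u != 1
assert any(all((u in Lam) == phi(u) for u in M) for Lam in all_subsets(M))

# 2^|M| subsets serve as the values of a single plural variable.
print(len(list(all_subsets(M))))  # -> 8
```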
Two questions immediately arise about how satisfactory this official model theory is, and how validity in the official sense is related to validity in the intuitive sense of being true in all
interpretations. The first question
is whether it would not be more natural to define truth relative to an assignment of some elements, the λλ, of M to each free plural variable xx, with induction clauses as follows:
(7) M, λ, λλ ⊨ u ≺ xx iff λ ≺ λλ
(8) M ⊨ ∀∀xxψ(xx) iff M, λλ ⊨ ψ(xx) for all λλ ∈∈ M.
Boolos’s answer is that indeed it would be more natural to use plurals in the metalanguage, but that the official singular definition with (5) and (6) is in fact equivalent, given the comprehension axiom of plural logic and the separation axiom of set theory, to the more natural plural definition with (7) and (8). For on the one hand, by separation, for any elements, the λλ, of M, there is a subset Λ of M such that a given element λ of M is an element of Λ if and only if it is among the λλ. And on the other hand, by comprehension, for any subset Λ of M, there are some elements, the λλ, of M such that a given element λ of M is among the λλ if and only if it is an element of Λ. The second question is whether it would not be more natural to define validity in terms, not of models whose universes
must be sets, but of interpretations in a more general sense, in which the objects over which the variables range might well be all objects, or all pure sets, or all pure and hereditarily
well-founded sets, or the like. (The analogous question arises even in the case of first-order logic, and was raised long ago by Kreisel (1967).) To specify an interpretation in this more general
sense, one need only specify, by means of some condition (perhaps involving parameters) which objects the variables are to be understood as ranging over, and by another condition (again perhaps
involving parameters), which ordered pairs of such objects are to be understood as R-related. The official definition would be a special case of this general notion, so that validity in the natural
sense would imply validity in the official sense, but there would be a question about the converse implication. An answer extrapolated from Boolos’s remarks would be that indeed again the proposed
definition would be more natural than the official one, but that again the official definition is equivalent, this time by the reflection principle, to the more natural definition. To see this,
suppose a sentence σ is not valid in the general sense, and consider conditions φ(x) and ψ(x, y) specifying an interpretation in which σ is false. Here φ(x) indicates what objects the variables are to be interpreted as ranging over, and ψ(x, y) which ordered pairs of such objects are to be interpreted as being R-related. Now go through σ and replace each of the items on the left below by the
corresponding item on the right:
∀u . . .     ∀u(φ(u) → . . .)
∃u . . .     ∃u(φ(u) ∧ . . .)
∀∀xx . . .   ∀∀xx(∀u(u ≺ xx → φ(u)) → . . .)
∃∃xx . . .   ∃∃xx(∀u(u ≺ xx → φ(u)) ∧ . . .)
Ruv          ψ(u, v)
Calling the resulting formula σ*, the fact that σ was false under the given interpretation implies that ¬σ* is true. Now apply reflection to obtain a set t such that (¬σ*)^t is true. Let M be the set of all elements of t for which the condition φ^t holds, and R^M the set of all ordered pairs of elements of M for which the condition ψ^t holds, thus obtaining a model M. A little thought shows that the fact that (¬σ*)^t is true implies that σ is false in M, showing that σ is not valid in the official sense either. So much by way of argument that the official model theory is satisfactory. I have mentioned that this
model theory makes all the many known results about the standard model theory for second-order logic applicable to plural logic. One of these results is the incompleteness, or rather, the incompletability, of the logic. Validity does not correspond to deducibility using comprehension, indiscernibility, and choice – the latter being a sufficient, but not a necessary, condition for the
former – nor would the addition of further axioms enable one to capture validity either, so long as the set of axioms is recursive. For the sentences deducible from a recursive set of axioms form a
recursively enumerable set, and it is known that the second-order sentences valid in the standard model theory do not. That is the bad news. There is also, however, some good news implicit in the
discussion above of why the official model theory is satisfactory. The good news is that it will never become necessary to add any logical axioms to the three we already have, even if we become
convinced that something further not deducible from them is intuitively valid, because the effect of adding a new logical axiom can always be obtained by adding a new set-theoretic axiom instead,
namely, the axiom asserting that the candidate logical axiom is valid in the official model theory. (In the converse direction, in many but not all cases adding a new set-theoretic axiom is
equivalent to adding a new logical axiom, namely, in all those cases where the candidate set-theoretic axiom is equivalent to the assertion of the validity of some second-order sentence in the
standard model theory. This is known to include very many candidate set-theoretic axioms that have been considered in the literature, but not all large cardinal axioms.) In this sense, the logical
axioms we have are all the logical axioms we will ever need. In closing, let me reiterate that what we needed to establish that the official model theory is satisfactory, and therewith the somewhat surprising result just stated, were precisely the set-existence axioms of BB, separation and reflection. The plural logic of Boolos and the reflection principle of Bernays, though introduced independently, the
one decades after the other, turn out to be ideally suited for each other. The combination of the two is a marriage made in heaven – or at least, in Cantor’s paradise.
Logicism: a new look
After a quick review of the original Fregean logicist program, I would like to describe two recent revivals of logicist ideas – Richard Heck’s predicativist logicism, and the late Richard Jeffrey’s
logicistico-formalism – and briefly suggest how the two might be combined. Frege in his Begriffsschrift (1879/1967) presented a deductive system of second-order logic – with the first-order entities
called ‘‘objects’’ and the second-order ‘‘concepts’’ – including an absolutely unrestricted axiom of Comprehension, as follows: (1) ∃X∀x(Xx ↔ φ(x))
together with its analogues for two-, three-, and many-place relational concepts or relations. The axiom of Extensionality, in the following form: (2) X ≈ Y → (φ(X) ↔ φ(Y))
is then provable using the following definition of coextensiveness, ‘‘the analogue of identity’’ for concepts: (3) X ≈ Y ↔ ∀z(Xz ↔ Yz).
Since (2) implies the uniqueness of the concept whose existence is asserted by (1), we may speak of the concept under which fall all and only those objects x for which φ(x) holds, or for short, the concept of being an x such that φ(x), for which I will write ⟨x: φ(x)⟩. Frege added, informally in the Grundlagen (1884/1950) and formally in the Grundgesetze (1893/1903) for the purposes of the derivation of arithmetic from logic, the infamous Basic Law V, the assumption that to each concept X is associated an object §X, its extension, in such a way that the extensions of two concepts are identical if and only if the concepts are coextensive: (4) §X = §Y ↔ X ≈ Y.
We may then introduce set-theoretic notation as follows: (5) Set(y) ↔ ∃Y(y = §Y), (6) x ∈ y ↔ ∃Y(y = §Y ∧ Yx), (7) {x: φ(x)} = §⟨x: φ(x)⟩.
Frege then defines equinumerosity in the familiar Cantorian manner: (8) X ∼ Y ↔ ∃R(R is a bijection between X and Y)
where I have not bothered to write out the definition of bijection in purely logical terms. He then defines number thus: (9) #X = {y: ∃Y(y = §Y ∧ X ∼ Y)}.
Using this definition he derives what has come to be called (on account of a passing allusion on Frege’s part to Hume) ‘‘Hume’s Principle’’ or HP: (10) #X = #Y ↔ X ∼ Y.
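Hume’s Principle can be checked exhaustively over a small stock of finite concepts. In this sketch, concepts are modeled as finite sets, a number as the class of concepts equinumerous with a given one, and equinumerosity (which for finite sets reduces to sameness of size, since a bijection then exists exactly when sizes agree); the particular list CONCEPTS is a made-up example:

```python
# Finite illustration of Hume's Principle: #X = #Y iff X and Y are
# equinumerous, i.e. iff a bijection exists between them.
CONCEPTS = [frozenset(), frozenset({'a'}), frozenset({'b'}),
            frozenset({'a', 'b'}), frozenset({'x', 'y'})]

def equinumerous(X, Y):
    # For finite concepts a bijection exists exactly when sizes agree.
    return len(X) == len(Y)

def number(X):
    # A "number" #X: the collection of concepts equinumerous with X.
    return frozenset(Y for Y in CONCEPTS if equinumerous(X, Y))

# Hume's Principle, checked over all pairs from the sample stock.
assert all((number(X) == number(Y)) == equinumerous(X, Y)
           for X in CONCEPTS for Y in CONCEPTS)
print(len(number(frozenset({'a', 'b'}))))  # -> 2
```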
He then proceeds to define zero, successor, and natural number, and to derive the Peano postulates. Russell (1902) famously found a contradiction in Frege’s system. It is almost too well known to
bear repeating, but it needs to be pointed out that the contradiction about the set of all sets that are not elements of themselves uses both comprehension for a formula with a bound second-order
variable and the assumption of the existence of extensions: (11) {x: Set(x) ∧ x ∉ x} = §⟨x: ∃X(x = §X ∧ ¬Xx)⟩.
To block the paradox, therefore, one might either restrict (1), or restrict or replace (4). The first option was not seriously explored until recently. To be sure, Russell professed a ‘‘vicious
circle principle’’ banning definitions of concepts by conditions involving quantification over concepts, or impredicative definitions as they came to be called. But he also introduced an ‘‘axiom of
reducibility’’ whose effect was to circumvent such predicativity restrictions, so that as Ramsey (1925) observed he might as well not have imposed them in the first place. The second option is what
saves Russell from the contradiction. His ‘‘no classes theory’’ rejected (4) altogether, and with it Frege’s approach to logicism, on which numbers are objects or first-order entities. (For Russell
they are third-order entities, and he has to make essential use, as Frege did
not, of higher-order logic; also, since unlike Frege he cannot prove that there are infinitely many objects, he has to assume so as the ‘‘axiom of infinity.’’) Contemporary neo-Fregeanism, whose
continuous history begins (though there were significant precursors) somewhat over two decades ago with Crispin Wright’s Frege’s Conception of Numbers with Objects (1983), retains (1) and rejects
(4), but in place of the latter assumes (10) as axiomatic, thus making numbers into objects after all. The key technical result about this approach was given precise formulation and rigorous proof by
the late George Boolos (1987), who showed that second-order logic with HP, or Frege arithmetic, and second-order Peano arithmetic, or second-order logic with the Peano postulates, are interpretable
in each other and hence equiconsistent. The direction of the interpretability of second-order Peano arithmetic in Frege arithmetic he called Frege’s theorem. There are serious questions how much more
of mathematics beyond second-order arithmetic one can get on a natural extension of such an approach. (Kit Fine (2002) has a natural-seeming extension that gives third-order arithmetic.) There is
also a question whether (10) is close enough to being a ‘‘logical’’ principle (as Frege thought (4) was) to justify neo-Fregeans in calling their position ‘‘neologicism.’’ The latter kind of question
seems less of an issue with Richard Heck’s neo-neo-logicism, which takes the opposite approach of retaining (4) but restricting (1), assuming only predicative comprehension. Heck (1996) explored what
one would have been left with if one had modified Frege’s system only by imposition of a ‘‘vicious circle’’ restriction, without any ‘‘reducibility axiom’’ to cancel it, and without the further ‘‘no
classes’’ restriction. He was able to show (building on early work of Terence Parsons (1987)) that a system of this kind was consistent. He was also able to show that it is sufficiently strong to
interpret Raphael Robinson’s system Q of minimal arithmetic. Now while that system appears very weak, an idea of Robert Solovay, further pursued by Edward Nelson and then a number of others, has
shown that ostensibly much stronger theories can be interpreted in Q. See Hájek and Pudlák (1998) for details. All these stronger theories can then be interpreted indirectly (via Q) in Heck’s
system. Further refinements of these results are possible in several directions. First, interpretability of Q can be proved for weaker predicative systems than Heck’s. In the very simplest such
system, which I call PV (with P for ‘‘predicative’’ and V pronounced ‘‘five’’), one has comprehension in its simplest predicative form:
(12) ∃X∀x(Xx ↔ φ(x)), provided there are no bound concept variables in φ
and in addition Law V in its original form (4). With the definitions of set-theoretic notions as before we get a little bit of set theory. We of course get the axiom of extensionality for sets (13)
Set(x) & Set(y) & ∀z(z ∈ x ↔ z ∈ y) → x = y
from the axiom of extensionality for concepts when sets are introduced as extensions of concepts. We also easily get empty sets, singletons, pairs and so on, thus: (14) ∅ = {x: x ≠ x} (15) {u} = {x: x = u} (16) {u, v} = {x: x = u ∨ x = v}.
Here the defining formulas have no concept variables at all. When we recall that we also may allow free concept variables as parameters, we see that we also get complements, intersections, and
unions: (17) −X = {x: ¬Xx} (18) X ∩ Y = {x: Xx & Yx} (19) X ∪ Y = {x: Xx ∨ Yx}.
One consequence of (15) and (19) is worth mentioning – the axiom of adjunction: (20) ∀x(Set(x) → ∀y∃z∀w(w ∈ z ↔ w ∈ x ∨ w = y)).
This guarantees the existence of x ∪ {y} for any set x and any object y. That (13)–(19) and their consequences such as (20) are in effect all we get can be proved (for the cognoscenti: by using
elimination of quantifiers for monadic first-order logic with identity). This does not look much like set theory, but in the famous little book Undecidable Theories by Tarski, Mostowski, and Robinson
(1953), which first introduced Q, a joint result of Tarski and Wanda Szmielew is mentioned without proof, to the effect that Q is interpretable in the set theory whose axioms are extensionality, empty
set, and adjunction. In order to get an interpretation of Q we may take 0 = ∅ and for successor use either the Zermelo definition x′ = {x} or the von Neumann definition x′ = x ∪ {x}. Interestingly
enough, about ten years ago Franco Montagna and Antonella Mancini (1994) showed that Solovay–Nelson methods for interpreting other theories in Q can be adapted to prove the Szmielew–Tarski theorem
about the interpretability of Q in another theory.
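As an aside, the difference between the two successor definitions can be made concrete. The following Python sketch (an illustration of my own, not part of the formal development, with frozensets standing in for hereditarily finite sets) builds the numerals both ways:

```python
# Hereditarily finite sets modeled as Python frozensets; zero is the empty set.
EMPTY = frozenset()

def zermelo_succ(x):
    """Zermelo successor: x' = {x}."""
    return frozenset({x})

def von_neumann_succ(x):
    """von Neumann successor: x' = x ∪ {x}."""
    return x | frozenset({x})

def numeral(n, succ):
    """Apply a successor operation n times to the empty set."""
    x = EMPTY
    for _ in range(n):
        x = succ(x)
    return x

# Zermelo numerals are nested singletons; von Neumann numerals
# collect all their predecessors, so the numeral for n has n elements.
print(len(numeral(3, zermelo_succ)))       # 1
print(len(numeral(3, von_neumann_succ)))   # 3
print(numeral(2, von_neumann_succ) in numeral(3, von_neumann_succ))  # True
```

On the Zermelo definition every numeral after zero is a singleton; on the von Neumann definition each numeral contains exactly its predecessors.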
Logicism: a new look
Second, in the case of stronger systems of arithmetic interpretable in Q and hence by the Szmielew–Tarski theorem indirectly interpretable in PV, it is sometimes easier to prove interpretability in PV
directly. For the most important specific example, a Δ0-formula in the language of arithmetic is one containing only bounded quantifiers ∀x < y and ∃x < y. A difficult argument of Alex Wilkie (as
reported in Hájek and Pudlák, 1998) shows that the system called IΔ0, with the principle of mathematical induction (if φ(x) holds for zero and for the successor of any number for which it holds,
then it holds for all numbers) for Δ0-formulas, can be interpreted in Q. But A. P. Hazen showed that we can get interpretability of IΔ0 in PV without Wilkie’s difficult argument. See Burgess and
Hazen (1998) for details. Third, one can actually go beyond IΔ0, which allows only for the operations of addition and multiplication, to a system which allows also exponentiation, if one works in a
slightly stronger predicative Fregean theory, which I will call P²V. In this theory there are objects x, degree-zero concepts X⁰, and degree-one concepts X¹. A formula is of degree zero if it
contains no degree-one variables, and no bound degree-zero variables, and is of degree one if it contains no bound degree-one variables. We then have two forms of comprehension, for the two degrees:
(21) ∃X⁰∀x(X⁰x ↔ φ(x)) for φ of degree zero (22) ∃X¹∀x(X¹x ↔ φ(x)) for φ of degree one.
P²V, with axioms (21), (22), and (4), can interpret the theory known as IΔ0(exp), which in a convenient equivalent of its usual formulation may be taken to have the following axioms: (23) 0 ≠ x′
(24) x ≠ y → x′ ≠ y′ (25) ¬(x < 0) (26) x < y′ ↔ x < y ∨ x = y (27) x + 0 = x (28) x + y′ = (x + y)′ (29) x · 0 = 0 (30) x · y′ = (x · y) + x (31) x ↑ 0 = 0′ (32) x ↑ y′ = (x ↑ y) · x (33) (φ(0) & ∀x(φ(x) → φ(x′))) → ∀xφ(x) provided φ is a Δ0-formula.
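The recursion equations (27)–(32) can be checked concretely. In the following Python sketch (mine, with machine integers standing in for the formal numerals, so it illustrates the recursions rather than the formal theory) each operation is defined only from its predecessor in the sequence and the successor:

```python
# Arithmetic from successor alone, following the recursion axioms:
# (27) x + 0 = x          (28) x + y' = (x + y)'
# (29) x · 0 = 0          (30) x · y' = (x · y) + x
# (31) x ↑ 0 = 0'         (32) x ↑ y' = (x ↑ y) · x
# Machine integers stand in for numerals; y - 1 undoes the successor.

def succ(x):
    return x + 1

def add(x, y):
    return x if y == 0 else succ(add(x, y - 1))

def mul(x, y):
    return 0 if y == 0 else add(mul(x, y - 1), x)

def power(x, y):
    return succ(0) if y == 0 else mul(power(x, y - 1), x)

print(add(3, 4), mul(3, 4), power(3, 4))  # 7 12 81
```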
A formula like the conclusion of (33), consisting of universal quantifiers preceding a Δ0-formula, is called a Π1-formula. Many important theorems of number theory (including the Fermat–Wiles theorem)
have this
form. It is known that quite a bit of mathematics can be developed in such a system as the above (and others known to be interpretable in or conservative over it). Harvey Friedman has conjectured
that every Π1-theorem published in the Annals of Mathematics can be proved in such a system. The refinements of Heck’s work that I have been discussing so far for the most part go back to a joint
paper by Hazen and myself. That paper left open how much further one can go. It turns out that, unfortunately, one is not going to get in this way the whole of first-order arithmetic, or even all of
so-called primitive recursive arithmetic, PRA, which is generally accepted, following William Tait, as a formalization of finitist mathematics. This is so even if one goes beyond P²V to P³V and so
on, defined in the obvious way, or even RPV = P∞V, the union of all the PⁿV, which amounts to so-called ramified predicative second-order logic plus Law V. This is on account of considerations
related to Gödel’s second incompleteness theorem, together with the fact that there is a finitist consistency proof for RPV. (The original consistency proofs of Parsons and Heck were model-theoretic
rather than proof-theoretic in character, and infinitistic rather than finitistic.) This is the main new result in this area to be found in my little book Fixing Frege (2005b) where technical details
and bibliographical references for all the material described so far can be found. (Important improvements have since been obtained in forthcoming work by Mihai Ganea and by Albert Visser.) To sum up
so far, predicative logicism provides a foundation for a respectable modicum of arithmetic, but nothing anywhere near the whole of classical mathematics.

2
Lite ‘‘beer’’ is a fluid with approximately the taste of a mixture of 50 per cent real beer and 50 per cent soda water. Lite ‘‘logicism’’ is something brewed up by my late colleague Richard Jeffrey
in his last years (1996, 2002), with the approximate composition 50 per cent logicism plus 50 per cent formalism. Though this recipe does not, perhaps, make it sound very appetizing, I myself on
tasting it have found it considerably more palatable than I had expected, and I would like to say enough about it to tempt some of you to take a sip. The leading idea can be brought out by
contrasting the following Hilbert proportion: (1) computational : mathematics :: empirical : physics
which is the leading idea behind formalism, with the following Jeffrey proportion: (2) logical : mathematics :: empirical : physics.
So let me begin with Hilbert, in interpreting whom I follow Weyl (1944). With both Hilbert and Jeffrey we have a view of mathematics self-consciously modeled on a certain philosophy of physics (more
popular perhaps in Hilbert’s day than in Jeffrey’s). On this view the theoretical portions of physics are only there to imply empirical laws: universal generalizations whose instances are empirically
decidable. Now it is an immediate consequence of the definition of empirical decidability that theoretical physics and empirical laws cannot imply any empirically decidable sentences that could not
be discovered directly by observation. But they can yield such sentences more quickly, as predictions of experiences future rather than records of experiences past. A better description of the
philosophy of physics in question is that it takes physics to be nothing but a giant engine for generating empirical predictions. While the ‘‘nothing but’’ is controversial, it is comparatively
uncontroversial that the data to which physics is responsible are empirical, and that empirical fruitfulness must be demanded. It is demanded of higher and higher theories that they should continue
to yield more and more empirical predictions. On Hilbert’s view the theoretical (or as he called them ‘‘ideal’’) portions of mathematics are only there to imply universal laws whose instances are
computationally decidable sentences (which he considered the only ‘‘real’’ sentences). In modern terminology these would be Π1-sentences. Now again it is an immediate consequence of the definition of
computational decidability that theoretical mathematics and Π1-sentences cannot imply any computationally decidable sentences that could not be discovered directly by calculation. But again they can
yield such sentences as ‘‘predictions’’ (for instance, the commutative law of multiplication predicts that the results of multiplying two hundred-digit numbers in one order and in the opposite order
will be the same far more quickly than we can verify as much by tedious calculations). Hilbert thought that higher and higher mathematical theory would be computationally fruitful, yielding new
computational predictions, in the sense that it would yield Π1-sentences more quickly, but not in the sense that it would outright yield more Π1-sentences. On the contrary, his program was to try to
convince the finitist of the reliability of classical mathematics, through proving by finitist means that for any Π1-sentence having a classical proof, it is possible in principle, though perhaps not
feasible in practice, to produce a finitist proof.
Gödel’s incompleteness theorems showed Hilbert was wrong: classical mathematics cannot be finitistically proved to be reliable or even consistent; we must be content with inductive evidence on these
points. This is the negative side of the coin whose positive side is that higher and higher theories do not just yield quicker proofs of Π1-sentences that could be proved, albeit perhaps very much
more slowly, in lower theories, but yield outright new Π1-sentences, so that we have computational fruitfulness in a stronger sense than Hilbert expected. All this is because a higher theory can prove
the consistency of a lower theory, as the lower theory itself cannot, and because consistency can be expressed as a Π1-sentence. Though I have so far spoken only of Π1-sentences, there are important
distinctions among them. Hilbert accepted Π1-sentences involving arbitrary primitive recursive functions, including every function in the sequence addition, iterated addition or multiplication,
iterated multiplication or exponentiation, iterated exponentiation, and so on. From certain philosophical points of view, however, one might wish to stop the series with exponentiation, or even with
multiplication. (Recall, in particular, that the simplest form of predicative logicism gave only addition and multiplication, while the ramified form gave exponentiation but not much more.) In that
case, the question of computational fruitfulness, of whether higher and higher theory does more and more computational work, would have to be reopened. Do we actually get new Π1-sentences involving
only addition and multiplication and exponentiation? Do we get new Π1-sentences involving only addition and multiplication? (We do not get any involving only addition; for the cognoscenti, this is a
consequence of Presburger’s theorem.) These questions are non-trivial, but answers are known. An affirmative answer to the former question is implied by the work of Julia Robinson (building on the
work of Martin Davis and Hilary Putnam), and an affirmative answer to the latter question is implied by the work of Yuri Matiyasevich that builds thereupon, for an exposition of which see
Matiyasevich (1993). No logicist of any stripe can be satisfied with an approach that, like Hilbert’s as described so far, takes computation to be an end in itself, without regard to applications.
Even Hilbert recognized that some connection must be made between the use in pure mathematics of numerals as nouns, denoting numerical objects on which we can perform arithmetical operations, and the
doubtless historically far older use in applications of numerals as adjectives, in numerically definite quantifications. The computational fact that 2 þ 2 ¼ 4 is obviously somehow connected with the
logical fact that if two sheep jumped the fence in the morning and two
sheep jumped the fence in the evening, and no sheep jumped both morning and evening, then four sheep altogether jumped the fence morning or evening. But Hilbert gave no formal account of the
connection. Why do I call the fact about the sheep a logical fact? If we think of first-order logic (with identity) as enriched by (definable) numerical quantifiers of the type (3.1) ∃1xAx ↔ ∃xAx
(3.2) ∃2xAx ↔ ∀y∃x(x ≠ y & Ax) (3.3) ∃3xAx ↔ ∀z∀y∃x(x ≠ z & x ≠ y & Ax)
and so on, along with the quantifiers (4) ∃m!xAx ↔ ∃mxAx & ¬∃m+1xAx
then the fact about the sheep becomes an instance of the general first-order logical law (5) ∃2!xAx & ∃2!xBx & ¬∃x(Ax & Bx) → ∃4!x(Ax ∨ Bx).
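Since (5) is a first-order validity, every finite interpretation must satisfy it, and for small domains this can be checked by brute force. The following Python sketch (an illustration of mine) tests every interpretation of A and B over a small domain:

```python
from itertools import product

def check_law_5(domain_size):
    """Exhaustively verify (5) over a finite domain: whenever there are
    exactly two As, exactly two Bs, and nothing is both A and B, there
    are exactly four things that are A or B."""
    for A in product([False, True], repeat=domain_size):
        for B in product([False, True], repeat=domain_size):
            if sum(A) == 2 and sum(B) == 2 and not any(a and b for a, b in zip(A, B)):
                assert sum(a or b for a, b in zip(A, B)) == 4
    return True

print(check_law_5(6))  # True
```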
Any variety of logicist or set-theoretic approach will supply what Hilbert does not, a systematic way of connecting arithmetical facts with logical laws, through the characterization of arithmetical
relations and operations that Fregean and Russellian logicism share with (or rather, borrow from) the Cantorian theory of cardinal numbers. To recall those characterizations, if a and b and c are
respectively the number of As and Bs and Cs, then we have the following characterizations of arithmetic notions by existential secondorder formulas, wherein R is two-place and S is three-place: (6.1)
a b « 9R(R gives a bijection between the As and some of the Bs) (6.2) a þ b ¼ c « 9R(R gives a bijection between the As and some of the Cs and R gives bijection between the Bs and the rest of the Cs)
(6.3) a b ¼ c « 9S(S gives a bijection between the pairs consisting of an A and a B and the Cs)
where ‘‘bijection’’ is to be written out in logical terms in the usual way. If the numbers involved are finite we also have (7.1) a < b 9R(R gives a bijection between the As and some but not all of
the Bs)
(7.2) a ↑ b = c ↔ ∃S(∀x(Cx → S(–, –, x) gives a function Sx from the Bs to the As) & ∀x∀w(Cx & Cw & x ≠ w → Sx and Sw are distinct) & ∀x∀y∀z(Cx & By & Az → ∃w(Cw & Sw(y) = z and otherwise Sw agrees
with Sx))).
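The idea behind (7.2) is that the Cs are to be in one-to-one correspondence with the functions from the Bs to the As, of which there are exactly a to the power b. A small Python sketch (mine, purely illustrative, with hypothetical labels for the As and Bs) makes the count explicit:

```python
from itertools import product

def functions(B, A):
    """Enumerate every function from B to A, each represented as a dict."""
    return [dict(zip(B, values)) for values in product(A, repeat=len(B))]

A = ['a1', 'a2', 'a3']  # three As (labels are hypothetical)
B = ['b1', 'b2']        # two Bs
fs = functions(B, A)
print(len(fs))                      # 9, i.e. 3 to the power 2
print(len(fs) == len(A) ** len(B))  # True
```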
To illustrate by example how these characterizations provide a bridge between the computational and the logical, consider the simple law (8) ∀a∀b(a < c & c ≤ b → ¬ b ≤ a)
and any particular numerical instance, say that for c = 17: (9) ∀a∀b(a < 17 & 17 ≤ b → ¬ b ≤ a).
Using the definitions of the numerical quantifiers and characterizations of arithmetical relations and operations above, this yields (10) ∀A∀B(¬∃17xAx & ∃17xBx → ¬∃R(R gives a bijection between the Bs
and some of the As))
which is equivalent to (11) ∀A∀B∀R(¬∃17xAx & ∃17xBx → ¬(R gives a bijection between the Bs and some of the As))
which amounts to an assertion of the validity of the first-order scheme obtained by dropping the initial universal quantifiers in (11), or equivalently and more naturally to the validity of the
first-order scheme (12) ¬∃17xAx & ∃17xBx → ¬(R gives a bijection between the Bs and some of the As).
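For small numbers in place of 17, instances of the scheme (12) can be verified exhaustively. The following Python sketch (an illustration of mine) checks the pigeonhole instance with 4 in place of 17:

```python
from itertools import product

def has_injection(pigeons, holes):
    """Search every assignment of pigeons to holes for a one-to-one one."""
    for assignment in product(range(holes), repeat=pigeons):
        if len(set(assignment)) == pigeons:  # no hole used twice
            return True
    return False

# With 4 in place of 17: at least 4 pigeons cannot go one-to-one
# into fewer than 4 holes, but they can into 4 holes.
print(has_injection(4, 3))  # False
print(has_injection(4, 4))  # True
```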
One instance of (12) would be: if there are fewer than 17 pigeonholes and at least 17 pigeons, then it cannot be that each pigeon goes into one and only one hole, with only one pigeon per hole. The
general law (8), by taking more and more specific numerical instances, yields more and more pigeonhole principles. Hilbert’s idea was that just as physics is ‘‘empirical’’ not in the sense that all
its theoretical concepts admit definitions in terms of observation, but in the sense that its data are empirical, so is mathematics ‘‘computational’’ not in the sense that all its theoretical
concepts admit definitions in terms of calculation, but in the sense that its data are computational. Jeffrey’s idea is to substitute ‘‘logical’’ for ‘‘computational’’ here. Note that by the Gödel
completeness theorem, no first-order logical law can be derived from arithmetic or via arithmetic from higher mathematics, in the manner
sketched above, that cannot in principle be derived by textbook methods, though perhaps only very much more slowly. It is a question of getting results more quickly, as logical ‘‘predictions.’’ Now
even in the case of pigeonhole principles, which Sam Buss has shown can be proved relatively quickly by textbook methods, proof by instantiation of (8) will be quicker for large values of c. But in
this particular example, the Π1-sentence (8) involved is one provable already in IΔ0. A question of logical fruitfulness now arises, which Jeffrey did not find time to consider, the question whether
higher and higher theory in mathematics does any logical work, by yielding more and more logical ‘‘predictions,’’ by yielding more and more Π1-sentences that in turn yield logical laws. It turns out
that the answer is affirmative, as a slight extension or corollary of the work of Robinson and Matiyasevich alluded to earlier. This is the one formal result in this area to be found in my paper
‘‘Protocol sentences for lite logicism’’ (Burgess, forthcoming), where technical details and bibliographical references for all the material described so far can be found. A combination of the two
new forms of logicism I have been discussing can now be contemplated. On such a combined view, the ‘‘real’’ mathematics, including the characterizations needed to derive logical laws from arithmetic
results, and the most basic arithmetical results themselves, would be provided by predicative logicism, perhaps RPV or alternatively PV. The attitude towards ‘‘ideal’’ mathematics would be that of
lite logicism: it is justified by, on the one hand, inductive evidence that its logical predictions are reliable, and on the other hand, by a metatheorem to the effect that higher and higher theory
is fruitful in the sense of yielding more and more logical predictions. I do not on the present occasion wish to advocate this form of logicism in any stronger sense than commending it to the
reader’s attention as worthy of consideration. For the combination to be completely satisfactory, we would need to know that the metatheorem in question can actually be proved in ‘‘real’’
mathematics, RPV or PV. For RPV this is almost certain, though I cannot claim to have dotted every i and crossed every t. Even leaving this question unresolved, among various other loose ends, it
will I hope be clear that the potential of logicism is far from having been exhausted even today, well over a century after it was first introduced by Frege.
Models, modality, and more
Tarski’s tort
While what Alfred Tarski labeled his ‘‘semantic conception of truth’’ has been much discussed, one topic that has not received all the attention it deserves is his choice of that label. It is this
comparatively neglected aspect of Tarski’s conception that I wish to address here. But first a word about the situation prior to Tarski. I begin with a result I learned as a fifteen-year-old student
in a summer mathematics program for high-school students run by the late Arnold Ross: the theorem that every natural number is interesting. The proof is by contradiction. Suppose that not every
natural number is interesting. Then the set of uninteresting natural numbers is non-empty. So by the well-ordering property of the natural numbers, it must have a smallest element n. But if n is the
smallest uninteresting natural number, then n is interesting for that very reason. Thus we have a contradiction, establishing that our original hypothesis was false, and that every natural number is
interesting after all. But, of course, some numbers do appear completely uninteresting to most of us. I suppose a so-called dialetheist might claim that here we have yet another example of a true
contradiction, but the more usual reaction to this bit of adolescent mathematical humor is that ‘‘interesting’’ is too vague or ambiguous, too subjective or relative, a concept to be admissible in
mathematical reasoning. And when Alfred Tarski was beginning his mathematical career, most mathematicians held essentially the same opinion about the concept of truth. In order to understand what
Tarski was up to with his truth definition, one needs to keep ever in mind this historical fact. The first suspect notion that engaged Tarski’s attention was not that of truth, but rather that of
definability. That notion belongs to the same family as the notion of truth, since an object is definable if there is a condition true of or satisfied by it and it alone. And eighty years or so ago
this notion of
definability was in as much disrepute as the notion of truth itself. Thus Tarski begins his first relevant paper (Tarski 1931), on the notion of definability for sets of real numbers, by remarking
‘‘Mathematicians, in general, do not like to operate with the notion of definability; their attitude towards this notion is one of distrust and reserve.’’ And he goes on to concede ‘‘The reasons for
this aversion are quite clear and understandable.’’ Indeed, the grounds for mathematicians’ suspicions about definability are not far to seek. The emergence of Russell’s and other set-theoretic
paradoxes had put mathematicians in mind of some of the paradoxes propounded by ancient philosopher–logicians – Poincaré, as I recall, somewhere explicitly mentions Zeno the Eleatic and the school
of Megara in this connection – beginning with the paradox of the liar. And soon people began inventing modern paradoxes of a similar stripe. The closest in spirit to Russell’s paradox of the set of
sets not elements of themselves was Grelling’s heterological paradox, about adjectives or adjectival phrases not true of themselves. Paradoxes of this type had an important influence, in that they
helped convince Russell and others that what was responsible for the set-theoretic paradoxes was not the assumption that actual infinities exist – the heterological paradox has nothing to do with
infinity – but a kind of self-reference or vicious circularity that came to be called impredicativity. There were also paradoxes about definability: Berry’s, Richard’s, and above all König’s. This
last was especially important because while no one had ever taken Berry’s or Richard’s arguments to be anything but ingenious sophisms, König and presumably at least two others (the referee and the
editor who accepted his note (König 1905/1967) for publication), took his argument to be a legitimate proof, or more precisely, a legitimate disproof of the hypothesis that the continuum can be well
ordered. The argument, it will be recalled, is that supposing there exists a well-ordering W of the continuum, we may consider the set of real numbers that are definable in terms of W (including
those that, like real number zero, for instance, are definable even without mentioning W ). Since by one theorem of Cantor there are only countably many finite strings of letters of the alphabet to
serve as definitions, this set must be countable. Since by another theorem of Cantor there are uncountably many real numbers, the complement of this set must be non-empty. But then, since W is by
hypothesis a well-ordering, it must have a W-least element: the W-least real number not definable in terms of W. But this last description provides a definition of the number in question (in terms of
W ), thus yielding a contradiction, and refuting the supposed theorem of Zermelo.
It was, however, not just on account of this paradox of König’s that the repudiation of definability as an unmathematical notion was especially firm on the part of the defenders of set theory. For
there were other, more direct arguments than König’s against Zermelo’s axiom of choice, based on the assumption that the existence of a set or relation or function depends on its being definable. In
replying to these arguments, the defenders of set theory emphasized the lack of mathematical precision in the notion of definability. Thus Hadamard, in the exchange with other French analysts on the
axiom of choice (Baire et al. 1905), makes this point and goes so far as to say the notion of definability belongs to psychology rather than mathematics. And yet Tarski saw that the notion of
definability had important mathematical applications, as becomes especially clear in the various follow-ups to Tarski (1931/1983b) from which emerged the so-called Tarski–Kuratowski algorithm, which
allows one to compute the topological complexity of point-sets in the line or plane or space (open, closed, Fσ, Gδ, Borel, analytic, co-analytic, projective) by consideration of the logical
complexity (number of alternations of quantifiers when reduced to prenex form) of the condition that defines the set. The ideas in this short paper, though of the sort that once absorbed come to seem
obvious, were of the greatest importance, especially after Addison clarified the connection between the hierarchies of interest to topologists and the hierarchies introduced, also in the first half
of the 1930s, by the recursion theorist Kleene. These ideas opened the door to the application of ever more sophisticated logical techniques to the descriptive theory of point-sets, eventually leading
to the explanation by Gödel, Cohen, and their successors of why so many problems in descriptive set theory had resisted solution – they are undecidable on the basis of the conventional axioms of set
theory – and in the work of Woodin and his predecessors, showing how a satisfactory solution was obtainable on the basis of large cardinal axioms. It may be noted that Tarski in these early papers
was concerned with something more general than the definability of an element of a domain (which was what was at issue with the Berry, Richard, and Ko¨nig paradoxes), namely, the definability of a
subset of the domain, which is a matter of there being a condition satisfied by all and only the elements of that subset (definability of an element reducing to a special case, the definability of
its unit set). If one-, two-, three-, and many-dimensional sets are in question, it is necessary to consider satisfaction for conditions with one-, two-, three-, or many variables. And as Tarski
notes, the notion of truth is
simply the degenerate or dimension-zero case of the notion of satisfaction. Thus there is a direct line between the earlier papers, which defined satisfaction for a single, specific interpreted
language, and the great Wahrheitsbegriff paper, which defined it for all interpreted languages of a given kind. What concerned Tarski in that later paper is the same thing that had concerned him in
his earlier papers, namely, the rehabilitation of a notion in disrepute among contemporary mathematicians.

2
Tarski in his great paper on truth (Tarski 1935/1983c) was not interested in determining the meaning of the word ‘‘true.’’ He thought he already had a partial understanding sufficient to determine
what the extension of ‘‘true’’ was supposed to be. This understanding is expressed in Convention T, or rather, in his laying down Convention T as a criterion of ‘‘material adequacy.’’ And he
repeatedly tells us that he has no interest in going further and determining the intension of ‘‘true.’’ The other requirement he states for a truth definition, that of ‘‘formal correctness,’’ has
nothing to do with fidelity to the intuitive sense of the term, but merely means mathematical rigor, which is of course essential if the suspicions prevalent among mathematicians about the notions
with which he is concerned are to be allayed. Material adequacy and formal correctness are his only official requirements, though naturally he is also interested in the usefulness of notions he is
defining, beyond the applications he has already made of them in his earlier papers on definability. To repeat, there is no requirement of fidelity to the intuitive, pre-theoretic sense of ‘‘true,’’
beyond conformity to Convention T. Of course, mathematicians in propounding definitions of mathematical terms for words already in extra-mathematical use are generally even less interested than
Tarski was in their ordinary meanings. For they care no more about extensional than about intensional agreement between the technical sense being introduced and the ordinary sense, and they do not
lay down any criteria of ‘‘material adequacy’’ based on their (total or partial) understanding of the word in its extra-mathematical sense. Thus mathematicians feel no compunction whatsoever in
speaking about Hilbert ‘‘space,’’ though it certainly is not the kind of ‘‘space’’ that, say, astronauts travel around in. But there is also a less superficial sense in which mathematicians are
unconcerned with meanings. For they are not really concerned even with the meaning of the term in its technical sense, at least not if ‘‘meaning’’ is understood in the same fine-grained way it is
understood by lexicographers propounding definitions or philosophers propounding analyses. This deeper kind of indifference to meaning, or at least interest only in a very coarse-grained kind of
‘‘meaning,’’ is best illustrated by an example. An analysis text may define the constant e using the well-known representation as the limit of a sequence: (1) e = lim_{n→∞} (1 + 1/n)^n.
Thereafter, all theorems about e in the text will refer back to this definition, or to previous lemmas based on this definition. This includes, for instance, the well-known representations as the sum
of a series: (2) e = 1/0! + 1/1! + 1/2! + 1/3! + 1/4! + …
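The indifference in question concerns the status of (1) and (2) as definitions, not their mathematical behavior, which is quite different: the limit converges slowly, the series very quickly. A small Python sketch (mine, purely illustrative) compares the two:

```python
import math

n = 10**6
e_limit = (1 + 1/n) ** n                                  # definition (1), truncated at n
e_series = sum(1 / math.factorial(k) for k in range(20))  # definition (2), 20 terms

print(abs(e_limit - math.e) < 1e-5)    # True, though the error is still about 1e-6
print(abs(e_series - math.e) < 1e-12)  # True already with 20 terms
```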
Surely it is in this practice of fixing a definition and referring all later results back to it that we should look for the origin of Frege’s notion that in a properly constructed scientific
language, each name should be associated with a single, fixed definition. The language of the mathematical community, however, does not conform to this requirement. For while one textbook writer may
take (1) as the definition and (2) as a theorem, another writer may do the reverse. Even a mathematician taught originally from a textbook using one of these approaches may him- or herself, when he
or she comes in due course to write a textbook, adopt the other. Moreover, even if a sizable majority favors one approach, which is as may be, they will admit the legitimacy of the other. There is no
question of insisting on one definition as the true, original one. It would be wrong to say that mathematicians do not care about distinctions between definitions that are mathematically equivalent,
or even to say that they do not care about distinctions between those that are provably equivalent, or even to say that they do not care about distinctions between those that have actually been
proved to be equivalent, if the proof is long and difficult and involves advanced ideas. But they are indifferent to distinctions between definitions for which there are comparatively well-known,
short, simple, elementary proofs of equivalence. (1) and (2) are certainly not synonymous in any reasonable sense of ‘‘synonymous,’’ but they are equally acceptable as ‘‘definitions’’ of e. It is in
this situation that we find the origin of the notion expressed in Quine (1936) and elsewhere that definitional status is local and transitory – a notion at the root of his ultimate rejection of the
analytic/synthetic distinction. This same kind of indifference to meaning may be found also in Tarski’s paper, in addition to the kind of indifference to meaning I mentioned
earlier. For as Wilfrid Hodges reminds us,1 while Tarski’s general definition of truth is applicable to arbitrary interpreted languages of a certain kind, for certain special interpreted languages
another kind of definition of truth, very different in appearance from the general definition, is available: a definition of truth by elimination of quantifiers. The alternative definition is equally
mathematically rigorous or ‘‘formally correct,’’ and equally ‘‘materially adequate,’’ so that it agrees in extension with the general definition applied to this specific case, though it is by no
means synonymous with the general definition in any reasonable sense of ‘‘synonymous.’’ Where such an alternative definition is available, Tarski is perfectly happy with it. Thus Tarski is
unconcerned with the meaning of ‘‘true’’ in a double sense: first, he is unconcerned with pinning down precisely the meaning of ‘‘true’’ in ordinary language, or in insuring that ‘‘true’’ as a
technical term will agree more than extensionally with ‘‘true’’ as an ordinary term; second, he is unconcerned with differences between alternative technical definitions, if these have been proved
extensionally equivalent. And if Tarski is doubly unconcerned with pinning down the meaning of ‘‘true,’’ he is even less concerned with analyzing the meaning of any other word or symbol. A clause
such as either of the following:
(3) (A and B) is true iff A is true and B is true
(4) True(A ∧ B) ↔ True(A) ∧ True(B)
emphatically cannot be construed as telling us the meaning of the word ‘‘and’’ or the symbol ‘‘∧.’’ For in Tarski’s original set-up, the ‘‘object language’’ for which truth is being defined is
contained in the metalanguage in which the proof is being given. That is why we see ay-en-dees on both sides of (3) and carets on both sides of (4). In order to understand the definition, one must
understand the metalanguage, and that includes understanding the object language which is part of it, and therewith each of the words or symbols of the object language.
3
It was not linguistic understanding but mathematical fruitfulness that Tarski sought with his definition, and in this he was very successful. For Tarski is the creator of the incredibly rich branch
of mathematical logic
1 In his ‘‘Tarski’s truth definitions,’’ in Stanford Encyclopedia of Philosophy, http://plato.stanford.edu/entries/tarski-truth/ (available only on-line).
Tarski’s tort
known as the theory of models. Hodges, however, has noted how very slow Tarski was to advance beyond the 1933 definition of truth for sentences of an interpreted language to the definition of the
notion that is central to the theory of models, namely, the notion of truth in a structure for sentences of an uninterpreted language. Formally the step is a very short one. An interpreted language
is naturally thought of as an ordered pair consisting of an uninterpreted language and an interpretation. And an interpretation is simply a set, the domain, and an assignment of a relation or
operation of the right number of places on it to each non-logical primitive. But that is essentially what a mathematical structure is: a set, the domain, and certain distinguished relations and/or
operations on it, distinguished from each other by certain symbols associated with them. For instance, a ring is, first of all, a set with two binary operations, one written additively and one
written multiplicatively. And formally, the step from a two-place relation between a sentence and an ordered pair consisting of an uninterpreted language plus an interpretation or structure to a
three-place relation among a sentence, an uninterpreted language, and a structure or interpretation is a very short one. But though Tarski was in effect operating with the latter notion within a few
years, he did not give an explicit definition until over two decades later. Indeed, it is oversimplifying to say that he gave the definition, since it appears in a joint paper with a student (Tarski
and Vaught 1956). This notion of a sentence of a given uninterpreted language being true in a structure, or conversely, of a structure being a model of a sentence of an uninterpreted language, has
proved immensely useful both in applications to core mathematics (mainly abstract algebra) and in applications to the metamathematics of set theory. The notion is also needed to give a fully rigorous
statement of the Löwenheim–Skolem and Gödel completeness theorems, which were proved before they were stated, so to speak. Closely related to this last application was another application
envisioned by Tarski, that of giving a definition of logical truth. Here, however, his work (Tarski 1936/1983d) is of more ambiguous status. Tarski himself pointed to one limitation of his definition
of logical truth or truth by virtue of logical form alone as truth in all models: the definition presupposes a division of symbols and notions into logical and non-logical, of which Tarski was not to
give an account until a late lecture, published only posthumously (though since much discussed). Kreisel (1967) pointed to another limitation: logical truth in an intuitive sense is truth in all
interpretations in an intuitive sense, and is not restricted to truth in
interpretations in the technical sense considered by Tarski, where the quantifiers must range over a set and not a proper class. For first-order logic, as Kreisel notes, the completeness theorem
shows that it is enough to consider interpretations where the domain is a set. But for second-order logic (if one grants that it is logic) the assumption that holding in all interpretations and
holding in all set-interpretations – the intuitive notion and the Tarski notion – coincide is not provable on the basis of the usual ZFC axioms for set theory, but is in effect a large cardinal
assumption. But despite the limitations of Tarski’s approach on this one issue, there is no denying that Tarski’s definition of truth was immensely successful. Yet Tarski’s achievement was marred by
a misdeed on his part that opened the door to considerable mischief. His violation did not rise to the level of felony or even misdemeanor – indeed, as my title suggests, it was a civil rather than a
criminal wrong. It was a case of trademark infringement: his appropriating to his own use the linguists’ term ‘‘semantics.’’ Of course, it was not really an actionable offense at all, since academic
disciplines, unlike business corporations, are not legal ‘‘persons’’ with standing to sue anyone, though in a way that is a shame. (It would be a very good thing if, for instance, the International
Seismological Union could collect hefty punitive damages from any writer who uses ‘‘epicenter’’ as a fancy synonym for ‘‘center of activity.’’) And if Tarski’s act were actionable, there would be, as
in virtually all cases of this particular offense, something to be said for the defense. There was a minority usage of ‘‘semantics’’ for something other than a theory of the meanings of words.
Indeed, there were several minority usages, and in ‘‘The semantic conception of truth’’ (Tarski 1944), he is quick to disassociate himself from one of them, the ‘‘General Semantics’’ of another
Polish thinker, Count Alfred Korzybski. But there can hardly be any question that what ‘‘semantics’’ conveyed and conveys to the mind of the general reader is a theory of meaning, which Tarski’s
theory most emphatically was not. By calling his theory ‘‘semantics,’’ Tarski opened the door to endless misunderstandings on this point. There has been significant damage to logic arising from such
misunderstandings, from confusion of model theory or ‘‘semantics’’ improperly so-called with meaning theory or ‘‘semantics’’ properly so-called. Needless to say, if one is careful, one can avoid the
confusion even while keeping the double use of ‘‘semantics’’ by distinguishing formal from linguistic. But in general usage ‘‘formal semantics’’ is a case of oxymoron and ‘‘linguistic semantics’’ a
case of pleonasm, and it would have been better not to create a situation where there is a call for distinguishing adjectives in the first place.
4
Tarski’s usage did create such a situation, or at least the decision by most of his successors to follow him in his usage of ‘‘semantics’’ has done so. For his usage, though not in fact followed by
workers in first-order model theory, has generally been followed by philosophers, and even more so by computer scientists working in the field of modal, temporal, and related logics. Hence, for
instance, the title ‘‘Semantical considerations on modal logic’’ for the paper Kripke (1963) in which the inventor of Kripke models finally belatedly published his model theory for modal logic. Not
everyone who follows Tarski’s usage is confused, of course, but everyone who does so encourages confusion, and there has been confusion enough, and this of two kinds: spurious attributions of
ontological commitment to commonplace locutions and unwarranted complacency about the intelligibility of dubious notions. Let me take up the first of these phenomena first, illustrating by the case
of tense logic. In tense logic we have future-tense and past-tense operators F and P whose intended meaning is something like ‘‘it (sometime) will be the case that . . .’’ and ‘‘it (once) was the
case that . . .’’ These behave syntactically like negation: they are one-place connectives. But it is not hard to see that we cannot treat them model-theoretically like negation. These parallel
clauses:
(5) M ⊨ ¬A iff it is not the case that M ⊨ A
(6) M ⊨ FA iff it will be the case that M ⊨ A
will not do because we are looking at mathematical models here, and mathematical facts about mathematical objects do not change. If a model does not satisfy a sentence now, there is no use waiting,
for it never will. The mathematical logician will naturally want to deal with this problem by adopting the kind of treatment of time that is used in mathematical physics, where time is in effect
represented as an extra spatial dimension. This amounts to purging the language of tense, replacing sentences of the type, ‘‘It is raining’’ in the present tense by formulas of the type ‘‘It be
raining at time t,’’ where the ‘‘be’’ is tenseless, or rather by the instantiation of such a formula where a constant c for the present is put in for the variable t. Then instead of ‘‘It will rain,’’
or in more stilted and stylized form, ‘‘It will be the case that it is raining,’’ we get ‘‘There exists a time t such that t is future relative to c and it be raining at time t.’’ This is, in Quinean
terms, regimentation in first-order logic. Or if the logician does not insist on
translating tense-logical sentences into first-order sentences quantifying over times, and introducing a relative futurity relation, the logician will anyhow, in presenting a model theory for tense
logic, use exactly the notion of model one would get if one did translate into first-order terms and use Tarski-style model theory for the latter. In other words, in place of a classical model, which
at the sentential level is just an assignment of truth values to sentence letters, one has something more complicated: a domain X of times and a relative-futurity relation R on it, and an assignment
to each a in X of a model of the classical kind, which is to say, an assignment of truth values to sentence letters. One then has to define not truth of a sentence in the model M, but satisfaction of
a formula in the model M by an element a of X. The relevant clauses then become as follows:
(5′) M ⊨ ¬A[a] iff it is not the case that M ⊨ A[a]
(6′) M ⊨ FA[a] iff there is a b such that aRb and M ⊨ A[b].
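To make the satisfaction clauses just given concrete, here is a minimal sketch in Python (my own illustration, not from the text): a model is a domain X of ‘‘times,’’ a relative-futurity relation R on X, and an assignment to each time of a classical valuation of the sentence letters. The tuple encoding of formulas is simply an illustrative convention.

```python
# Satisfaction for propositional tense logic over a frame (X, R) with a
# classical valuation at each time, mirroring clauses like (5') and (6').

def satisfies(model, formula, a):
    """Does model M satisfy formula at time a?  M = (X, R, val)."""
    X, R, val = model
    op = formula[0]
    if op == "letter":   # base case: look up the sentence letter at time a
        return val[a][formula[1]]
    if op == "not":      # M satisfies not-A at a iff it does not satisfy A at a
        return not satisfies(model, formula[1], a)
    if op == "and":
        return satisfies(model, formula[1], a) and satisfies(model, formula[2], a)
    if op == "F":        # M satisfies FA at a iff some b with aRb satisfies A
        return any(satisfies(model, formula[1], b) for b in X if (a, b) in R)
    if op == "P":        # past-tense mirror: some b with bRa satisfies A
        return any(satisfies(model, formula[1], b) for b in X if (b, a) in R)
    raise ValueError(f"unknown operator {op!r}")

# Three times 0 < 1 < 2; "it is raining" (p) holds only at time 2.
X = {0, 1, 2}
R = {(0, 1), (0, 2), (1, 2)}
val = {0: {"p": False}, 1: {"p": False}, 2: {"p": True}}
M = (X, R, val)

print(satisfies(M, ("F", ("letter", "p")), 0))   # True: it will rain
print(satisfies(M, ("F", ("letter", "p")), 2))   # False: nothing is still future
print(satisfies(M, ("P", ("letter", "p")), 2))   # False: it has not yet rained
```

Note that, exactly as the text observes, the mathematical model never changes: what varies is only the time parameter a at which satisfaction is evaluated.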
Now philosophically, thinking tenselessly raises a whole raft of new questions, and above all this one: according to our ordinary tensed ways of thinking, I always have been (as long as I have been
around) and am always going to be (for as long as I am still around) three-dimensional. On this new tenseless way of speaking, it seems that I be at different times, or if one prefers, in different
temporary states of the world, or if one really wants to encourage confusion, in different temporary worlds. But how can I be in more than one (temporal) ‘‘place’’? Should I conceive of myself as
being four-dimensional, with various three-dimensional stages, ‘‘temporal segments’’ of me, at various times? Should I conceive of myself as being only at the present time, and merely having past and
future counterparts? Quite a few puzzles arise. Nonetheless, it may be insisted that the tenseless way of thinking is more scientific, and perhaps even demanded by certain advances in physics during
the last century. That may be so, but even if it is, the old-fashioned tensed way of thinking is going to be around for a long time before the new-fangled tenseless way is universally adopted, and so
we would do well to understand the relationship between the two. This is what tense logic, as developed by Prior (1967b), in effect does. Physical assumptions about the structure of time correspond
naturally to assumptions about the structure or ‘‘frame’’ of times with the earlier–later relation (which is what relative-futurity amounts to), or in other words, different classes of models.
Completeness theorems relate various axiomatic systems of tense logic to various classes of
models, so that, for instance, the assumption that the earlier–later relation on times is dense corresponds to the assumption that Fp → FFp is valid. So far, so good. However, if one thinks of the
model theory as a meaning theory for tense logic, one will be led to the idea that the tenseless way of thinking is not some new-fangled techno-scientific development of the last century, but rather
has been what, despite superficial appearance to the contrary, our ordinary tensed ways of speaking ‘‘deep down’’ have meant all along. The notion of a durationless instant of time is not, on this
view, a sophisticated, advanced, historically late posit, but rather is something that has been implicitly assumed since time immemorial. In a word – or rather, in two words – even our ordinary,
old-fashioned, commonsense tensed ways of speaking and thinking are ontologically committed to instants of time – or worse, to ‘‘temporary worlds’’: ‘‘They aren’t there on the surface, but they’re
there in the semantics.’’ But in fact, their presence in the model theory – or if you must, in the formal ‘‘semantics’’ – is merely an artifact, an inevitable consequence of the fact that it is
mathematical, and therefore unchanging models that we are considering. The ‘‘ontological commitment’’ here is entirely spurious, based on a simple fallacy of equivocation between misnamed formal
‘‘semantics’’ and properly linguistic semantics:
It’s there in the models.
So it’s there in the semantics.
So it’s there in the meaning.
Mathematical facts are necessary as well as permanent, and exactly parallel considerations apply to the modal case. Mathematical modelers will reach for a treatment of modality resembling that used
in mathematical probability and statistics, with a space of ‘‘possibilities,’’ or if you prefer ‘‘possible states of the world,’’ or if you really want to encourage confusion ‘‘possible worlds.’’ But
this gives absolutely no reason whatsoever to suppose that when I say ‘‘I could have got stranded in an airport owing to a snow storm’’ I mean there is a possibility or possible state of the world or
possible world where I, or a state or modal segment of me, or a counterpart of me, is stranded. The fact that what some people insist on calling ‘‘possible worlds’’ are there ‘‘in the semantics’’
tells us nothing about ‘‘ontological commitment’’ to them, if the ‘‘semantics’’ is only formal and not linguistic. Similar remarks apply to the supposed ontological commitment of plural logic to sets
or classes, and there are other examples as well. The clash between the usage of ‘‘semantics’’ by Tarski and his followers and the majority use in the educated public is forever misleadingly
suggesting existential implications that are not there.
It is a significant historical fact that the model theory for modal logics was worked out in the late fifties and early sixties, while the distinctions among logical demonstrability, logical
validity, analyticity, aprioricity, and necessity in the sense of ‘‘would have been no matter what’’ or ‘‘couldn’t have been otherwise’’ were largely overlooked until the later sixties and early
seventies – not that everyone pays sufficient attention to such distinctions even today. That is to say, we were in possession of a model theory for modal logic well before most modal logicians had
an unambiguous and unconfused understanding of what the modalities mean, or even any firm guarantee that the modalities do mean something. This fact alone should warn us that it is one thing to have
a theory of models, and another to have a theory of meaning. The following argument is utterly invalid:
These sentences have models.
These sentences have a semantics.
These sentences have a meaning.
Philosophically speaking, the model theory for modal logic has much less direct value than does the parallel model theory for temporal logic. This is because different theories about time do
naturally present themselves as theories about the relative-futurity relation on stages of the world, whereas different conceptions of modality do not naturally present themselves as theories about
the relative-possibility relation on states of the world. It is very useful to know that a certain class of modal models corresponds to a certain axiomatic system, if there is reason to be interested
in that axiom system. But philosophically speaking, model theory is of little direct use in establishing that a given axiom system is appropriate for a given conception of modality. Indeed, though we
have had the model theory for getting on towards a half-century, the correct axiom system has been convincingly determined only for a couple of conceptions of necessity: truth by virtue of logical
form, and provability in a given formal system. I have addressed these topics elsewhere, but let me add just a word here about provability logic. Here we have a clear understanding or intended
interpretation of what the box and diamond are to mean, and hence a clear understanding or intended interpretation of what it is for a sentence involving boxes and diamonds to be a law of logic, true
in all instances – essentially, for all substitutions of sentences of first-order arithmetic for the sentence letters p, q, r, and so on. We have a proof that a certain axiomatic system, GL, is sound
for this intended interpretation. We have also, thanks
to Segerberg, a proof that GL is both sound and complete for a certain class of modal models. But for a long time this is all that we had, and we did not have what we want from a theory of models,
namely, an assurance that truth in all models corresponds to truth in all instances, according to our intended interpretation. This is something like the Kreisel gap noted earlier in connection with
first-order logic, but very much more serious. But the serious lack was eventually supplied by the genius of Solovay. As it happened, he made use of Segerberg’s earlier result. That is to say, rather
than ignore the modal models altogether and directly establish that if a sentence is consistent with GL then it is true in some arithmetical instance, thus establishing the completeness of GL for the
intended interpretation, he showed how, from a model of the appropriate class of a sentence, to produce sentences of first-order arithmetic that, substituted for the sentence letters, would result in
an instance that is true according to the intended interpretation, thus showing that truth in all models does correspond to truth in all instances, and thus indirectly establishing the completeness
of GL for the intended interpretation, given Segerberg’s result about its completeness for models of the appropriate class. The model theory is useful, but it is useful only as an auxiliary.
Segerberg’s contribution did not go to waste, but Solovay’s contribution was crucial. For a full exposition of these matters, and references to the further literature, see Boolos (1993). There are
other examples of the same phenomenon, where the model theory plays only an auxiliary role. One pertains to intuitionistic logic, with Tarski and Kripke in the role of Segerberg and Kreisel in the
role of Solovay. For an exposition see Burgess (1981a). What reflection on these examples should make clear is that merely possessing a modeltheoretic characterization of a given axiomatic system of
modal logic does not suffice to tell us that the system captures (that is, is sound and complete for) any interesting interpretation or concept of necessity. Provision of a purely formal
‘‘semantics’’ for an axiomatic system that had been without one – for instance, the provision by Fine and by Meyer of a purely formal ‘‘semantics’’ for the system R of relevance/relevant logic – does
little, and by itself does nothing towards establishing the coherence and intelligibility of any underlying motivating ideas or intended intuitive interpretation. This last is one case where there
certainly was premature celebration, provoking a paper, ‘‘When is a semantics not a semantics?’’ by B. J. Copeland (1979), that is still well worth reading to dispel confusions. So Tarski’s using the
word ‘‘semantics’’ in connection with his truth definition has opened the door to confusion in the area of
modal and more generally intensional or philosophical logic, which people like Copeland and myself have then had to come along after and try to straighten out.
6
Now one who opens a door that should have been left closed cannot be held responsible for every mischievous thing that then walks through it, and Tarski was himself the first victim of mischief
resulting from his original offense. I think he was naughty, and should have been made to stand in a corner by linguists; but he actually suffered something considerably worse: he was stood on his
head by Donald Davidson and his disciples. For Tarski’s usage has not only tended to encourage the modal muddles I have been bemoaning, but also seems to me to have been in part responsible for the
insufficiently critical reception of the truth-conditional theory of meaning; and this theory is diametrically opposed to Tarski’s own views on the meaning of truth and to his views on the paradoxes
with which we began. Now while I hope it will be universally agreed that the modal muddles Tarski’s usage may have encouraged are bad things, this certainly will not be agreed about the acceptance of
a truth-conditional theory of meaning or the rejection of Tarski’s view of the paradoxes. The most I can hope will be agreed is that if Tarski’s usage contributed to the formation of a predisposition
to accept the one and reject the other that was independent of the real merits of the case, then that development was a bad thing. In any case, besides Tarski’s usage of ‘‘semantics’’ there is
another feature of Tarski’s definition of truth, or rather, of the Tarski–Vaught definition of the model, that may have been operative. To begin with, it may be noted that once we make the transition
to the Tarski–Vaught notion of truth in a model, we soon find ourselves departing from Tarski’s original notion that truth for a given language is to be defined in a metalanguage containing that
language. For the languages for which truth in a model are being defined are formal languages, while the papers on model theory containing the definitions are, like other mathematical papers,
invariably written in (a mathematicians’ dialect of) a natural language, nowadays usually English. Thus instead of
(7) M ⊨ (A ∧ B) ↔ (M ⊨ A ∧ M ⊨ B)
(8) M ⊨ (A and B) iff (M ⊨ A and M ⊨ B)
we find
(9) M ⊨ (A ∧ B) iff (M ⊨ A and M ⊨ B).
And whereas (7) and (8) do not on the face of it look as if they could be telling one anything about the meaning of ‘‘∧’’ or ‘‘and’’ that one did not already know – after all if one doesn’t know the meaning of ‘‘∧’’ already one is not going to understand the right-hand side of (7), and if one does not know the meaning of ‘‘and’’ already one is not going to understand the right-hand side of (8) – by contrast (9) does rather look as if it were telling us something about the meaning of ‘‘∧,’’ given that we already know the meaning of ‘‘and.’’ It is tempting to think of the status of (9) as being
something like the status of the following:
(10) (Γ καί Δ) is true iff Γ is true and Δ is true.
And (10) does tell us something – I do not say everything, but I do say something – about the meaning of the Greek word ‘‘καί.’’ But notice that (10) tells us something about the meaning of ‘‘καί’’
only because we already know the meaning of ‘‘and’’ and because we already know the meaning of ‘‘true,’’ at least to the extent of knowing Convention T. So there is a significant difference between
(10) and (9), since the latter involves the symbol ‘‘⊨’’ rather than the English word ‘‘true.’’ If we consider the clauses in a Tarski–Vaught style definition, thus:
(11a) M ⊨ ¬A iff not M ⊨ A
(11b) M ⊨ (A ∧ B) iff (M ⊨ A and M ⊨ B)
(11c) M ⊨ (A ∨ B) iff (M ⊨ A or M ⊨ B)
we cannot take these to be telling us both what the double turnstile means and what the caret, wedge, tilde, and so on mean. For there are too many unknowns and not enough equations. The intention is
that the double turnstile is to be read as ‘‘true,’’ the caret as ‘‘and,’’ the wedge as ‘‘or’’; but the biconditionals would be equally appropriate if instead the double turnstile were read as
‘‘false’’ and the caret as ‘‘or’’ and the wedge as ‘‘and.’’ If a student with no previous knowledge of these matters is told that the caret, wedge, and so forth are to be read as ‘‘and,’’ ‘‘or,’’ and
so forth, then the student may be able to figure out that the double turnstile might be read as ‘‘true’’; inversely, if the student is told that the double turnstile is to be read as ‘‘true,’’ then
the student may be able to figure out that the caret might be read as ‘‘and,’’ the wedge as ‘‘or,’’ and so forth. But the student has to be given some clue or other.
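The point that the biconditionals underdetermine the readings can even be checked mechanically. The following sketch (my own illustration; the tuple encoding of formulas and the function names are conventions of the example) verifies exhaustively, over all assignments to two sentence letters, that the clauses hold whether the double turnstile is read as ‘‘true’’ with the caret as ‘‘and’’ and the wedge as ‘‘or,’’ or as ‘‘false’’ with the caret as ‘‘or’’ and the wedge as ‘‘and.’’

```python
from itertools import product

# Formulas are tuples: ("letter", name), ("tilde", A), ("caret", A, B),
# ("wedge", A, B).  evaluate() computes a classical truth value once the
# caret and wedge are assigned boolean operations.
def evaluate(f, m, caret, wedge):
    op = f[0]
    if op == "letter": return m[f[1]]
    if op == "tilde":  return not evaluate(f[1], m, caret, wedge)
    if op == "caret":  return caret(evaluate(f[1], m, caret, wedge),
                                    evaluate(f[2], m, caret, wedge))
    if op == "wedge":  return wedge(evaluate(f[1], m, caret, wedge),
                                    evaluate(f[2], m, caret, wedge))

AND = lambda x, y: x and y
OR  = lambda x, y: x or y

# Intended reading: double turnstile = "true", caret = and, wedge = or.
def holds1(m, f): return evaluate(f, m, AND, OR)
# Rival reading: double turnstile = "false", caret = or, wedge = and.
def holds2(m, f): return not evaluate(f, m, OR, AND)

p, q = ("letter", "p"), ("letter", "q")
models = [{"p": a, "q": b} for a, b in product([True, False], repeat=2)]

# The biconditional clauses pass under BOTH readings (De Morgan duality).
for holds in (holds1, holds2):
    for m in models:
        assert holds(m, ("tilde", p)) == (not holds(m, p))
        assert holds(m, ("caret", p, q)) == (holds(m, p) and holds(m, q))
        assert holds(m, ("wedge", p, q)) == (holds(m, p) or holds(m, q))
print("the clauses hold under both readings")
```

The check goes through because falsity of a disjunction is falsity of both disjuncts, and falsity of a conjunction is falsity of at least one conjunct: too many unknowns, not enough equations.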
Now, once the model theorist is past the student stage, and knows that the double turnstile is customarily to be read as ‘‘true,’’ the model theorist does learn something about how an addition to the
usual list of logical symbols might be read, if the colleague proposing the addition indicates how the usual definition of double turnstile is to be extended to the extended language. For instance, a
clause
(12) M ⊨ (A ↓ B) iff not (M ⊨ A and M ⊨ B)
would suggest the reading ‘‘not both’’ for the down arrow. And to give a perhaps more realistic example, in papers on generalized quantifiers the same symbol Q gets used over and over, with different
clauses in the definition of double turnstile in different papers, suggesting different readings in different papers, among them ‘‘there exist infinitely many,’’ ‘‘there exist uncountably many,’’
‘‘most,’’ and others. The analogue of (12) in a given paper in such cases tells one something about what the symbol Q is being used to mean in that given paper. But our ability to guess the readings
is entirely dependent on our familiarity with the meanings of ‘‘infinitely many’’ and ‘‘uncountably many’’ and ‘‘most,’’ and on our familiarity with what ‘‘true’’ means, at least to the extent of
being familiar with Convention T, and with the custom of reading the double turnstile as ‘‘true.’’ And in any case, we are not really being told that ‘‘QxA(x)’’ in a given paper means ‘‘there exist
infinitely many x such that A(x),’’ but only that it means something that is true if and only if there exist infinitely many x such that A(x), a condition fulfilled not only by ‘‘there exist
infinitely many x such that A(x)’’ but also by ‘‘it is not the case that for all but finitely many x it is not the case that A(x),’’ and by infinitely many other alternatives. After all, merely to be
told that some sentence of a foreign language we do not understand is true if and only if, say, snow is white, does not tell us what the sentence means. Taken together with our knowledge that snow is
white, it does tell us that the sentence means something true. But that is all. Merely being given a list of items of the following kind, one for each sentence of the foreign language:
(13) ‘‘[FOREIGN SENTENCE]’’ is true iff [ENGLISH SENTENCE]
will not tell us what the sentences mean. If I tell you
(14a) ‘‘Το χιόνι είναι άσπρο’’ is true iff snow is white
(14b) ‘‘Το κάρβουνο είναι μαύρο’’ is true iff coal is black
I allow you, given the common knowledge that snow is white and coal is black, to infer that the two Greek sentences quoted are true; but I do not divulge the meaning of any Greek sentence.
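Clauses of the kind just discussed for a generalized quantifier Q can be made concrete over a finite domain. Below is a small sketch (my own illustration; the tiny monadic language and its encoding are assumptions of the example) with Q read as ‘‘most’’ – a reading that, unlike ‘‘there exist infinitely many,’’ can actually be decided by enumeration.

```python
# Satisfaction for a tiny monadic language with a generalized quantifier
# Q, here read as "most."  A structure is a finite domain plus an
# interpretation of each predicate letter as a subset of the domain.

def extension(structure, open_formula):
    """The set of elements of the domain satisfying an open formula A(x)."""
    domain, interp = structure
    op = open_formula[0]
    if op == "pred":
        return interp[open_formula[1]]
    if op == "not":
        return domain - extension(structure, open_formula[1])
    if op == "and":
        return extension(structure, open_formula[1]) & extension(structure, open_formula[2])
    raise ValueError(f"unknown operator {op!r}")

def sat_Q(structure, open_formula):
    """M satisfies Qx A(x), with Q read as 'most':
    more than half the domain satisfies A(x)."""
    domain, _ = structure
    return 2 * len(extension(structure, open_formula)) > len(domain)

# Domain {1,...,10}; E = the even numbers, S = the numbers below 8.
M = (set(range(1, 11)), {"E": {2, 4, 6, 8, 10}, "S": set(range(1, 8))})
print(sat_Q(M, ("pred", "S")))                          # True: 7 of 10
print(sat_Q(M, ("pred", "E")))                          # False: exactly half is not "most"
print(sat_Q(M, ("and", ("pred", "E"), ("pred", "S"))))  # False: only {2, 4, 6}
```

As the text stresses, such a clause fixes at most the extension of the quantifier over each structure; it does not by itself divulge that ‘‘Q’’ means ‘‘most’’ rather than any extensionally equivalent alternative.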
Donald Davidson conjectured, however, in Davidson (1967) and sequels, that if we require some finite apparatus to generate recursively, with clauses like (10), a whole list of items of type (13) for
every Greek sentence, and if we impose some suitable further restrictions, then in fact what appear on the right-hand side of each item on the list will have to be English sentences with the same
meaning as the Greek sentence on the lefthand side. Or rather – since he was writing during the era of Quinean suspicion about meaning – the Greek and English sentences will have to be close enough
in meaning, according to our intuitive, pre-theoretic understanding of ‘‘meaning,’’ that appearing on opposite sides of a list generated in the manner indicated can serve as a workable substitute for
the intuitive, pre-theoretic, but to some suspect, understanding of sameness of ‘‘meaning.’’ In short, while in order to learn the meaning of Greek sentences and the words of which they are composed
it is not enough to be told things like (14a,b), Davidson conjectured that, in order to do so it may be enough to be told, or to come to know, such things in the right way. Davidson’s conjecture that
theories of truth can in this sense serve as theories of meaning eventually gave rise to what I will call Davidsonianism, without intending to imply that Davidson himself fully subscribed to it,
namely, the truth-conditional theory of meaning. Formally, indeed, the step from Davidson’s theory about how to tell the meanings of foreign sentences to speakers who already know the meaning of
English sentences to the Davidsonian theory that knowledge of the truth conditions of English sentences is what knowledge of the meaning of English sentences consists in, can be a very short one. For
the simplest version of Davidsonianism would simply be the analogue of what Davidson says about Greek and English applied to English and a hypothetical innate language of thought. Using the language
of Descartes to represent the language of thought, coming to know the meaning of the English sentences ‘‘Snow is white,’’ ‘‘Grass is green,’’ and so on, amounts to coming in the right way to know the
following:
(15a) ‘‘Snow is white’’ est vraie ssi la neige est blanche
(15b) ‘‘Grass is green’’ est vraie ssi l’herbe est verte.
Davidsonianism as such, it should be emphasized, is not committed to the language of thought hypothesis; I only say that Davidsonianism is immediate from the conjecture of Davidson if one accepts
that hypothesis. For some of us Davidsonianism seems, not least on account of its apparent assumption that truth is an innate idea, possession of which is a prerequisite for all language-learning, to
be preposterous. For others, the
Davidsonian assumption is so much taken for granted that it is hardly recognized as a substantive assumption at all. My concern here will be not with the enormous question of the merits or demerits
of the truthconditional theory of meaning, but only with the extent to which Tarski deserves a share of the blame or credit for its becoming so widely held a belief among philosophers as it currently
is. The first thing that must be said on this head is, of course, that Tarski’s responsibility is limited. Davidson himself is not fully responsible for his disciples’ extrapolations from his
conjectures, and Tarski is certainly not fully responsible for widespread sympathetic reception of those conjectures. It must be acknowledged, for one thing, that their sympathetic reception was
surely in part due to the prestige of the name of Davidson as a result of quite other achievements. But then again, was it not in part due to the prestige of the name of Tarski, which Davidson so
frequently invoked? Well, even if so, it must be acknowledged, for another thing, that the invocation of Tarski’s name was not entirely appropriate, since as Davidson, if not every one of his
disciples, was aware, those conjectures amount to an inversion of Tarski. For they make what for Tarski were clauses in a definition of truth in terms of already understood notions like negation and
conjunction and disjunction, into definitions of a kind of those operators, in terms of a notion of truth taken as primitive. We constantly find in the writings of Davidson and disciples mentions of
a ‘‘Tarskian’’ theory of truth, where ‘‘counter-Tarskian’’ or ‘‘anti-Tarskian’’ would have been more accurate, if less likely to confer borrowed prestige on bold (which is to say doubtful) new
conjectures. And Tarski, of course, is not responsible for this usage. But would the idea of invoking Tarski’s name at all in connection with a theory of meaning have occurred to anyone, if Tarski
had not himself attached to his theory a label ordinarily used for the theory of meaning, the label ''semantics''?

7 ''SEMANTIC''
Well, if so then one important consequence of Tarski’s usage of ‘‘semantics’’ was to help popularize a theory with implications diametrically opposed to his own view on the ‘‘semantic’’ paradoxes,
his so-called inconsistency theory of truth. This is the last point I wish to bring out, and it requires a little background. The natural anti-Davidsonian assumption is that understanding the meaning
of a word consists, not in knowledge of how it contributes to the truth conditions of sentences in which it
Tarski’s tort
appears, but rather in internalization of rules for its use. For a ‘‘use’’ theorist, it would be nothing short of a miracle if we never internalized any rules having lurking inconsistencies, and the
intractability of the liar and related paradoxes strongly suggests that in the case of ‘‘truth’’ the rules may be the obvious, inconsistent ones, permitting free passage back and forth between p and
‘‘it is true that p,’’ in accordance with an absolutely unrestricted version of Convention T. Now while Tarski neither explicitly endorses the ‘‘use’’ theory nor explicitly says that Convention T is
all there is to the intuitive meaning of ‘‘true’’ – though it represents all he is able to understand about the pretheoretic meaning of ‘‘true’’ – he does endorse one consequence of those views,
namely, the consequence that the intuitive notion of truth is the inconsistent one. For Tarski, what is to be expected of a theory of truth is therefore not a vindication of the intuitive notion, but
a restricted replacement for it: a serviceable substitute applicable not to all languages, but only to languages of a certain comparatively simple structure. This may be the view of most of us who
have worked on the technical comparative study of the various theories of truth that emerged in the wake of Kripke’s famous ‘‘outline’’ – or if not the majority view, anyhow the plurality view,
having more adherents than any particular one of the more positive theories. But among philosophers generally it is distinctly a minority view, subscribed to by my teacher Charles Chihara, my student
John Barker, and (not as the result of any influence of mine) my son Alexi Burgess, and by very few others. Indeed it seems that many if not most philosophers are so violently prejudiced against this
view that they do not even wish to contemplate what would be an appropriate response if the theory were accepted. While there may be various other reasons for this prejudice, surely one of the most
important is that the inconsistency theory of truth is incompatible with the truth-conditional theory of meaning. Insofar as his own usage of ‘‘semantic’’ tended to encourage the view that
‘‘Tarskian’’ truth conditions are central to meaning, Tarski has himself ironically helped to create a prejudice against his own views on the paradoxes and their lessons. (Here as in so many other
cases the malefactor has been himself the chief victim of his malefaction.) And though the question of merits or demerits of the inconsistency theory of truth, like the issue of the merits or
demerits of the truth-conditional theory of meaning, of which issue it is but one aspect, is too large a question to be gone into here, I hope it could at least be agreed that such questions ought to
be examined without prejudice. Since nothing, perhaps, does more to encourage a bias in favor of the view that ‘‘Tarskian’’ truth conditions are central to meaning than the fact that Tarski
himself calls the approach to truth involving such inductive clauses ‘‘semantic,’’ we have here another reason, additional to the modal muddles mentioned earlier, for avoiding that usage. Let us
therefore, in roaming the vast field Tarski opened up for us, not follow him in his one terminological trespass. Let us honor him for every aspect of what he called the ‘‘semantic’’ conception of
truth except for his calling it that.
Which modal logic is the right one?
Which if any of the many systems of modal logic in the literature is it whose theorems are all and only the right general laws of necessity? That depends on what kind of necessity is in question, so
I should begin by making distinctions. A first distinction that must be noted is between metaphysical necessity or inevitability – ‘‘what could not have been otherwise’’ – and logical necessity or
tautology – ‘‘what it is self-contradictory to say is otherwise.’’ The stock example to distinguish the two is this: ‘‘Water is a compound and not an element.’’ Water could not have been anything
other than what it is, a compound of hydrogen and oxygen; but there is no self-contradiction in saying, as was often said, that water is one of four elements along with earth and air and fire. The
logic of inevitability might be called mood logic, by analogy with tense logic. For the one aims to do for the distinction between the indicative ‘‘it is the case that . . .’’ and the subjunctive
‘‘it could have been the case that . . .,’’ something like what the other does for the distinction between the present ‘‘it is the case that . . .’’ and the future ‘‘it will be the case that . . .’’
or the past ‘‘it was the case that . . .’’ The logic of tautology might be called endometalogic, since it attempts to treat within the object language notions that classical logic treats only in the
metalanguage. However, it hardly deserves a name, since it immediately splits up into two subjects. For a second distinction must be between two senses of tautology. On the one hand, there is
model-theoretic logical necessity or validity, the nonexistence of a falsifying interpretation, ‘‘being true by logical form alone.’’ On the other hand, there is proof-theoretic logical necessity or
demonstrability, the existence of a verifying derivation, ‘‘being recognizable as true by logical considerations alone.’’ Likewise, there is a distinction between two notions of contradiction,
model-theoretic unsatisfiability and proof-theoretic
inconsistency, and between two notions of implication, model-theoretic consequence and proof-theoretic deducibility. There would be at least a conceptual distinction even if logic were understood
narrowly as first-order logic, where the model-theoretic and proof-theoretic notions coincide in extension by the Gödel Completeness Theorem. There may be a difference in extension between them when
logic is understood more broadly: for instance, if it is taken to include higher-order logic and the mathematics that goes with it. Logicians often call model-theoretic and proof-theoretic necessity
semantic and syntactic logical necessity. However, there is a conflict between this usage and the older usage of linguists on which, roughly speaking, ‘‘semantic’’ means ‘‘pertaining to meaning’’ and
‘‘syntactic’’ means ‘‘pertaining to grammar.’’ There is a conflict between the two usages of ‘‘semantics,’’ especially, because there is or may be a gap between mathematical modeling and intended
meaning. In any case, shorter labels than ‘‘the logic of semantic logical necessity’’ and ‘‘the logic of syntactic logical necessity’’ would be useful. One might use proplasmatic logic and apodictic
logic, from the Greek for model and proof. But it may be more suggestive to use validity logic and demonstrability logic, by analogy with provability logic. The analogy between provability logic and
demonstrability logic is especially close, the one being concerned with what a theory can prove, the other with what we can demonstrate, the ‘‘can’’ in each pertaining to ability in principle,
regardless of practical limitations. The question which is the right system of tense logic is not one for the logician: the logician can indicate how this or that or the other system corresponds to
this or that or the other theory of the nature of time, but which is the right theory of the nature of time is a question for the physicist. Similarly, the question which is the right system of mood
logic would seem to be one not for the logician, but for the metaphysician. By contrast, the question which is the right system of validity or demonstrability logic cannot be passed off by logic to
some other discipline. The question which is the right validity logic has been answered at the sentential level, which is the only level that will be considered here: it is the system known as S5.
This result is essentially established already in Carnap (1946). The question which is the right demonstrability logic, again at the sentential level, goes back to the earliest days of modern modal
logic. For though the founder of the subject, C. I. Lewis, did not clearly distinguish among metaphysical, model-theoretic logical, and proof-theoretic logical modalities, still he did always write
of necessitation as implication, and did
often write of implication as deducibility, so that it is reasonable to conclude that by necessity he primarily meant tautology, by which in turn he primarily meant demonstrability. No one today,
however, takes seriously his suggestion that the right logic for this notion might be the feeble S1 or the bizarre S3. To the extent that there is any consensus or plurality view among logicians
today, I take the view to be that the right demonstrability logic is S4. (Even in ‘‘relevance’’ or ‘‘relevant’’ logic, where S4 cannot be literally accepted, since the classical sentential logic it
is based on is rejected, still it seems to be a consensus or plurality view that the right logic should be ‘‘S4-like.’’) The locus classicus for such a view is a paper from the proceedings of a
famous 1962 Helsinki conference on modal logic (Halldén 1963). While the argument for the soundness of S4 as a demonstrability logic given there seems as compelling as an ''informally rigorous''
argument can be, there is no real argument for completeness, which remains an open question. It therefore remains conceivable that the right logic is something stronger than S4: that it is something
intermediate between S4 and S5, such as S4.2 or S4.3; or that it is something stronger than S4 but incomparable with S5, such as the logic called Grz after Grzegorczyk (1967) and the logic that ought
to be called McK after McKinsey (1945). (In the literature it has heretofore been misleadingly called S4.1, though it is not intermediate between S4 and S5.) The issues are sufficiently illustrated
by the cases of the distinctive axioms of S4.2 and of McK, which are equivalent respectively to ¬(□¬□p ∧ □¬□¬p), the principle that ''nothing is both demonstrably not demonstrably true and demonstrably not demonstrably false,'' and to □¬□p ∨ □¬□¬p, the principle that ''everything is either demonstrably not demonstrably true or demonstrably not demonstrably false.'' Halldén rightly says of the latter – what he could also have said of the former – that it is not an intuitively plausible principle when the box □ is meant as demonstrability. But to say this is to do something less than to give an
‘‘informally rigorous’’ argument for the claim that either principle outright fails as a general law, let alone for the claim that any principle not a theorem of S4 does so. The question which is the
right provability logic has been answered, and though results are often stated for a single theory, classical first-order arithmetic, many hold for all true theories satisfying certain minimum
requirements of strength. Actually, one must distinguish the question of which logic gives all and only those principles about provability all whose instances are provable by the theory in question
from the question which gives all and only those principles that are valid (or demonstrable by us).
The answer to the former question is given by a system GL, and to the latter question by a system GLS. Both differ from the Lewis systems S4 and S5 by lacking the law □(□p → p). The failure of this law is, roughly speaking, the content of the Gödel Incompleteness Theorems. The standard reference is of course Boolos (1993). Below, in §2 I will recall the case for the soundness of S5 as a validity
logic and of S4 as a demonstrability logic. In §3 I will recall the Carnapian case for the completeness of S5. In §4 I will indicate the minimal requirements of strength that are assumed in provability logic, and that I will be assuming in demonstrability logic also, and attempt to clarify the relationship between the two logics. In §5 I will present a case against McK as a demonstrability logic; and it will generalize to a case against any system not contained in S5, such as Grz. Finally, in §6 I will present a case against S4.2; and this will of course also constitute a case against any stronger system, such as S4.3. But the case of weaker systems intermediate between S4 and S5 will be left open, and with it the general question.

2
A key consequence of the step of treating modality in the object language, treating □ as a one-place connective on a par with ¬, is that iterated modalities, modalities embedded inside modalities, as in □□p, are allowed. By contrast, when ''valid'' is expressed only by a word of the metalanguage, applicable only to formulas of the object language, there can be no question of iterations like ''it
is valid that it is not valid that . . .’’ All the modal systems most commonly considered in the literature agree with classical meta-logic, in the sense that where classical meta-logic has a law,
for example ‘‘a valid conclusion is a consequence of any premise,’’ these systems will have a corresponding law, in the example &p!(q )p). Agreeing as they all do with classical logic, these systems
agree with each other for formulas without iterated modalities. What distinguishes S5 is that it has laws that make every iterated formula more or less trivially equivalent to an uniterated formula.
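Those laws can be made explicit. In S5 every string of modal operators collapses to its innermost one — a standard set of reduction laws, stated here for reference rather than quoted from the text:

```latex
\[
\Box\Box p \leftrightarrow \Box p, \qquad
\Diamond\Diamond p \leftrightarrow \Diamond p, \qquad
\Diamond\Box p \leftrightarrow \Box p, \qquad
\Box\Diamond p \leftrightarrow \Diamond p,
\]
\[
\text{so that, for example, } \Box\Diamond\Box p \leftrightarrow \Box p .
\]
```

Applying these laws repeatedly, any formula with iterated modalities reduces to one in which no modality occurs inside another.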
The first step in establishing S5 as the right logic of validity is to establish the soundness of S5 as a validity logic: to establish that every theorem of S5 is correct as a general law about
validity, or what comes to the same thing, that every axiom of S5 is thus correct, and that every rule of S5 preserves such correctness. This is completely unproblematic for the non-modal axioms and
rules, which are simply those of the sentential component of classical, non-modal sentential logic. Moreover, though S5 is usually
formulated with a specifically modal rule allowing □A to be taken as a theorem whenever A is a theorem, this rule can be dispensed with in favor of adding □A as an axiom whenever A is an axiom of the usual formulation, which is to say, whenever A is either a classical, non-modal axiom or one of the following modal axioms:

(1) □p → p
(2) □(p → q) → (□p → □q)
(3) □p → □□p
(4) ¬□p → □¬□p
Again for the classical, non-modal axioms this is completely unproblematic, while in making a case for – which is to say, in demonstrating – any one of (1)–(4), one will at the same time be making a
case for its demonstrability, and a fortiori for its validity. Thus the problem of establishing the soundness of S5 for validity logic reduces to that of establishing the correctness of (1)–(4), and
similarly the problem of establishing the soundness of S4 for demonstrability logic reduces to that of establishing the correctness of (1)–(3). Indeed, for (1)–(3) correctness for the box as validity
and for the box as demonstrability can be established by more or less parallel arguments. Consider (1), for example. The arguments are simply the parallel ones that whatever is true by logical form
alone must be true, and that whatever can be recognized to be true by logical considerations alone must be true. But indeed, I need not enlarge on the case for (1)–(3), which is adequately made by
Halldén. It remains to consider the distinctive axiom (4), which of course is being proposed only as a general law of validity, not of demonstrability. Here the main point is just as follows. Consider any particular instance of (4):

(5) If it is not true by logical form alone that p, then it is true by logical form alone that it is not true by logical form alone that p.

Suppose that the antecedent of (5) is true, which is to say that the following is false:

(6) It is true by logical form alone that p.

Since (6) is false, there must be some statement ψ of the same logical form as p that is false. Now consider anything else of the same logical form as (6). It will look like the following, wherein q has the same logical form as p:

(7) It is true by logical form alone that q.

But then ψ also has the same logical form as q, and since ψ is false, (7) is false. In other words, anything of the same logical form as (6) is false, and hence the following is true:

(8) It is true by logical form alone that it is not true by logical form alone that p.

Thus the consequent of (5) is true, as required.

3
In the case of provability logic, as expounded in Boolos (1993), and of intuitionistic logic, as expounded in Burgess (1981a), once the candidate logic S has been identified, the argument that it is
the right one consists of three parts: soundness, formal completeness, and material completeness. That is, it is shown that every theorem of S is acceptable as a general law under the intended interpretation; a class Σ of mathematical models is identified and it is shown that (every theorem of S and) no non-theorem of S comes out true in all models of class Σ; and it is shown that any formula that comes out untrue in some model of class Σ is unacceptable as a general law under the intended interpretation. This last step, bridging the gap between ''semantics'' in the logicians' sense
and ‘‘semantics’’ in something more like the linguists’ sense, is due in the case of provability logic to R. M. Solovay, and in the case of intuitionistic logic to Georg Kreisel, who coined the
phrase ‘‘informal rigor’’ in this connection. In both cases, the last step is the most difficult. The situation is rather similar in the case of validity logic. Beginning with formal completeness for
the case of validity logic (soundness having already been discussed), the kind of models now standard in modal logic first became widely known through another talk at the 1962 Helsinki conference,
this one by Saul Kripke; another kind of model near to those standardly used became widely known through yet another talk at the same conference, this one by Jaakko Hintikka. A frame model M, as in
Kripke (1963), consists of two parts, a frame and a valuation. The frame consists of a set W of indices, a two-place relation R on it, and a designated member w0 of it. A valuation V is a
specification for each x in W and each sentence letter p, q, r, . . . of whether or not the sentence letter counts as true in that index. The notion of truth in an index is extended to compound
formulas by recursion:

¬A is true at x if and only if A is not true at x.
A ∧ B is true at x if and only if A is true at x and B is true at x.
□A is true at x if and only if A is true at y for every y in W such that Rxy.
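A minimal sketch of this truth recursion in code may help; the following illustration (the function name, formula encoding, and example frame are my own, not from the text) evaluates formulas built from ¬, ∧, and □ at an index of a frame model:

```python
# Evaluator for the frame-model truth recursion: a formula is a nested
# tuple ("var", letter), ("not", A), ("and", A, B), or ("box", A).

def true_at(formula, x, W, R, V):
    """Truth at index x in the model with index set W, relation R,
    and valuation V mapping each index to its set of true letters."""
    op = formula[0]
    if op == "var":
        return formula[1] in V[x]
    if op == "not":
        return not true_at(formula[1], x, W, R, V)
    if op == "and":
        return true_at(formula[1], x, W, R, V) and true_at(formula[2], x, W, R, V)
    if op == "box":
        # box A holds at x iff A holds at every y in W with R x y
        return all(true_at(formula[1], y, W, R, V) for y in W if (x, y) in R)
    raise ValueError(f"unknown operator {op!r}")

# Example: a two-index reflexive frame; p is true at index 0 but not at 1.
W = [0, 1]
R = {(0, 0), (0, 1), (1, 1)}
V = {0: {"p"}, 1: set()}
p = ("var", "p")
print(true_at(p, 0, W, R, V))           # True: p holds at index 0
print(true_at(("box", p), 0, W, R, V))  # False: p fails at the alternative 1
```

Note that □p fails at index 0 even though p holds there, since index 1 is an alternative to 0 at which p fails — exactly the role the relation R plays in the clause for □.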
It is permitted to have two indices x and y at which exactly the same set of formulas are true, and such duplication is often important. A formula counts as holding in M if it is true at w0, and as
being valid in a class K of frames if it holds in every model whose frame is in that class. The proposal in Hintikka (1963) is less purely ‘‘semantic’’ or modeltheoretic: it is still ‘‘syntactic’’ or
proof-theoretic in that, while it has a relation R, what this relation relates are not abstract indices, but sets of formulas, and so one does not have duplication. But as it happens, in connection
with the system S5, differences between the approaches of Kripke and Hintikka, such as permitting or forbidding duplication, are unimportant; and so for that matter is the main similarity between the
two approaches: the presence of a relation R. For while the theorems of S5 can be characterized as the formulas valid for the class of reflexive, transitive, and symmetric frames, this
characterization reduces, by a series of steps too familiar to bear repetition here, to a much simpler one. Consider, for any k, the formulas involving only the sentential variables or atomic
formulas p1, . . ., pk. Then for such formulas, a model may be taken to consist simply of a non-empty subset W of the set of rows of the truth table for p1, . . ., pk, with one such row w0 designated. The notion of truth at a row x in the truth table is defined for compound formulas by a recursion in which the first two clauses are exactly the same as above, while the third reads as follows:

□A is true at x if and only if A is true at y for every y in W.
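Since a model here is just a non-empty set of truth-table rows, validity over all such models can be checked by brute force. The sketch below (my own illustration, for a single letter p, with names of my choosing) confirms that instances of axioms (1), (3), and (4) hold in every such model, while, say, □p ∨ □¬p fails in one:

```python
from itertools import chain, combinations

ROWS = (True, False)  # a row of the truth table for p is just p's value

def true_at(f, x, W):
    """Truth at row x in a model W (a non-empty set of rows); the box
    clause quantifies over all of W, with no accessibility relation."""
    op = f[0]
    if op == "p":
        return x
    if op == "not":
        return not true_at(f[1], x, W)
    if op == "imp":
        return (not true_at(f[1], x, W)) or true_at(f[2], x, W)
    if op == "box":
        return all(true_at(f[1], y, W) for y in W)
    raise ValueError(op)

def valid(f):
    """True if f holds at every designated row of every non-empty W."""
    models = chain.from_iterable(combinations(ROWS, n) for n in (1, 2))
    return all(true_at(f, x, W) for W in models for x in W)

p = ("p",)
neg = lambda a: ("not", a)
imp = lambda a, b: ("imp", a, b)
box = lambda a: ("box", a)

print(valid(imp(box(p), p)))                      # axiom (1): True
print(valid(imp(box(p), box(box(p)))))            # axiom (3): True
print(valid(imp(neg(box(p)), box(neg(box(p))))))  # axiom (4): True
print(valid(imp(neg(box(p)), box(neg(p)))))       # box p or box not-p: False
```

The last formula fails in the two-row model: there ¬□p holds, but □¬p does not, since p is true at one of the rows.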
A formula counts as holding in such a model M if it is true at w0. The theorems of S5 may be characterized as the formulas that hold in all such models. Turning to material completeness for the case
of validity logic, it may be well to begin by considering an objection to the modeling just described that has been independently advanced by several writers. One of them put it as follows: What is
needed for logical necessity of a sentence p in a world w0 is more than its truth in each one of some arbitrarily selected set of alternatives to w0. What is needed is its truth in each logically
possible world. However, in Kripke semantics it is not required that all such worlds are among the alternatives to a given one.
It is then suggested that one should adopt not the standard model theory, or the simplification thereof described above, but rather a deviant model
theory, which after simplification amounts to just this, that the only model admitted is the one consisting of all rows of the truth table. There is a fallacy or confusion here. What is wanted is
that the technical notion of coming out true in all models should correspond to the intuitive notion of coming out true under all interpretations, or all substitutions of specific statements π1, . . . , πk for the variables p1, . . . , pk. Since, for instance, among all the many substitutions available there are ones in which the π1 substituted for p1 is the same as the π2 substituted for p2, so that it is impossible for p1 and p2 to have different truth values, there must correspondingly be among the models one available where the only rows of the truth table present are those for which the value given to p1 is the same as the value given to p2. The confusion in the objection becomes apparent when one notes that in the deviant model theory suggested, ¬□(p1 → p2) counts as valid, whereas of course ¬□(p1 → p1) does not, so that the standard rule of substitution fails. But the rule of substitution must hold so long as one adheres to the standard conception of the role of the variables p1, . . . , pk, according to which arbitrarily selected π1, . . . , πk may be substituted for them. Indeed, the deviant model theory corresponds to a deviant conception on which independent π1, . . . , πk must be substituted for distinct p1, . . . , pk. The confusion is worse confounded when it is suggested that the difference between the standard and deviant model theories somehow corresponds to a
difference between non-logical and logical notions of necessity. For what is at issue is, to repeat, differences in conceptions of the role of variables, not in conceptions of the nature of
necessity. Yet, confused as it is, the objection does serve to call attention to an important question. Each substitution of specific π1, . . . , πk for p1, . . . , pk determines a non-empty set of rows of the truth table, consisting of all and only those rows x such that it is not impossible by the logical forms of the πi alone for them to have the truth values x assigns to the corresponding pi. The question is, is it the case that for any arbitrarily selected non-empty subset W of the set of rows of the truth table, there are specific statements π1, . . . , πk that determine, in the manner just described, exactly that subset? In other words, if a formula is not a theorem of S5, and therefore fails in some standard model, is there some specific instance in which it fails? An
affirmative answer to this question is precisely what is needed to establish the material completeness of S5 as validity logic. It is a reasonable assumption, and one presumably made by the critics
alluded to, that there exist indefinitely many α, β, γ, . . . that are independent in the sense that any conjunction of some of them with the negations of the rest of them is possible, in the
relevant sense of possibility. For instance,
if α, β, γ, . . . are of simple subject–predicate form with distinct subjects and predicates in each, they will be thus independent. Given this assumption, an affirmative answer to the foregoing
question is forthcoming. As this result has in effect already been expounded several times in the literature, in Carnap (1946), Makinson (1966), and S. K. Thomason (1973), there should be no need for
me to do more than give an illustrative example here. Indeed, a simple one, involving just three sentence letters p, q, r, should suffice. Consider the set W containing just the three rows in which
two of p, q, r are true and the other false. Call the one where r alone is false x, the one where q alone is false y, and the one where p alone is false z. What is to be established is that given
independent α, β, . . . , there are truth-functional compounds π, ψ, χ thereof that might be substituted for p, q, r, for which the three rows indicated represent all and only the combinations of truth values that are not false by logical form alone. To find the required compounds, one first finds three auxiliary compounds τ, υ, φ that are pairwise exclusive and jointly exhaustive, meaning that the conjunction of any two must be false by logical form alone, while the disjunction of all three must be true by logical form alone. Setting τ = α and υ = ¬α ∧ β and φ = ¬α ∧ ¬β will do. One next lets the auxiliaries τ, υ, φ correspond to the rows x, y, z, and takes as the substitute for a given one of p, q, r the disjunction of the auxiliaries corresponding to the rows in which it is true. Thus the substitute π for p should be τ ∨ υ, that is, α ∨ (¬α ∧ β), which simplifies to α ∨ β. It can be worked out that the substitutes ψ and χ for q and r simplify to α ∨ ¬β and ¬α, respectively. And it can then be worked out that exactly two of the three, α ∨ β and α ∨ ¬β and ¬α, must be true, and that given the independence of α and β it may be any two of the three, as required. Before leaving the topic of validity logic it
may be mentioned that the fact that S5 is indeed the right logic can be confirmed in a different way. After soundness is established, in order to show that no stronger system than S5 is acceptable,
one would appeal to the result of Scroggs (1951), according to which the only extensions of S5 are finitely-many-valued logics. One would then argue that no finitely-many-valued logic can be correct
for semantic logical necessity (given the same reasonable assumption as above, that there are indefinitely many distinct independent statements).

4
A word must be said to clarify the relationship between demonstrability and provability logics, and to dispel a puzzle about that relationship.
The minimal assumptions of strength needed for provability logic are three. They may be formulated either as assumptions on the notion of proof for the theory, or as assumptions on the set of
theorems provable. On the formulation in terms of proofs, the first assumption would be that whether something is or is not a proof in the theory is decidable, which by Church’s Thesis implies that
the relation of proof-in-the-theory to theorem proved is recursive. The second assumption would be that the rules of classical first-order logic may be used in proofs in the theory. The third
assumption would be that certain basic, finite, combinatorial modes of reasoning – whose exact scope need not be gone into here, except to say that, since we want to get the Second Incompleteness
Theorem, the scope needs to be somewhat wider than it would need to be if we only wanted to get the First Incompleteness Theorem – may be used in proofs in the theory. On the formulation in terms of
theorems, it would first be assumed that the set of theorems provable is recursively enumerable. It would second be assumed that the set of theorems provable is closed under the rules of classical
first-order logic. And it would third be assumed that the set of theorems provable includes certain basic, finite, combinatorial results. Clearly, the list of assumptions on the theory stated earlier
yields the list of assumptions on the set of theorems just stated. And conversely, by Craig’s Lemma, any set of conclusions satisfying this latter list of assumptions coincides with the set of
conclusions provable in some theory satisfying the former list of assumptions. In demonstrability logic, at least as I will be considering it here, it is to be assumed that whether something
constitutes a demonstration of a given conclusion is decidable, that the rules of classical first-order logic may be used in demonstrations, and that certain basic, finite, combinatorial modes of
reasoning may be used in demonstrations. By what has already been said, it follows that the set of conclusions we can demonstrate coincides with the set of conclusions that can be proved in some
theory of the kind to which provability logic applies. And yet, provability logic and demonstrability logic are supposed to be different, in that by what has been said in earlier sections, (1) below
is false, while (2) below is true:

(1) It can be proved in such-and-such a theory T that if something can be proved in such-and-such a theory T, then its negation cannot also be.
(2) It can be demonstrated by us that if something can be demonstrated by us, then its negation cannot also be.
What may be puzzling is how it can be that (1) fails while (2) holds, given that, as already indicated, the above-stated assumptions commit one to the truth of something of the following form:

(3) What can be demonstrated by us coincides with what can be proved in such-and-such a theory T.
Indeed, a notorious objection to (3) above, associated with the names of J. R. Lucas and Roger Penrose, claims that it, together with the true (2) above, yields the false (1) above. The solution to
the puzzle is to point out that this objection commits the fallacy of assuming that co-extensive terms, such as ‘‘what we can demonstrate’’ and ‘‘what such-and-such a theory T can prove’’ can be
substituted without change of truth value everywhere, even in intensional contexts, such as ‘‘we can demonstrate that . . . ’’ or ‘‘such-and-such a theory T can prove that . . .’’ To get (1) above
from (2) above one would need something stronger than (3) above, namely the following: (4) We can demonstrate (3).
The solution of the puzzle is that (3) does not yield (4). (By analogy, those familiar with the work of Solomon Feferman and William Tait on the standpoints of the predicativists and the finitists,
thinkers who owing to their philosophical prejudices cannot demonstrate all that we can, will recall that what is demonstrable from the standpoint of one or the other of these ’isms can indeed be
exactly captured by a theory, though the ’ists themselves cannot recognize as much.) The fallacy should become obvious on comparing (1)–(3) above with the following:

(1′) The only man with such-and-such a number N of hairs on his head knows that the only man with such-and-such a number N of hairs on his head is gray-haired.
(2′) I know that I am gray-haired.
(3′) I am the only man with such-and-such a number N of hairs on his head.

Clearly (3′) above by itself does not, with (2′) above, yield (1′) above. Rather, one would need the following stronger assumption:

(4′) I know (3′).

But (3′) above does not yield (4′) above.
Mathematics, Models, and Modality

5
Beginning with formal completeness for the case of demonstrability logic, soundness having already been discussed, the theorems of S4 can be characterized as the formulas valid for the class of
reflexive and transitive frames; and equally, they can be characterized as the formulas valid for the class of finite reflexive and transitive frames, a deeper result implying the decidability of the
logic. A historical fact is worth mentioning, that the result just stated follows immediately from two results already in the literature two decades before the famous Helsinki conference. One of
these, from McKinsey (1941), characterizes the theorems of S4 in terms of a class of finite models of a different kind, based not on frame structures but on algebraic structures of a certain kind.
The other of these, from Birkhoff (1937), connects finite algebraic structures of the kind in question with finite reflexive and transitive frame structures. (The former paper makes no mention of
frames, and the latter no mention of modal logic.) This history is worth mentioning among other reasons because the older algebraic modeling involved, which is sometimes not taught to students of the
subject today, still has its uses even after the development of frame models, and I will be citing an instance later. Turning to material completeness, no decisive results have yet been obtained, and
my aim will only be to present, case by case, some partial results. To begin with, it is not only a reasonable assumption, as already said in an earlier section, that there exist indefinitely many α, β, γ, . . . that are independent, but also a reasonable assumption that there exist indefinitely many α, β, γ, . . . that are demonstrably independent. For instance, if α, β, γ, . . . are recognizably of simple subject–predicate form with distinct subjects and predicates in each, they will be thus demonstrably independent. It follows that if D is any compound formed by negation and conjunction or disjunction from p, q, r, . . . such that D is not a theorem of classical sentential logic, or in other words, such that D comes out false in some row of the pertinent truth table, then the result Δ of substituting α, β, γ, . . . for p, q, r, . . . in D will be demonstrably not demonstrable, or demonstrably indemonstrable. Similarly, if D is such that ¬D is not a theorem of classical sentential logic, or in other words, such that D comes out true in some row of the pertinent truth table, then Δ will be demonstrably not demonstrably false, or demonstrably irrefutable. Thus from any D such that neither it nor its negation is a theorem of classical sentential logic, we get a counterexample Δ to the McK axiom that nothing is both demonstrably indemonstrable and demonstrably irrefutable.
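The substitution argument just given rests on a purely mechanical fact: whether a sentential compound D comes out false in some row of its truth table and true in some other can be checked by brute force. A small sketch of such a check (the function names are mine, not the author's):

```python
from itertools import product

def eval_formula(formula, assignment):
    """Evaluate a sentential formula, written as a Python Boolean
    expression in variables like p, q, r, under a truth assignment."""
    return eval(formula, {}, assignment)

def contingent(formula, variables):
    """True iff the formula comes out false in some row of its truth
    table and true in some other row -- i.e., neither it nor its
    negation is a theorem of classical sentential logic."""
    rows = [dict(zip(variables, values))
            for values in product([False, True], repeat=len(variables))]
    truths = [eval_formula(formula, row) for row in rows]
    return any(truths) and not all(truths)

# 'p or q' is contingent, so substituting demonstrably independent
# sentences for p and q yields a counterexample to the McK axiom.
print(contingent("p or q", ["p", "q"]))        # True
print(contingent("p or not p", ["p"]))         # False: a tautology
```

Any formula the first call accepts is, on the assumptions of the text, both demonstrably indemonstrable and demonstrably irrefutable once independent sentences are substituted in.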
This argument can be generalized to apply to any axiom that is not a theorem of S5. Perhaps the easiest route to a generalization is to draw on the work of Slupecki and Bryll (1973). They pursue the old idea of the Polish school that a logical system should have, in addition to its axioms and rules of the ordinary kind, its axioms and rules of acceptance, indicating that certain formulas are acceptable as general laws, also axioms and rules of an opposite kind, axioms and rules of rejection, indicating that certain formulas are unacceptable as general laws. Just as a formula P is a theorem of the system, in symbols ⊢ P, if there is a sequence of steps, each an axiom of acceptance or following from earlier ones by a rule of acceptance, ending in P, so the goal would be to have for each formula Q that is not a theorem of the system, in symbols ⊣ Q, a sequence of steps involving axioms and rules of rejection ending in Q. For classical logic there would be the axiomatic rejection of the constant false, ⊣ ⊥, and rules of rejection that are the reverse of the usual classical rules of acceptance: if ⊣ P′ where P′ is a substitution instance of P, then ⊣ P; and if ⊣ Q and ⊢ P → Q, then ⊣ P. For any modal logic there would be also the rule of rejection that is the reverse of the usual modal rule of acceptance: if ⊣ □P then ⊣ P. For each particular modal system additional rules of rejection would be needed. For S5 Slupecki and Bryll show that just one additional rule suffices: if ⊣ P → Q1 and . . . and ⊣ P → QN, then ⊣ □P → (□Q1 ∨ . . . ∨ □QN),
where P and the Qi involve no modalities. In order to show that any non-theorem of S5 should be rejected as a general law of demonstrability, it will suffice therefore to argue that the above rule of rejection is acceptable for demonstrability. And indeed, if the P → Qi are unacceptable as general laws, there must for each be a row xi of the truth table for the variables p, q, . . . involved on which P comes out true and Qi comes out false. But then by Carnap’s result there are specific φ, ψ, . . . that could be substituted for the variables p, q, . . . in P and the Qi to give sentences B and Ci, such that the xi represent all and only the possible combinations of truth values for the φ, ψ, . . . It follows that B is an instance of a theorem of classical sentential logic, hence demonstrable, while each Ci is demonstrably indemonstrable by the considerations of the preceding paragraph. Thus the following:

□B → (□C1 ∨ . . . ∨ □CN)

fails, and the formula of which it is an instance, namely the following:

□P → (□Q1 ∨ . . . ∨ □QN)

is not acceptable as a general law, as required.

6
AGAINST S4.2
The S4.2 principle says that everything is either demonstrably indemonstrable, or demonstrably irrefutable. An argument against the acceptability of this principle as a general law can be given. In
addition to assumptions about demonstrability listed in earlier sections, including the assumption that there are indefinitely many instances recognizable as being of simple subject–predicate form, I
need the reasonable assumption that there is an instance recognizable as being of simple subject–verb–object form, with no further pertinent logical structure. That is, I assume there is a two-place
predicate that is recognizably a two-place predicate with no further pertinent logical structure, so that it is recognizable that all its pertinent logical structure is represented when it is
represented by a simple two-place predicate variable F. Given such an example, for any compound σ formed from it using negation, conjunction or disjunction, and universal or existential quantification, the logical form of σ will recognizably be represented by a formula P of classical first-order logic formed from F using ¬, ∧ or ∨, and ∀ or ∃. Let Σ be the set of all such compounds σ, and L the set of the corresponding formulas P. Now suppose a formula P in L fails in no model of the kind used in classical first-order logic. Then P is a theorem of classical first-order logic by the Gödel Completeness Theorem; and by the assumption that the rules of classical first-order logic may be used in demonstrations, the corresponding σ in Σ will be demonstrable, and demonstrably so. Then by general laws represented by theorems of S4, ‘‘it is demonstrable that σ’’ will be demonstrably irrefutable and not demonstrably indemonstrable. Now suppose the formula P of L fails in some finite model of the kind used in classical first-order logic. Then basic, finite, combinatorial reasoning shows that it does so, and hence that it does not represent a correct general law; and by the assumption that basic, finite, combinatorial reasoning may be used in demonstrations, the corresponding σ in Σ will be indemonstrable, and demonstrably so. Then by general laws represented by theorems of S4, ‘‘it is demonstrable that σ’’ will be demonstrably indemonstrable and not demonstrably irrefutable.
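The role played here by basic, finite, combinatorial reasoning is simply that failure in a finite model can be verified by exhaustive search over the domain. A minimal sketch, with a hypothetical three-element model for the two-place predicate F of my own devising:

```python
# A finite model: a domain and an extension for the two-place predicate F.
domain = [0, 1, 2]
F = {(0, 1), (1, 2), (2, 0)}          # a directed 3-cycle

def forall(pred):
    return all(pred(x) for x in domain)

def exists(pred):
    return any(pred(x) for x in domain)

# "Everything bears F to something": true in this model.
serial = forall(lambda x: exists(lambda y: (x, y) in F))

# "Something bears F to itself": false in this model, so this formula
# has a finite counter-model and fails as a general law.
reflexive_point = exists(lambda x: (x, x) in F)

print(serial, reflexive_point)        # True False
```

Since the domain is finite, each check terminates, which is exactly why the indemonstrability of the corresponding compound is itself demonstrable.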
For any formula P of L, let C(P) be ‘‘it is demonstrable that σ,’’ where σ in Σ is the result of substituting the predicate in question for F in P. Let X be the set of P such that C(P) is demonstrably irrefutable, and let Y be the set of P such that C(P) is demonstrably indemonstrable. What has been established so far is that if P has no counter-model, then P belongs to the difference set X − Y; while if P has a finite counter-model, then P belongs to the difference set Y − X. What the S4.2 principle yields is that the union set X ∪ Y is all of L. To complete the case against the S4.2 principle, I must invoke the assumption that the set of demonstrable conclusions is recursively enumerable, from which it follows that the sets X and Y are also recursively enumerable. Then by the Reduction Theorem for recursively enumerable sets it follows that there are recursively enumerable sets X* and Y* satisfying the conditions that X* ⊆ X and Y* ⊆ Y and X* ∩ Y* = ∅ and X* ∪ Y* = X ∪ Y. These conditions imply that X − Y ⊆ X* and Y − X ⊆ Y*. What the S4.2 principle yields is that X* and Y* are complements of each other in the recursive set L, which since both are recursively enumerable yields that both are recursive. What was established earlier yields that if P has no countermodel, then P belongs to X*, while if P has a finite counter-model, then P belongs to Y* and so does not belong to X*. And now we have a contradiction, since by an elaborated version of Church’s Theorem, there is no recursive set Z separating the formulas with no counter-models from those with finite counter-models. The foregoing
argument applies to just one of the infinitely many formulas that are theorems of S5 but not of S4. Can all such formulas be rejected? Clearly, if they can, some general argument, not a case-by-case
examination of examples, will be needed to establish that fact. How might such an argument proceed? Well, rejection principles for S4 have been formulated by Valentin Goranko (1994), and simpler ones
have been found by Tomasz Skura (1995), who works with finite algebraic models of the kind alluded to earlier. Skura requires two principles, the first being a slight variant of the rejection
principle for S5 considered earlier. Unfortunately, Skura’s second principle, though simpler than Goranko’s principles, is complex enough that it is not very perspicuous, and it is not easy to argue
why it should be acceptable for syntactic logical necessity. (Goranko and Skura do not themselves consider such questions.) Fortunately, Skura does not claim his rules are the simplest feasible, but
on the contrary he explicitly poses it as an open question whether there are any simpler ones. It may be that this open question will have to be settled before one can settle the
status of the conjecture that S4 provides the answer to the question of which is the right demonstrability logic. At present this question, which as I have said goes back to the founders of modern
modal logic, remains after most of a century still without a definitive answer.
Can truth out? 1
It is rather discouraging that forty years have passed since Frederic Fitch first propounded his paradox of knowability without philosophers having achieved agreement on a solution.2 As a general
rule, when modal phenomena prove puzzling, it is a good idea to look at the corresponding temporal phenomena, and accordingly I propose to examine here not the knowability principle that whatever is
true can be known, but rather the discovery principle that whatever is true will be known. As Fitch’s modal paradox attacks the knowability principle, so an analogous temporal paradox threatens the
discovery principle. The formulation of the paradox is as follows. Start with the minimal tense logic with G and H for ‘‘it is always going to be . . .’’ and ‘‘it always has been . . .’’ as primitive, and F and P for ‘‘it sometime will be . . .’’ and ‘‘it once was . . .’’ defined as ¬G¬ and ¬H¬.3 Add a one-place epistemic operator K for ‘‘it is known that,’’ and add as axioms minimal assumptions for this new operator, expressing that anything known is true, and that if a conjunction is known, so are both conjuncts:

(1) Kp → p
(2) K(p & q) → Kp & Kq.

In an attempt to formalize the discovery principle, add one further axiom:

(3) p → FKp.
1 First published in Salerno (2007).
2 (Fitch 1963). For a summary of recent debates, see B. Brogaard and J. Salerno, ‘‘Fitch’s Paradox of Knowability,’’ in Stanford Encyclopedia of Philosophy: http://plato.stanford.edu/entries/fitchparadox/ (available only on-line).
3 See (Burgess 1984a). The various theorems of tense logic cited below can all be found in this source.
The paradox is that one can then derive the following:

(4) p → Kp.

The derivation of (4) using (3) is, apart from replacing □ and ◇ by G and F, the same as Fitch’s derivation, which is too well known to bear repeating here. The operator K is intended to indicate
human knowledge, not divine omniscience. The grounds for belief in the discovery principle have indeed traditionally involved a belief in divine omniscience, but it is not this belief alone that
supports the principle, but rather this belief plus a further belief that on some future day God will bring it about that whatever is hidden is made manifest (quidquid latet apparebit). Obviously
that day has not yet come, and the conclusion (4), that everything true is already humanly known, is an absurdity, and so we have a reductio of the principle (3). The ‘‘dialethists’’ and other
proponents of radical revisions of classical logic can be counted on to tout their proposed revisions as solutions to this paradox, as they have touted them as the solutions to so many others. But a
priori it is overwhelmingly more likely that the problem lies not in the underlying classical logic, but in the least familiar element, the axiom (3), the only axiom in which temporal and epistemic
operators interact. And indeed, that is where the problem lies. One has to be careful in going back and forth between symbolism and English prose, and Fitch, or rather his hypothetical temporal
analogue, was not careful enough. In tense logic p, q, r, . . . are supposed to stand for tensed sentences, whose truth value may change with time (or if one wants to speak of ‘‘propositions,’’ then
they must be propositions in a traditional rather than a contemporary sense, propositions that are themselves tensed, and whose truth value may change with time). FA is supposed to be true at a given
time if A is true at some later time. What (3) actually expresses thus amounts to this: (5) If p is true now, then at some later time it will be known that p is true then.
The proposed formalization as (3) has in effect turned the principle that any truth will become known into the principle that any sentence that expresses a truth will come to be known to express a
truth. But this last formulation invites the immediate objection that the sentence in question may cease to express a truth before the knowledge of the truth it once expressed is acquired. And so (5)
surely does not express what Shakespeare meant in saying ‘‘Truth will out.’’ He meant to imply that if Smith murders Jones secretly,
so that no one knows, then it will become known that Smith murdered Jones secretly, so that no one knew. He did not mean to imply that if what the form of words ‘‘Unknown to all, Smith has murdered
Jones’’ now expresses is true, then there will come a time when what that same form of words then expresses will be known to be true. Thus the temporal analogue of Fitch’s argument does not discredit
the discovery principle, because the target of that argument is not a correct expression of that principle.

2
That one particular objection to a principle fails is no proof of the principle itself, and indeed no proof that it may not be open to simple, straightforward objection along other lines. And in fact
the discovery principle is open to two kinds of objection, each of which requires us either to impose a restriction on the principle, or to assume charitably that a restriction on the principle was
already intended by its advocates. As background to a first objection consider the timing of the collision of two ordinary extended material objects. The boundaries of such objects generally are
sufficiently ill-defined on a scale of nanometers as to make dating their collision on a scale finer than nanoseconds meaningless. If murders, say, are all the events we want to talk about, we do not
need to conceive of ‘‘times’’ as durationless ‘‘instants,’’ but may conceive of them as very brief ‘‘moments,’’ of no more than, say, a nanosecond’s duration. In this case, chronometry – by which I
here mean no more than our usual ways of dating events by year, month, day, hour, minute, second, and on to milli- or micro- or nanosecond and beyond if one wishes, all tacitly understood relative to
some fixed time zone – supplies a term for every time. But it may be otherwise if we wish to speak of point-particles and their collisions. The worry is that there will be truths that can never be
known because they can never be stated. Suppose, for instance, that α = 0.182564793 . . . is an irrational number, and that exactly α seconds before 12:00 p.m., particle i collided with particle j. Can it ever become known that particles i and j collided at exactly α seconds before 12:00 p.m. on June 1, 2003? According to the discovery principle, all the following will become known:

(1) Particles i and j collided at ≈ 182 milliseconds before 12:00 p.m. on June 1, 2003.
(2) Particles i and j collided at ≈ 182564 microseconds before 12:00 p.m. on June 1, 2003.
(3) Particles i and j collided at ≈ 182564793 nanoseconds before 12:00 p.m. on June 1, 2003.

Here ‘‘≈’’ abbreviates ‘‘to the nearest unit,’’ to the nearest milli- or micro- or nanosecond, as the case may be. (For the sake of argument, set aside any quantum-mechanical doubts about whether the series (1)–(3) really could be continued indefinitely.) But for it to be knowable that i and j collided at exactly α seconds before 12:00 p.m., would it not have to be sayable that i and j collided at exactly α seconds before 12:00 p.m.? And for this to be sayable, there would have to be some means in language or thought of referring to the irrational number α – I mean, of course, some means other than referring to it as the number of seconds before 12:00 p.m. when i and j collided. Mathematics supplies such means for few irrational numbers, such as √2, π, e, and so forth. Coincidence may
supply a few others: the time when i and j collided may be describable also as the time when k and l collide, if the two collisions happen to be simultaneous. But by cardinality considerations we
inevitably lack means of reference to most irrational numbers. The discovery principle must be understood to exclude ineffable truths. It must be understood as restricted to truths expressible in our
language. Such a restriction will be built into any tense-logical formalization of the principle, if the letters p, q, r, . . . are understood as standing for sentences of our language. Such a
restriction seems in one sense not too serious, because the principle still tells us that for any question we have the language to ask, the true answer will become known.

3
A second objection to the discovery principle is more subtle. Suppose that as I write it is 12:00 p.m., June 1, 2003. Then the following is true: (1) Now, this moment, it is 12:00 p.m., June 1, 2003.
Obviously (1) itself will never be true in the future. And it seems that no sentence of our language will ever express in the future exactly what (1) expresses now. Thus the truth that (1) now
expresses seems to be one that will be unknowable in the future because it is unsayable in the future. Moreover, the demonstrative ‘‘this moment’’ and the indexical ‘‘now’’ are both pleonastic, what
they indicate being already sufficiently indicated
by the fact that the verb ‘‘is’’ is in the present tense. Thus what has just been said about (1) is equally true of the following: (2) It is 12:00 p.m., June 1, 2003.
And indeed, if now, this minute, Smith is murdering Jones, then the following is another example subject to the same difficulty as (1) and (2). (3) Smith is murdering Jones.
The truth that Smith is (now, this moment) murdering Jones seems one that will be unsayable and therefore unknowable in the future, even though it is sayable now and even knowable now. The discovery
principle must be understood to exclude not only ineffable truths, which are never expressible in our language, but also ephemeral truths, which are expressible for a moment, and then never again.
Such a restriction seems in one sense not too serious, because it does not leave us with any question that can always be asked and never be answered. The ephemeral will be equally inexpressible
interrogatively as assertorically. Such a restriction seems not too serious for another reason, because the truths it excludes from human knowledge in the future are excluded even from divine
knowledge in eternity, if one follows those theologians who make the latter knowledge timeless. For (1)–(3) are no more true in a timeless eternity than they will be true in the seconds and minutes
and hours and days and months and years to come. The old riddle that suggests an exception to the principle that God can see anything I can see is a joke.4 But the counterexamples (1)–(3) to the
principle that God knows anything I can know are not. This point seemed worth digressing to mention, if only because a desire to have a formal apparatus in which such issues could be discussed was an
important part of the motivation of the creation of tense logic by Arthur Prior.

4
We have seen that (1.3) – displayed item (3) of §1 – is not the right formalization of the discovery principle. What is? It cannot be claimed
4 I mean the riddle: Q. What is it that God never sees, that the king seldom sees, but that you and I see every day? A. An equal. This seems less a problem for theologians than for partisans of ‘‘substitutional quantification.’’
that a complete solution to the paradox has been obtained until this question is answered. One answer suggests itself at once. Now that we have restricted the principle to truths that will remain expressible in our language in the future, it is tempting to formulate the principle as the principle that any sentence that will continue to express a truth in the future will come to be known to express a truth. This goes over into symbols as follows:

(1) Gp → FKp.

And (1) is, unlike (1.3), immune to Fitch-style paradox, even if one considerably strengthens the background tense logic. For definiteness, let us consider the tense logic, call it Llinear, that is appropriate for linearly ordered time without a last time. Then the immunity of (1) from Fitch-style paradox is the content of the following proposition.

Proposition. Let T be Llinear plus (1.1), (1.2), and (1). Then (1.4) is not a theorem of T.

Proof. Consider an auxiliary theory T*, obtained from Llinear by adding a sentential constant π and the following axiom:

(2) Fπ.

Then π is not a theorem of T*. For if we take any model of Llinear, and let π be true at and only at the times later than the present, then (2) will be true at all times, but π will not be, being false at all past times and at the present time. Next assign each formula A of the language of T a translation A* into the language of T*, by taking Kp to abbreviate p & π. Thus (1.1), (1.2), (1), and (1.4), respectively, are translated as follows:

(3) p & π → p
(4) (p & q) & π → (p & π) & (q & π)
(5) Gp → F(p & π)
(6) p → p & π.

Note that the translation (6) of (1.4) is not a theorem of T*. For if it were, substituting Fπ for p and applying truth-functional logic together with axiom (2), π would be a theorem, as we have seen it is not. To show that (1.4) is not a theorem of T, it will suffice to show that the translation of any theorem of T is a theorem of T*. And to show this, it will suffice to show that the translations (3)–(5) of the three axioms of T are theorems of T*. For the first two axioms this is trivial, since (3) and (4) are truth-functional tautologies. For the third axiom, the following is a theorem of Llinear:

(7) Gp & Fq → F(p & q).

And (5) follows by truth-functional logic from (2) and (7), to complete the proof.
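The truth-functional part of this proof is mechanical: with Kp read as p & π, the translations of (1.1) and (1.2) should come out tautologies, while the translation of (1.4) should not. The sketch below (helper names are mine, not the author's) checks each claim row by row:

```python
from itertools import product

def tautology(f, n):
    """Check a truth-functional formula, given as a function of n
    Boolean arguments, against every row of its truth table."""
    return all(f(*row) for row in product([False, True], repeat=n))

imp = lambda a, b: (not a) or b

# Translation of (1.1) Kp -> p, i.e. (p & pi) -> p: a tautology.
t1 = tautology(lambda p, pi: imp(p and pi, p), 2)

# Translation of (1.2) K(p & q) -> Kp & Kq: also a tautology.
t2 = tautology(lambda p, q, pi: imp((p and q) and pi,
                                    (p and pi) and (q and pi)), 3)

# Translation of (1.4) p -> Kp, i.e. p -> (p & pi): NOT a tautology;
# it fails on the row where p is true and pi is false.
t3 = tautology(lambda p, pi: imp(p, p and pi), 2)

print(t1, t2, t3)                     # True True False
```

The failing row for the translation of (1.4) is exactly what blocks the Fitch-style derivation for the repaired principle.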
The formalization (4.1) has several corollaries worth noting.

Proposition. Let T be as in §4. Then the following are theorems of T:

(1) Pp → FKPp
(2) p → FKPp
(3) Fp → FKPp
(4) Gp → FKGp.

Proof. First note that each of the following is either an axiom or a theorem of Llinear:

(5) Pp → GPp
(6) p → GPp
(7) Fp → FGPp
(8) FFp → Fp
(9) Gp → GGp.

Also, the following is a derived rule of Llinear:

(10) If A → B is a theorem, then FA → FB is a theorem.

(For the cognoscenti, the assumption here is that the rule of temporal generalization, on which (10) depends, continues to apply after the formal language has been enriched by the addition of the epistemic operator K.) (1), (2), and (4) are immediate from (5), (6), and (9), respectively. As for (3), it can be derived as follows:

(11) FPp → FFKPp from (1) by (10)
(12) FPp → FKPp from (11) and (8)

and (3) then follows from (12), since Fp → FPp is a theorem of Llinear (there being no last time).
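The purely tense-logical facts (5)–(9) can at least be spot-checked by brute force: over any finite linear timeline they hold at every time under every valuation. This is only a sanity check on finite orders, not a proof about Llinear; the encoding below is my own:

```python
from itertools import product

T = range(6)                                    # a small linear timeline

def P(v, t): return any(v[u] for u in T if u < t)   # it once was
def F(v, t): return any(v[u] for u in T if u > t)   # it sometime will be
def G(v, t): return all(v[u] for u in T if u > t)   # always going to be

def lift(op, v):
    """The valuation of op applied to a formula with valuation v."""
    return {t: op(v, t) for t in T}

def valid(schema):
    """Check a schema at every time under every valuation of p."""
    return all(schema(dict(zip(T, row)), t)
               for row in product([False, True], repeat=len(T))
               for t in T)

imp = lambda a, b: (not a) or b

checks = {
    "(5) Pp -> GPp":  lambda v, t: imp(P(v, t), G(lift(P, v), t)),
    "(6) p -> GPp":   lambda v, t: imp(v[t], G(lift(P, v), t)),
    "(7) Fp -> FGPp": lambda v, t: imp(F(v, t),
                                       F(lift(G, lift(P, v)), t)),
    "(8) FFp -> Fp":  lambda v, t: imp(F(lift(F, v), t), F(v, t)),
    "(9) Gp -> GGp":  lambda v, t: imp(G(v, t), G(lift(G, v), t)),
}
for name, schema in checks.items():
    print(name, valid(schema))        # every check reports True
```

By contrast, a non-theorem such as p → Gp fails on the same timelines, which is what makes the check non-trivial.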
To illustrate these corollaries just derived, if Smith has murdered Jones, or is murdering Jones, or will murder Jones, then according to whichever of
(5.1)–(5.3) is applicable, it will become known that Smith has murdered Jones. Let us write brackets around present-tense verbs to indicate omnitemporality, so that, for instance,

(1) Smith [murders] Jones

is to be understood as meaning

(2) Smith has murdered, is murdering, or will murder Jones.
Then we may say that if Smith [murders] Jones, then it will become known that Smith murdered Jones. And similarly in any other case. Murder cannot be hid – though (4.1) does not go so far as to join
the Bard in claiming (unfortunately, erroneously) that murder cannot be hid long. And if the memory of Smith’s victim will never cease to be honored, then according to (5.4) this fact will become
known – though there is (again, unfortunately) no guarantee it will become known soon enough to comfort the victim’s grieving friends and relations. And if the universe will be forever expanding,
according to (5.4) this fact, too, will eventually become known – though there is (yet again, unfortunately) no guarantee it will become known soon enough to satisfy the curiosity of present-day
cosmologists. Still, despite its corollaries, (4.1) may look unsatisfactory for the following reason. Consider what the corollary (5.2) tells us about a present truth: (3) If p is true now, then at
some later time it will be known that p was true once.
The ‘‘once’’ here invites the question, ‘‘When?’’ And (5.2) provides no answer. Or so it may seem. But in a sense (5.2), taken together with chronometry, does provide an answer. If (3.2) and (3.3)
are true, then the following conjunction is true: (4) It is 12:00 p.m., June 1, 2003, and Smith is murdering Jones.
Applying (5.2) not to (3.3) alone, but to this conjunction, we obtain

It will become known that it was once 12:00 p.m., June 1, 2003, and Smith was murdering Jones.

or more idiomatically

(5) It will become known that Smith murdered Jones at 12:00 p.m., June 1, 2003.
What more could one want by way of answer to a when-question? Quite generally, if an event occurs at a given time, one can conjoin to a sentence p
asserting the event’s occurrence a sentence q giving the standard chronometric specification of the time, and then apply (5.2), not to p alone, but to the conjunction.

7
‘‘NOW’’

Nonetheless, it may seem that the most obvious correction of (1.5) would be the following:

(1) If p is true now, then at some later time it will be known that p was true now.
And (1) seems to tell us more than (4.1) (by way of (6.3)) tells us. It is known that (1) cannot be expressed using just the temporal operators G and H and F and P. But tense logicians have
considered other operators. Most to the point in the present context, they have considered a ‘‘now’’ operator J, so interpreted that even within the scope of a past or future operator Jp still
expresses the present, not the past or future, truth of p. And with this operator (1) can be symbolized, as follows:

(2) p → FKJp.
One may be tempted to think that (2) would do better as a formalization of the discovery principle than does (4.1). But this is a misleading way of putting the issue. For if the operator J is
admitted, subject to its usual laws, then (4.1) implies (2). For one of the usual laws is precisely

(3) p → GJp
and (2) is immediate from (3) and (4.1). So the temptation here is simply the temptation to add J to the language.5 I think the temptation should be resisted for a double reason. My first reason is
that introducing the J-operator is unnecessary in order to answer a when-question. For I have just finished arguing that (4.1) does, after all, provide answers to such questions. Against this it may
be said that (2) appears to have the advantage of doing so without depending on chronometry. But my second reason for avoiding the J-operator is that this apparent advantage comes at the cost of
involving us with the problematic notion of a de re attitude towards a time. This truth is perhaps most easily brought to light by switching temporarily from regimentations using tense operators to
regimentations using
5 I owe this observation to Williamson.
explicit quantification over times. So let t, u, v, . . . range over times. And let t < u mean that time t is earlier than time u, or equivalently, time u is later than time t. Let each tensed p be replaced by a one-place p*(t) for ‘‘p [is] the case at time t.’’ Every formula A built up from the letters p, q, r, . . . will similarly be replaced by an open formula A*(t). PA and FA, respectively, will be replaced by

(4a) ∃u(u < t & A*(u))
(4b) ∃u(t < u & A*(u)).
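The replacement scheme (4a)/(4b) is mechanical, and can be carried out by a short recursive translator. The encoding of formulas as nested tuples and the E/A notation for the existential and universal quantifiers below are my own conventions, not the author's:

```python
def star(formula, t, counter=None):
    """Translate a tensed formula, given as a nested tuple, into
    first-order talk of times; t stands for the time that is now
    present, and each tense operator consumes one fresh variable."""
    if counter is None:
        counter = [0]
    tag = formula[0]
    if tag == 'atom':
        return f"{formula[1]}*({t})"
    if tag == 'not':
        return f"~{star(formula[1], t, counter)}"
    if tag == 'and':
        return (f"({star(formula[1], t, counter)} & "
                f"{star(formula[2], t, counter)})")
    u = "uvwxyz"[counter[0]]          # fresh time variable
    counter[0] += 1
    body = star(formula[1], u, counter)
    if tag == 'P':                    # (4a): it once was
        return f"E{u}({u} < {t} & {body})"
    if tag == 'F':                    # (4b): it sometime will be
        return f"E{u}({t} < {u} & {body})"
    if tag == 'G':                    # it is always going to be
        return f"A{u}({t} < {u} -> {body})"
    if tag == 'H':                    # it always has been
        return f"A{u}({u} < {t} -> {body})"

# FPp: "it will sometime be that it once was that p"
print(star(('F', ('P', ('atom', 'p'))), 't'))
# -> Eu(t < u & Ev(v < u & p*(v)))
```

The parameter t in the output plays exactly the role described next: it stands for that time which is now present.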
In a formula A*(t) the parameter t may be thought of as standing for that time which is now present. Leaving open how to symbolize the epistemic operator, (5.2) and (2) above go halfway into symbols as follows:

(5) p*(t) → ∃u(t < u & it is known at time u that ∃v(v < u & p*(v)))
(6) p*(t) → ∃u(t < u & it is known at time u that p*(t)).

There is this difference between the two semi-formalizations, that what occurs towards the end of (5) can be understood in a de dicto way, thus:

(7) At time u, ‘‘p was true once’’ [is] known to be true.

By contrast, what occurs towards the end of (6) must be understood in a de re way, thus:

(8) At time u, ‘‘p was true then’’ [is] known to be true of time t.
The symbol-complex KJp in (2) above may be pronounced ‘‘it is known that p was true now,’’ but what it really amounts to is more like this: (9) It is known of t that p was true then, where t is that
time which is now present.
There are (at least) three major difficulties in making sense of the notion of a de re knowledge about an object a. Or to put the matter another way, there is only one obvious strategy for making sense of the notion of a de re attitude, namely, reduction to a de dicto attitude, and there are (at least) three major obstacles to this strategy. The strategy is to understand a subject as knowing of an object a that F(x) holds of it if and only if the subject knows that F(α) where α is a term denoting a. The three obstacles or problems relate to the choice of term α. A first general problem with de re knowledge is that of anonymity. There may simply be no term α denoting a. This problem has been encountered in the case of times in §2, and given the restriction on the discovery principle imposed there, it may be set aside here. A second general problem with de re knowledge, and one relevant to the
question whether J should be admitted, is the problem of aliases. The problem is that there may be two terms α and β denoting an object a, and it may be that the subject knows that F(α) but does not know that F(β), or the reverse. The star whose common name is ‘‘Aldebaran’’ has also the official name ‘‘Alpha Tauri.’’ It seems that a subject may have been told by different authoritative sources, and hence may know that
(1) Aldebaran is orangish.
(2) Alpha Tauri is the thirteenth brightest star.
and hence may know that (1) Aldebaran is orangish. (2) Alpha Tauri is the thirteenth brightest star.
and yet, being in ignorance that the two names are names for one and the same heavenly body, may not know that
(3) Alpha Tauri is orangish.
(4) Aldebaran is the thirteenth brightest star.
And this makes it hard to answer the question whether the subject knows of the star itself, independently of how it is named, that it is orangish, or the thirteenth brightest. The existence of
aliases is a problem insofar as privileging one of them over the other seems arbitrary. The same problem can arise for times. Robinson may know that one rainy day Smith committed murder, and may know
that Jones was murdered, and not know that the murder Smith committed was that of Jones. In this case Robinson will know that (5) At the time when Smith committed murder, it was rainy.
but not that (6) At the time when Jones was murdered, it was rainy.
And this makes it hard to answer the question whether Robinson knows of the time itself, independently of how it is described, that it was rainy then. Where there exists some standard term for each
object of a given kind, one can always stipulate that a subject is to be credited with de re knowledge about the object a that F(x) holds of it, if and only if the subject has de dicto knowledge that F(α) where α is the standard term for a. Admittedly, such a stipulation may be more a matter of giving a sense to a kind of locution (ascriptions of de re knowledge) that previously had none, than of
finding out what sense this kind of locution had all along. Pretty clearly it would be a case of giving rather than finding if
one took as canonical terms for heavenly bodies the official names adopted by international scientific bodies, preferring ‘‘Alpha Tauri’’ over ‘‘Aldebaran.’’ For times, the obvious candidates for
standard terms are those provided by chronometry. If one is content with (4.1), there is no need to enter into the problem of de re knowledge about times at all, and so no need to fix on any standard
terms for times. If one adopts (7.2), reliance on chronometry is the only obvious way to impose a solution on the problem of aliases. But in that case the one advantage (7.2) appeared to have over
(4.1), that of not depending on chronometry, must be recognized to have been illusory. This consideration argues, I claim, in favor of the J-free formalization (4.1) and against the J-laden
formalization (7.2). A third general problem with de re belief is the problem of demonstratives (and with them indexicals). When the star Alpha Tauri, alias Aldebaran, is visible in the night sky,
one can point to it and say ‘‘that star,’’ and so achieve reference to it. Now it seems someone looking at the star may well know (5) That star is orangish.
and yet not knowing the name of the star may well not know either (1) or (2). This is, so far, just a special case of the problem of aliases. But demonstratives are especially troublesome because, on
the one hand, when available, they seem to provide so direct a way of referring that it is hard to insist that nonetheless it is some other way of referring that provides the canonical terms for
reduction of de re to de dicto; but on the other hand, demonstratives themselves are not viable candidates for canonical terms, simply because they are usually not available: if we took
demonstratives as canonical terms, most objects would suffer from anonymity most of the time. Demonstratives act, so to speak, as spoilers, making any other candidates for the office of canonical
term look unworthy, while themselves not being eligible for that office. But this problem has been encountered in the case of times in §3, and given the restriction on the discovery principle imposed
there, it may be set aside here, as the problem of anonymity was set aside. The problem of aliases, I claim, is enough to make the admission of J undesirable.

9

I have done with the topic of the discovery principle. But what of the knowability principle, and the original, modal version of Fitch’s paradox?
Can truth out?
Table 10.1

       Discovery Principle                Knowability Principle
       present vs future tense           indicative vs subjunctive mood
       G, F                              □, ◇
       Llinear                           S5
(1.3)  p → FKp                           p → ◇Kp                            (1)
(4.1)  Gp → FKp                          □p → ◇Kp                           (2)
(5.2)  p → FKPp                          p → ◇K◇p                           (3)
       now, J                            actually, @
(7.2)  p → FKJp                          p → ◇K@p                           (4)
       times, instants, moments          possibilities, worlds, situations
       chronometry                       ???
I began this essay by recalling that there is a close parallel between temporal and modal. I should now note that while there are many analogies, in connection with Fitch’s paradox there is also one
glaring disanalogy, that makes the original, modal problem more refractory than its temporal analogue. Perhaps the best way of proceeding would be to begin by simply listing pairs of analogous
notions in parallel columns as in Table 10.1. But returning to what is formally representable, I have recalled in the left margin in the table the numbers of temporal formulas we have met earlier,
and assigned in the right margin numbers to the analogous modal formulas. Fitch’s (1) is as quickly dismissible as its analogue (1.3).6 The difficulty comes when one seeks a replacement. The absence
of any obvious analogue for possibilities of standard chronometric specifications for times makes (2) and (3) much less satisfactory than (4.1) and (5.2) – and (4) correspondingly much more tempting
than (7.2). But the same absence makes the problem of de re knowledge of possible situations connected with (4) at once more critical and more difficult to solve or evade than was the problem of de
re knowledge of temporal moments connected with (7.2). I will not enlarge further here, partly because it would be a good exercise for readers to work out the analogy for themselves, but mainly because I would be largely repeating points that have been made by Dorothy Edgington in her proposed solution to the paradox, and by Timothy Williamson in his criticisms thereof.7
6 For a full exposition of the essentially grammatical fallacy in the paradox, see Rückert (2004). Rückert draws on Wehmeier (forthcoming).
A further disanalogy
emerges in discussion of Edgington and Williamson that is not formally representable, and is therefore not indicated in Table 10.1. It is just this, that generally speaking the fact that something is
only actually true and not necessarily true tends to matter less to us than the fact that something is only at the present moment true and not permanently true. Or to put the matter another way, what
will be true when the world is older matters more to us than what could have been true if the world had been otherwise, since we hope to live on into ‘‘future worlds’’ but do not expect to
transmigrate into ‘‘possible worlds.’’ So far as the present investigation is concerned, it seems that the analogy between mood and tense takes us only so far, and in the end provides us not with a
solution, but only with a better understanding of just what makes the problem difficult.

10

Before giving up, however, perhaps we should try the combination of temporal and modal. That is to say, perhaps instead of considering the knowability principle as the principle that anything true
could have been known, we should consider it as the principle that anything true could become known. The natural setting for such a principle would be a system like Prior’s logic of ‘‘historical
necessity.’’8 In the most elaborate version, which he calls ‘‘Ockhamist,’’ Prior uses both tense operators G, H, F, P, subject to the axioms for Llinear, and modal operators □, ◇, subject to the
axioms of S5. But the modal operators are themselves understood in a tensed way, as meaning necessity and possibility given the course of history up to the present. Prior uses special letters a, b,
c, . . . for sentences with the special property that their truth is independent of the future course of history in addition to the usual letters p, q, r, . . . for arbitrary sentences. Not all
formulas, but only certain special ones, with the same special property as the special letters, may be substituted for those special letters. These include the special letters themselves, any formula beginning with a modal operator □ or ◇, and any formula obtainable from formulas of these two kinds using the truth functions and past-tense operators H and P. A single axiom links the temporal and modal operators:
(1) a → □a.
7 See Edgington (1985) and Williamson (1987). For further relevant publications see Brogaard and Salerno (note 2).
8 See Prior (1967b, chapter VII).
One can obtain by substitution
(2) Pa → □Pa.
One cannot derive
(3) Fa → □Fa.
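The contrast between (2) and (3) can be made concrete in a miniature branching-time model. The sketch below is my own illustration, not Prior's formalism: moments form a tree given by two maximal histories, truth is relative to a (moment, history) pair, and historical necessity quantifies over the histories passing through the current moment.

```python
# Two histories sharing the past 0 -> 1, then branching: a sea fight occurs
# at moment '2a' on one branch only.
histories = [(0, 1, "2a", "3a"), (0, 1, "2b", "3b")]

def F(phi):    # 'phi will be the case' somewhere later on the current history
    return lambda m, h: any(phi(n, h) for n in h[h.index(m) + 1:])

def Past(phi):  # 'phi was the case' somewhere earlier on the current history
    return lambda m, h: any(phi(n, h) for n in h[:h.index(m)])

def box(phi):   # historically necessary: true on every history through m
    return lambda m, h: all(phi(m, g) for g in histories if m in g)

implies = lambda x, y: (not x) or y
sea_fight = lambda m, h: m == "2a"
h_a, h_b = histories

# (2) Pa -> box Pa holds: at moment '3a' the past sea fight is settled,
# because every history through '3a' shares that past.
print(implies(Past(sea_fight)("3a", h_a), box(Past(sea_fight))("3a", h_a)))  # True
# (3) Fa -> box Fa fails at moment 1 on history h_a:
print(F(sea_fight)(1, h_a))       # True: a sea fight lies ahead on this branch
print(box(F(sea_fight))(1, h_a))  # False: not on the branch through '2b'
```

The past-tense instance comes out settled because histories through a moment agree on everything earlier, while the future-tense instance fails precisely at the branch point.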
Taking for a in (1)–(3) ‘‘A sea fight is occurring,’’ in Prior’s system one can conclude that if a sea fight is occurring or has occurred, then the occurrence of a sea fight is (historically)
necessary; but even if a sea fight is only going to occur, then its occurrence is (historically) contingent, though once it does occur, it will become (historically) necessary. A version of the
knowability principle can be expressed in this context by the formula
(4) □Ga → ◇FKa.
And from (4) one can derive, using various tense-logical and modal theorems, the following rough analogues of the corollaries in the proposition of §3:
(5) (Pa ∨ a ∨ Fa) → ◇FKPa
(6) Ga → ◇FK◇Ga.
The details will not be given here, because the system is ultimately unsatisfactory. Let me explain how. From (4), by way of its corollaries, one can conclude the following, wherein I contract
‘‘possibly will’’ to ‘‘may’’: (7) If Smith is murdering Jones, then it may become known that Smith has murdered Jones. (8) If the memory of Smith’s victim will always be honored, then it may become
known that the memory of Smith’s victim may always be honored. (9) If the universe is always going to be expanding, then it may become known that the universe may be always going to be expanding.
What one cannot conclude is: (10) If the memory of Smith’s victim will always be honored, then it may become known that the memory of Smith’s victim will always be honored. (11) If the universe is
always going to be expanding, then it may become known that the universe is always going to be expanding.
So (4) seems too weak. The strengthening of (4) to
(12) Ga → ◇FKGa
would provide assurance of (10) and (11), but unfortunately (12) is too strong. For it would also provide assurance of the absurd (13) If Smith murdered Jones but will forever escape detection, then
it may become known that Smith murdered Jones but will forever escape detection.
This is Fitch’s paradox, adapted to the present context.
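For reference, the underlying modal argument has a standard form in the literature, and the step from (12) to (13) follows the same pattern. The reconstruction below is mine, not quoted from the text:

```latex
% The classic Fitch derivation, from the knowability principle
% p -> \Diamond K p, instantiated with the conjunction q \wedge \neg Kq.
\begin{align*}
&(q \wedge \neg Kq) \to \Diamond K(q \wedge \neg Kq)
  && \text{knowability, instantiated}\\
&K(q \wedge \neg Kq) \to Kq \wedge K\neg Kq
  && \text{$K$ distributes over $\wedge$}\\
&K\neg Kq \to \neg Kq
  && \text{knowledge is factive}\\
&\neg K(q \wedge \neg Kq)
  && \text{the two lines above would yield $Kq \wedge \neg Kq$}\\
&\neg \Diamond K(q \wedge \neg Kq)
  && \text{necessitation on the line above}\\
&q \to Kq
  && \text{contraposing the first line}
\end{align*}
```

In the present context the role of q ∧ ¬Kq is played by a truth of the form ‘‘Smith murdered Jones but will forever escape detection,’’ which is why (12) collapses into the absurd (13).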
In sum, Prior’s Ockhamist framework fails to provide a formula that is not too weak and not too strong, but just right. A glance back at the examples above will help us localize the difficulty: it is
with truths about the actual future (and more particularly about what will always hold throughout that actual future). There are philosophers, however, who question the very meaningfulness of
assertions about the actual future.9 And Prior has developed a logic he calls ‘‘Peircean’’ for them. In this logic one has only the special letters a, b, c, . . ., and only the formulas built up from
them using truth functions, past-tense operators, and four operators amounting to the combinations □G, ◇G, □F, ◇F. Substitution for the letters a, b, c, . . . is allowed for all formulas so built up. The operators □, ◇, G, F do not appear separately, apart from the four combinations just mentioned. The pertinent feature of this logic in the present context is that it bans as meaningless the
examples that caused trouble in the preceding section, and (10.4) seems adequate as an expression of the knowability principle for all such sentences as are still accepted as meaningful.
For a recent expression of this view see Belnap and Green (1994).
Banning statements about the actual future is a radical step. Presumably the friends and relations of Jones know that his memory possibly will always be honored, and possibly will not always be honored. They know ◇Ga and ¬□Ga, where a is
(1) The memory of Smith’s victim is honored.
The Peircean, however, rejects as meaningless (2) The memory of Smith’s victim will always be honored.
unless ‘‘will’’ is either strengthened to ‘‘necessarily will’’ or weakened to ‘‘possibly will.’’ The Peircean, it seems, cannot allow the friends and relations to hope that (2) is true, or to fear
that it is not.10 Likewise, cosmologists presumably already know that it is possible the universe will expand forever, and possible that it will not. The Peircean cannot allow them to wonder if it in
actual fact will. So Peirceanism is a radical doctrine. But then, so is the knowability principle. The question is, do the two forms of radicalism cohere? If an adherent of the knowability principle
were to embrace Peirceanism, would the resulting position have any coherent motivation? Or would embracing Peirceanism be mere ad hoc epicycling, avoiding counterexamples by declaring them
meaningless? This is too large, and too non-logical, an issue to go into here, but at least a word may be said about the historical sources of epistemological views like the discovery and knowability
principles on the one hand, and of Peirceanism on the other. Belief in the discovery principle, I said at the outset, has traditionally rested on theological grounds. Belief in the knowability
principle has, by contrast, been mainly an expression of a commitment to a certain philosophical theory of meaning, verificationism. The radical epistemological view that there are no unknowable
truths has usually been a consequence of the even more radical semantical view that understanding a sentence consists in grasping under what conditions it would be known to be true. Belief in
Peirceanism has had several sources. Prior cites late-medieval logicians who have held a similar view on theological grounds, but the more recent proponents of the view seem to base their adherence
on grounds that ultimately are verificationist. Thus combining the
This observation is repeated from Burgess (1979d).
knowability principle with Peirceanism could be viewed as combining two manifestations of an underlying verificationism. Of course, there are many varieties of verificationism, and it remains to be
seen whether a single variety can cogently motivate both these manifestations simultaneously. A key issue will be the verificationist’s attitude towards the reality of the past.
Quinus ab omni naevo vindicatus
1.1 Quine and his critique
Today there appears to be a widespread impression that W. V. Quine’s notorious critique of modal logic, based on certain ideas about reference, has been successfully
answered. As one writer put it some years ago: ‘‘His objections have been dead for a while, even though they have not yet been completely buried.’’1 What is supposed to have killed off the critique?
Some would cite the development of a new ‘‘possible-worlds’’ model theory for modal logics in the 1960s; others, the development of new ‘‘direct’’ theories of reference for names in the 1970s. These
developments do suggest that Quine’s unfriendliness towards any formal logics but the classical, and indifference towards theories of reference for any singular terms but variables, were unfortunate.
But in this study I will argue, first, that Quine’s more specific criticisms of modal logic have not been refuted by either of the developments cited, and further, that there was much that those who
did not share Quine’s unfortunate attitudes might have learned about modality and about reference by attention to that critique when it first appeared, so that it was a misfortune for philosophical
logic and philosophy of language that early reactions to it were as defensive and uncomprehending as they generally were. Finally, I will suggest that while the lessons of Quine’s critique have by
now in one way or another come to be absorbed by many specialists, they have by no means been fully absorbed by everyone, and in this sense there is still something to be learned from Quine’s
critique today.
1 Hintikka (1982, opening paragraph). In context it is clear this is only a description, and not necessarily an endorsement, of a widespread impression.
§3 below will list some lessons from Quine’s critique, after §2 has examined the early responses to it. Since I will be arguing that most of these simply missed the point, I should say at the outset
that this is easier to see by hindsight than it was from expositions of the critique available at the time, and that the early responses were useful insofar as they provoked new expositions. That
there are flaws in Quine’s own presentations is conceded even by such sympathetic commentators as Dagfinn Føllesdal and Leonard Linsky, and at least as regards his earliest presentations by Quine
himself.2 To remove flaws is the aim of the present §1, and the aim suggested by my title, which readers familiar with the history of mathematics will recognize as echoing Saccheri’s Euclides ab omni
naevo vindicatus or Euclid Freed from Every Blemish. Such readers will also recall that though Saccheri’s aim was to defend Euclid, ironically his work is today remembered as a contribution to
non-Euclidean geometry. While I hope to avoid a similar irony, I do not hesitate to depart from Quine on occasion, and begin with two limitations that I think need more explicit emphasis than they
get from Quine.
1.2 Non-trivial de re modality
A first restriction is that Quine’s critique is limited to predicate as opposed to sentential modal logic, his complaint being that modal predicate
logic resulted from mechanically combining the apparatus of classical predicate and modal sentential logic, without thinking through philosophical issues of interpretation.3 Quine does sometimes
suggest that engaging in modal logic would be pointless unless one were eventually going to go beyond the sentential to the predicate level, so that though his critique deals explicitly only with
predicate modal logic, it is tantamount to a critique of all modal logic; but the suggestion is not strenuously argued.4 The restriction to predicate logic has two aspects. First, the critique is
limited to de re as opposed to de dicto modality, to modalities within the scope of quantifiers as opposed to quantifiers within the scope of modalities, to modalities applying to open formulas as in ∃x□Fx, rather than modalities applying to closed formulas as in □∃xFx.
2 The most important of Quine’s presentations is ‘‘Reference and modality,’’ in From a Logical Point of View (Quine 1953/1961/1980). Citations of this twice-revised work here will be by internal section and paragraph divisions, the same from edition to edition. This work supersedes the earlier Quine (1947a). For commentary see the editor’s introduction to Linsky (1971a) and Linsky (1971b). See also Føllesdal (1969) and Føllesdal (1986).
3 A theme in his reviews (Quine 1946, 1947b).
4 See the last paragraph of the third section of ‘‘Reference and modality,’’ ending: ‘‘for if we do not propose to quantify across the necessity operator, the use of that operator ceases to have any clear advantage over merely quoting a sentence and saying that it is analytic.’’
Second, the critique is limited to non-trivial de re modality. The
first point has been generally understood. Not so the second, which calls for some explanation. I begin with an analogy. One can contrive systems of sentential modal logic that admit modalities
notationally, but that make every modal formula more or less trivially equivalent to a non-modal formula. It suffices to add as a further axiom the following, whose converse is already a theorem in
the common systems:
(1) P → □P.
This corresponds to a definition according to which P holds necessarily just in case P holds – a definition that could silence any critic who claimed the notion of necessity to be unclear, but would
do so only at the cost of making the introduction of the modal notation pointless. Analogously, one can contrive systems of predicate modal logic that admit de re modalities notationally, but that
make every de re formula more or less trivially equivalent to a de dicto formula. The precise form a trivialization axiom would take depends on whether one is considering monadic or polyadic
predicate logic, and on whether one is admitting or excluding an existence predicate or an identity predicate or both. In the simplest case it suffices to add as a further axiom the following, whose
converse is already a theorem in the common systems:
(2) ∀x(□Fx → □∀yFy).
This corresponds to the trivializing definition according to which F holds necessarily of a thing just in case it is necessary that F holds of everything – a definition that could silence any critic
who claimed the notion of de re modality to be more obscure than that of de dicto modality, but would do so only at the cost of making the introduction of de re notation pointless. When Quine
complains of the difficulty in defining de re modality, he is tacitly assuming the trivializing definition above has been rejected; so his critique is tacitly limited to systems that, like all the
common ones, do not have the trivialization axiom as a theorem. To accept such a system as the correct system, the one whose theorems give all and only the general laws necessarily holding in all
instances, is to reject the trivialization axiom as not being such a general law, and hence is to reject the trivializing definition, which would make it one. Note that Quine’s objection is thus to
the unprovability of something, namely trivialization, not the provability of anything.
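The claim that the common systems leave trivialization unprovable can be seen semantically. The sketch below is my own illustration, not anything in Quine: a two-world Kripke model (an assumed, minimal example) in which the sentential trivialization axiom (1), P → □P, fails, so necessity does not collapse without the added axiom.

```python
# A two-world Kripke model: P holds at w but not at v, and v is accessible
# from w, so box P fails at w even though P holds there.
R = {("w", "w"), ("w", "v"), ("v", "v")}   # accessibility relation
val = {"w": True, "v": False}              # valuation of the letter P

def box(phi):
    """Necessity: phi holds at every world accessible from here."""
    return lambda w: all(phi(v) for (u, v) in R if u == w)

P = lambda w: val[w]

print(P("w"))        # True: P holds at w
print(box(P)("w"))   # False: the accessible world v falsifies P
```

Since P → □P fails at w, it is not valid on this model, and hence not a theorem of any modal system sound for Kripke semantics over such models; the trivializing definition is a genuine addition, not a consequence.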
1.3 Strict necessity
A second restriction is that Quine’s critique is limited to what he calls ‘‘strict’’ necessity, identified with analyticity, as opposed to what may be called ‘‘subjunctive’’ necessity, involved in
counterfactuals. For Quine the former belongs to the same circle of ideas as synonymy and definition, and the latter to the same circle as similarity and disposition. Quine sometimes explicitly
states this limitation; but he also often suggests that his argument generalizes to all intensional operators, or at least that there is an obstacle to making sense of quantification into intensional
contexts in general (which obstacle is insurmountable in the case of quantification into contexts of strict modality in particular).5 Insofar as I wish to defend it, I take Quine’s critique to be
limited to strict modality, and his suggestion about generalization to be an attachment to it, not a component of it. In connection with different senses of necessity there is a feature of the
terminology current in the 1940s through 1960s that needs to be explicitly emphasized, lest one fall into anachronistic misreadings: the tendency to use interchangeably with each other, as adjectives
modifying the noun ‘‘truth,’’ all the expressions in the left-hand column below (and similarly for the right-hand column).6 Each row merits separate comment.

Necessary      Contingent
Linguistic     Empirical
A priori       A posteriori
Analytic       Synthetic
Logical        Non-logical
Logical truth and analytic truth. Quine distinguished a narrower notion of ‘‘logical’’ truth, roughly truth by virtue of syntactic form alone, from a broader notion of ‘‘analytic’’ truth, roughly
truth by virtue of this plus semantic factors such as definition and synonymy. He notoriously thought the latter, broader notion unclear, and so had a double objection to the first of the following
(3) It is analytically true that all bachelors are unmarried.
(4) It is logically true that all unmarried men are unmarried.
(3′) ‘‘All bachelors are unmarried’’ is analytically true.
(4′) ‘‘All unmarried men are unmarried’’ is logically true.
5 Contrast the opening section of ‘‘Reference and modality,’’ on knowledge and belief contexts, with the antepenultimate paragraph of the paper, beginning: ‘‘What has been said in these pages relates only to strict modality . . .’’
6 For a contemporary account deploring such tendencies, see Kneale and Kneale (1962, pp. 628ff). Such tendencies are exemplified by the usage of all the participants in the exchange discussed in §2 below.
One objection was to the common feature of (3) and (3′), involvement with broadly analytic rather than narrowly logical truth; another, to the common feature of (3) and (4), treatment of modality as
a connective in the object language applying to sentences, rather than a predicate in the metalanguage applying to quotations. What is important to understand is that in his critique of modal logic
Quine presses only his objection to the second feature – a feature presupposed by quantified modal logic, since quantification into quotation contexts is obvious nonsense – waiving his objection to
the first for the sake of argument. Others of the period shared neither Quine’s worries about the broad, semantic notion, nor his concern to distinguish it from the narrow, syntactic notion, and
often wrote ‘‘logical’’ when they meant ‘‘analytic.’’ Analytic truth and a priori truth. Quine’s first and foremost target, Rudolf Carnap, and others of the period, took the distinction between
analytic and synthetic to be central to epistemology because they took it to coincide with the distinction between a priori and a posteriori. They recognized not a trichotomy of ‘‘analytic’’ and
‘‘synthetic a priori’’ and ‘‘a posteriori,’’ but a dichotomy of ‘‘analytic’’ and ‘‘a posteriori.’’ A priori truth and linguistic truth. Quine often complained that others were sloppy about
distinguishing use and mention. If one is sloppy, quibbles and confusions can result if, as was commonly done, one uses ‘‘linguistic’’ interchangeably with ‘‘analytic’’ or ‘‘a priori’’ and
‘‘empirical’’ interchangeably with ‘‘synthetic’’ or ‘‘a posteriori’’ respectively. For consider:
(5) Planetoids are asteroids.
(6) Ceres is the largest asteroid.
(5′) In modern English, ‘‘planetoids’’ and ‘‘asteroids’’ refer to the same things.
(6′) In modern English, ‘‘Ceres’’ and ‘‘the largest asteroid’’ refer to the same thing.
As to (5), discovery that planetoids are asteroids requires (for a fully competent speaker of modern English) mere reflection, not scientific investigation. As to (6), discovery that Ceres is the
largest asteroid requires natural-scientific investigation of the kind engaged in by astronomers. Discovery that (5′) is the case (understood as about the common language, not just one’s personal idiolect) requires social-scientific investigation of the kind engaged in by linguists. Discovery that (6′) is the case requires
both kinds of scientific investigation. Since linguistics is an empirical science, using ‘‘linguistic’’ and ‘‘empirical’’ for ‘‘analytic’’ and ‘‘a posteriori’’ can be confusing when dealing with meta-level formulations like (5′) and (6′) rather than object-level formulations like (5) and (6); but such usage was common. Linguistic truth and necessary truth. Quine distinguished strict and
subjunctive modality, but whereas the default assumption today might be that someone who writes ‘‘necessary’’ sans phrase intends subjunctive necessity, this was not so for Quine, let alone modal
logicians of the period. Originally the primitive notion of modal logic was ‘‘implication’’ P ⥽ Q, with ‘‘necessity’’ defined as ¬P ⥽ P; later necessity □P was taken as primitive, with implication defined as □¬(P ∧ ¬Q). But even then, the notion of implication of primary interest was strict, so that the notion of necessity of primary interest also had to be, and was often enough explicitly stated to be, strict. It was commonly assumed, if not that all necessity is linguistic or semantic or verbal necessity, then at least that the primary notion of necessity was that of verbal necessity. In reading the older literature, the default assumption must be that strict necessity is intended when one finds sans phrase the word ‘‘necessary.’’
1.4 ‘‘Aristotelian essentialism’’
Preliminary restrictions having been enumerated, the critique proper begins by indicating what would have to be done to make sense of such notation as ∃x□Fx. Given that ∃ is to be read in what has always been the standard way, as an existential quantifier, and that □ is to be read in what was at the time the prevailing way, as a strict modality, the following are equivalent:
(7a) ∃x□Fx holds
(7b) there is some thing such that □Fx holds of it
(7c) there is some thing such that Fx holds necessarily of it
(7d) there is some thing such that Fx holds analytically of it.
The commitment then is to making sense (in a non-trivial way) of the notion of an open formula or open sentence Fx holding analytically of a thing. Now traditional accounts of analytic truth in
philosophy texts provide only an explanation of what it is for a closed sentence to be analytically true, and do not even purport to provide any explanation of a notion of an open sentence being
analytically true of a thing. (And rigorous analyses of logical
truth in logic texts again supply only a definition of what it is for a closed formula to be logically true, and do not even purport to supply any definition of a notion of an open formula being
logically true of a thing.) The notion of analyticity as it stands simply does not apply literally to an open sentence or formula relative to a thing, and the most one can hope to do is to extend the
traditional notion from de dicto to de re – or to put the matter the other way round, reduce the notion for de re modality to the traditional one for de dicto – while remaining faithful to the spirit
of strict modality. This presumably means remaining attached to a conception of necessity as purely verbal necessity, and confined within the circle of ideas containing definition and synonymy and
the like, not bringing in physical notions of disposition or similarity, let alone Peripatetic or Scholastic metaphysical notions of matter and form or potency and act or essence and accident. Quine
expresses pessimism about the prospects for defining de re modality subject to this restriction by suggesting that quantified modal logic is committed to ‘‘Aristotelian essentialism.’’ While Quine’s
own approach is resolutely informal, there is a technical result of Terence Parsons that is illuminating here, even though Parsons’ usage of ‘‘commitment to essentialism’’ differs in a potentially
confusing way from Quine’s. Roughly speaking, Parsons shows that though the common systems are in the sense indicated earlier committed to the failure of trivialization as a general law, yet no
specific instance of such failure is provable in the common systems even with the addition of any desired consistent set of de dicto assumptions.7 (On Parsons’ usage the result is somewhat
confusingly stated as saying that though the common systems are ‘‘committed to essentialism’’ in one, weaker sense, essentially Quine’s, they are not ‘‘committed to essentialism’’ in another,
stronger sense, Parsons’ own.) This being so, any attempt to make sense of de re strict modality by reducing it to de dicto faces a dilemma. On the one hand, if one adopts some general law permitting
passage from de dicto to de re, one will in effect be adding a new general passage law as an axiom to the common systems. But with any such addition of a new formal axiom one is already rejecting the
common systems as incomplete if not as incorrect. Worse, there is a threat that the new axiom will yield trivialization; or worse still, will yield a contradiction. On the other hand, if one allows
passage from de dicto to de re only selectively, one will in effect be
For a less rough formulation, see T. Parsons (1969).
Mathematics, Models, and Modality
adding a new selection principle as an ingredient to the concept of modality. But with any such addition of a new intuitive ingredient there is a danger that one will be making one’s conception no
longer one of merely verbal necessity; or worse, that one will be making it arbitrary and incoherent. This abstract dilemma is concretely illustrated by Quine’s mathematical cyclist example, an
elaboration of an old example of Mill’s, and his morning star example, an adaptation of an old example of Frege’s. The only obvious approach to reducing the application of modal notions to a thing to
an application of modal notions to words, would be to represent or replace a thing by a word or verbal expression appropriately related to it. In fact, there are two strategies here, the most obvious
one being to take the expression to be a term referring to the thing, and an only slightly less obvious one being to take the expression to be a predicate satisfied by the thing. Hence the need for
two examples.

1.5 The mathematical cyclist

One strategy would be to count Fx as holding necessarily of a thing just in case F is necessarily implied by some predicate(s) P satisfied by the thing. On the one hand, if we are non-selective about the predicates, this leads to contradiction with known or plausible non-modal or de re premises, such as the following:

(8a) It is necessarily the case that all mathematicians are rational.
(8b) It is at best contingently the case that all mathematicians are bipeds.
(8c) It is necessarily the case that all cyclists are bipeds.
(8d) It is at best contingently the case that all cyclists are rational.

(These are plausible at least if we take rationality to mean no more than capability for verbal thought, and bipedality to mean no more than having at least two legs, and count mathematicians who have lost limbs as nonbipeds, and count bicycle-riding circus animals as cyclists.) Non-selective application of the strategy to (8a–d) yields:

(9a) Any mathematician is necessarily rational.
(9b) Any mathematician is at best contingently a biped.
(9c) Any cyclist is necessarily a biped.
(9d) Any cyclist is at best contingently rational.
Together (9a–d) contradict the known actual existence of persons who are at once mathematicians and cyclists.
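The clash can be set out as a short derivation (a sketch only, writing R for the predicate ‘‘is rational’’ and □ for necessity, as in the text):

```latex
\begin{align*}
&\text{Let } c \text{ be any person who is both a mathematician and a cyclist. Then:}\\
&\quad \Box Rc && \text{by (9a), since } c \text{ is a mathematician;}\\
&\quad \neg\Box Rc && \text{by (9d), since } c \text{ is a cyclist, hence at best contingently rational;}\\
&\text{and } \Box Rc \wedge \neg\Box Rc \text{ is a contradiction.}
\end{align*}
```

Since such persons actually exist, (9a–d) cannot all be true together.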
More formally, allowing non-selective application of the strategy amounts to adopting the following as an axiom, which can be seen to collapse modal distinctions all by itself:

(10) ∀x(Px → (□Fx ↔ □∀y(Py → Fy))).
This is the first horn of the dilemma. On the other hand, the obvious fall-back would be to allow (10) to apply only selectively, only to certain selected ‘‘canonical’’ predicates. In order for (10), restricted to canonical predicates, to give an adequate definition of de re modality, it would suffice for two things to hold. It would suffice to have first that for each thing there is (or can be introduced) some canonical predicate it satisfies; and second that for any two canonical predicates A, B we have:

(11) ∃x(Ax ∧ Bx) → □∀y(Ay ↔ By).
This condition would preclude taking both ‘‘x is a mathematician’’ and ‘‘x is a cyclist,’’ or both Plato’s ‘‘x is a featherless biped’’ and Aristotle’s ‘‘x is a rational animal,’’ as canonical. But
how is one to select what predicates are admitted as canonical? It seems that making a selection, choosing for instance between Plato and Aristotle, would require reviving something like the ancient
and medieval notion of ‘‘real definitions’’ as opposed to ‘‘nominal definitions’’; and this is something it seems impossible to square with regarding the necessity with which we are concerned as
simply verbal necessity.

1.6 The morning star

The second strategy would be to count Fx as holding necessarily of a thing just in case Ft holds necessarily for some term(s) t referring to the thing. On the one hand, if we are non-selective about the terms, applying the strategy to all terms equally, then whenever two terms s and t refer to the same thing, Fx holding necessarily of that thing will be equivalent to Fs holding necessarily and equally to Ft holding necessarily, so that Fs holding necessarily and Ft holding necessarily will have to be equivalent to each other. But this result leads to inferences from known or arguably true premises to known or arguably false conclusions, even in the very simple case where Fx is of the form x = t, since t = t will in all cases be necessarily true though s = t may in some cases be only contingently true.
For instance, the following are true:

(12) The evening star is the morning star.
(13) Necessarily, the morning star is the morning star.

And the following false:

(14) Necessarily, the evening star is the morning star.
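How the non-selective strategy generates the false (14) from the true (12) and (13) can be set out schematically (a sketch, abbreviating ‘‘the morning star’’ as m and ‘‘the evening star’’ as e, and taking Fx to be x = m):

```latex
\begin{align*}
1.\ & e = m && \text{premise (12)}\\
2.\ & \Box(m = m) && \text{premise (13), i.e. } \Box Fm\\
3.\ & \Box Fm \leftrightarrow \Box Fe && \text{non-selective strategy, since } e \text{ and } m \text{ refer to the same thing, by 1}\\
4.\ & \Box(e = m) && \text{from 2 and 3, i.e. the false (14)}
\end{align*}
```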
More formally, allowing non-selective application of the strategy amounts to adopting the following as an axiom:

(15) ∀x(x = t → (□Fx ↔ □Ft)).
And this can be seen to collapse modal distinctions (at least if enough apparatus for converting predicates to terms is available). This is the first horn of the dilemma. On the other hand, the
obvious fall-back would be to allow (15) to apply only selectively, to certain selected ‘‘canonical’’ terms. In order for (15), restricted to canonical terms, to give an adequate definition of de re
modality, two things would be required to hold. It would suffice to have first that for each thing there is (or can be introduced) some canonical term referring to it; and second that for any two
canonical terms a, b we have:

(16) (a = b) → □(a = b).
Now the following is a theorem of the common systems:8

(17) (x = y) → □(x = y).
But (17) involves only variables x, y, . . . , corresponding to pronouns like ‘‘he’’ or ‘‘she’’ in natural language, not constants a, b, . . . or function terms fc, gc, . . . , corresponding to names
like ‘‘Adam’’ and ‘‘Eve’’ or descriptions like ‘‘the father of Cain’’ and ‘‘the mother of Cain.’’
It may be worth digressing to mention that Quine’s one and only contribution to the formal side of modal logic occurred in connection with this law, though the history does not always emerge clearly
from textbook presentations. The earliest derivations of the law took an old-fashioned approach on which identity is a defined second-order notion, and on such an approach the derivation was anything
but straightforward. Quine was one of the first to note that on a modern approach with identity a primitive first-order notion, the derivation becomes trivial, and goes through for all systems at
least as strong as the minimal normal system K. This is alluded to in passing in the penultimate paragraph of the third section of ‘‘Reference and modality.’’ For the original presentation see
(Barcan 1947). For a modern textbook presentation see Hughes and Cresswell (1968, p. 190).
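The triviality Quine noted can be displayed in a four-line sketch, using only reflexivity of identity, the substitutivity axiom schema, and the rule of necessitation available in any normal system at least as strong as K:

```latex
\begin{align*}
1.\ & x = x && \text{reflexivity of identity}\\
2.\ & \Box(x = x) && \text{necessitation, applied to the theorem 1}\\
3.\ & x = y \to (\Box(x = x) \to \Box(x = y)) && \text{substitutivity, with the context } \Box(x = \,\cdot\,)\\
4.\ & x = y \to \Box(x = y) && \text{from 2 and 3}
\end{align*}
```

This is the derivation of (17) that becomes available once identity is taken as a primitive first-order notion.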
So (17) leaves open what terms should be allowed to be substituted for variables.9 What (16) says is that for the fall-back strategy being contemplated to work, we must be able to go beyond (17) to
the extent of allowing canonical terms to be substituted for the variables. This condition would preclude taking both ‘‘the morning star’’ and ‘‘the evening star’’ as canonical. But owing to the
symmetry involved, it would be entirely arbitrary to select ‘‘the morning star’’ as canonical and reject ‘‘the evening star’’ as apocryphal (or the reverse), and it would seem almost equally
arbitrary to reject both and select some other term such as ‘‘the second planet.’’ This is the second horn of the dilemma. And with this observation Quine rests his case, in effect claiming that
since the obvious strategies for doing what needs to be done have been tried and found to fail, the burden of proof is now on the other side to show, if they can, just how, in some unobvious way,
what needs to be done can be. And with this observation, I too rest my case for the moment.

1.7 Coda

Quine’s critique was directed toward the strict kind of modality and toward quantification over
ordinary sorts of objects: persons, places, things. Much of his discussion generalizes to other kinds of modal or intensional operators and other sorts of objects, to show that for them, too, the
most obvious strategy for making sense of quantifying over such objects into such modal or intensional contexts faces an obstacle. But whether this obstacle can be surmounted, by the most obvious
fall-back strategy of identifying an appropriate class of canonical terms or in some other way, needs to be considered case by case. The most important case of a non-strict modality for which a
reasonable choice of canonical terms seems to be available (for almost any sort of objects) will be mentioned at the very end of this study. Here I want to mention a case of a special sort of object
for which a reasonable choice of canonical terms seems to be available (for almost any kind of intensional operator). For several writers, beginning with Diana Ackerman, have pointed out that
numerals suggest themselves as non-arbitrary candidates for canonical
In the original paper where (17) was derived there were no singular terms but variables, and nothing was said about application to natural language. For an idea of the range of options formally
available, see the taxonomy in Garson (1984).
terms if one is going to be quantifying only over natural numbers. And the numerals are in effect taken as canonical terms in two flourishing enterprises, intensional mathematics and provability
logic, where the modality in question is a version or variant of strict modality.10 Still, natural numbers are a very special sort of object. Workers in the cited fields have noted the difficulty of
finding canonical terms as soon as one goes beyond them even just to other sorts of mathematical objects, such as sets or functions. To avoid difficulties over there simply being too many objects to
find terms for them all, let us restrict attention to recursively enumerable sets of natural numbers and recursive partial functions on natural numbers, where there is actually a standard way of
indexing the objects in question by natural numbers or the numerals therefor. Even here there does not seem to be any non-arbitrary way of selecting canonical terms, since there will be many indices
for any one set or function, and two indices for the same object will not in general be provably indices for the same object.11 Whatever successes have been or may be obtained for non-strict
modalities and ordinary objects, or for strict modalities and non-ordinary objects, they only make it the more conspicuous how far we are from having any reasonable candidates for canonical terms in
the case to which Quine’s critique is directed.

2

2.1 Quine and his critics

Today when one thinks of model theory for modal logic, or the application of theories of reference to it, one thinks first of Saul Kripke, whose relevant work on the former
topic only became widely known after his presentation at a famous 1962 Helsinki conference,12 and on the latter only after his
See Ackermann (1978). Lectures of Kripke have brought this formerly under-appreciated paper to the attention of a wider audience. See also Shapiro (1985) and especially Boolos (1993, pp. xxxiv and
226). Workers in the cited fields have in effect suggested that something like indices can serve as canonical terms for more fine-grained intensional analogues of recursive sets and functions. But
these too would be very special objects. The best discussion of these matters known to me is in some work not fully published of Leon Horsten. Whose published proceedings make up a memorable issue of
Acta Philosophica Fennica, and include not only Kripke (1963) but Hintikka (1963).
celebrated 1970 Princeton lectures.13 But the impression that somehow an appropriate theory of models or of reference can refute Quine’s critique can be traced back a full half-century. For less
sophisticated model theories for quantified modal logic go back to some of the first publications on the subject, by Rudolf Carnap, in the 1940s;14 and the application of less sophisticated theories
of reference to modal logic goes back to one of the first reviews of Quine’s critical writings, by Arthur Smullyan, again in the 1940s.15 For purposes of examining the main lines of response to
Quine’s critique prior to the new developments in model theory and the theory of reference in the 1960s and 1970s, and Quine’s rebuttals to these responses, it is almost sufficient to consider just
three documents, together constituting the proceedings of a notorious 1962 Boston colloquium. The main talk, by Quine’s most vehement and vociferous opponent, Ruth (Barcan) Marcus, was a compendium
of almost all the responses to Quine that had been advanced over the preceding fifteen years, plus one new one. The commentary, by Quine himself, marked an exception to his apparent general policy of
not replying directly to critics, and gives his rebuttal to almost all early objections to his critique. An edited transcript of a tape recording of a discussion after the two talks among the two
invited speakers and some members of their audience, notably Kripke, was published along with the two papers, and clarifies some points.16

2.2 Potpourri

A half-dozen early lines of response to the
critique may be distinguished. Most appear with differing degrees of explicitness and emphasis in the
13 Kripke (1972/1980). 14 Carnap (1946, 1947). 15 Smullyan (1947), with elaboration in Smullyan (1948). Smullyan’s priority for his particular response to Quine has been recognized by all competent and
responsible commentators. See Linsky (1971b, note 15) and Føllesdal (1969, p. 183). Thus the items are: (i) the compendium (Marcus 1963a); (ii) the comments (Quine 1963) later retitled ‘‘Reply to
Professor Marcus’’; and (iii) the edited discussion (Marcus, Quine, Kripke et al. 1963). They appear together in the official proceedings volume (Wartofsky 1963). The same publisher had printed them
in 1962 in Synthese in a version that is textually virtually identical down to the placement of page breaks, (i) and (ii) in a belated issue of the volume for 1961, and (iii) in an issue of the
volume for 1962. (There have been several later, separate reprintings of the different items, but these incorporate revisions, often substantial.) Two of the present editors of Synthese, J. Fetzer
and P. Humphreys, have proposed publishing the unedited, verbatim transcript of the discussion, with a view to shedding light on some disputed issues of interpretation; but according to their
account, one of the participants, Professor Marcus, has objected to circulation of copies of the transcript or the tape.
compendium, and most are rebutted in the commentary thereupon. They all involve essentially the same error, confusing Quine’s philosophical complaint with some formal claim. Since – despite the best
efforts of Quine himself in his rebuttal and of subsequent commentators – such confusions are still common, it may be in order to review each response and rebuttal briefly.

(A) The development of
possible-worlds semantics shows that there is no problem of interpreting quantified modal logic. This response is represented in the compendium by the suggestion that disputes about quantified modal
logic should be conducted with reference to a ‘‘semantic construction,’’ in which connection the now superseded approach of Carnap is expounded (with the now standard, then unpublished, approach of
Kripke being alluded to as an alternative in the discussion). Perhaps Quine thought the fallacy in this response obvious, since he makes no explicit response to it in his commentary; but it has
proved very influential, albeit perhaps more as an inchoate feeling than as an articulate thought. The fallacy is one of equivocation, confusing ‘‘semantics’’ in the sense of a mathematical theory of
models, such as Carnap and Kripke provided, with ‘‘semantics’’ in the sense of a philosophical account of meaning, which is what Quine was demanding, and thus neglecting the dictum that ‘‘there is no
mathematical substitute for philosophy.’’17 A mathematical theory of models could refute a technical claim to the effect that the common systems are formally inconsistent, but without some further
gloss it cannot say anything against a philosophical claim that the common systems are intuitively unintelligible. In the case of Carnapian model theory this point perhaps ought to have been obvious
from the specifics of the model, which validates some highly dubious theses.18 In the case of Kripkean model theory the point perhaps ought to be obvious from the generality of the theory, from its
ability to accommodate the widest and wildest variety of systems, which surely cannot all make good philosophical sense.
These are the closing words of Kripke (1976). The fallacy recurs again and again in other contexts in the literature. See Copeland (1979). Notably the Barcan or Carnap–Barcan formulas, which give
formal expression to F. P. Ramsey’s odd idea that whatever possibly exists actually exists, and whatever actually exists necessarily exists. (The ‘‘Barcan’’ label is the more customary, the
‘‘Carnap–Barcan’’ label the more historically accurate according to Cocchiarella (1984), which also explains the connection with Ramsey.) If these formulas are rejected, one must distinguish a
thing’s having a property necessarily (for every possible world it exists there and has the property there) from its having the property essentially (for every possible world, if it exists there,
then it has the property there). I have slurred over this distinction so far, and will for the most part continue to do so.
(B) Quantified modal logic makes reasonable sense if ∀ and ∃ are read as something other than ordinary quantifiers, such as Leśniewski-style substitution operators Π and Σ. This is the one substantial
novelty in the compendium. One rebuttal, of secondary importance to Quine, is that if one allows oneself to call substitution operators ‘‘quantifiers,’’ one can make equally good or poor sense of
‘‘quantification’’ not only into modal but into absolutely any contexts whatsoever, including those of quotation. But quantification into quotation contexts is obvious nonsense – on any reasonable
understanding of ‘‘quantification.’’19 Still, the rebuttal of primary importance to Quine is a different and more general one, applying also to the next response.

(C) Quantified modal logic makes
reasonable sense if □ and ◊ are read as something other than strict modalities, such as Prior-style temporal operators G and F. This response is represented in the compendium by the suggestion, made
in passing in the introduction, that modal logic is worth pursuing because of the value of studies of various non-alethic ‘‘modalities.’’ The specific example of temporal ‘‘modalities’’ was suggested
by Quine in his last remarks in the discussion, his purpose being to bring out his primary point of rebuttal to the previous response, that Leśniewski’s devices are just as irrelevant as Prior’s
devices, given the nature of his complaint. If his complaint had been that there is a formal inconsistency in the common systems, then it would have been cogent to respond by considering those
systems as wholly uninterpreted notations, and looking for some reading of their symbolism under which they would come out saying something true or plausible. But the nature of the critique is quite
different, the complaint being that the combination ∃x□ is philosophically unintelligible when the components ∃ and □ are interpreted in the usual way.20

(D) Quantified modal logic is not committed
to essentialism because no formula expressing such a commitment (no instance of the negation of (2)) is deducible in the common systems, even with the addition of any desired set of consistent de
dicto axioms. This response does not explicitly occur as
As shown by examples in the opening section of ‘‘Reference and modality.’’ This point seems to be conceded even by some who otherwise take an uncritically positive view of the compendium, as in the
review Forbes (1995). The last sections of Kripke (1976) in effect point out that the claim that the ordinary language ‘‘there is’’ in its typical uses is a ‘‘substitutional quantifier’’ devoid of
‘‘ontological commitment’’ is absurd, since ‘‘ontological commitment’’ is by definition whatever it is that the ordinary language ‘‘there is’’ in its typical uses conveys. ‘‘What I’ve been talking
about is quantification, in a quantificational sense of quantification, into modal contexts, in a modal sense of modality’’ (Wartofsky 1963, p. 116).
such in the compendium, and would have been premature, since the results of Parsons which it quotes did not come until a few years later. But it is advanced in a slightly later work of the same
author, and has been influential in the literature.21 It could be construed as merely a generalization of the next response on the list, and Quine’s rebuttal to the next response would apply to this
one, too. Basically, the response is the result of terminological confusion, since its first clause is only relevant if ‘‘commitment to essentialism’’ is understood in Quine’s sense, but its second
clause is only true if ‘‘commitment to essentialism’’ is understood in a different sense partly foreshadowed in the compendium and explicitly introduced as such by Parsons. It has already been noted
in the exposition of the critique both that Quine’s complaint is not about the provability of anything, and that Parsons’ results substantiate some of Quine’s suspicions.

(E) The mathematical cyclist
example does not show there is any problem, because no de re conclusions of the kind that figure in the example (conclusions (9a–d)) provably follow in the common systems from such de dicto premises
as figure in the example (premises (8a–d)). While the example gives a legitimate counter-instance to the law that figures in it (law (10)), that law is not a theorem in the common systems. This
response occurs in a section of the compendium where Quine’s criticisms are said to ‘‘stem from confusion about what is or is not provable in such systems,’’ and where it is even suggested that Quine
believes □(P → Q) → (P → □Q) to be a theorem of the common systems!22 This response, which accuses Quine of committing a howler of a modal fallacy, is itself a howler, getting the point of Quine’s example
exactly backwards. The complaint that we cannot deduce examples of non-trivial de re modality from plausible examples of non-trivial de dicto modality by taking something like (10) as an axiom,
because we would get a contradiction, is misunderstood as a formal claim that something like (10) is an axiom, and we do get a contradiction. Quine’s rebuttal in his commentary borders on
indignation: ‘‘I’ve never said or,
Marcus (1967). And about the same time we find even the usually acute Linsky (1971a, p. 9) writing: ‘‘Terence Parsons bases his search for the essentialist commitments of modal logic on Kripke’s
semantics, and he comes up (happily) empty-handed . . . He finds modal logic uncontaminated.’’ The continuation of this passage better agrees with Parsons’ own account of his work and its bearing on
Quine’s critique. See Wartofsky (1963, pp. 90–2). It is just conceivable that this is deliberate exaggeration for effect, a rhetorical flourish rather than a serious exegetical hypothesis. Marcus
(1967) cites some other authors who have written in a similar vein about the example.
I’m sure, written that essentialism could be proved in any system of modal logic whatsoever.’’23

(F) The morning star example does not show there is any problem, because while the law that figures in
the example (law (17)) is a theorem of the common systems, the example does not give a legitimate counter-instance, as can be seen by applying an appropriate theory of reference. This response is
repeated, with elaboration but without expected acknowledgments – it is described as ‘‘familiar,’’ but no specific citation is given – in the compendium. The citation ought to have been to
Smullyan.24 This response again mistakenly takes Quine to be claiming to have a counterexample to a formal theorem of the common systems. (And if Quine had claimed that (12) and (14) constitute a
counterexample to (17), it would have sufficed to point out that one is not required, just because one recognizes an expression to be a real singular term, to recognize it as legitimately
substitutable for variables in all contexts. This point has been noted already in the exposition of the critique, but the response under discussion seems to miss it.) Nonetheless, response (F) is
worthy of more extended attention.

2.3 Smullyanism or neo-Russellianism

While responses (A)–(E) are entirely skew to Quine’s line of argument, response (F) (when fully articulated) makes tangential
contact with it, and shows that a minor addition or amendment to the critique as expounded so far is called for. Another reason response (F) calls for more attention than the others is that for a
couple of decades it was the conventional wisdom among modal logicians. It was endorsed not only by (in chronological order) Smullyan, Fitch, and Marcus, but also by Arthur Prior and others. It was
the topic of two talks at the famous 1962 Helsinki conference and was put forward in major and minor encyclopedias.25 Yet another reason
And ‘‘I did not say that it could ever be deduced in the S-systems or any systems I’ve ever seen’’ (Wartofsky 1963, p. 113). Despite these forceful remarks, the understanding of Quine’s views has not
much improved in the later Marcus (1967). An earlier paper by the author of the compendium (Marcus 1960) gives a more concise statement of the response in its last paragraph, where a footnote
acknowledges the author’s teacher Frederic Fitch. The latter, in Fitch (1949) and Fitch (1950), acknowledges Smullyan. (See footnote 4 in the former, footnote 12 in the latter, and the text to which
they are attached.) The major one being Weiss (1967), and the minor one the collection of survey articles (Klibansky 1968). The former contains Prior (1967a) while the latter contains Marcus (1968).
The conference talks (Marcus 1963b, Prior 1963) are to be found in the previously cited Helsinki proceedings. Another advocate of closely related ideas has been J. Myhill.
response (F) calls for more attention than the others is that it represents an early attempt to apply a theory of reference distinguishing names from descriptions to the interpretation of modal
logic, and understanding why this attempt was unsatisfactory should lead to increased appreciation of more successful later attempts. The ideas on reference that are involved derive from Russell. The
writings of Ramsey, alluded to in passing in the compendium, and of Carnap, with whom the author of the compendium at one time studied, may have served to transmit Russell’s influence, though of
course Russell himself was still writing on reference in the 1950s, and still living in the 1960s, and should not be considered a remote historical figure like Locke or Mill. But whether his
influence on them was direct or indirect, Smullyan’s disciples are unmistakably Russell’s epigones, even though they seldom directly quote him or cite chapter and verse from his writings.26 The
Smullyanite response, it will be seen, splits into two parts, one pertaining to descriptions, the other to names. The theory of descriptions presupposed by the Smullyanites is simply the very
well-known theory of Russell. The theory of names presupposed is the less well-known theory Russell always took as a foil to his theory of descriptions. This is perhaps best introduced by contrasting
it with the theory of Frege, according to which the reference of a name to its bearer is descriptively mediated, is accomplished by the name having the same meaning as some description, and the
description being uniquely true of the bearer. The theory of Russell is the diametrically opposed one that the reference of a name to its bearer is absolutely immediate, in a sense implying that the
meaning of a name is simply its bearer, from which it follows that two names having the same bearer have the same meaning. It is taken to follow (‘‘compositionality’’ being tacitly assumed) that two
sentences involving two different names with the same bearer, but otherwise the same, have the same meaning, and hence the same truth value (with one sole exception, usually left tacit, the exception
for meta-linguistic contexts, for those sentences, usually involving quotation, where the names are being mentioned as words rather than being used to refer). This theory is Russell’s account of how
names in an ideal sense would function. While Russell illustrated his theory by examples involving names in the ordinary sense, he actually more or less agreed with
Let me not fail to cite chapter and verse myself. For the most relevant pages of the most recently reprinted work, see Russell (1985), pp. 113–15.
Frege about these (so that the Fregean theory is often known as the Frege–Russell theory). Moreover, he held that ordinary, complex things are not even capable of being given names in his ideal
sense; that names in the ideal sense could be given only to special, simple things (such as sense data). There is an ambiguity running through the writings of all the Smullyanites as to whether they
do or do not wish to claim that names in the ordinary sense function as names in the ideal sense. But they do unambiguously wish to claim, contrary to Russell, that whether or not they are already in
existence, names in an ideal sense can at least be introduced for ordinary things. For this reason, while the Smullyanites may be called ‘‘Russellians,’’ it is perhaps better to add the
distinguishing prefix ‘‘neo-.’’ So much for the background assumptions of response (F). Its further articulation has several components:

(F0) Quine’s example is ambiguous, since the key terms ‘‘the morning star’’ and ‘‘the evening star’’ might be either mere definite descriptions or genuine proper names.
(F1a) If the key phrases are taken to be descriptions, then they are only apparently and not really singular terms, and (12) is only apparently and not really a singular identity, so one gets only an apparent and not a real counterexample to (17).
(F1b) Moreover, though the foregoing already suffices, it may be added that (13) and (14) are ambiguous, and it is not unambiguously the case that they are of opposite truth value, the former true and the latter false, as the example claims.
(F2) If the key phrases are taken to be names, then (14) means the very same thing as, and is every bit as true as, (13), contrary to what the example claims.
To dispose of the issue (F0) of ambiguity, the example may be restated twice:
(12a) Hesperus is Phosphorus.
(13a) Necessarily, Phosphorus is Phosphorus.
(14a) Necessarily, Hesperus is Phosphorus.
(12b) The brightest star of the evening is the brightest star of the morning.
(13b) Necessarily, the brightest star of the morning is the brightest star of the morning.
(14b) Necessarily, the brightest star of the evening is the brightest star of the morning.
2.4 Quine’s rebuttal to neo-Russellianism on descriptions
The main claim (F1a) of the descriptions side of the Smullyanite response is immediate from Russell’s theory, on which (12b) really abbreviates something more complex involving quantifiers:
Mathematics, Models, and Modality
(12c) There exists a unique brightest star of the evening and there exists a unique brightest star of the morning, and whatever is the brightest star of the evening and whatever is the brightest star
of the morning, the former is the same as the latter.
The subsidiary claim (F1b) is also almost immediate, since on Russell’s theory in all but the simplest cases expressions involving descriptions involve ambiguities of ‘‘scope,’’ and for instance
there is one disambiguation of (14b) that follows by (17) from (12c):
(14c) There exists a unique brightest star of the evening and there exists a unique brightest star of the morning, and whatever is the brightest star of the evening and whatever is the brightest star of the morning, necessarily the former is the same as the latter.
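Schematically, and in notation of my own choosing rather than the author’s, the two Russellian scope readings of ‘‘Necessarily, the F is G’’ can be displayed side by side:

```latex
% Narrow scope (de dicto): the whole Russellian analysis sits inside the box.
\[ \Box\,\exists x\,\bigl(Fx \wedge \forall y\,(Fy \rightarrow y = x) \wedge Gx\bigr) \]
% Wide scope (de re): the description is analyzed outside the box,
% leaving a modal predication of the thing itself.
\[ \exists x\,\bigl(Fx \wedge \forall y\,(Fy \rightarrow y = x) \wedge \Box\,Gx\bigr) \]
```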
In rebuttal to all this, the main point is that the example was not intended as a counter-instance to (17) or any other theorem of the common systems, but as an illustration of an obstacle to
reducing de re to de dicto modality, so that response (F1a) is wholly irrelevant. Response (F1b) is partly relevant, however, because it does show that the example needs to be worded more carefully
if the Russellian theory of descriptions is assumed. The strategy against which the example was directed was that of defining □Fx to hold of a thing if and only if □Ft holds where t is a term referring to that thing. But assuming the Russellian theory of descriptions, there is actually more than one strategy here (when t is a description) because □Ft is ambiguous between a ‘‘narrow’’ or a ‘‘wide’’ reading. Also the predicate □Fx used in the example, ‘‘Necessarily x is the brightest star of the morning,’’ is similarly ambiguous. To eliminate this last ambiguity, take the predicate to be something like ‘‘Necessarily, (if x exists then) x is the brightest star of the morning.’’ Then on the narrow-scope reading □Ft and □Fs boil down to:
(13c) Necessarily, if there exists a unique brightest star of the morning then it is the brightest star of the morning.
(14c) Necessarily, if there exists a unique brightest star of the evening then it is the brightest star of the morning.
So in this case the reduction strategy fails for the reason originally given, since (13c) and (14c) are of opposite truth value, the former being true and the latter false. But on the wide-scope
reading □Ft and □Fs boil down instead to:
(13d) There exists a unique brightest star of the morning and necessarily, (if it exists then) it is the brightest star of the morning.
(14d) There exists a unique brightest star of the evening and necessarily, (if it exists then) it is the brightest star of the morning.
In this case the reduction strategy fails for a more basic reason, since (13d) and (14d) themselves still involve unreduced de re modalities. The claim that the strategy breaks down thus does not
have to be retracted, though the explanation why it does so needs to be reworded. Response (F1b) is almost the only significant response to Quine in the early literature not reproduced in the
compendium, and for Quine’s own statement of a rebuttal to it we need to look beyond his commentary at the colloquium. We find the following formulation, where ‘‘non-substitutive position’’ means a
position, such as that of x in □Fx, where different terms referring to the same thing are not freely intersubstitutable:
[W]hat answer is there to Smullyan? Notice to begin with that if we are to
bring out Russell’s distinction of scopes we must make two contrasting applications of Russell’s contextual definition of description [as in the (c) versions versus the (d) versions]. But, when the
description is in a non-substitutive position, one of the two contrasting applications of the contextual definition [namely, the (d) versions] is going to require quantifying into a non-substitutive
position. So the appeal to scopes of descriptions does not justify such quantification, it just begs the question.27
2.5 Neo-Russellianism on names
Response (F2) is immediate assuming the neo-Russellian theory of names. Indeed, what neo-Russellianism assumes about names is more than enough to guarantee that they
would have all the properties required of canonical terms.28 Thus whereas in rebuttal to (F1) Quine did not have to reject Russell’s theory of descriptions, he does have to reject the neo-Russellian
theory of names. Response (F2) is so immediate assuming the neo-Russellian theory that it is stated without elaboration by Smullyan and his early disciple Fitch
Reply to Sellars, in Davidson and Hintikka (1969, p. 338). This formulation is the earliest adequate one known to me, the rebuttal even in the 1961 version of ‘‘Reference and modality’’ being
inadequate. As was pointed out in Kripke’s last few remarks in the discussion at the colloquium. Quine seems to accept the observation in his last remark. Marcus had apparently ceased to follow by
this point.
as if it were supposed to be self-evident.29 Elaboration is provided by later disciples in the compendium and elsewhere. The elaboration in Prior’s talk at the 1962 Helsinki conference is of especial
interest because it anticipates in a partial way a significant later contribution to the theory of reference. Since this has not hitherto been widely noted, I digress to quote the relevant passage:
It is not necessary, I think, for philosophers to argue very desperately about what is in fact ‘‘ordinary’’ and what is not; but let us say that a name in Russell’s strict sense is a simple
identifier of an object . . . [T]here is no reason why the same expression, whether it be a single word like ‘‘This’’ or ‘‘Tully,’’ or a phrase like ‘‘The man who lives next door’’ or ‘‘The man at
whom I am pointing,’’ should not be used sometimes as a name in Russell’s strict sense and sometimes not. If ‘‘The man who lives next door’’ is being so used, and successfully identifies a subject of
discourse, then ‘‘The man who lives next door is a heavy smoker’’ would be true if and only if the subject thus identified is a heavy smoker, even if this subject is in fact a woman and doesn’t
live next door but only works there. And if ‘‘Tully,’’ ‘‘Cicero,’’ ‘‘The Morning Star’’ and ‘‘The Evening Star’’ are all being so used, then ‘‘Tully is Cicero’’ and ‘‘The Morning Star is the Evening
Star’’ both express necessary truths, to the effect that a certain object is identical with itself.30
The distinctive part of the passage, not in the founder or other members of the Smullyanite school, is the middle, where it is suggested that even an expression that is not a name in the ordinary
sense may sometimes function as a name. This is a different point from the trivial observation that names often have descriptive etymologies, and those familiar with the later literature will
recognize how what is said about ‘‘the man who lives next door’’ partially anticipates what was later to be said about ‘‘referential’’ as opposed to ‘‘attributive’’ uses of descriptions.
2.6 Quine’s rebuttal
The elaboration in Marcus’s talk at the same conference, a kind of sequel to the compendium, is of especial interest because it makes more explicit than any other published Smullyanite work
the implication that was to be
Fitch (1949) explicitly claims that Quine’s contention is ‘‘clearly’’ false if the key expressions are taken to be names. Prior (1963, pp. 194–5). Prior was from Balliol, and I have heard it asserted
– though I cannot confirm it from my own knowledge – that there was a tradition of setting examples of this kind in undergraduate examinations at Oxford in the 1960s.
most emphatically rejected by later work in the theory of reference: the epistemological implication that discoveries like (14a) are not ‘‘empirical’’ (at least not in a non-quibbling sense), and are
not properly astronomical discoveries: [T]o discover that we have alternative proper names for the same object we turn to a lexicon, or, in the case of a formal language, to the meaning postulates..
. . [O]ne doesn’t investigate the planets, but the accompanying lexicon.31
The same thought had been expressed in slightly different words – ‘‘dictionary’’ for ‘‘lexicon,’’ for instance – in the discussion at the colloquium.32 The picture underlying such remarks had been
sketched in the compendium itself: For suppose we took an inventory of all the entities countenanced as things by some particular culture through its own language . . . And suppose we randomized as
many whole numbers as we needed for a one-to-one correspondence, and thereby tagged each thing. This identifying tag is a proper name of the thing.33
To talk of an ‘‘inventory,’’ and especially to presuppose that we know how many numbers would be ‘‘needed for a one-to-one correspondence,’’ is to assume that we are dealing with a known number of
unproblematically identifiable items. If it is a matter of applying tags to such items, then of course we should be able to keep a record of when we have assigned multiple tags to a single one of
them, though our record would perhaps more colloquially be called a ‘‘catalogue’’ than an ‘‘accompanying lexicon’’ or set of ‘‘meaning postulates.’’ The rebuttal to the Smullyanites on names consists
in observing that what is said in the last few quotations is false. Take first Prior. If one defines ‘‘names in the strict sense’’ as expressions with the magical property of presenting their bearers
so absolutely immediately as to leave no room for empirical questions of identity, then there never have been in any historically actual language and never can be in any humanly possible language any
such things as ‘‘names in the strict sense.’’ As Russell himself noted, even ‘‘this is the same as this,’’ where one points to the same object twice, is
Marcus (1963b, p. 132). Note the characteristically Carnapian expression ‘‘meaning postulates.’’ For the published version, too familiar to bear quoting again, see Wartofsky (1963, p. 115). This is
one of the parts of the discussion where comparison with the verbatim transcript could be most illuminating. It is a shame that the scholarly public should be denied access to so significant a
historical document. Wartofsky (1963, pp. 83–4). This passage has sometimes been misleadingly cited in the later literature as if it were unambiguously about ordinary names in ordinary language.
not a linguistic and non-empirical truth, if the object in question is complex, and one points to a different component each time. Take now the compendium and its sequel. Assigning names to heavenly
bodies may be like tagging, but it is not like tagging individuals from among a known number of unproblematically identifiable items, since we always have unresolved questions before us about the
identity of asteroids or comets, as Frege long ago noted. And to resolve such questions one must investigate not some ‘‘accompanying lexicon’’ or ‘‘meaning postulates,’’ but the planet(oid)s
themselves. In brief, the following have the same status as (6) and (6′) respectively, and not as (5) and (5′):
(12a) Hesperus is Phosphorus.
(12a′) In modern English, ‘‘Hesperus’’ and ‘‘Phosphorus’’ refer to the same thing.
Quine’s own formulation of this rebuttal is almost too well known to bear quotation. But while what Quine means is what I have just said, what Quine says may be open to quibbles, since taken with pedantic literalness it would seem to be about (12a′) rather than (12a):
We may tag the planet Venus, some fine evening, with the proper name ‘‘Hesperus.’’ We may tag the same planet again, some day
before sunrise, with the proper name ‘‘Phosphorus.’’ When at last we discover that we have tagged the same planet twice, our discovery is empirical. And not because the proper names were descriptions.
3.1 Hints from Quine for the formal logic of modalities
With the wisdom of hindsight it can be seen that there are several important lessons about modality and reference directly taught or indirectly
hinted in Quine’s critique. For modal logic, the first lesson from Quine is that strict or (as many have called it) ‘‘logical’’ modality and subjunctive or (as we now call it) ‘‘metaphysical’’
modality are distinct. A further lesson is that quantification into contexts of strict modality is difficult or
Wartofsky (1963, p. 101). Quine surely means that (12a′) is not just a linguistic empirical discovery but a properly astronomical empirical discovery. By contrast, Marcus in Wartofsky (1963, p. 115), distinguishes ‘‘such linguistic’’ inquiry as leads to discoveries like (12a′) from ‘‘properly empirical’’ methods such as lead to discoveries about orbits.
impossible to make sense of. A yet further lesson is that quantification into contexts of subjunctive modality is virtually indispensable. This last lesson is not as explicitly or emphatically taught
as the other two, and moreover Quine’s remarks are flawed by a tendency to conflate subjunctive or ‘‘metaphysical’’ modality with scientific or ‘‘physical’’ modality – as if we could not speak in the
subjunctive of counterfactual hypotheses to the effect that the laws of science or physics were violated. But due allowance being made for this flaw, I believe that the work of Quine, supplemented by
that of his student Føllesdal, gives a broad hint pointing in the right direction. Føllesdal’s treatment of the topic begins by quoting and stressing the importance of some of Quine’s remarks about
the question of the meaningfulness of quantification into contexts of subjunctive modality: It concerns . . . the practical use of language. It concerns, for example, the use of the contrary-to-fact
conditional within a quantification . . . Upon the contrary-to-fact conditional depends in turn, for instance, this definition of solubility in water: To say that an object is soluble in water is to
say that it would dissolve if it were in water. In discussions in physics, naturally, we need quantifications containing the clause ‘‘x is soluble in water.’’35
Such passages stop just short of saying, what I think is true, that while quantification into contexts of strict modality may be nonsense, quantification into contexts of subjunctive modality is so
widespread in scientific theory and commonsense thought that we could not abandon it as nonsensical even if we wanted to. Putting the lessons cited together, it follows that there is a difference
between strict and subjunctive modality as to what expressions should be accepted as meaningful formulas and so a fortiori as to what formulas should be accepted as correct laws. The strictly or
‘‘logically’’ possible, what it is not self-contradictory to say actually is, and the subjunctively or ‘‘metaphysically’’ possible, what could potentially have been, differ in the formalism
appropriate to each.
3.2 A hint from Quine for the theory of reference of names
The article on modal logic in the minor encyclopedia alluded to earlier devotes a section to objections, of which the
very first (3.1) is Quine’s morning star example. In the next section the following is said: 35
The quotation from Quine is from ‘‘Reference and modality,’’ antepenultimate paragraph. The work of Føllesdal where it is quoted is Føllesdal (1965). Føllesdal’s final footnote suggests that ‘‘causal
essentialism’’ is better off than ‘‘logical essentialism,’’ and that Quine’s own proposal to treat dispositions as inhering structural traits of objects is a form of ‘‘causal essentialism.’’
Before proceeding to a summary of recent work in modal logic which is directed toward clear solutions to [such] problems . . . it is important to realize that the perplexities about interpretation
can only be understood in terms of certain presuppositions held by Quine and others which I will call ‘‘the received view’’ (rv).
A bit later one finds the assertion that: ‘‘The Russellian theory of descriptions and the distinction between proper names and descriptions is rejected by rv.’’ This is immediately followed by the
assertion that the morning star example (3.1) is ‘‘resolved on Russellian analysis as was shown by Smullyan . . . and others,’’36 and somewhat later by the insistence that ‘‘The usefulness of the
theory of descriptions and the distinction between descriptions and purely referential names was argued long before it proved applicable to modal logic,’’ so that one cannot simply reject them, as
Quine is alleged to do. Now some of this account is quite correct, since the theory of descriptions and of the distinction between them and names as one finds it in the compendium, for instance, did
not originate there, or even with Smullyan, who first applied it to the interpretation and defense of modal logic, but was indeed argued by Russell long before. But some of this account is quite
incorrect. It is not true that Quine’s rebuttal to Smullyan on descriptions requires rejection of Russell’s theory of descriptions.37 And it is not unambiguously true that Quine’s rebuttal to
Smullyan on names requires rejection of ‘‘the distinction between descriptions and proper names.’’ It is true that it requires rejection of the neo-Russellian conception of that distinction, but it
is not true that Quine insists on rejecting any distinction between descriptions and proper names. This should be clear from the last half-sentence of the rebuttal quoted earlier: ‘‘And not because
the proper names were descriptions.’’ Before Quine, difficulties with the theory that the reference of a name to its bearer is absolutely immediate had been recognized by Føllesdal and Alonzo
Church.38 And before Quine, difficulties with the theory that the 36
Klibansky (1968, pp. 91ff). This echoes Fitch (1950, p. 553) where it is said that: ‘‘Smullyan has shown that there is no real difficulty if the phrase [sic] ‘the Morning Star’ and ‘the Evening Star’
are regarded either as proper names or as descriptive phrases in Russell’s sense.’’ The syntactic ambiguity in this last formulation as to whether ‘‘in Russell’s sense’’ is supposed to modify
‘‘proper names’’ as well as ‘‘descriptive phrases’’ matches the ambiguity in the formulation quoted earlier as to whether ‘‘Russellian’’ is supposed to modify ‘‘the distinction between proper names
and descriptions’’ as well as ‘‘theory of descriptions.’’ The ambiguity is appropriate, since the theory of names in question is neo-Russellian. Though this may not yet have been made clear at the
time the encyclopedia article was written, since the formulation of the rebuttal I have quoted dates from two years later. See Føllesdal (1961, §17, pp. 96ff) and Church (1950). Both address
Smullyan and Fitch.
reference of a name to its bearer is descriptively mediated had also been recognized.39 But before Quine, those who recognized the difficulties with the absolute immediacy theory generally either did
not take them to be decisive or took them to be arguments for the descriptive mediation theory, and vice versa. But if the first lesson of Quine’s critique for the theory of reference is that the
neo-Russellian theory of names is untenable, the last half-sentence of his rebuttal suggests a second lesson, that this first lesson is not in and of itself an argument for the Fregean theory.
Putting these lessons together, it is not to be assumed that there are just two options; there is space for a third alternative. 3.3 Formal differences between logical and metaphysical modality A few
words may be in order about post-Quinine work on ‘‘logical’’ or strict versus ‘‘metaphysical’’ or subjunctive modalities. The locus classicus for the distinction is of course ‘‘Naming and
necessity,’’ but my concern here will be with formal differences, which are not what was of primary concern there. Three apparent such differences have emerged. First, there is the difference at the
predicate level. The conventional apparatus allows de re modalities, as in □Rxy, but does not allow application of different modalities to the different places of a many-place relation. The conclusion
that, if one is concerned with logical modality, then the conventional apparatus goes too far when it allows de re modality, has been endorsed on lines not unrelated to Quine’s by a number of
subsequent contributors to modal logic, a notable recent example being Hartry Field.40 The complementary conclusion that, if one is concerned with metaphysical modality, then the conventional
apparatus of quantified modal logic does not go far enough, when it disallows the application of different modalities to the different places of a many-place relation, has also been advanced by a
number of modal logicians, a notable recent example being 39
For work on difficulties with the Fregean theory in the 1950s and early 1960s, see the discussion in Kripke (1972/1980), and Searle (1967). The doctrines in ‘‘Naming and necessity’’ were first
presented in seminars in 1963–4, and whereas that work apologizes for being spotty in its coverage of the literature of the succeeding years, it is pretty thorough in its discussion of the relevant
literature (work of P. Geach, P. Strawson, P. Ziff, and others) from the immediately preceding years. (Searle discusses work of yet another contributor, Elizabeth Anscombe.) In Field (1989, chapter
3). Field also cites several expressions of the same or related views from the earlier literature, and such citations could in a sense be carried all the way back to the ‘‘principle of predication’’
(von Wright 1951).
Max Cresswell.41 (What is at issue in the latter connection is that a two-place predicate Rxy may correspond to a phrase with two verbs, such as ‘‘x is richer than y is,’’ each of which separately can
be left in the indicative or put in a non-indicative mood, as in ‘‘x would have been richer than y is’’ contrasting with ‘‘x would have been richer than y would have been,’’ so as to allow
cross-comparison between how what is is, and how what could have been would or might have been.) Second, there may well be a formal difference already at the sentential level. For logical modality,
at least in some of its versions or variants, iterated modalities make good sense. I allude here again to work on intensional mathematics and provability logic, where being unprovable is to be
distinguished from being provably unprovable. For metaphysical modality, it is much less clear that iteration makes sense. In Prior’s well-known work on systems combining subjunctive mood operators
with past and future tense operators, for instance, iterated modal operators collapse, unless separated by temporal operators: there is no distinction recognized between what is as of today possibly
possible and what is as of today possible, though there is a distinction between what as of yesterday it was possible would be possible as of today and what after all is possible as of today. In
later work also on the interaction of mood and tense the purely modal part of the logic adopted amounts to S5, which collapses iterated modalities.42 Third, there is the difference that while logical
possibility does not admit of degrees – a theory cannot be just a little bit inconsistent – metaphysical possibility seems to, with some possibilities being more remote than others. At any rate, this
is the thought that underlies theories of counterfactuals since the pioneering work of R. Stalnaker.43 In particular, miraculous possibilities, involving violations of the laws of physics, are in
general more remote than non-miraculous possibilities, a fact that may make the error of earlier writers in associating counterfactuals with physical necessity in some respects a less serious one.
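The remoteness ordering just described, on which miraculous worlds count as farther away than law-abiding ones, can be sketched as a toy Stalnaker-style selection function. Everything below — the `would` helper, the dict encoding of worlds, the `remoteness` ranking — is my own illustrative invention, not anything in the text; I simply reuse Quine’s solubility example quoted earlier.

```python
# Toy Stalnaker-style evaluation of "if A were the case, C would be":
# select the closest antecedent-world and check the consequent there.
# The encoding of worlds and the remoteness ranking are illustrative assumptions.

def would(antecedent, consequent, worlds, remoteness):
    candidates = [w for w in worlds if antecedent(w)]
    if not candidates:
        return True  # vacuously true if no antecedent-world is available
    closest = min(candidates, key=remoteness)
    return consequent(closest)

# Worlds as dicts of facts; "miracle" marks a violation of physical law.
worlds = [
    {"in_water": False, "dissolves": False, "miracle": False},  # the actual world
    {"in_water": True,  "dissolves": True,  "miracle": False},
    {"in_water": True,  "dissolves": False, "miracle": True},
]

# Miraculous worlds are ranked as more remote than non-miraculous ones,
# and any non-actual world as more remote than the actual one.
def remoteness(w):
    return (w["miracle"], w != worlds[0])

# "If the object were in water, it would dissolve" comes out true: the closest
# in-water world is the law-abiding one, and there the object dissolves.
print(would(lambda w: w["in_water"], lambda w: w["dissolves"], worlds, remoteness))  # True
```

The point of the ranking is exactly the one in the text: the miraculous in-water world, where the object fails to dissolve, is passed over as more remote.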
Thus there is a fair amount of work that has been – or can be construed as – exploration of the formal differences between the two kinds of modality. 41
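The S5 collapse of iterated modalities mentioned above can be verified mechanically on a toy Kripke model. The evaluator below is a minimal sketch of standard possible-worlds semantics in an encoding of my own choosing (not the author’s): a formula is represented by its truth set, and an S5 frame is one whose accessibility relation is an equivalence relation.

```python
# Evaluate box ("necessarily") and diamond ("possibly") on a Kripke frame,
# representing a formula by the set of worlds at which it holds.
# The particular frames and valuation below are illustrative choices.

def box(worlds, access, truth_set):
    # Necessarily-p holds at w iff p holds at every world accessible from w.
    return {w for w in worlds if all(v in truth_set for v in access[w])}

def diamond(worlds, access, truth_set):
    # Possibly-p holds at w iff p holds at some world accessible from w.
    return {w for w in worlds if any(v in truth_set for v in access[w])}

worlds = {0, 1, 2}
p = {0, 1}  # worlds where p is true

# S5 frame: accessibility is an equivalence relation (here, the total relation).
s5 = {w: worlds for w in worlds}
assert diamond(worlds, s5, diamond(worlds, s5, p)) == diamond(worlds, s5, p)
assert box(worlds, s5, box(worlds, s5, p)) == box(worlds, s5, p)

# A non-S5 frame (neither symmetric nor reflexive throughout): no collapse.
k = {0: {1}, 1: {2}, 2: {2}}
assert diamond(worlds, k, diamond(worlds, k, p)) != diamond(worlds, k, p)
```

The assertions confirm that on the S5 frame ‘‘possibly possible’’ and ‘‘possible’’ (and ‘‘necessarily necessary’’ and ‘‘necessary’’) have the same truth set, while on the second frame they come apart.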
In Cresswell (1990). Cresswell also cites several expressions of the same or related views from the earlier literature, and such citations could in a sense be carried all the way back (Lewis 1970).
This is the earliest relevant publication known to me, but its author has suggested that there was very early unpublished work on the topic by A. P. Hazen and by D. Kaplan. The parallel phenomenon
for tense in place of mood was noted even earlier by P. Geach. See Prior (1967b, chapter VII), and among later work R. H. Thomason (1984). The purely modal part is also S5 for virtually all the
workers there cited, as well as later ones like A. Zanardo. Stalnaker (1968). This feature becomes even more prominent in later work on the same topic by D. K. Lewis and others.
As apparent formal differences accumulate, the situation comes to look like this: there is one philosophically coherent enterprise of logical modal logic, attempting to treat in the object language
what classical logic treats in the metalanguage; there is another philosophically coherent enterprise of metaphysical modal logic, attempting to do for grammatical mood something like what temporal
logic does for grammatical tense; there is a mathematically coherent field of non-classical logics dealing with technical questions about both these plus intuitionistic, temporal, and other logics;
but there is no coherent field broad enough to include both kinds of ‘‘modal logic,’’ but still narrower than non-classical logic as a whole. In this sense, there is no coherent enterprise of ‘‘modal
logic’’ – a conclusion that may be called Quinesque.
3.4 New alternatives in the theory of reference for names
A few words may also be in order about post-Quinean work on theories of reference for names that reject both the Fregean descriptive mediation and the neo-Russellian absolute immediacy views. The locus classicus for such an alternative is of course again ‘‘Naming and necessity.’’ One
can perhaps best begin to bring out how the new theory of that work relates to the old theory of Quine’s opponents by considering what similarities and differences are emphasized in the only early
extended response to the new theory by the one former adherent of the old theory who remained living and active in the field through the 1970s and 1980s and beyond.44 First, the one area of real
agreement between the new theory and the old is emphasized, that both are ‘‘direct’’ (in the minimal sense of ‘‘anti-Fregean’’) theories; and the new theory is praised for providing additional
arguments: Kripke’s criticism of the ‘‘Frege–Russell’’ view . . . is presented . . . Among the arguments he musters are that competent speakers communicate about individuals, using their names,
without knowing or being able to produce any uniquely identifying conditions short of circular ones . . . Unlike descriptions, proper names are indifferent to scope in modal (‘‘metaphysical’’)
contexts . . . Contra Frege he points up the absurdity of claiming that counterfactuals force a shift in the reference of a name.
Second, another area of apparent agreement, over the ‘‘necessity of identity’’ (in some sense), is also emphasized, with the new theory again being praised for providing additional arguments: 44
Unfortunately this comes in the form of a review of a book by a third party, and is subject to the limitations of such a form. The third party is Leonard Linsky; the book is Linsky (1977); the review
is Marcus (1978). The three quotations to follow come from pp. 498, 501, and 502–3.
It is one of the achievements of Kripke’s account, with its effective use of the theory of descriptions, the theory of proper names, the distinction between metaphysical and epistemological
modalities (for example, necessary vs. a priori), that it provides us with a more coherent and satisfactory analysis of statements which appear to assert contingent identities.
Third, the contribution most praised is the provision of a novel account of the mechanism by which a name achieves reference to its bearer: Kripke provided us with a ‘‘picture’’ which is far more
coherent than what had been available. It preserves the crucial differences between names and descriptions implicit in the theory of descriptions. By distinguishing between fixing the meaning and
fixing the reference, between rigid and nonrigid designators, many nagging puzzles find a solution. The causal or chain of communications theory of names (imperfect and rudimentary as it is) provides
a plausible genetic account of how ordinary proper names can acquire unmediated referential use.
All this amounts to something approaching an adequate acknowledgment of substantial additions by the new theory to the old, but what needs to be understood is that the new theory in fact proposes
substantial amendments also. The new theory is not ‘‘direct’’ in anywhere near as extreme a sense as the old. On the new theory, which is a ‘‘third alternative,’’ the reference of a name to its
bearer is neither descriptively mediated nor absolutely immediate, but rather is historically mediated, accomplished through a chain of usage leading back from present speakers to the original
bestower of the name. Also the new theory does not endorse the ‘‘necessity of identity’’ in anything like so broad a sense as does the old theory, or on anything like the same grounds. On the new
theory, ‘‘Hesperus is Phosphorus’’ is only subjunctively or metaphysically necessary – not strictly or logically necessary like ‘‘Phosphorus is Phosphorus.’’ And moreover the metaphysical necessity
of identity is the conclusion of a separate argument involving considerations peculiar to subjunctive contexts, about crosscomparison between actual and counterfactual situations – not an immediate
corollary or special case of some general principle of the intersubstitutability of coreferential names in all (except meta-linguistic) contexts.45 The gap between the old, neo-Russellian theory and the new, anti-Russellian theory is large enough to have left space for the development 45
In this connection mention may be made of one serious historical inaccuracy – of a kind extremely common when authors quote themselves from memory decades after the fact – to be found in the book
review, where it is said that the compendium maintained ‘‘that unlike different but coreferential descriptions, two proper names of the same object were intersubstitutable in modal contexts’’ (p.
502). In actual fact, in the compendium it is repeatedly asserted that two proper names of the same object are intersubstitutable in all contexts.
of several even newer fourth and fifth alternatives, semi- or demi-semi- or hemi-demi-semi-Russellian intermediate views, of which the best known is perhaps Nathan Salmon’s.46 These differ from the
Kripkean, anti-Russellian theory in that they want to say that in some sense ‘‘Hesperus is Phosphorus’’ and ‘‘Phosphorus is Phosphorus’’ have the same ‘‘semantic content.’’47 They differ from the
Smullyanite, neo-Russellian theory in that there is full awareness that in some sense assertive utterance of ‘‘Hesperus is Phosphorus’’ can make a difference to the ‘‘epistemic state’’ of the hearer
in a way that assertive utterance of ‘‘Phosphorus is Phosphorus’’ cannot. How it could be that utterances expressing the same semantic content have such different potential effects on epistemic
states is in a sense the main problem addressed by such theories. My concern here is not to offer any evaluation, or even any exposition, of the solutions proposed, but only to point out that they
all operate in the space between Fregeanism and neo-Russellianism – and therefore in a space of whose existence Quine was one of the first to hint.
3.5 Have the lessons been learned?
It would be
absurd to claim that Quine anticipated all the many important developments in modal logic or the theory of reference to which I have been alluding. But it is not absurd to suggest that some of them
might have been arrived at sooner if the reaction to Quine’s critique had been more attentive. Is the matter of more than antiquarian interest today? Well, certainly there are many workers in
philosophical logic and philosophy of language (only a few of whom I have had occasion to mention) who have long since fully absorbed every lesson there was to be learned from Quine. And yet,
scanning the literature, it seems to me that specialists in the relevant areas do not always clearly express these lessons in their writings, and that (surely partly in consequence) many
non-specialists interested in applying theories of modality or reference to other areas have not yet fully learned these lessons. Take modal logic first. It is said that when Cauchy lectured on the
distinction between convergent and divergent series at the Académie des Sciences, Laplace rushed home to check the series in his Mécanique Céleste.
46 Salmon (1986). While the early Marcus followed Smullyan, the later Marcus has developed in response to Kripke an idiosyncratic theory that may be described as intermediate in degree of Russellianism
between Salmon’s and Smullyan’s. See Marcus (1990). For Kripke’s rejection of this view, see the closing paragraphs of the preface to Kripke (1980).
Mathematics, Models, and Modality
But when Kripke lectured on the distinction between logical and metaphysical modality, modal logicians did not rush home to check which conclusions hold for the one, which conclusions hold for the
other, and which result from a fallacious conflation of the two. It is a striking fact that the basic article – an article written by two very eminent authorities – on modal logic in that standard
reference work, the multi-volume encyclopedia mistitled a ‘‘handbook’’ of philosophical logic, makes no mention at all of any such distinction and its conceivable relevance to choosing among the
plethora of competing modal systems surveyed.48 No wonder then that workers from other areas interested in applying modal logic seem often not fully informed about formal differences between the two
kinds of modality. To cite only the example I know best, consider philosophy of mathematics, and debates over nominalist attempts to provide a modal reinterpretation of applied mathematics, where
quantification into modal contexts is unavoidable. Those on the nominalist side have quite often supposed that they could get away with quantifying into contexts of logical modality, while those on
the anti-nominalist side have quite often supposed that anyone wishing to make use of modality must stick to the traditional formal systems, which do not allow for cross-comparison. Both suppositions
are in error.49 Take the theory of reference now. Here a great many people seem to have difficulty discerning the important differences among distinct anti-Fregean theories. To mention again the
example I know best, many nominalists seem to think that the work of Kripke, David Kaplan, Hilary Putnam, and others has established something implying that it is impossible to make reference to
mathematical or other abstract, causally inert objects.50 Such misunderstandings are encouraged by the common sloppy use by specialists of ambiguous labels like ‘‘causal theory of reference’’; and
even those who carefully avoid ‘‘causal theory’’ in favor of ‘‘direct theory’’ are often sloppy in their usage of the latter, encouraging other confusions. Of late, not only has the confused opinion
become quite common that Quine's critique has somehow been answered by the new theory of names (the one coming from ''Naming and necessity''); but so has the even more confused opinion that Quine's critique was already answered by an old theory of names (the one coming from Russell through Smullyan to the compendium); and so too has the most confused opinion of all, that there is no important difference between the old and new theories.
48 Bull and Segerberg (1984). Other articles in the same work, some of which I have already cited, do recognize the importance of the distinction.
49 It would be out of place to enter into technicalities here. See Burgess and Rosen (1997).
50 In actual fact, on Kripke's theory, for instance, a name can be given to any object that can be described, not excluding mathematical objects. But again see Burgess and Rosen (1997). (The theory of P. Geach probably deserves and the theory of M. Devitt certainly deserves the label ''causal,'' and does have nominalistic implications.)
Confusion of this kind is found both
among those who think of themselves as sympathizers with ‘‘the’’ theory in question,51 and among those who think of themselves as opponents of ‘‘it.’’52 The latter cite weaknesses of the old theory
as if pointing them out could refute the new theory – a striking example of how confusion over history of philosophy can lead to confusion in philosophy proper. There is hardly a better way to sort
out such confusions than by considering the relations of the old and the new theory to Quine’s critique, from which therefore some people still have something to learn. Neither the old theory nor the
new provides a refutation of that critique, but the reasons why are radically different in the two cases. The old theory attempted to refute that critique, but in doing so it arrived at consequences,
notably the one made explicit in the ‘‘lexicon’’ passage quoted earlier, that reduced the theory to absurdity. Quine’s rebuttal, pointing out the untenability of these consequences, refuted the old
theory. Quine’s critique does not refute the new theory, but then neither does the new theory refute Quine’s critique, nor does it even attempt to do so. The new theory would refute any incautious
claim to the effect that ‘‘quantification into any intensional context is meaningless,’’ since it shows that proper names have all the properties required of canonical terms for contexts of
subjunctive modality. But Quine's critique was addressed to strict modality, and as for that, the main creator of the new theory of names has said as I do:53 ''Quine is right.''
51 For a comparatively moderate instance see the review Lavine (1995).
52 For an extreme instance see Hintikka and Sandu (1995). This work acknowledges no important differences among: (i) the neo-Russellian theory of Smullyan as expounded by the early Marcus (which incidentally is erroneously attributed to Marcus as something original, ignoring the real authors Smullyan and Russell); (ii) theories adopted in reaction to Kripke by the later Marcus; and (iii) the theory of Kripke.
53 In context, what is said to be right is specifically the rebuttal to Smullyanism on names quoted earlier. See Kripke (1972, p. 305).
Translating names
Mill taught that the signification of a word has in general two components, denotation and connotation, but that in the special case of a proper name there is no connotation, and the signification of
the word is just its denotation. According as ‘‘meaning’’ is aligned with ‘‘connotation’’ or with ‘‘signification,’’ this doctrine comes out as ‘‘A proper name has no meaning’’ or as ‘‘The meaning of
a proper name is just its denotation.'' Today ''Millianism'' is most often used as a label for the latter version:
(1) The meaning of a name is its denotation.
An immediate consequence of (1) is the following:
(2) Two names with the same denotation have the same meaning.
An immediate objection to (2) is that different names for the same item may be distinguished in level (formal, familiar). Such features are very important for usage. (Imagine what a diplomatic
contretemps would result if President Chirac were to write President Bush a letter beginning ‘‘Yo, Dubya!’’) And with words that (unlike proper names) appear in dictionaries, such features are
commonly noted in their definitions, presumably as part of the meaning of the word. It is therefore, the objector claims, reasonable to take them to be part of the meaning of a name as well. To this
objection a Millian may reply by insisting that level is a feature of usage but not of meaning. This reply illustrates in miniature the fact that in this area there are no conclusive proofs or
refutations, but in the end only judgments as to the overall plausibility of proposals as to where to draw the line between semantics and pragmatics. Alternatively, a Millian might concede that (1)
and (2) do indeed need to be qualified, while insisting that the objection does not touch what is really important in Millianism, of which the unqualified (1) and (2) are inadequate formulations.
This
alternative illustrates the fact that it is difficult to find formulations agreeable to all parties of the main theses in dispute. Here I will for brevity let the simple (1) and (2) stand as
formulations of Millianism. (Features of level have been mentioned only to illustrate the two facts about the nature of the debate just indicated, and otherwise will play no role in the discussion
below.) I will likewise stick with fairly simple formulations of two other theses, one common to most Millians and anti-Millians alike, the other a premise of one popular form of anti-Millianism. The
common thesis is compositionality, thus:
(3) Short, simple sentences differing only by substituting one word for another with the same meaning have the same meaning.
Then (2) and (3) together at once give the following:
(4) Short, simple sentences differing only by substituting one name for another with the same denotation have the same meaning.
The anti-Millian thesis is so-called transparency, thus:
(5) When short, simple sentences have the same meaning, a subject's asserting or assenting to one and denying or dissenting from the other is an indication of a deficiency in the subject's linguistic knowledge.
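The inference from (2) and (3) to (4), stated above as immediate, can be spelled out step by step (a schematic reconstruction, not part of the original text; den and mng abbreviate denotation and meaning, and S(a) is a short, simple sentence containing the name a):

```latex
\begin{align*}
\mathrm{den}(a) &= \mathrm{den}(b) && \text{supposition}\\
\mathrm{mng}(a) &= \mathrm{mng}(b) && \text{by (2)}\\
\mathrm{mng}(S(a)) &= \mathrm{mng}(S(b)) && \text{by (3) -- which is thesis (4).}
\end{align*}
```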
All examples below will involve short, simple sentences (usually three-word sentences of name–copula–adjective form). In particular, none will involve such complications as embedding in ‘‘So-and-so
believes that . . .'' contexts. The anti-Millian argument I wish to consider produces certain examples, and appeals to pre-theoretic intuitions to support the following claim about them:
(6) There are cases of two short, simple sentences that differ only by substitution of one name for another having the same denotation, where a subject may assert or assent to one sentence and deny or dissent from the other sentence, not for lack of linguistic knowledge, but rather for lack of some other kind of knowledge.
The other knowledge in the best-known examples is biographical or astronomical or geographical or historical (counting etymological knowledge, which is not needed for fluency in the present form of a
language, as historical rather than linguistic). While the examples most often cited pertain to persons (Marcus Tullius Cicero) and planets (Venus), there are also examples about cities. Thus Jones
may while doing some tourism have cruised by moonlight past the
famous skyline of the former Ottoman capital, and may while negotiating a business deal have paid a rapid visit to an undistinguished commercial district on the outskirts of the same city. If the
place was consistently called ‘‘Byzantium’’ by the tour guides and ‘‘Istanbul’’ by the business people, Jones may end up asserting ‘‘Byzantium is sublime’’ and ‘‘Istanbul is tacky’’ while denying
‘‘Byzantium is tacky’’ and ‘‘Istanbul is sublime.’’ The antiMillian’s intuition is that this shows Jones to be a geographical ignoramus, but does not show Jones to be a linguistic incompetent like
the prizefighting fan Pete, who applauds whenever someone calls Muhammed Ali ‘‘a great boxer’’ but would take offense if anyone called him ‘‘a great pugilist.’’ This particular case involves
binomialism of the first (Hesperus/Phosphorus) kind, where an item has two different names tracing back to two different acts of naming (by Megarian colonists in the seventh century BCE, and by the Atatürk government in the 1930s). But similar examples can arise with binomialism of the second (Cicero/Tully) kind, where two names trace back to the same original act of naming along divergent
paths of transmission. A tourist trip to ‘‘Peking’’ may be confined mainly to the Forbidden City, while a business meeting in ‘‘Beijing’’ may be held in a quarter built in the 1950s in high Stalinist
style.
2
Now (4) and (5) and (6) cannot all be true, and an accumulation of examples makes (6) hard to deny. The anti-Millian, assuming (5), concludes that the Millian thesis (4) is false. A militant Millian
may simply insist that since (4) is true, (5) must be false. But many Millians would, I think, prefer to find some independent argument against (5), not relying on (4), and one place where they have
looked for the materials to construct such an argument has been in Saul Kripke’s notorious ‘‘A puzzle about belief’’ (Kripke 1979). That paper contains two examples that, however the author intended
them, anti-anti-Millians might appropriate and exploit for purposes of arguing against (5), and it is the anti-anti-Millian appropriation and exploitation of the first of these, the Pierre example,
that I wish to consider here. The example involves the translation of proper names, and so I should acknowledge at the outset the fact that in common parlance one seldom speaks of ‘‘translating’’
proper names at all. In the broad sense used here, whatever expression is used in a translation of a sentence at the place corresponding to the place where a name is used in the sentence being
translated may be called a ‘‘translation’’ of that name. But it must be acknowledged that most proper names simply do not have non-trivial translations: typically a name is not replaced by something
else when translating a sentence in which it occurs, but simply taken over for use in the language into which the sentence is being translated, as a so-called exonym. It would be an absurd
affectation for a native English speaker, describing to other native English speakers a recent trip to Italy, to speak of having been to ‘‘Roma’’ and ‘‘Napoli’’ and ‘‘Firenze.’’ For the famous
tourist destinations of Rome and Naples and Florence are among the minority of Italian cities whose names do have non-trivial English translations, or ‘‘Anglicizations’’ as they are more ordinarily
called. But if I wish to mention some less famous Italian place, such as Princeton’s sister city Pettoranello, there is no alternative to using the Italian name. More precisely, one uses in speech as
close an approximation to the Italian name as one can manage, given the phonetic differences between Italian and English. In writing, one can just use in English the original name if translating from
a language written in the Roman alphabet, perhaps modulo some diacritical marks and ligatures. (Thus in the days of old-fashioned manual typewriters, ''Gödel'' was often written ''Goedel,'' and even
today ‘‘Gauß’’ is still written ‘‘Gauss.’’) With languages written in another alphabet one uses what is ordinarily called a ‘‘transliteration,’’ and with languages not written alphabetically a
‘‘transcription.’’ The Pierre example involves ‘‘London’’ as the name of the great city in England, which unlike ‘‘Liverpool’’ or ‘‘London’’ as the name of the town in Ontario does have a non-trivial
French translation, ‘‘Londres.’’ The most direct way to turn the Pierre example into an argument against (5) begins with the following comparatively uncontroversial assumptions: (7a) ‘‘Londres’’ is
the correct French translation of the English ‘‘London.’’ (7b) ‘‘est joli[e]’’ is the correct French translation of the English ‘‘is pretty.’’ (7) ‘‘Londres est joli[e]’’ is the correct French
translation of the English ‘‘London is pretty.’’1
The next step would be a comparatively uncontroversial inference from (7) to the following:
Should it be ‘‘Londres est joli’’ or ‘‘Londres est jolie’’? Having consulted with quite a number of informants, French and Que´becois, I can report that native Francophones themselves are undecided
as to the genders of most city names, ‘‘Londres’’ included.
Mathematics, Models, and Modality
(8) The meaning of ‘‘Londres est joli[e]’’ (in French) and the meaning of ‘‘London is pretty’’ (in English) are the same.
Kripke then describes the case of one Pierre, of whom he claims the following:
(9) Pierre assents to ''Londres est joli[e]'' and dissents from ''London is pretty.''
(10a) There is no deficiency in Pierre's knowledge of French.
(10b) There is no deficiency in Pierre's knowledge of English.
(10) There is no relevant deficiency in Pierre's linguistic knowledge.
Taken together, (8) and (9) and (10) give a counterexample to (5), which may be claimed to neutralize the anti-Millian force of the Byzantium/Istanbul and Peking/Beijing examples, and of the better-known Cicero/Tully and Hesperus/Phosphorus examples.
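Schematically, the dialectic just described can be summarized as follows (a reconstruction for orientation, not part of the original text):

```latex
\begin{align*}
&\text{anti-Millian argument:} && (5) \wedge (6) \;\Rightarrow\; \neg(4),\ \text{and hence}\ \neg(2);\\
&\text{anti-anti-Millian reply:} && (8) \wedge (9) \wedge (10) \;\Rightarrow\; \neg(5).
\end{align*}
```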
I do not wish to question here the truth of (7), or the cogency of the inference thence to (8). Moreover, in the example as Kripke describes it, (9) seems indisputable, too. I think, however, that
doubts may be raised about (10), and in order to raise them I wish to question not the truth of but the grounds for (7a). So let me begin by mentioning a way to argue for the correctness of
translating ‘‘London’’ as ‘‘Londres’’ that would be wholly inappropriate in the present context. The argument I have in mind takes as its initial premise the one undisputed fact in this area: (11)
‘‘Londres’’ (in French) and ‘‘London’’ (in English) denote the same place.
It then proceeds to this intermediate step: (12) ‘‘Londres’’ (in French) and ‘‘London’’ (in English) have the same meaning.
And it then proceeds to the final conclusion (7a). Such an argument would be inappropriate in the present dialectical context, because we are supposed to be looking for an anti-anti-Millian argument
independent of distinctively Millian assumptions, while it is precisely the distinctively Millian thesis (2) that would be needed to get from (11) to (12). Questions of dialectic aside, it seems
clear that while sameness of denotation is a necessary condition, it is simply not a sufficient condition for correctness of translation of place names. If some Francophone counterpart of Jones
exclaims, ‘‘Que Byzance est belle!’’ and ‘‘Que Stamboul est laid!’’ he surely has to be translated as saying ‘‘How beautiful Byzantium is!’’
and ''How ugly Istanbul is!'' and not ''How beautiful Istanbul is!'' and ''How ugly Byzantium is!'' Nor is this just because the person whom we are translating is confused about the identity of the city. ''Staline restait à Moscou'' and ''Djougachvili se cachait'' surely have to be translated ''Stalin remained in Moscow'' and ''Djugashvili was hiding,'' not ''Djugashvili remained in Moscow''
and ‘‘Stalin was hiding,’’ even if the translation is from the writings of Trotsky and is being prepared for a groupuscule of Trotskyites all thoroughly aware that Stalin/Staline and Djugashvili/
Djougachvili are one and the same person. These examples may make it look as if an etymological connection and/ or a resultant phonetic or orthographic relationship were the key. But such links are
neither necessary nor sufficient for correct translation. Some of the most ancient and famous countries – Egypt, India, Greece – have long been known to most of the outside world by names having
nothing to do with their native names. Any one of these cases shows etymological and phonetic and orthographic links are not necessary, as would the case of ‘‘Deutschland’’ aka ‘‘Germany’’ aka
‘‘Allemagne.’’ The Greek case shows that they are not sufficient, either. For there actually exists in English a name for the country in question, the name ‘‘Hellas,’’ that is etymologically
connected and phonetically and orthographically related to the native name; but nonetheless it is the unconnected, unrelated ‘‘Greece’’ that is the correct translation of the native name in all
common, prosaic contexts. (Use of ‘‘Hellas’’ and ‘‘Hellenic’’ is appropriate only in the kind of contexts where ‘‘Hibernia’’ and ‘‘Hibernian’’ might be used instead of ‘‘Ireland’’ and ‘‘Irish.’’) But
if not merely denotation, or that plus etymology, what does make a given translation of a given name correct or incorrect? A clue is given, I think, by the case of the translation into western
European languages – more usually called the ‘‘Romanization’’ – of Chinese proper names. Historically, approximate phonetic transcriptions were used. But since the differences between the principal
Chinese ‘‘dialects’’ are so great, and the different European languages are so many, that practice led to chaos, and hence to a demand for a fixed system. Unfortunately not one but two systems were
fixed (both based on Mandarin pronunciation on the Chinese side, but who knows what on the European side), one by European scholars and another by the Chinese government. As a result, most important
Chinese geographical places and historical personages can be translated in either of two ways into English or French. And often, as with ''Hsüantsang'' aka ''Xuánzàng'' (the historical personage
on whom the character Tripitaka in A Journey to the West is based), it is difficult for the average
Anglophone or Francophone reader to trace any link between the Wade–Giles and Pinyin versions. But at least the situation is better than it was when this same personage was also being called by the
names ‘‘Hiouentang’’ and ‘‘Yuan Chwang’’ and half a dozen others. While the case of the Romanization of Chinese, where conventions were consciously and deliberately and explicitly adopted (and where
there are two rival sets of conventions), is unusual, I take it to illustrate a more general principle. The general principle is that what counts as the (or a) correct translation of a name from one
language to another is determined by the conventions and customs of the community of bilinguals and specifically of translators. Needless to say, those conventions and customs are constrained by the
requirement that the name and its translation must have the same denotation, and may be influenced, though not determined, by etymological or phonetic or orthographic considerations. In the case of
second-, third-, and fourth-hand borrowings, as when, say, an Arabic or Hebrew place-name migrates via Greek and Latin and French to English, suffering distortions at each step, the correct
formulation of the general principle would have to be more complicated. (The case where there is no link of any kind, even through a chain of intermediaries, need not be considered, since in this
case all names will initially be like ‘‘Pettoranello’’ in lacking nontrivial translations.) But leaving aside details, if anything like this principle is granted, there would seem to follow another
important principle, to the effect that bilingual competence involves something more than just competence in each of the two languages separately, something that can only be acquired by contact with
members of the bilingual community, direct or through their writings. Now to return to Pierre, the general principle just formulated raises doubts whether (10a) and (10b) are enough to imply (10).
Kripke describes Pierre as lacking even the minimal kind of contact with the bilingual community he could acquire by looking into a French–English dictionary. Supposing Pierre keeps a diary in his
native French of his experiences in the English city where he finds himself, he will write for the name of that city ‘‘London’’ if he has seen the local name written, or if he has only heard it
spoken, something on the order of ‘‘Lonnedonne.’’ Either way, a single glance at his diary would suffice to show any bilingual that Pierre lacks the knowledge, which a French–English dictionary could
provide him with, that ‘‘London’’ is one of the small minority of English place-names that has a non-trivial French translation, and that this translation is ‘‘Londres.’’ I wish to suggest that what
this shows is that Pierre is lacking bilingual competence, and that this is a lack of a kind of linguistic knowledge,
making (10) false. Thus at bottom, I suggest, Pierre’s state is not after all so very different from that of Ali’s linguistically challenged fan Pete.
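The general principle invoked above – that correct translation of names is fixed by the conventions of the bilingual community, keyed to the name itself rather than to its denotation – can be caricatured in a few lines of code. This is purely an illustrative sketch; the lookup table is a hypothetical stand-in for the community's conventions, populated with examples from the text:

```python
# Illustrative sketch only: translation of proper names modeled as a
# conventional lookup keyed by the name itself, never by its denotation.
# The entries are hypothetical stand-ins for the conventions of the
# French-English bilingual community, using examples from the text.
FRENCH_TO_ENGLISH = {
    "Londres": "London",
    "Byzance": "Byzantium",
    "Stamboul": "Istanbul",
    "Staline": "Stalin",
    "Djougachvili": "Djugashvili",
}

def translate_name(name: str) -> str:
    # Most names, like "Pettoranello", have only the trivial translation:
    # they are simply taken over unchanged into the other language.
    return FRENCH_TO_ENGLISH.get(name, name)

# Sameness of denotation is never consulted: "Byzance" and "Stamboul"
# denote the same city yet receive different translations.
print(translate_name("Byzance"))       # Byzantium
print(translate_name("Stamboul"))      # Istanbul
print(translate_name("Pettoranello"))  # Pettoranello
```

A bilingual dictionary plays exactly the role of such a table, which is why Pierre's never having looked into one marks a gap in his bilingual competence rather than in his French or his English taken separately.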
As we have seen, many examples point to the following principle:
(13) Sameness of denotation of a name in one language with a name in another language is not in general sufficient to make the latter a correct translation of the former.
Many examples also indicate that sameness of denotation of a name in one language and a description in another language are not in general sufficient to make the latter a correct translation of the
former. Indeed, something much stronger holds:
(14) A description is never a correct translation of a proper name.
And it should be emphasized that (14) applies to metalinguistic descriptions every bit as much as to any others. It is always wrong to translate ‘‘Voltaire’’ in a French text as ‘‘the person called
‘Voltaire’’’ or ‘‘the person who called himself ‘Voltaire’,’’ and this is so even though ‘‘Voltaire’’ was only a nom de plume. The observations (13) and (14) suggest that Millianism and descriptivism
are both alike false. Needless to say, these observations do not prove any such conclusions, for the reason I stated at the outset: in any case of apparent counterexample, the militant Millian and
the die-hard descriptivist can simply suggest that whatever phenomena are cited by their critics pertain to pragmatics rather than semantics. In the present case, the Millian or descriptivist alike
could simply claim that sameness of meaning is insufficient to make for correct translation. Yet, to repeat, considerations of translation, while not refuting the widespread assumption that
Millianism and descriptivism are the only alternatives, do suggest doubts about it. For if one thinks of meaning as that which translation aims to preserve, then a third alternative suggests itself.
This third alternative is that different names for the same item have different meanings, even though there is nothing to be said to an English speaker about what ‘‘London’’ (in English) means,
except that it means London – nothing to be said about what ‘‘London’’ (in English) means that might be informative in the way that it might be informative to tell Pete ‘‘‘pugilist’ means boxer,’’ or
for that matter to tell Pierre ‘‘‘London’ (en anglais) veut dire Londres.’’
Kripke says that Frege is to be criticized for confusing two senses of ‘‘sense.’’ If we ignore Frege’s concrete examples, where the senses of names seem to be said to be those of certain associated
descriptions, and consider only Frege’s abstract formulations, according to which the sense of a name is its mode of presenting its bearer, then the view that different names with the same denotation
have different senses certainly seems correct. Presenting a fugitive novelist as ''John Doe'' and presenting him as ''Salman Rushdie'' are certainly, in any ordinary sense of the phrase ''way of
presenting,’’ two very different ways of presenting him – with perhaps all the difference between life and death between them. The view that different names for the same person or place may have
different meanings, even though no name has the same meaning as any description, may thus be restated as the view that different names have different senses in one sense of ‘‘senses’’ (modes of
presentation), though they have no senses at all in another sense of ‘‘senses’’ (associated descriptions). Mill’s error lies, on this way of putting the matter, not in his claiming that names have no
connotation (in the sense of associated descriptions), but rather in his assuming there is nothing more to signification than denotation plus connotation; but this is an error of which descriptivists
are guilty as well. In addition to what is signified, there is the way it is signified, which in the case of names is not by connoting some description of it. However the view is stated, it is one
that gains some support from consideration of translation, and one that deserves more attention than it has heretofore been given. It, too, should be taken into account in considering other puzzling
cases beyond the kind I have been discussing here. Indeed, the real aim of the present note has been not to solve the puzzle of Pierre, but to call attention to this comparatively neglected view.
5
About the Pierre puzzle there remains one further point to be acknowledged. Suppose Pierre were informed that the French translation of ‘‘London’’ is ‘‘Londres.’’ This might well clear up his
confusion, but then again it might not. For Pierre might conclude that there are two homonymous names ‘‘Londres’’ in French, one denoting a pretty place he saw in pictures back in France and one the
ugly place where he lives now that he has come to England, just as there are two homonymous names ‘‘Bretagne,’’ one denoting a peninsula that is part of France, and one denoting a large island across
the Channel from France. There is this difference, that in the case of ‘‘Bretagne’’ there are two different names in
English, ‘‘Brittany’’ and ‘‘Britain,’’ while in the case of ‘‘Londres’’ there is (so far as Pierre knows) in English just ‘‘London.’’ Pierre may be somewhat puzzled why the English have not had the
wit to add something to one or both of the names, as the French distinguish their peninsula ‘‘Bretagne’’ from the island ‘‘Grande Bretagne’’ or the German town ‘‘Aix-la-Chapelle’’ from the French
town ‘‘Aix-en-Provence.’’ But perhaps after his ugly experiences living in London he expects no better of the English. It is clear that if Pierre were to fall into the kind of confusion I have just
been describing, then the Pierre example would reduce to a variant of Kripke’s other example, the Paderewski example. But this example raises a quite different set of issues from the issues about
translation with which I have been concerned here, and must be left for another occasion.
Relevance: a fallacy?
Responding to Harvey’s theories about the circulation of the blood, Dr. Diafoirus argues (a) that no such theory was taught by Galen, and (b) that Harvey is not licensed to practice medicine in
Paris. Plainly there is something wrong with a response of this sort, however effective it may prove to be in swaying an audience. For either or both of (a) and (b) might well be true without
Harvey’s theory being false. So Diafoirus’s argument can serve only to divert discussion from the real question to irrelevant sideissues. The traditional term for such diversionary debating tactics
is ‘‘fallacy of relevance.’’ In recent years this tradition has come to be used in a quite untraditional sense among followers of N. D. Belnap, Jr., and the late A. R. Anderson. (All citations of
these authors are from their masterwork Anderson and Belnap (1975), and are identified by page number.) According to these selfstyled ‘‘relevant logicians,’’ it is items (IA) and (IIA) in Table 13.1
that constitute the archetypal ‘‘fallacies of relevance.’’ (In the table ¬, &, and ∨ stand for truth-functional negation, conjunction, and disjunction, respectively.) These forms of argument, say
Anderson and Belnap, are ‘‘simple inferential mistake[s], such as only a dog would make’’ (p. 165). The authors can hardly find terms harsh enough for those who accept these schemata: they are called
‘‘perverse’’ (p. 5) and ‘‘psychotic’’ (p. 417). Needless to say, (IA) and (IIA), which can be traced back at least to Chrysippus, were not traditionally regarded as fallacious. The Anderson–Belnap
notion of ‘‘relevance,’’ whatever it may amount to, must be something quite different from the traditional notion, which ‘‘was central to logic from the time of Aristotle’’ (p. xxi). And yet the
authors declare their so-called ‘‘relevant logic’’ to be a commonsense philosophy, in accord with the intuitions of ‘‘naive freshmen’’ (p. 13) and others who have not been ‘‘numbed’’ (p. 166) by a
course in classical logic. Moreover, whereas other
Table 13.1

(I)   p or q; not p; therefore q
(IA)  p ∨ q; ¬p; therefore q
(IB)  p + q; ¬p; therefore q
(II)  not both p and q; p; therefore not q
(IIA) ¬(p & q); p; therefore ¬q
(IIB) ¬(p ∘ q); p; therefore ¬q
dissident logicians (e.g. intuitionists) hold that some forms of argument always accepted and used without question by mathematicians in their proofs are in fact untrustworthy, Anderson and Belnap
are at some pains to explain (pp. 17–18 and 261–2) that their brand of non-classical logic does not conflict with the practice of mathematicians, but only with the classical logician’s account of
that practice. In view of the fact that everyday arguments and mathematical proofs abound in instances of (I) and (II), one may wonder how Anderson and Belnap could hope to reconcile their objections
to (IA) and (IIA) with the claim that their ‘‘relevant logic’’ is compatible with commonsense and accepted mathematical practice. The answer is that the authors believe that ordinary-language
argument patterns (I) and (II) should be represented as expressions of the ‘‘intensional’’ schemata (IB) and (IIB), which are relevantistically acceptable, and not of the ‘‘extensional’’ schemata
(IA) and (IIA), which relevantists reject. The compound p + q in (IB) is supposed to be an ‘‘intensional disjunction’’ stronger than the truth-functional p ∨ q in that mutual relevance of p and q is required for its truth. This p + q is not entailed by p, nor even by p & q, since it might be false even though p and q were both true (or even necessary). This happens in the case of irrelevant pairs such as p = ‘‘Bach wrote the Coffee Cantata’’ and q = ‘‘The Van Allen belt is doughnut-shaped’’ (p. 30). Dually, the p ∘ q of (IIB) is an ‘‘intensional conjunction,’’ better called ‘‘cotenability’’ or ‘‘non-preclusion,’’ a compound weaker than the truth-functional p & q in that mutual irrelevance of p and q is sufficient for its truth. This p ∘ q does not entail p, or even p ∨ q, since it might be true even though p and q were both false (or even impossible).
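By contrast with the intensional connectives, the classical credentials of (IA) and (IIA) are a routine truth-table matter. The sketch below (my own illustration, not drawn from Anderson and Belnap) checks mechanically that both schemata preserve truth under every assignment:

```python
from itertools import product

def valid(premises, conclusion):
    """Truth-functional validity: every assignment of truth values
    making all the premises true must make the conclusion true."""
    return all(
        conclusion(p, q)
        for p, q in product([True, False], repeat=2)
        if all(prem(p, q) for prem in premises)
    )

# Schema (IA), disjunctive syllogism: from p-or-q and not-p, infer q.
print(valid([lambda p, q: p or q, lambda p, q: not p],
            lambda p, q: q))                      # True

# Schema (IIA): from not-(p-and-q) and p, infer not-q.
print(valid([lambda p, q: not (p and q), lambda p, q: p],
            lambda p, q: not q))                  # True
```

Both come out valid, which is just the sense in which the tradition from Chrysippus onwards accepted them; what the relevantists dispute is not this calculation but its bearing on ‘‘relevance.’’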
Mathematics, Models, and Modality
The relevantists’ claim that (IB) and (IIB) best represent (I) and (II) admits of two formulations: a stronger and a weaker. The stronger claim would be that the ordinary-language ‘‘or’’ and ‘‘and’’
literally mean + and ∘ rather than ∨ and &. The weaker claim would be that anyone basing an argument on the premise that p or q, or that not both p and q, will at least be in a position to assert that p + q or ¬(p ∘ q) as the case may be. (The latter claim is weaker than the former because even if ‘‘or’’ and ‘‘and’’ meant ∨ and &, it might still be that arguments of form (IA) and (IIA) could always be avoided in practice because in any instance where one might wish to argue from p ∨ q or from ¬(p & q) the stronger premises p + q and ¬(p ∘ q) would be available.) The stronger of these two
relevantistic claims seems quite untenable. True, Anderson and Belnap do make a feeble attempt (pp. 176–7) to argue that the ordinary-language ‘‘or’’ usually means + rather than ∨. Their argument,
however, is scarcely original, amounting to no more than a repetition of the arguments that used to be used by P. F. Strawson and other Oxford philosophers in their diatribes against modern logic. To
the serious objections against such Oxonian arguments that have emerged from H. P. Grice’s work on conversational implicature the authors of Anderson and Belnap (1975) attempt no reply. In any case,
even if the claim that ‘‘or’’ means + is credited with a certain intuitive plausibility, the same cannot be done for the claim that ‘‘and’’ means ∘. To be sure, Strawson and others have claimed that
the meaning of ‘‘and’’ sometimes diverges from that of & (that in some uses ‘‘and’’ does duty for ‘‘and subsequently’’ or ‘‘and as a result’’); but these divergences are always in the direction of
something stronger than &, not something weaker. The relevantists themselves shrink from identifying ∘ with ‘‘and.’’ While asserting (pp. 344–5) that cotenability is an ‘‘analogue’’ that ‘‘in some ways
. . . looks like’’ conjunction, they concede that ‘‘it isn’t conjunction.’’ The untenability of the strong claim that everyday and mathematical instances of (I) and (II) are literally meant as
instances of (IB) and (IIB) might already be thought to do considerable damage to the relevantists’ claim to be espousing a commonsense philosophy of logic. My aim in this paper is to cast further
doubt on that claim by presenting counterexamples to the weaker claim that everyday and mathematical instances of (I) and (II) can at least be avoided in favor of (IB) and (IIB). I will present some
examples, taken from everyday life and mathematical practice, of arguments of the forms (I) and (II) which can be neither read as nor replaced by instances of (IB) and (IIB).
Background to Example 1
The game of Mystery Cards is played thus: the red and black cards from an ordinary deck are separated. One red and one black ‘‘mystery card’’ are set aside face down, without
having been seen by any player. The remaining twenty-five red and twenty-five black cards are combined, shuffled, and dealt out to the players, whose object is to guess the mystery cards. The players
take turns questioning each other. The player whose turn it is addresses the player of his choice asking a question of the form, ‘‘Is it the such-and-such red card and the thus-and-so black card?’’
If the player questioned has either or both of the cards named in his hand, he must answer ‘‘No’’; otherwise he must answer ‘‘Maybe.’’ Both question and answer are audible to all players. If a player
feels ready to guess the mystery cards, then on his next turn, instead of asking a question he may make a statement, saying ‘‘It’s the such-and-such red card and the thus-and-so black card!’’ He then
looks at the mystery cards. If his guess is correct, he turns them face up and is declared the winner. If wrong, he puts them back face down, exposes his own hand, and is disqualified from further
play. Admittedly this game is a dull one, but it exhibits in simplified form the principle at work in several more interesting games (e.g. the one marketed under the trade-name CLUE, the importance of which was pointed out to me by D. K. Lewis).
Example 1. Argument
During the course of a game of Mystery Cards, Wyberg hears von Eckes ask Zeemann, ‘‘Is it the deuce of hearts and the queen of clubs?’’ He hears Zeemann reply ‘‘No.’’ Later in the game he manages to figure out that it is the deuce of hearts. He argues: it isn’t both the deuce of hearts and the queen of clubs; but it is the deuce of hearts; so it isn’t the queen of clubs. He goes on to use this information to win the game.
Example 1. Analysis
Let p = ‘‘The mystery red card is the deuce of hearts,’’ q = ‘‘The mystery black card is the queen of clubs.’’ Zeemann’s hint is no more and no less than that ¬(p & q). Her statement is made on purely truth-functional grounds: she sees the queen in her hand. Her statement is not made on
the basis of any ‘‘relevance’’ between p and q: the two mystery cards were chosen entirely independently of each other. Zeemann is justified in denying a truth-functional conjunction, but would not
be justified in denying cotenability. Since the premise ¬(p ∘ q) is not available to Wyberg, his argument is an instance of (II) that can be neither read as nor replaced by an instance of (IIB). Had
Wyberg been a relevantist, unwilling to make a deductive step not licensed by the Anderson–Belnap systems E and R, he would have been unable to eliminate the queen of clubs from his calculations, and
would have lost the game. A relevantist would fare badly in this game and others, and in game-like situations in social life, diplomacy, and other areas – unless, of course, he betrayed in practice
the relevantistic principles he espouses in theory.
Background to Example 2
Dr. Zeemann has just been awarded her degree for a dissertation in number theory. Her main result is a proof that every natural number n has either a certain property A(n) or a certain property B(n). As written up in her thesis, the proof is by induction on n, as follows:
Case n = 0. We show that A(0). [Here follows a proof.]
Case n = 1. We show that B(1). [Here follows a proof.]
Case n ≥ 2. We assume as induction hypothesis that either A(n−1) and A(n−2), or A(n−1) and B(n−2), or B(n−1) and A(n−2), or else B(n−1) and B(n−2). [Here follows a proof treating each of the four cases separately.]
She remarks that the famous d’Aubel–Hughes Conjecture would imply that B(0), whereas the equally famous conjecture of MacVee would imply that A(1), but reports that she has no light to shed on these
old conjectures.
Commentary
Before proceeding, let us note that, following the universal practice of mathematicians, Zeemann has taken her proof that A(0) to dispose of the case n = 0 of the general
theorem that for all n, either A(n) or B(n). In other words, she argues from the premise A(0) to the conclusion that A(0) or B(0). This is worth mentioning because relevantistically inclined writers
have been known to claim that no one ever seriously argues from p to p or q. Indeed, in everyday conversation we are, in R. C. Jeffrey’s words, ‘‘at a loss
to know what the motive could be’’ for someone to pass from p to the longer and less informative statement that p or q. ‘‘Knowing the premise, why not assert it, rather than the conclusion?’’
However, in mathematics we often have good reason to say less than we know: We will assert less than we could about the cases n = 0 and n = 1 in order to incorporate these cases in a generalization holding for all values of n. Now the inference from A(0) to A(0) or B(0) is only valid if ‘‘or’’ is taken as ∨ rather than +. Hence Zeemann’s theorem must be formalized as (∀n)(A(n) ∨ B(n)) and not (∀n)(A(n) + B(n)). This means that any argument of form (I) in which the major premise is supplied by Zeemann’s theorem will be an instance of (I) that can be neither read as nor replaced by an instance of (IB). Let us proceed to examples.
Example 2a. Argument
Zeemann applies her work to give bounds to the number of solutions to Tiegh’s equation, thus: Tiegh himself has shown that the number t of solutions to his equation is ≤ 13. Now a little elementary algebra shows that we cannot have A(t). Hence by our main result, we must have B(t). But no n with 5 ≤ n ≤ 16 can satisfy B(n), as is clear from some more elementary algebra. Hence t ≤ 4.
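The structure of the argument can be displayed as a chain (the layout is mine; the content is exactly as above, with ∨ for the truth-functional ‘‘or’’):

```latex
\begin{align*}
& t \le 13                  && \text{(Tiegh's bound)}\\
& \neg A(t)                 && \text{(elementary algebra)}\\
& A(t) \lor B(t)            && \text{(Zeemann's theorem: the major premise, of form (I))}\\
& \therefore\ B(t)          && \text{(disjunctive syllogism, i.e.\ schema (IA))}\\
& \neg B(n) \text{ for } 5 \le n \le 16 && \text{(more elementary algebra)}\\
& \therefore\ t \notin \{5, \dots, 16\}, \text{ so } t \le 4.
\end{align*}
```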
Example 2a. Analysis
This is a typical mathematical argument of the form (I). The premise A(t) or B(t) must be represented as a truth-functional, not an ‘‘intensional’’ disjunction. The ‘‘unknown’’ t might for all we know be equal to 1, and no ‘‘relevant’’ connection has been established between A(1) and B(1) – indeed, as Zeemann herself reports, she has been unable to establish anything about B(1).
Example 2b. Argument
Professor Wyberg has been working for years on the celebrated conjecture of von Eckes, but has got no further than showing that the conjecture follows from the assumption that B(1), a result he considers not worth publishing. Just recently he has given up work on von Eckes’ conjecture in disgust, and has turned to other matters. In particular, he has just refuted an
that B(1), a result he considers not worth publishing. Just recently he has given up work on von Eckes’ conjecture in disgust, and has turned to other matters. In particular, he has just refuted an
old conjecture of MacVee by proving that A(1). Now he reads an
announcement of Zeemann’s result. The details of her proof are not available – it takes years for theses to come out in print – but he recognizes the significance of her results. In particular, they
enable him to prove von Eckes’ conjecture at last. He writes a set of notes, ‘‘A proof of von Eckes’ conjecture,’’ with the following structure: First comes his proof that A(1). Second comes a
linking passage: And so we see that the MacVee conjecture fails. Now Zeemann has recently announced the result that for all n, either A(n) or B(n). Hence we must have B(1). We now proceed to put this
fact to good use.
Third follows the derivation of von Eckes’ conjecture from B(1).
Example 2b. Analysis
Since what is established by Zeemann is just A(1) ∨ B(1), not A(1) + B(1), we have here another mathematical
instance of (I) that can neither be read as nor replaced by an instance of (IB). It is a slightly atypical instance. Had he known the details of Zeemann’s work, had he known that she actually proves
B(1) outright, Wyberg would surely have just cited this fact that B(1) from her thesis, rather than give the roundabout argument that he did. But this is not to say that the proof of von Eckes’
conjecture that Wyberg did give is erroneous. One must distinguish inelegance from incorrectness, as even most relevantists allow (p. 279). To sharpen the intuition here, suppose that six months
after Wyberg, von Eckes himself notices that his conjecture can be derived from B(1). Suppose further that von Eckes, unlike Wyberg, has access to a photocopy of Zeemann’s thesis, and so knows that
she has proved B(1). Von Eckes then writes a paper, ‘‘Proof of a conjecture in number theory,’’ in which he cites the fact that B(1) from her thesis and then proceeds to derive his old conjecture
from B(1) in a manner indistinguishable from that of Wyberg. In this situation, nobody in his right mind would say that von Eckes had produced ‘‘the first correct proof’’ of the conjecture; the
honor of priority goes to Wyberg. One afflicted with relevantistic scruples could not have argued as Wyberg did, but would have had to wait for the publication of Zeemann’s work before claiming to
have settled von Eckes’ conjecture. By that time the less scrupulous Wyberg and the better-placed von Eckes would already be contending for priority. A follower of Anderson and Belnap would not
prosper in the world of contemporary mathematics – unless, that is, he sometimes conveniently ‘‘forgot’’ his philosophy of logic.
No doubt the reader can construct further examples. One might consider, for instance, the case of a person who remembers that once upon a time he was told either that p or that q, but cannot now
remember which. Investigating a bit, he quickly establishes that p, and so concludes that q. Such examples, I submit, show that as far as negation, conjunction, and disjunction are concerned,
‘‘classical’’ logic (and with it the whole logical tradition from Chrysippus onwards) is far closer to commonsense and accepted mathematical practice than is the ‘‘relevant’’ logic of Anderson and
Belnap. One ploy the relevantist might use in trying to escape from our counterexamples may already have occurred to the reader. What if we take the ‘‘relevance’’ required for the truth of p + q and the falsehood of p ∘ q not as something objective and absolute, but as something subjective and relative? We might then say this of the Mystery Cards, for example: in objective fact there is no
connection between its being the deuce of hearts and its being the queen of clubs, the red and black cards having been chosen separately. In Zeemann’s mind there is no such connection, her statement
that it is not both being based solely on her knowledge that it is not the latter. But Zeemann’s information establishes such a connection for Wyberg, so that he is in a position to assert what she is not, namely ¬(p ∘ q). Hence his argument can be represented as a case of (IIB). I doubt such a subjectivization and relativization of ‘‘relevance’’ offers a viable way out to followers of Anderson
and Belnap. If (IB) and (IIB) are to cover all instances of (I) and (II) in mathematics and everyday argumentation, ‘‘relevance’’ will have to be not just subjectivized but trivialized. Any grounds for asserting p ∨ q short of the simple knowledge that p, or that q, will have to be taken as sufficient grounds for asserting p + q: the statement by a reliable person that she either knows p or knows q though she is not saying which; the knowledge that p holds for m = 0 and q holds for m = 1, coupled with ignorance as to whether m = 0 or 1; the simple recollection that one once knew p or knew q though one has now forgotten which. (And paradoxically, the acquisition of more information could threaten one’s right to assert p + q: if one’s informant decides to provide more specific information, if the value of m is settled, if one’s memory improves, one may suddenly lose the right to assert p + q.) Relevantism would reduce to the position that (IA) is valid when and only when one’s grounds for asserting p ∨ q are something other than the simple knowledge that q. Such a position, however, looks suspiciously like a confusion of the criteria
for the validity of a form of argument with the criteria for its utility, a confusion of logic with epistemology. Indeed, some writers have been willing to dismiss the whole relevantistic movement as
a simple case of confusion between the logical notion of implication and the methodological notion of inference. The following (unpublished) remarks of G. Harman on this point will bear quoting: By
reasoning or inference I mean a process by which one changes one’s views, adding some things and subtracting others. There is another use of the term ‘‘inference’’ to refer to what I will call
‘‘argument’’, consisting in premises, intermediate steps, and a conclusion. It is sometimes said that each step of an argument should follow from the premises or prior steps in accordance with a
‘‘rule of inference’’. I prefer to say ‘‘rule of implication’’, since the relevant rules do not say how one may modify one’s views in various contexts. Nor is there a very direct connection between
rules of logical implication and principles of inference. We cannot say, for example, that one may infer anything one sees to be logically implied by one’s prior beliefs. Clearly one should not
clutter up one’s mind with many of the obvious consequences of things one believes. Furthermore, it may happen that one discovers that one’s beliefs are logically inconsistent and therefore logically
imply everything. Obviously, one ought not to respond to such a discovery by believing as much as one can. Some philosophers and logicians [the reference is to Anderson and Belnap] have imagined that
the remedy here is a new logic in which logical contradictions do not logically imply everything. But this is to miss the point that logic is not directly a theory of reasoning at all.
And indeed if ‘‘relevance’’ is taken to be something subjective and relative (according to the proposal discussed above), I do not see how the relevantists could escape Harman’s charge that they
confuse implication and (useful) inference. I do not, however, believe that the authors of Anderson and Belnap (1975) understand by ‘‘relevance’’ something subjective. What little they tell us about
the nature of ‘‘relevance’’ (e.g. pp. 32–3, where they quote with approval from several sources) strongly suggests that it is a matter of meaning. Certainly their commonest charge against classical
logic (first raised on p. xxii and repeated ad nauseam) is that it ignores ‘‘intension’’ and meaning. Meaning, however, is something that, generally speaking, will be the same for Wyberg as it is for
Zeemann. That relevance is meant to be a semantical, and hence impersonal, notion and not a matter of individual psychology, is further suggested by the relevantists’ criticism of T. J. Smiley (p.
217), who is faulted for ‘‘epistemologizing’’ and ‘‘psychologizing’’ the logical notion of entailment. Thus if the authors of Anderson and Belnap (1975) intend by ‘‘relevance’’ something less than
objective, they are highly
remiss in failing to alert readers to the fact; while if ‘‘relevance’’ is supposed to be impersonal, then the claim that the relevantistic position is (even in a weak sense) compatible with
commonsense and accepted mathematical practice succumbs to the counterexamples presented above. In closing, let me reiterate that I have been concerned here solely with the original Anderson–Belnap
account of ‘‘relevant’’ logic, and with their claim that their systems E, R, etc., are in better agreement with common sense than is classical logic. I have not been concerned with other rationales
for developing these systems, nor with the possibility of imposing interpretations on them that were not originally intended by their authors. (It has been suggested, for instance, that some of the
formalism created by relevantists might be useful in developing a logic of ambiguity, or of truth-in-fiction.) Workers in category theory, one of the least constructive branches of modern mathematics,
have found certain technical uses for intuitionistic logic; but no one imagines that this vindicates Brouwer’s philosophy of mathematics. Similarly, the discovery of serendipitous applications of
some of the formalism created by Anderson and Belnap would not justify the claim that their logical systems are accurate formalizations of current mathematical practice. Still less could it justify
the abusive tone of their remarks about classical logicians.
Dummett’s case for intuitionism
Some philosophers approach mathematics saying, ‘‘Here is a great and established branch of knowledge, encompassing even now a wonderfully large domain, and promising an unlimited extension in the
future. How is mathematics, pure and applied, possible? From its answer to this question the worth of a philosophy may be judged.’’ Other philosophers approach mathematics in a quite different
spirit.1 They say, ‘‘Here is a body, already large and still being extended, of what purports to be knowledge. Is it knowledge, or is it delusion? Only philosophy and theology, from their standpoint
prior and superior to that of mathematics and science, are worthy to judge.’’ While this inquisitorial conception of the relation between philosophy and science is less widely held today than it was
in Cardinal Bellarmine’s time, it continues to have many distinguished advocates. Prominent among these is Michael Dummett, who has repeatedly advanced arguments for the claim that much of current
mathematical theory is delusory and much of current mathematical practice is in need of revision – arguments for the repudiation, within mathematical reasoning, of the canons of classical logic in
favor of those of intuitionistic logic. While nearly everything Dummett has written is pertinent in one way or another to his case for intuitionism, there are two texts especially devoted to stating
that case: his much anthologized article (Dummett 1973a) on the philosophical basis of intuitionistic logic; and the concluding philosophical chapter of his guidebook (Dummett 1977) to the elements
of intuitionism. The present paper offers a critical examination of these two texts.
1 For more on the contrast between the two approaches to philosophy of mathematics, see the editorial introduction to Benacerraf and Putnam (1964).
Dummett has remarked of his case for intuitionism that ‘‘it is virtually independent of any considerations relating specifically to the mathematical character of the statements under discussion. The
argument involve[s] only certain considerations within the theory of meaning of a high level of generality, and could, therefore, just as well have been applied to any statements whatsoever, in
whatever area of language’’ (1973a, p. 226). Hence it is best to begin an examination of his case by considering some of his views on meaning. Especially important for Dummett are what I will call
neutrally theories of language of the first type. On a theory of this type, the meaning of a sentence is identified with the conditions for correctness of the sentence (or pedantically: of an
assertion made by uttering the sentence) as a representation of reality. A speaker’s ability to use the sentence is explained by reference to his grasp of these correctness conditions. Correctness
may be conceived of in more than one way, and hence more than one subtype within the first type of theory of language is possible. Especially important for Dummett is the distinction between those
conceptions on which correctness is, and those on which it is not, something always at least in principle potentially recognizable by human beings. The best-known theory operating with a conception
of correctness as always recognizable is the intuitionistic proof-conditional theory of meaning for the mathematical part of language. (The provability of a mathematical conjecture is recognizable by
discovering a proof.) The generalization of this theory to the whole of language would be a verification-conditional or verificationist theory. The best-known correctness conception on which
correctness is sometimes recognizable, sometimes unrecognizable, is the usual conception of truth. (On the usual conception, the truth of a mathematical conjecture need not imply the existence of a
proof of the conjecture or any other means of recognizing the conjecture as true.) To avoid ambiguities, I will call truth-as-usually-conceived verity. Many distinguished philosophers have advocated
verity-conditional or verist theories of meaning. Dummett calls the specialization of such a theory to the mathematical part of language a Platonist theory (though I imagine that he would be hard
pressed to locate a passage in the Republic or the Timaeus where such a theory is taught). How can graspable correctness conditions be assigned to each of the indefinitely many sentences of a
language? The only answer that immediately suggests itself is: inductively. The theories of language of the first type
considered by Dummett take the sentences of the language to fall into a hierarchy of degrees of complexity, with the correctness conditions for those of higher degree being determined inductively
from the correctness conditions for those of lower degree. For example, such theories include an induction clause indicating how the correctness conditions of a disjunction are determined from those
of its disjuncts. On an intuitionist or verificationist theory this clause takes the form:
(1) A proof (or verification) of a disjunction consists in the specification of one of its disjuncts together with a proof (or verification) of that disjunct.
On a Platonist or verist theory this clause takes the form:
(2) A disjunction is true if and only if at least one of its disjuncts is true.
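To make the contrast between the two clauses concrete (the illustration is mine), consider an instance of excluded middle:

```latex
\begin{align*}
\text{Under (2):}\quad & p \lor \neg p \ \text{is correct for every } p,
  \text{ since one of } p,\ \neg p \text{ is true.}\\
\text{Under (1):}\quad & \text{a proof of } p \lor \neg p \ \text{requires
  a proof of } p \text{ or a proof of } \neg p,\\
& \text{and for an undecided conjecture } p \text{ neither may be available.}
\end{align*}
```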
On any theory of language of the first type there is an external standard against which rules of implication are to be judged. A rule is acceptable if and only if it preserves correctness, leading in
all instances where the premises are correct to a conclusion that is correct. It is by appeal to such induction clauses as (1) and (2) above that one can seek to demonstrate that certain rules are
correctness-preserving or sound. Given the Platonist theory of meaning, the usual soundness proof for classical logic establishes that all the rules of that logic are acceptable. Given the
intuitionist theory of meaning, the usual soundness proof for intuitionistic logic establishes acceptability for all intuitionistic rules, but not for all classical rules: the acceptability of rules
depending on the laws of double negation or excluded middle is doubtful when such rules are applied to sentences for which an effective decision procedure is lacking, such as those involving
unbounded quantification over an infinite domain. Dummett’s strategy is to argue for the repudiation of classical logic in favor of intuitionistic logic by arguing for the repudiation of Platonist
(or more generally: verist) theories of meaning in favor of intuitionist (or more generally: verificationist) theories. On a verist theory, sentences whose correctness need not be recognizable may be
said to represent transcendent features of reality, while sentences whose correctness must be recognizable may be said to represent immanent features of reality. Dummett, like Brouwer, denies that
any sentence of any possible language can represent transcendent features of reality. But where Brouwer sees this denial as expressing a limitation on reality, Dummett sees it as expressing a
limitation on language. Dummett’s case against verism rests on principles summed up in the slogan that meaning is use. He offers various formulations of these
principles, some in terms of meaning, others in terms of understanding, some telling us what these consist in, others through what they are exhaustively manifested. The following are typical (1973a,
pp. 216, 217): The meaning of . . . a statement cannot be, or contain as an ingredient, anything which is not manifest in the use made of it, lying solely in the mind of the individual who apprehends
that meaning: if two individuals agree completely about the use to be made of the statement, then they agree about its meaning. The reason is that the meaning of a statement consists solely in its
rôle as an instrument of communication between individuals . . . An individual cannot communicate what he cannot be observed to communicate . . . [T]here must be an observable difference between the
behavior or capacities of someone who is said to have . . . knowledge [of the meaning of an expression] and someone who is said to lack it. Hence it follows . . . that a grasp of the meaning of a . .
. statement must, in general, consist of a capacity to use that statement in a certain way, or to respond in a certain way to its use by others.
Shared by all such formulations is an association of meaning with public and observable use of language as a vehicle of communication, and a dissociation of meaning from private and hidden use of
language as a vehicle of thought.2 Likewise, any association of meaning or understanding with something in the conscious or unconscious mind, or in the structure or functioning of the brain, is
rejected. Though he himself avoids the label, Dummett may be called a behaviorist in his approach to meaning, provided this label is understood in a broad enough sense to cover not only the
stimulus-response behaviorism of Skinner, but also the logical behaviorism of Ryle. A thoroughgoing behaviorist will require that any apparatus posited by a semantic theory must be identified or
directly correlated with some isolable features of publicly observable verbal behavior. As Dummett formulates it, the behaviorist demand is that there must be a ‘‘one-one correspondence between the
details’’ of the apparatus posited by a semantic theory and ‘‘observable features of the phenomenon’’ (1977, p. 377). Dummett describes rejection of this behaviorist demand as one form that the
rejection of the principle that meaning is use might take. The great majority of contemporary linguists reject this behaviorist demand as likely to lead only to sterility and stagnation in semantics.
The great majority of contemporary linguists posit in their semantic theories an apparatus neither identified nor directly correlated with any set of isolable features of publicly observable verbal behavior.
2 For more on the contrast between the two sorts of use of language, see the opening paragraphs of Harman (1982).
The original version of Chomsky’s semantic theory, for example, posited an apparatus of deep structures. Chomsky and the great
majority of contemporary linguists claim that the apparatus posited in their semantic theories is psychologically real, represented in ways as yet undiscovered in the mind or brain.3 But they do not
claim the apparatus to be directly represented in behavior. Thus the principle that meaning is use, on which Dummett bases his case for a revision of current mathematics, itself already amounts to a
demand for a revision of current linguistics. For this reason Dummett’s arguments for the principle are of interest apart from their role in his case for intuitionism. These arguments have often been
criticized by Davidsonians. Here they will be criticized from a viewpoint closer to that of the Chomskians. Two arguments for the principle that meaning is use are to be found in the texts under
examination. They are versions of what have come to be called the acquisition and manifestation arguments. The first (1973a, p. 217) begins: [O]ur proficiency in making the correct use of the
statements and expression of the language is all that others have from which to judge whether or not we have acquired a grasp of their meanings. Hence it can only be in the capacity to make a correct
use of the statements of the language that a grasp of their meaning, and of those of the symbols and expressions which they contain, can consist.
The rather familiar line of this opening4 alerts us that the argument is going to turn on how an observer Y, say a teacher, can judge that a speaker X, say a learner, attaches the standard meaning to
an expression E. Perhaps before proceeding further it would be well to review schematically the accounts of such judgments offered by behaviorists, on the one hand, and by those anti-behaviorists who
associate meaning or understanding with a state of the mind or brain, on the other. On the behaviorist account, for X to attach the standard meaning to E is for X to be able to use E standardly. On this account, Y’s judgment that X attaches the standard meaning to E is a simple inductive inference from the premise that X has been able to use E standardly in all observed instances to the conclusion that X will be able to use E standardly in all instances.

3. A minority of linguists adopt the position advocated in Soames (1985), regarding such claims of psychological reality as at best premature, but nonetheless insisting on the legitimacy of introducing an apparatus of deep structures or the like in semantic theory, despite behaviorist objections.

4. Charles Chihara has pointed out in conversation the parallelism between the argument of Dummett just quoted and the notorious argument of Norman Malcolm against the conception of dreams as mental or neural activity taking place at specific times during sleep. Malcolm’s argument may be paraphrased: our telling stories when we wake up is all that others have from which to judge whether we have dreamt. Hence it can only be in the disposition to tell stories when we wake up that having dreamt can consist.

Dummett’s case for intuitionism

The anti-behaviorist account is much more complex. The first step towards an anti-behaviorist position is acceptance of the general psychological principle that different
people similar in their outward behavior are normally similar also in the inward mental or neural states that causally underlie behavior. The second step is acceptance, as a special linguistic
instance, of the hypothesis that there exists a mental or neural state S(E) normally causally underlying the ability to use E standardly. Thus far a behaviorist may or may not go along. Where the
behaviorist must refuse to follow is at the anti-behaviorist’s third step, the identification of attaching the standard meaning to E with being in the state S(E) rather than directly with being able
to use E standardly. To appreciate the rather subtle distinction here, imagine an abnormal case where a native English speaker X is able to communicate with a native Chinese speaker W only because X
has implanted inside his skull a minisupercomputer programmed to translate back and forth between English and Chinese sentences. There may be no ‘‘difference in the behavior or capacities’’ of X and
W that is observable to those of us lacking telepathic powers and X-ray vision. X and W may ‘‘agree completely about the use to be made’’ of various Chinese words and phrases. Yet on account of the
absence of ‘‘an ingredient . . . lying solely in the mind’’ or brain, the antibehaviorist will deny that X attaches the standard meanings, or any meanings at all, to those words and phrases. The
status of such science-fiction examples is in itself a matter of slight importance, but there are more important further differences between behaviorists and anti-behaviorists. One who identifies
attaching the standard meaning to E with being in the hypothetical state S(E) will presumably be willing to entertain hypotheses about the composition and components of S(E) and to permit a theory of
the standard meaning of E to posit an apparatus correlated with these hypothetical components of S(E). But while S(E) itself is normally correlated with the ability to use E standardly, there is no
reason to suppose its components to be directly correlated with any ‘‘isolable, though interconnected, practical abilities’’ (Dummett 1977, p. 377). Hence the anti-behaviorist rejection of the
requirement that the apparatus posited in a theory of the standard meaning of E must be directly correlated with isolable features of publicly observable verbal behavior, which we have already seen
to be the issue dividing Dummettians and Chomskians.
On the anti-behaviorist account, for X to attach the standard meaning to E is for X to be in a mental or neural state S(E) posited to underlie, in normal cases, the ability to use E standardly. On
this account, Y’s judgment that X attaches the standard meaning to E rests on: (a) the evidence for the presupposition that there exists a mental or neural state underlying, in a normal case, the
ability to use E standardly; (b) the evidence that X’s case is a normal one; and (c) the evidence that X has been able to use E standardly in all observed instances. The evidence (c) is the only
evidence cited in the behaviorist account. The evidence (b) may consist in no more than the absence of evidence that X’s case is an abnormal one. The evidence (a) may consist in no more than the
evidence for the general psychological principle that different people who are similar in their outward behavior are normally similar also in their inward mental and neural states. A behaviorist, of
course, may question the strength of the evidence for this general psychological principle. Quineans, for example, have claimed that different people identical in their outward behavior may be ‘‘like
different bushes trimmed to resemble identical elephants.’’ Dummett’s objections to anti-behaviorism, however, do not take this form, being a priori and philosophical rather than a posteriori and
psychological. Returning now to the argument whose opening was quoted above, it continues (1973a, pp. 217–18): To suppose that there is an ingredient of meaning which transcends the use that is made
of that which carries the meaning is to suppose that someone might have learned . . . [to] behave in every way like someone who understands the language, and yet might not actually understand, or
understand it only incorrectly. But to suppose this is to make meaning ineffable, that is, in principle incommunicable. If this is possible, then no one individual ever has a guarantee that he is
understood by any other individual; for all he knows, or can ever know, everyone else may attach to his words . . . a meaning quite different from that which he attaches to them. A notion of meaning
so private to the individual is one that has become completely irrelevant to mathematics as it is actually practised, namely as a body of theory on which many individuals are corporately engaged, an
enquiry within which each can communicate his results to others.
In the earlier parts of this passage, Dummett claims that if there is anything more to X’s attaching the standard meaning to E than X’s being able to use E standardly, then Y can never know or have a
guarantee that X attaches the standard meaning to E. Two comments are called for. First, on both the anti-behaviorist and the behaviorist accounts, Y’s judgment that X attaches the standard meaning
to E is an inductive inference. On the behaviorist account, it is an inference from a limited number of
observed instances of use to an unlimited number of possible future instances of use. No inductive inference can provide certain knowledge or an indubitable guarantee. But if the possibility of
skeptical doubt and uncertainty somehow undermines a theory of meaning, it must undermine not just the antibehaviorist theory, on which understanding is something mental or neural transcending
behavior, but also the behaviorist theory, on which understanding is an open-ended behavioral ability or capacity, transcending any finite number of its manifestations. This point has been mentioned
in passing by Susan Haack (1974, pp. 107–8) and developed at length by Crispin Wright (1980, pp. 123–8). Second, it is far from obvious that the impossibility of skepticism-proof guaranteed knowledge
in any way undermines a theory of meaning. In the later parts of the passage under examination, Dummett seems to try to draw out damaging consequences from the absence of such guaranteed knowledge.
He seems to claim that its absence somehow makes communication between mathematicians impossible, and hence makes mathematics as an activity involving communication impossible. Surely such a claim
would be mistaken. For whether mathematicians X and Y succeed in communicating through their use of expression E surely depends only on whether X and Y do in actual fact attach the same meaning to E,
and not on whether they possess skepticism-proof guaranteed knowledge that they do so. One hesitates to accuse a distinguished authority on modal logic of arguing from ◊¬p to ¬◊p, but Dummett does
almost seem to wish to move from the (epistemic) possibility that X and Y do not succeed in communicating to the (metaphysical) impossibility of X and Y succeeding in communicating. To avoid
fallacies in the ‘‘acquisition’’ argument we must distinguish the claim that a language learner cannot come to attach the standard meaning to an expression from the claim that no one can have
guaranteed knowledge that the language learner has attached the standard meaning to the expression. Dummett fails to show that the former follows from the antibehaviorist approach to meaning; and
while the latter may follow, it is not obviously unacceptable. This last point has been noted by Dag Prawitz, who writes (1977, p. 10) of Dummett’s argument that: One could contest it by arguing that
when we learn a language by seeing how its sentences are used, we only get some hints about their meaning. The samples of use with which we are presented never completely determine the meaning but
only enable us to form some theories or hypotheses about the meaning. (The fact that we nevertheless agree rather well about meaning could perhaps be explained by
reference to a genetic disposition to see certain kinds of patterns and hence to form certain kinds of theories upon seeing a few examples.) Such a view would entail that we could never be sure that
we knew the meaning of a sentence; a new unexpected use of it could show us that we had misunderstood the meaning and would force us to revise our theory. And to some extent this may be a correct
picture of our situation.
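The modal slide diagnosed a few lines above, from the epistemic possibility that communication fails to the metaphysical impossibility of its succeeding, can be set out in standard modal notation. The formalization below is a gloss added for illustration, not part of the original argument:

```latex
% Let p abbreviate ``X and Y succeed in communicating.''
% Premise (epistemic): for all anyone knows, communication fails.
\Diamond \neg p
% Tempting but invalid conclusion: communication cannot succeed.
\neg \Diamond p
% In any normal modal logic the inference fails: \Diamond\neg p is
% jointly satisfiable with \Diamond p, so
% \Diamond\neg p \not\vdash \neg\Diamond p.
```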
Dummett’s other argument calls for less comment. It runs as follows (1977, p. 217): Now knowledge of meaning . . . is frequently verbalisable knowledge, that is, knowledge which consists in the
ability to state the rules in accordance with which the expression or symbol is used . . . But to suppose that, in general, a knowledge of meaning consisted in verbalisable knowledge would involve an
infinite regress: if a grasp of the meaning of an expression consisted, in general, in the ability to state its meaning, then it would be impossible for anyone to learn a language who was not already
equipped with a fairly extensive language. Hence that knowledge which, in general, constitutes the understanding of language . . . must be implicit knowledge. Implicit knowledge cannot, however,
meaningfully be ascribed to someone unless it is possible to say in what the manifestation of that knowledge consists: there must be an observable difference between the behavior or capacities of
someone who is said to have that knowledge and someone who is said to lack it. Hence it follows, once more, that a grasp of the meaning of a . . . statement must, in general, consist of a capacity to
use that statement in a certain way, or to respond in a certain way to its use by others.
In the first part of this passage, Dummett invokes infinite-regress considerations to establish that knowledge of meaning is not ‘‘in general’’ verbalizable, and even that it is ‘‘in general’’
unverbalizable. If one tries to restate the argument without the use of the puzzling phrase ‘‘in general,’’ then one finds that all the infinite-regress considerations seem to establish is that for
the part of language learned first, the most elementary part, knowledge of meaning is unverbalizable. It would then seem that any behavioristic conclusions drawn from the argument as a whole ought to
be restricted to this part of language: nothing follows about the part of language learned later, the more advanced part.5 In the second part of the passage under examination, Dummett invokes a
premise about unverbalizable knowledge to reach a conclusion about knowledge of meaning. To avoid equivocation, we must distinguish four claims here, some stronger, some weaker, some more general,
some more specific:

(a) Ascriptions of knowledge of meaning must be supported by appeal to observable evidence.
(b) Knowledge of the meaning of an expression consists in no more than the ability to use it in a certain way.
(c) Ascriptions of implicit knowledge must be supported by appeal to observable evidence.
(d) Implicit knowledge consists in no more than the ability to behave in a certain way.

5. Paul Benacerraf suggested to me in general terms that Dummett’s arguments might have force for one part of language but not another.
The anti-behaviorist rejects (b) but accepts (a). (The anti-behaviorist account of the observable evidence supporting an ascription of knowledge of meaning has been reviewed schematically above.)
Since the conclusion Dummett desires is (b) and not (a), the premise he requires is (d) and not (c), even if his own formulations are less than unequivocal (1973a, p. 217; 1977, p. 376): Implicit
knowledge cannot, however, meaningfully be ascribed unless it is possible to say in what the manifestation of that knowledge consists . . . [A]n ascription of implicit knowledge must always be
explainable in terms of what counts as a manifestation of that knowledge, namely the possession of some practical ability.
The anti-behaviorist will argue that if – as the implanted-computer example suggests – (b) is false, then this implies that (d) is false. Dummett, however, invokes the controversial premise (d)
without supporting considerations, as if it were self-evident. For this reason anti-behaviorists may with some justice reject the ‘‘manifestation’’ argument as manifestly question-begging.
Circularities in Dummett’s arguments for behaviorism do not, however, deprive his case against verism of all its force. Dummett’s complaint against verism comes down to this, that verists have
returned no answer, formulated in behavioral terms, to the following question: in what can a grasp of the correctness conditions for a sentence consist if the correctness of that sentence need not
even in principle be potentially recognizable by human beings? So long as one insists that verists must return an answer, formulated in psychological terms, to the foregoing question, one will have
to sympathize with Dummett’s complaint against verism, even if one does not sympathize with his behaviorism, and adopts the approach of introspective or of physiological rather than behavioral
psychology. For the best-known advocates of verism either return no answer at all to Dummett’s question; or worse, they answer that to grasp the truth conditions of a sentence is to associate with
that sentence the set of possible worlds where it is true, but do not
explain how a mind or brain confined to the actual world can effect such an association. One might also sympathize with Dummett’s rejection of verism for any of a number of reasons quite unlike
Dummett’s own, for example, on account of the apparent conflict between truth-conditional theories of meaning and theories of truth in the style of Tarski or Kripke. This route to a rejection of
verism is worth mentioning here because Dummett himself sometimes touches on it tangentially in his writings. In one paper (Dummett 1959) he notes that there appears to be a conflict between the view
that a biconditional like (2) above constitutes an account of the meaning of ‘‘or’’ and the view that it constitutes part of an account of the meaning of ‘‘true,’’ a point that Tarski has also
discussed in one of his papers (1944) as an anonymous objection against his theory. Truth-conditional theories of meaning appear to regard truth as a primitive concept, possession of which is a
prerequisite for any language-learning, while a theory like Kripke’s appears to regard the concept of truth as one acquired fairly late in the process of language-learning, when the learner has
acquired a fairly extensive ability to talk of persons, places, and things, and is beginning to learn to talk of talk.6 For any of a number of reasons, good or bad, like or unlike Dummett’s, many
philosophers of language now reject verism. In inveighing against verism, Dummett is to a large extent preaching to the converted. Dummett himself recognizes that Wittgenstein, for one, and Quine,
for another, have rejected verism. Yet somewhat surprisingly he can be found writing: ‘‘[T]he idea that a grasp of meaning consists in a grasp of truth-conditions was [in 1959] and still is [in
1978], part of the received wisdom among philosophers’’ (1978, p. xxi). A poll of my own department convinces me that Dummett is wrong here.7 How could a theory rejected by a constellation of such
luminaries as Wittgenstein, Tarski, Quine, Kripke, and Harman, not to mention Dummett himself, be considered ‘‘received wisdom among philosophers’’?

6. A not unrelated reason for rejecting truth-conditional theories of meaning is advanced in Harman (1982): Davidson, Lewis, and others have argued that an account of the truth conditions of sentences of a language can serve as an account of the meanings of those sentences. But this seems wrong. Of course, if you know the meaning in your language of the sentence S, and you know what the word ‘‘true’’ means, you will also know something of the form ‘‘S is true if and only if . . .’’; for example, ‘‘‘Snow is white’ is true if and only if snow is white’’ or ‘‘‘I am sick’ is true if and only if the speaker is sick at the time of utterance’’. But this is a trivial point about the meaning of ‘‘true’’, not a deep point about meaning. For more on the philosophical significance (or lack of it) of the concept of truth, see Soames (1984).

7. Saul Kripke has suggested that Dummett’s statement may be accurate as an account of local conditions at Oxford. But surely it would bespeak a certain parochialism to confuse ‘‘fashionable among Oxford philosophers’’ with ‘‘received among philosophers generally.’’

For many philosophers what is most puzzling about Dummett’s case for intuitionism will not be the question arising in
his case against Platonism: (a) Why are we supposed to reject verist semantics?
but rather the question arising in his case against formalism: (b) How is the rejection of verist semantics supposed to lead to the rejection of classical mathematics?
This latter question will now be taken up.

3
Criticism of a theory of language may take any of three forms. The ordinary descriptive critic advances evidence that we do not actually speak a language of the sort the theory depicts, whether or
not we ought to. The radical descriptive critic advances evidence that we could not possibly speak a language of that sort, so that the question whether we ought to do so does not arise. The prescriptive critic (or advocate) advances motives why we ideally ought not (or ought) to speak such a language, whether or not we currently do. Dummett is a philosopher not primarily renowned for the clarity of his prose, and the interpretation of his works will always be a matter of controversy, not least because he declines to distinguish explicitly factual or descriptive from normative or prescriptive considerations. As I interpreted it in §2, Dummett’s criticism of Platonist or verist theories of meaning was descriptive and radical, claiming that we could not possibly possess
(because we could not possibly acquire or manifest) a grasp of correctness conditions that are transcendent rather than immanent. As I interpret it, Dummett’s advocacy of intuitionist or
verificationist theories of meaning is prescriptive. For surely he cannot claim that such theories describe and explain the actual, current patterns of usage of any but a tiny minority of (Dutch)
mathematicians. As I interpret it, Dummett’s position is that no theory of language of the first type, verist or verificationist, provides a description and explanation of the actual, current
patterns of use of the overwhelming majority of mathematicians. Looking beyond theories of the first type, especially important for Dummett are what I will neutrally call theories of language of the
second type or dualist theories. Such theories depict language as containing two different
kinds of sentences: primary sentences, possessing decidable correctness conditions, and secondary sentences, lacking correctness conditions. The best-known theories of this type are the theory of
mathematical language associated with the name of Hilbert (1925), and the theory of scientific language associated with the name of Quine (1951b). On the former theory, primary sentences, called
inhaltlich, consist of simple arithmetical sentences decidable by computation; secondary sentences, called ideal, may contain non-computational mathematical vocabulary (e.g. that of set theory). On
the latter theory, primary sentences, said to ‘‘lie on the periphery,’’ consist of simple empirical sentences decidable by observation; secondary sentences, said to ‘‘lie in the interior,’’ may
contain nonobservational scientific vocabulary (e.g. that of quantum theory). Hilbert’s overall position in philosophy of mathematics is usually called formalism. Quine’s overall position in
philosophy of science is usually called holism. Both labels have been used in the literature with so many different connotations that they are perhaps best avoided. Dummett repeatedly stresses the
affinities between Hilbert and Quine (1973a, p. 219; 1977, p. 397).8 On either theory, primary sentences are distinguished from secondary sentences by their restricted vocabulary. Not only is their
non-logical, mathematical or scientific, vocabulary restricted to be computational or observational, but also their logical vocabulary is restricted. According to the precise version of the theory
being considered, primary sentences are required to be either atomic, containing no logical particles at all, or else to be quantifier-free, containing only connectives. On either theory, secondary
sentences serve merely as intra-linguistic instruments for deducing primary sentences, and not as representations of any extra-linguistic reality. Computational and observational facts are
represented by primary sentences. It is claimed that the scope, accuracy, and efficiency of the representation of computational and observational facts is enhanced by the presence in the language of
sentences that do not themselves represent such facts but that can be used to deduce sentences that do. On either theory, the question arises how the ability to use secondary sentences can be
learned. For example, how is the ability to use disjunctions in deductions acquired?

8. It is something of an oversimplification to describe Quine as a dualist, inasmuch as he often indicates that he regards the distinction between the observational periphery and the theoretical interior as a matter of degree rather than kind. But for Dummett the similarities between Quine’s position and that of the prototypical dualist Hilbert are more important than such differences.

On a verist or verificationist theory the answer is: by grasping the correctness conditions (1) or (2) above. This answer is not available on a dualist theory, and no explicit
answer is offered in Hilbert (1925) or Quine (1951b). There is, however, an answer that immediately suggests itself, namely, that the ability is acquired by directly grasping such rules of
implication as the following: (3) A disjunction is implied by each of its disjuncts.
A disjunction implies whatever is implied by each of its disjuncts. In other words, the ‘‘meaning of the logical constants’’ – if what determines their use may be called their ‘‘meaning’’ – consists
‘‘directly in the validity or invalidity of possible forms of inference’’ (Dummett 1977, p. 363). It seems to be this answer that Dummett associates with dualism. It is worth mentioning that quite
apart from any general dualist views, the specific view that an account of the ‘‘meaning’’ of the logical particles is best given in terms of such implication conditions as (3) rather than such truth
conditions as (2) has had many distinguished advocates, including (according to Prior 1960) several of Dummett’s Oxford colleagues. Dualist theories may be called semi-verificationist. They are not
verificationist in the strict sense, since some sentences are not assigned correctness conditions. They are verificationist in a loose sense, since all sentences that are assigned correctness
conditions are assigned decidable, recognizable, verifiable correctness conditions. May dualist theories of language be called theories of meaning? Dummett sometimes takes ‘‘(theory of) meaning’’ in
a broad sense, and insists on an affirmative answer (1973c, p. 378): A model of language may also be called a model of meaning, and the importance of the conception of language sketched at the end of
‘‘Two Dogmas’’ was that it gave in succinct form the outline of a new model of meaning. It is well known that some disciples of Quine have heralded his work as allowing us to dispense with the notion
of meaning. But even the most radical of such disciples can hardly propose that we may dispense with the notion of knowing, or having mastery of, a language; and there is nothing more that we can
require of a theory of meaning than that it give an account of what someone knows when he knows a language . . . [W]hatever warrant there may be for asserting that Quine has destroyed the concept of
meaning does not appear from the ‘‘Two Dogmas’’ model of language taken by itself. That has merely the shape of one theory or model of meaning among other possible ones.
However, Dummett sometimes takes ‘‘(theory of) meaning’’ in a narrow sense, and insists on a negative answer (1973b, p. 309):
The theory of meaning, which lies at the foundation of the whole of philosophy, attempts to explain the way in which we contrive to represent reality by means of language. It does so by giving a
model for the content of a sentence, its representative power. Holism is not, in this sense, a theory of meaning: it is the denial that a theory of meaning is possible.
When in the narrow, negative mood (as throughout 1977) Dummett is prepared to join Brouwer and Heyting in declaring that many of the sentences of classical mathematics are ‘‘incoherent’’ and
‘‘unintelligible.’’ This sounds odd. For Dummett can hardly deny that the sentences of classical mathematics possess a definite usage within pure mathematics and a definite utility through applied
mathematics. How can he, as a professed adherent of the slogan that meaning is use, then deny that those sentences have a meaning? Taken literally, the slogan implies that a sentence having a use
thereby has a meaning. The answer, the explanation of the oddity, is, of course, that Dummett, as we have already seen, adheres to the slogan that meaning is use only in a non-literal, almost
idiosyncratic, sense. The narrow, negative terminology need not be misleading provided the following point is never forgotten: When Dummett says that many sentences of classical mathematics lack
meaning-in-the-narrow-sense, he is only saying (in highly emotive terms) that the theory of meaning-as-conditions-for-correctness-as-a-representation-of-reality is inapplicable to those sentences.
This factual claim about how language is cannot by itself imply any normative claim about how mathematics ought to be. Some extra, tacit premise of a normative or prescriptive character is needed. As
I interpret it, Dummett’s criticism of dualism is prescriptive, and rests on an extra, tacit premise of anti-instrumentalism or representationalism, according to which every sentence of a language
ideally ought to play a representational rather than a merely instrumental role. Thus when Dummett writes: ‘‘A sentence is a representation of some facet of reality’’ (1973b, p. 309), according to my
interpretation he has not quite accurately reflected his own view: ‘‘ought to be’’ ought to be where ‘‘is’’ is in the quoted formulation. The requirement of representationality is, of course,
accepted by Platonists, who hold, in opposition to intuitionists and formalists, that this requirement is already met by our current language. Representationalism unites Platonists and intuitionists
in opposition to formalists, much as behaviorism unites Quineans and Dummettians in opposition to Davidsonians and Chomskians. (There are, however, important differences between the Harvard
behaviorism of Skinner or Quine and the Oxford behaviorism of Ryle or Dummett. Moreover, it is not obvious that a verificationist or dualist must be a behaviorist.)
Consider the situation of a philosopher initially sympathetic, for behavioristic or other reasons, to a naive descriptive verificationism like that of the early positivists, who comes to appreciate
that such a theory is inadequate as an account of the actual, current patterns of use in our language. One response would be to revise the theory to fit the facts of language, perhaps falling back to
a semi-verificationist, dualist position. Another response would be to require a revision of language, to fit the norms of the theory. Quine and Dummett exemplify these two responses. Dummett’s case against Quine stands or falls with the success or failure of his attempts to motivate the requirement of representationality. In both the texts under examination Dummett discusses, by way of offering
such motivation, the following worry about languages of the sort depicted by dualist theories: in such a language, there is a threat of deducing incorrect primary sentences by means of secondary
sentences. In one text, the worry seems to be that incorrect primary sentences might be deduced from (theories composed of) secondary sentences (1973a, p. 220): With what right do we feel assurance
that the observational statements deduced with the help of complex theories, mathematical, scientific and otherwise, embedded in the interior of the total linguistic structure, are true, when these
observation statements are interpreted in terms of their stimulus meanings? To this the holist attempts no answer, save a generalised appeal to induction: these theories have ‘‘worked’’ in the past,
in the sense of having for the most part yielded true observation statements, and so we have confidence that they will continue to work in the future.
This worry, or rather, the demand for a guarantee against it, is easily dismissed. Of course there is a threat that a scientific theory about, say, black holes or quarks may have incorrect
observational consequences. We have seen such threats realized many times in the history of science, and we have known since the time of Hume that there can be no guarantee against them. And of
course there is a threat that a mathematical theory about, say, ℓ-adic cohomology or ω-complete ultrafilters may have incorrect and even inconsistent computational consequences. We have seen such
threats realized a few times in the history of mathematics (in connection with infinitesimal calculus and naive set theory), and we have known since the work of Gödel that there can be no guarantee
against them. If and when such threats are again realized, we will, as we always have in the past on such occasions, revise our theories. But why even then, let alone now, revise our logic? It is a
delusion to imagine that a preemptive change of logic could provide a guarantee against such threats, unless, indeed, the new logic were
Mathematics, Models, and Modality
so restrictive as to make the formulation of any non-trivial theories impossible. In the other text, the worry seems to be that incorrect primary sentences might be deduced from correct primary
sentences by way of secondary sentences. On a sequent formulation of logic, this is the worry that a sequent A1, . . . , An ⇒ B,
with the Ai primary and correct and B primary and incorrect, might be deducible by pure classical logic, if secondary sentences are allowed to appear in the deduction. If only primary sentences are
allowed to appear in the deduction, there is nothing to worry about, since primary sentences are decidable, and not even intuitionists doubt the trustworthiness of classical logic as applied to
decidable sentences. As Dummett says, it would be a ‘‘severe defect’’ in the classical rules of implication if by means of them ‘‘we can construct a deductive chain leading from correct premises to
an incorrect conclusion’’ (1977, p. 364). He reminds us that even on a dualist theory there is an external standard against which the acceptability of rules of implication is to be judged. A rule is
acceptable only if it is sound, only if it preserves correctness in all instances where the notion of correctness is applicable, that is, in all instances where the premises and conclusion are all
primary sentences. (Thus even if the ‘‘meaning’’ of the logical particles is given by implication conditions, not just any old particles and conditions will do. This point has been illustrated by
Prior (1960).) Dummett does not claim that classical logic is, in this sense, demonstrably unsound (as is Prior’s ‘‘tonk’’ logic). What worries Dummett is that classical logic seems to be not
demonstrably sound. He desires a guarantee of soundness, or what would, as we have seen above, be sufficient for this, a guarantee of conservativeness, a guarantee that the addition of the secondary
sentences to the language does not permit the deduction of any sequents involving only primary sentences that were not deducible already (1977, pp. 363–4). Dummett seems to hold that such a
guarantee could only be provided by a semantic soundness or conservativeness proof, and that such a proof or ‘‘justification’’ will be available only if we revise the language and extend the
assignment-of-correctness-conditions or ‘‘interpretation’’ to ‘‘all statements or formulas with which we are concerned’’ (1977, p. 220). Dummett seems to overlook the possibility of a purely
syntactic proof of soundness or conservativeness. As Richard Grandy has pointed out in a perceptive review (Grandy 1982), just such a guarantee as Dummett seems to desire is provided by the famous
Cut-elimination Theorem of Gentzen, according
to which any sequent that has a deduction at all has a deduction in which no symbols occur that do not occur in that sequent already. Moreover, though Gentzen’s theorem is about classical logic, Gentzen’s
proof is given in intuitionistic metamathematics. Thus the threat that worries Dummett seems elusive, to say the least. In any case, the guarantee he desires would be intangible, on his own
admission. For it is precisely the theme of Dummett (1973b) that no ‘‘justification of deduction’’ or soundness proof can be ‘‘suasive,’’ that is, can persuade anyone sincerely in doubt as to the
soundness of the logic. For any such proof, being a proof, would itself use logic. In opposition to Kreisel, Dummett is concerned to argue for intuitionism not as one legitimate form of mathematics
among others, but as the sole legitimate form (1977, p. 360). Dummett is concerned to argue for a revision amounting not to a reform, but to a revolution, in mathematics. Any revolution involves
costs that the benefit of an intangible guarantee against an elusive threat of unsoundness seems insufficient to outweigh. It seems that its desirability as a means toward the end of guaranteeing
soundness is not a consideration sufficient to motivate the requirement of representationality. Dummett might, of course, rest his case against formalism on the desirability of representationality as
an end in itself. That he indeed values representationality highly for its own sake is suggested by his applying the uncalled-for emotive term ‘‘unintelligible’’ to sentences that he knows perfectly
well how to use, but that happen to lack meaning-in-the-narrow-sense-of-conditions-for-correctness-as-a-representation-of-reality. Dummett’s value judgment might, however, be questioned by many
mathematicians. It hardly needs saying that the requirement of representationality will be rejected by the many pure mathematicians who value mathematics as an art. From their point of view, there is
no sole legitimate form of mathematics. A mathematician may work now in intuitionistic, now in classical mathematics, just as a painter may work now in a representational, now in an abstract style.
Personal taste will dictate how much time is devoted to each, though it may be said that the overwhelming majority of mathematicians find more beauty in the classical than in the intuitionistic
style. What does perhaps need saying is that the requirement of representationality may also be questioned by the many applied mathematicians who value mathematics for its contribution, through
science, to the theoretical prediction and practical control of experience. From their point of view, it is essential that language should contain some sentences to serve as records
or predictions of experience, as representations of empirical reality. But beyond this it seems wisest to accept the advice of Carnap (1950) and be ‘‘tolerant in permitting linguistic forms.’’ It is
questionable whether the scope, accuracy, and efficiency of applications to the empirical world would be enhanced by imposing the restriction that all sentences must play a representational rather
than a merely instrumental role in language. Many physicists, mathematicians, logicians, and philosophers have suggested precisely the contrary: that intuitionistic restrictions on mathematics would
be detrimental to applications. Such views as the following are often voiced (Manin 1977, pp. 172–3): [C]onstructivism is in no sense ‘‘another mathematics’’. It is, rather, a sophisticated subsystem
of classical mathematics, which rejects the extremes in classical mathematics, and carefully nourishes its effective computational apparatus. Unfortunately, it seems that it is these ‘‘extremes’’ –
bold extrapolations, abstractions which are infinite and do not lend themselves to a constructive interpretation – which make classical mathematics effective. One should try to imagine how much help
mathematics could have provided twentieth century quantum physics if for the past hundred years it had developed using only abstractions from ‘‘constructive objects.’’
I do not pretend to be an expert in such matters, but there are several studies in the literature that seem to me to indicate that such complaints are not entirely without foundation. As one example,
there is an important series of papers by Pour-El and Richards (1979–87) establishing that much of the machinery of functional analysis deployed in quantum physics cannot be developed in its usual
form with recursive analysis. And experience shows that what can or cannot be done recursively is a usually reliable (though by no means infallible) guide to what can or cannot be done
intuitionistically. As another example, there is a paper of Douglas Bridges (1981), examining quantum physics from an intuitionistic viewpoint. Bridges is, to be sure, a follower of Bishop’s
intuitionism-without-choice-sequences rather than of Brouwer’s intuitionism-with-choice-sequences, but Bishop’s school has thus far been able to go further than Brouwer’s school in reconstructing
applicable portions of functional analysis. Bridges is obliged to concede that ‘‘a constructive examination of the mathematical foundations of quantum physics does reveal substantial problems.’’ It
is also worth mentioning that even if the indications cited are misleading and it turns out to be possible in principle to get by with intuitionistic functional analysis in applications to quantum
physics, getting by in this way is very likely to be infeasible in practice.
Table 14.1 Philosophies of mathematics

Philosophies of mathematics: PLATONISM (Hilbert)
Associated theories of meaning: VERISM (Quine)
Character of objection to theory of meaning: RADICAL DESCRIPTIVE
  ‘‘We do not and could not speak a language of the sort described by the theory’’ | ‘‘We ought not to speak a language of the sort described by the theory’’
Principle on which the objection is based: BEHAVIORISM
  ‘‘There can be nothing more to knowing the meaning of a sentence than being able to use it’’ | ‘‘Every sentence ought to serve as a representation of extra-linguistic reality, not a mere intra-linguistic instrument for deducing other sentences’’
Argument for principle:
  ‘‘If there were anything more to such knowledge, the extra ingredient could be neither acquired nor manifested’’ | ‘‘Representationality is desirable (a) as a means towards guaranteeing soundness, (b) as an end in itself’’
Comment:
  ‘‘The acquisition and manifestation arguments involve fallacies and circularities’’ | ‘‘(a) Soundness can be guaranteed without representationality; (b) representationality may conflict with the desirable end of applicability’’
Whether applications to the empirical world are of value is a question on which philosophers’ judgments vary over a wide spectrum. On the far right stands Plato, who regarded such applications as
evil. On the far left stand the Gang of Four, who regarded the development of mathematics for any purpose but such applications as evil. Among intuitionists, Brouwer was, in this respect, a
thoroughgoing Platonist, with an attitude not of passive indifference, but of active hostility towards applications (see van Stigt 1979). Weyl, however, seems to have been uneasy over his inability
to reconcile his philosophical attraction towards intuitionism with his scientific interest in applications. One need not be a Maoist to sympathize with this unease, and to be disturbed by an
argument for the claim that intuitionism is the sole legitimate form of mathematics in which any consideration of widely
held doubts as to the adequacy of intuitionism for applications is omitted. (The omission is the more surprising in an argument directed against Quine, since doubts as to adequacy for applications
have been central among Quine’s objections against other revisionist proposals such as those of the nominalists.) The omission suggests a tacit system of values so unworldly as to be irresponsible.
Dummett may perhaps be absolved personally from charges of irresponsibility and inquisitorial interference with science. For though he advances an argument for intuitionistic revisionism, he is
cautious enough to distance himself personally somewhat from that argument. He is not so bold as to claim that his conclusion ought to be accepted and put into practice. What he claims is that it is
an argument ‘‘of considerable power’’ (1973a, p. 226). In view of the gaps and weaknesses in the argument that I have tried to point out, even this more cautious claim might well be challenged.
Table 14.1 summarizes my interpretation of and commentary on Dummett’s case for intuitionism.
Annotated bibliography
‘‘Forcing’’ (Burgess 1977a) The continuum hypothesis (CH) states that the continuum, the cardinal of the set of real numbers, is equal to aleph-one, the least cardinal greater than that of the set of
natural numbers. The two great methods for proving the consistency and independence relative to the usual Zermelo–Fraenkel axioms of set theory are the method of inner models, exemplified by Gödel’s
constructible sets, through which he proved the consistency of CH, and the method of forcing, through which Paul Cohen proved the independence of CH. Burgess (1977a) is an exposition of forcing
intended to make the method available for use by non-specialists, by following Robert Solovay and reducing what needs to be understood about forcing in order to apply the method to three ‘‘axioms of
forcing’’ whose proof can be left to specialists.
‘‘Consistency proofs in model theory’’ (Burgess 1978a) Just as ordinary arithmetic, the theory of addition and multiplication, has an analogue in transfinite cardinal and ordinal arithmetics, so
ordinary combinatorics, the theory of permutations and combinations, has a transfinite analogue in combinatorial set theory. Generalized-quantifier logic is a family of extensions of first-order
logic that retains the same notion of model, but adds additional clauses to the definition of truth-in-a-model to cover such operators as ‘‘there exist infinitely many’’ or ‘‘there exist uncountably
many.’’ Hypotheses in combinatorial set theory turn out to have implications for generalized-quantifier logic. Ronald Jensen, through a deep analysis of the fine structure of Go¨del’s constructible
sets, proved the consistency of many combinatorial principles, and therewith of certain principles about generalized-quantifier logic implied by them. This paper shows how one of Jensen’s deepest
consistency results about generalized-quantifier logic can be obtained more easily using the method of forcing.
‘‘Descriptive set theory and infinitary languages’’ (Burgess 1977b)
Descriptive set theory is the branch concerned with definable sets of real numbers or linear points. Infinitary logic is a family of extensions of first-order logic that retains the same notion of
model, but adds additional clauses to the definition of truth-in-a-model to cover such operations as the conjunction and disjunction of infinite sets of formulas. Several workers, most notably Robert
Vaught, showed the applicability of some results in descriptive set theory to infinitary logic. This paper presents my contributions to Vaught’s project, the main one being as follows. Jon Barwise
developed a notion of ‘‘absoluteness’’ for logics, and characterized the logics absolute relative to the weak Kripke–Platek set theory as the sublogics of a certain well-known infinitary logic. This
paper characterizes the logics absolute relative to standard Zermelo–Fraenkel set theory, essentially as those whose sentences admit suitable ‘‘approximations’’ by sentences of that same infinitary logic.
‘‘Equivalence relations generated by families of Borel sets’’ (Burgess 1978b) ‘‘A reflection phenomenon in descriptive set theory’’ (Burgess 1979a) ‘‘Effective enumeration of classes in a Σ¹₁
equivalence relation’’ (Burgess 1979b) The special sets of real numbers (or pairs of real numbers) that are studied in descriptive set theory are called projective, and they are divided into several
classes of increasing complexity and diminishing tractability: Borel or Δ¹₁, analytic or Σ¹₁, coanalytic or Π¹₁, Δ¹₂, Σ¹₂, Π¹₂, and so on. A basic question of set theory is how many elements a set of
real numbers can have. For projective sets up to analytic it can be proved that they contain either countably many or perfectly many (a stronger condition implying but not implied by continuum many);
for higher projective sets the same result requires large cardinal axioms going beyond Zermelo–Frankel set theory. A related question is how many equivalence classes an equivalence relation that is
projective (considered as a set of pairs of real numbers) can have. Jack Silver proved that the answer is countably or perfectly many if the equivalence relation is coanalytic. It had long been known
that if the equivalence relation is analytic there is a third possibility: aleph-one many but not perfectly many. Harvey Friedman asked whether these are the only possibilities. The first two papers
listed above together supply the affirmative answer that was the main result of my doctoral dissertation, written under Silver’s direction, while the third offers a refinement. The first shows that
any analytic equivalence relation is the intersection of aleph-one Borel equivalence relations, and the second uses Silver’s theorem to show that the intersection of aleph-one Borel equivalence
relations has countably many, aleph-one
many, or perfectly many classes, a result later extended by Saharon Shelah to further projective equivalence relations using large-cardinal assumptions.
‘‘A selection theorem for group actions’’ (Burgess 1979c) ‘‘A measurable selection theorem’’ (Burgess 1980a) ‘‘Sélections mesurables pour relations d’équivalence à classes Gδ’’ (Burgess 1980b)
‘‘Careful choices: a last word on Borel selectors’’ (Burgess 1981d) ‘‘From preference to utility: a problem of descriptive set theory’’ (Burgess 1985) Given a function assigning to each real number x
a set of real numbers F(x), the axiom of choice (AC) guarantees that there exists a function defined for those x for which F(x) is non-empty, and assigning to each such x an element f (x) of F(x).
But AC does not guarantee that this f has any nice properties of the kind one studies in calculus, such as differentiability or integrability. Measurable selection theorems are results to the effect
that, assuming certain niceness conditions on F, there follows the existence of an f with certain corresponding niceness conditions. The first four papers in this group provide three examples,
differing in their niceness assumptions about F and niceness conclusions about f. Measurable selection theorems are used in what I have elsewhere called ‘‘the (hyper)theoretical fringes of subjects
whose core is applied.’’ The last paper in the group uses one to solve a problem raised by R. D. Mauldin in mathematical economics.
‘‘What are R-sets?’’ (Burgess 1982a) ‘‘Classical hierarchies from a modern standpoint, parts I & II’’ (Burgess 1983a and b) One of the main tools in modern descriptive set theory is the ‘‘game
quantifier’’ of Yannis Moschovakis (which is also, to allude back to the discussion of infinitary logic above, the basis for the most important logic that is Zermelo–Fraenkel but not Kripke–Platek
absolute). It is known that, in a sense that can be made precise, application of this quantifier to Borel sets yields Δ¹₂ sets. Borel sets themselves are divided into classes of increasing
complexity: closed or Π⁰₁ sets, open or Σ⁰₁ sets, Δ⁰₂ sets, Π⁰₂ sets, Σ⁰₂ sets, Δ⁰₃ sets, and so on. Moschovakis had already established that applying the game quantifier to closed and open sets yields
analytic and coanalytic sets, respectively. The first paper listed above is a semi-popular introduction to the pair of papers listed below it, which show that from Δ⁰₂ and Δ⁰₃ sets one obtains,
respectively, families of sets studied during the 1920s and 1930s under the names of C-sets and R-sets.
‘‘The truth is never simple’’ (Burgess 1986) ‘‘Addendum to ‘The truth is never simple’’’ (Burgess 1988)
The first paper is concerned with determining the place in the classifications of descriptive set theorists of the set of all (Gödel numbers of) truths of arithmetic according to various recent
theories of truth: four versions of Kripke’s view (with the Kleene three-valued or with van Fraassen supervaluation schemes for handling truth-value gaps, and with the minimal fixed point or with the
maximal intrinsic fixed point) and three versions of the revision view (Gupta’s, Herzberger’s, and Belnap’s). A complete set of answers is obtained except for one case that had to wait for the
second paper, and one other case that my work left open. In the course of working out the answer, examples are provided of sentences that are true on some of the views but not on the others; indeed,
examples of all possible combinations are given.
‘‘Sets and point-sets’’ (Burgess 1990) Perhaps the most surprising discoveries to emerge from foundational work in the last century are two. First, all the natural choices turn out to be, for reasons
for which we at present lack any clear explanation, linearly ordered in strength (for the cognoscenti, I mean in consistency strength), so that given any two choices, it always turns out that one is
stronger than the other, unless they turn out to be, despite superficial differences, of equal strength. Second, stronger and stronger axioms, though about objects of higher and higher level or type,
continue to have more and more implications even about objects of the very lowest level and type. What this latter cryptic assertion means is spelled out in the case of geometrical theories in this
semi-technical, largely expository paper. The aim is to explode the myths, which have gained some currency among philosophers not well-trained in logic, that ‘‘mathematics is conservative over
physics’’ and that ‘‘higher mathematics is conservative over lower mathematics.’’
‘‘A remark on Henkin sentences and their contraries’’ (Burgess 2003b) Jaakko Hintikka has for some time now been advocating what he has called IF (for ‘‘independence-friendly’’ or
‘‘information-friendly’’) logic. This logic is ‘‘non-classical,’’ not in the way that modal and tense and probability and conditional and intuitionistic logic are, requiring a different notion of
model from that used in first-order logic, but rather in the way that generalized-quantifier and infinitary logic are. The logic has been found interesting from a technical point of view even by many
who have not found Hintikka’s large philosophical claims for it convincing. The paper establishes a technical result to the effect that the logic admits no semantic operation of negation, a result
that it is hard to interpret as having any but negative implications about the philosophical claims for the logic, though this point is not strenuously argued in the paper.
‘‘Basic tense logic’’ (Burgess 1984a) Basic tense logic adds to classical sentential logic new one-place connectives P for ‘‘it was the case that’’ and F for ‘‘it will be the case that.’’ Soundness
and completeness theorems establish that different sets of axioms for P and F exactly correspond to different assumptions about the structure of time. (Is it linearly ordered? Does it have a first or
last moment? Are the moments of time densely ordered or discrete?) When I first read the statements of a group of such theorems, I worked out for myself proofs based on what has come to be called the
‘‘step-by-step’’ method, and erroneously concluded that this must be the method everyone was using. In fact, only a few were, while most were using Segerberg’s ‘‘bulldozing’’ and ‘‘unraveling’’
methods instead; but my adoption of the step-by-step method in this expository chapter helped popularize it, first among philosophical logicians, then among theoretical computer scientists.
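The kind of correspondence between tense-logical axioms and assumptions about the structure of time that these soundness and completeness theorems capture can be illustrated schematically. The following pairings are standard in the tense-logic literature and are offered only as a sketch; the particular axioms chosen are illustrative, not drawn from the chapter itself.

```latex
% Dual operators: G = \neg F \neg (``it will always be the case that''),
%                 H = \neg P \neg (``it has always been the case that'').
% Minimal interaction axioms of basic tense logic:
%   p \to GPp \qquad \text{(what is true will always have been true)}
%   p \to HFp \qquad \text{(what is true was always going to be true)}
% Sample axiom/frame correspondences over a time order (T, <):
%   FFp \to Fp \quad \text{valid iff } < \text{ is transitive}
%   Fp \to FFp \quad \text{valid iff } < \text{ is dense}
%   (Fp \land Fq) \to F(p \land q) \lor F(p \land Fq) \lor F(Fp \land q)
%              \quad \text{valid iff } < \text{ is linear toward the future}
```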
‘‘The unreal future’’ (Burgess 1979d) ‘‘Decidability and branching time’’ (Burgess 1980c) These two were originally a single item. Division was suggested by Krister Segerberg, regular editor of
Theoria and guest editor of Studia Logica. One of the first applications of modern tense logic envisioned by its founder, Arthur Prior, was to the analysis of traditional debates over future
contingents, and of the underlying picture of a time in which the one past behind us branches into many possible futures before us. As part of this analysis Prior distinguished two positions called,
after two historical figures, ‘‘Peircean’’ and ‘‘Ockhamist.’’ The former does not, while the latter does, consider it meaningful to speak, even now, of some unknown one from among the many possible
futures as being the future that will become actual. The first section of ‘‘The unreal future’’ offers a more formal account of Prior’s conception of the interaction between temporal and modal
operators than he himself gave, then gives an informal summary of the formal results in ‘‘Decidability and branching time,’’ which shows that the set of sentences valid in all Peircean models is
decidable, while for Ockhamism one must distinguish standard from non-standard models. The second section of the same paper was ostensibly a survey of conceptual issues that needed to be addressed
before moving beyond the sentential to a predicate logic of branching time, but was really my first attempt to come to grips with ‘‘Naming and necessity.’’ Underlying it was, in embryonic and
inchoate form, the thought that ‘‘metaphysical’’ modality might be demystified by tracing its source to our sortal classifications of the objects of our thought, and our conventions as to what does
and what does not count as another appearance of the same thing of a
given sort. The idea was not successfully worked out in the paper, and I have not as yet even today achieved a development of it that is wholly satisfactory even to myself, though the idea still
seems to me to be promising.
‘‘Axioms for tense logic’’ (Burgess 1982b) The more important first part of this two-part paper concerns the two-place ‘‘since’’ and ‘‘until’’ operators by which Hans Kamp enriched Prior’s tense
logic. The paper provides an axiomatization proved sound and complete by an extension of the step-by-step method.
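Kamp's two-place operators can be given truth clauses as follows. This is a sketch on one common convention; the order of arguments varies across the literature, and nothing here is taken from the paper itself.

```latex
% Over a model M with time order <, at a moment t:
%   M, t \models U(\varphi, \psi) \iff \exists t' > t:\;
%     M, t' \models \varphi \text{ and } M, s \models \psi
%     \text{ for all } s \text{ with } t < s < t'
%   M, t \models S(\varphi, \psi) \iff \exists t' < t:\;
%     M, t' \models \varphi \text{ and } M, s \models \psi
%     \text{ for all } s \text{ with } t' < s < t
% Prior's one-place operators become definable:
%   F\varphi := U(\varphi, \top), \qquad P\varphi := S(\varphi, \top),
% which is the sense in which ``since'' and ``until'' enrich the language.
```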
‘‘The decision problem for linear temporal logic’’ (Burgess and Gurevich 1985) Segerberg’s ‘‘filtration’’ method can be used to prove the decidability of the tense logics appropriate to many
different models of time, but not the model on which the moments of time are ordered like the real numbers. This paper presents two different proofs of the decidability of that logic, along quite
different lines. The first is mine. Gurevich, the referee for the paper in which I wrote up this method, suggested the second, using more advanced techniques and thereby providing some additional
information. We agreed it was best for him to become a co-author, so that both methods could be made available in a single paper.
‘‘Probability logic’’ (Burgess 1969) This paper provides a complete axiomatization and a decision procedure for a sentential modal logic enriched with an operator ‘‘it is probable that’’ in addition to
the operator ‘‘it is necessary that.’’ The axioms are those appropriate for a qualitative notion of probability, though results are also presented on quantitative notions.
‘‘Quick completeness proofs for some logics of conditionals’’ (Burgess 1981b) Robert Stalnaker and David Lewis independently developed similar but not identical views on counterfactual or subjunctive
conditionals, and there is a third variant as well. This paper provides complete axiomatizations and therewith proofs of decidability for a range of variants. The systems have since been found to
admit another interpretation, in terms of nonmonotonic logics of the kind pioneered by Kraus, Lehmann, and Magidor.
‘‘The completeness of intuitionistic propositional calculus for its intended interpretation’’ (Burgess 1981a) The question addressed here is whether Heyting’s axiomatization of intuitionistic logic
is complete in the sense that every sentential formula all of whose instances (obtained by substituting formulas of intuitionistic arithmetic or analysis for its sentential variables) are
intuitionistically correct is a thesis of the system. Such a question is not answered merely by a proof of completeness for some formal ‘‘semantics’’ on the order of topological models or Kripke
models. Georg Kreisel was able, using topological models as a starting point, to obtain an affirmative answer, making certain assumptions about intuitionistic analysis (the theory of ‘‘lawless’’
sequences). The paper shows how, on the same assumptions, to obtain an affirmative answer taking Kripke models as a starting point.
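For readers unfamiliar with Kripke models for intuitionistic logic, a standard two-node example (not taken from the paper) shows how such formal ‘‘semantics’’ refute classically valid formulas:

```latex
% Two nodes w_0 \le w_1, with the atom p forced only at w_1:
%   w_1 \Vdash p, \qquad w_0 \nVdash p.
% At w_0, \neg p is not forced either, since the later node w_1 forces p.
% Hence w_0 \nVdash p \lor \neg p, so excluded middle is not a thesis of
% Heyting's calculus: soundness for Kripke models rules it out.
```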
Ackermann, Diana [Ackermann, Felicia Nimue] (1978) ‘‘De re propositional attitudes toward integers,’’ Southwestern Journal of Philosophy vol. 9, pp. 145–53. Anderson, Alan Ross and Belnap, Nuel D.,
Jr. (1975) Entailment: The Logic of Relevance and Necessity vol. I (Princeton, NJ: Princeton University Press). Anderson, C. A. and Zelený, M. (2002) (eds.) Logic, Meaning, and Computation: Essays
in Memory of Alonzo Church (Dordrecht: Kluwer). Azzouni, Jody (2004) Deflating Existential Consequence: A Case for Nominalism (Oxford: Oxford University Press). Baire, René, Borel, Émile, Hadamard,
Jacques, and Lebesgue, Henri (1905) ‘‘Cinq lettres sur la théorie des ensembles,’’ Bulletin de la Société Mathématique de France vol. 33, pp. 261–73. Balaguer, Mark (1998) Platonism and
Anti-Platonism in Mathematics (Oxford: Oxford University Press). Barcan, Ruth C. [Marcus, Ruth Barcan] (1946) ‘‘A functional calculus of first order based on strict implication,’’ Journal of Symbolic
Logic vol. 11, pp. 1–16. (1947) ‘‘Identity of individuals in a strict functional calculus of second order,’’ Journal of Symbolic Logic vol. 12, pp. 12–15. Bar-Hillel, Y., Poznanski, E. I. J., Rabin,
M. O., and Robinson, A. (1961) (eds.) Essays on the Foundations of Mathematics (Jerusalem: Magnes Press). Barwise, K. Jon (1977) (ed.) Handbook of Mathematical Logic (Amsterdam: North Holland). Belnap,
Nuel D., Jr. and Green, Mitchell (1994) ‘‘The thin red line,’’ in Tomberlin (1994), pp. 365–88. Benacerraf, Paul and Putnam, Hilary (1964) Philosophy of Mathematics: Selected Readings (Englewood
Cliffs, NJ: Prentice-Hall). (1983) Philosophy of Mathematics: Selected Readings, 2nd edn (Cambridge: Cambridge University Press). Bernays, Paul (1961) ‘‘Zur Frage der Unendlichkeitsschemata in der
axiomatischen Mengenlehre,’’ in Bar-Hillel et al. (1961), pp. 3–49. (1976) ‘‘On the problem of schemata of infinity in axiomatic set theory,’’ English translation of Bernays (1961) by J. Bell and M.
Plänitz, in Müller (1976), pp. 121–72.
Birkhoff, Garrett (1937) ‘‘Rings of sets,’’ Duke Mathematical Journal vol. 3, pp. 443–54. Bochenski, I., Church, A., and Goodman, N. (1956) The Problem of Universals: A Symposium (Notre Dame, IN:
Notre Dame University Press). Boolos, George (1984) ‘‘To be is to be the value of a variable (or to be some values of some variables),’’ Journal of Philosophy vol. 81, pp. 430–39, reprinted in
Boolos (1997a), pp. 54–72. (1985) ‘‘Nominalist Platonism,’’ Philosophical Review vol. 94, pp. 327–44, reprinted in Boolos (1997a), pp. 73–87. (1987) ‘‘The consistency of Frege’s Foundations of
Arithmetic,’’ in Thomson (1987), pp. 3–20; reprinted in Demopoulos (1995), pp. 211–33, and in Boolos (1997a), pp. 182–201. (1993) The Logic of Provability (Cambridge: Cambridge University Press).
(1997a) Logic, Logic, and Logic (Cambridge, MA: Harvard University Press). (1997b) ‘‘Must we believe in set theory?’’ in Boolos (1997a), pp. 120–32. Borges, Jorge Luis (1962) ‘‘Tlön, Uqbar, Orbis
Tertius,’’ translated from the Spanish by Alastair Reid, in A. Kerrison (ed.) Ficciones (New York: Grove Press). Bridges, Douglas (1981) ‘‘Towards a constructive foundation for quantum mechanics,’’
in Richman (1981), pp. 260–73. Brouwer, L. E. J. (1975) Collected Works, vol. I: Philosophy and Foundations of Mathematics (Amsterdam: North Holland). (1976) Collected Works, vol. II: Geometry,
Analysis, Topology and Mechanics (Amsterdam: North Holland). Bull, R. A. and Segerberg, Krister (1984) ‘‘Basic modal logic,’’ in Gabbay and Guenthner (1984), pp. 1–88. Burgess, John P. (1969)
‘‘Probability logic,’’ Journal of Symbolic Logic vol. 34, pp. 264–74. (1977a) ‘‘Forcing’’ in Barwise (1977), pp. 403–52. (1977b) ‘‘Descriptive set theory and infinitary languages,’’ in Proceedings of
the 1977 Belgrade Symposium on Set Theory and Foundations of Mathematics, Mathematical Institute, Belgrade, pp. 9–30. (1978a) ‘‘Consistency proofs in model theory: a contribution to Jensenlehre,’’
Annals of Mathematical Logic vol. 14, pp. 1–12. (1978b) ‘‘Equivalence relations generated by families of Borel sets,’’ American Mathematical Society Proceedings vol. 69, pp. 323–6. (1979a) ‘‘A
reflection phenomenon in descriptive set theory,’’ Fundamenta Mathematicae vol. 104, pp. 127–39. (1979b) ‘‘Effective enumeration of classes in a Π¹₁ equivalence relation,’’ Indiana University Mathematics Journal vol. 28, pp. 353–64. (1979c) ‘‘A selection theorem for group actions,’’ Pacific Journal of Mathematics vol. 80, pp. 333–6. (1979d) ‘‘The unreal future,’’ Theoria vol. 44, pp.
157–79. (1980a) ‘‘A measurable selection theorem,’’ Fundamenta Mathematicae vol. 100, pp. 91–100.
(1980b) ‘‘Sélections mesurables pour relations d’équivalence à classes Gδ,’’ Bulletin des Sciences Mathématiques vol. 104, pp. 435–40. (1980c) ‘‘Decidability and branching time,’’ in K. Segerberg
(ed.) Trends in Modal Logic, Studia Logica vol. 39, pp. 203–18. (1981a) ‘‘The completeness of intuitionistic propositional calculus for its intended interpretation,’’ Notre Dame Journal of Formal
Logic vol. 22, pp. 17–28. (1981b) ‘‘Quick completeness proofs for some logics of conditionals,’’ Notre Dame Journal of Formal Logic vol. 22, pp. 76–84. (1981c) ‘‘Relevance: a fallacy?,’’ Notre Dame
Journal of Formal Logic vol. 22, pp. 97–104. (1981d) ‘‘Careful choices: a last word on Borel selectors,’’ Notre Dame Journal of Formal Logic vol. 22, pp. 219–26. (1982a) ‘‘What are R-sets?’’ in
Metakides (1982), pp. 307–24. (1982b) ‘‘Axioms for tense logic, I. Since and until,’’ Notre Dame Journal of Formal Logic vol. 23, pp. 367–74. (1983a) ‘‘Classical hierarchies from a modern standpoint,
I. C-sets,’’ Fundamenta Mathematicae vol. 115, pp. 81–96. (1983b) ‘‘Classical hierarchies from a modern standpoint, II. R-sets,’’ Fundamenta Mathematicae vol. 115, pp. 97–105. (1983c) ‘‘Common sense
and ‘relevance,’’’ Notre Dame Journal of Formal Logic vol. 24, pp. 41–53. (1983d) ‘‘Why I am not a nominalist,’’ Notre Dame Journal of Formal Logic vol. 24, pp. 93–105. (1984a) ‘‘Basic tense logic,’’
in Gabbay and Guenthner (1984), pp. 89–134. (1984b) ‘‘Read on relevance: a rejoinder,’’ Notre Dame Journal of Formal Logic vol. 25, pp. 217–23. (1984c) ‘‘Dummett’s case for intuitionism,’’ History
and Philosophy of Logic vol. 5, pp. 177–94. (1985) ‘‘From preference to utility: a problem of descriptive set theory,’’ Notre Dame Journal of Formal Logic vol. 26, pp. 106–14. (1986) ‘‘The truth is
never simple,’’ Journal of Symbolic Logic vol. 51, pp. 663–81. (1988) ‘‘Addendum to ‘The truth is never simple,’’’ Journal of Symbolic Logic vol. 53, pp. 390–2. (1989) ‘‘Epistemology and
nominalism,’’ in Irvine (1989), pp. 1–15. (1990) ‘‘Sets and point-sets,’’ in Fine and Lepin (1990), pp. 456–63. (1992) ‘‘Proofs about proofs: a defense of classical logic, I,’’ in Detlefsen (1992),
pp. 79–82. (1993) ‘‘How foundational work in mathematics can be relevant to philosophy of science,’’ in Hull et al. (1993), pp. 433–41. (1995) ‘‘Frege and arbitrary functions.’’ in Demopoulos (1995),
pp. 89–107. (1996) ‘‘Marcus, Kripke, and names,’’ Philosophical Studies vol. 84, pp. 1–47, reprinted in Humphreys and Fetzer (1998), pp. 89–124. (1998a) ‘‘How not to write history of philosophy,’’ in
Humphreys and Fetzer (1998), pp. 125–36.
(1998b) ‘‘Occam’s razor and scientific method,’’ in Schirn (1998), pp. 195–214. (1998c) ‘‘Quinus ab omni naevo vindicatus,’’ in A. A. Kazmi (ed.) Meaning and Reference: Canadian Journal of Philosophy
Supplement vol. 23, pp. 25–65. (1999) ‘‘Which modal logic is the right one?’’ Notre Dame Journal of Formal Logic vol. 40, pp. 81–93. (2001) Review of Balaguer (1998), Philosophical Review, vol. 101,
pp. 79–82. (2002a) ‘‘Nominalist paraphrase and ontological commitment,’’ in Anderson and Zelëny (2002), pp. 429–44. (2002b) ‘‘Is there a problem about deflationary theories of truth?’’ in Horsten
and Halbach (2002), pp. 37–56. (2003a) ‘‘Numbers and ideas,’’ Richmond Journal of Philosophy vol. 1, pp. 12–17. (2003b) ‘‘A remark on Henkin sentences and their contraries,’’ Notre Dame Journal of
Formal Logic vol. 44, pp. 185–8. (2004a) ‘‘Quine, analyticity, and philosophy of mathematics,’’ Philosophical Quarterly vol. 54, pp. 38–55. (2004b) ‘‘Mathematics and Bleak House,’’ Philosophia
Mathematica vol. 12, pp. 18–36. (2004c) ‘‘E pluribus unum: plural logic and set theory,’’ Philosophia Mathematica vol. 12, pp. 193–221. (2004d) review of Azzouni (2004) Bulletin of Symbolic Logic
vol. 10, pp. 573–7. (2005a) ‘‘No requirement of relevance,’’ in Shapiro (2005), pp. 727–50. (2005b) Fixing Frege (Princeton, NJ: Princeton University Press). (2005c) ‘‘Translating names,’’ Analysis
vol. 65, pp. 196–204. (2005d) ‘‘Being explained away,’’ Harvard Review of Philosophy vol. 13, pp. 41–56. (2005e) ‘‘On anti-anti-realism,’’ Facta Philosophica vol. 7, pp. 121–44. (forthcoming)
‘‘Protocol sentences for lite logicism,’’ in Lindström (forthcoming). Burgess, John P. and Gurevich, Yuri (1985) ‘‘The decision problem for linear temporal logic,’’ Notre Dame Journal of Formal
Logic vol. 26, pp. 115–28. Burgess, John P. and Hazen, A. P. (1998) ‘‘Arithmetic and predicative logic,’’ Notre Dame Journal of Formal Logic vol. 39, pp. 1–17. Burgess, John P. and Rosen, Gideon
(1997) A Subject With No Object: Strategies for Nominalistic Interpretation of Mathematics (Oxford: Oxford University Press). Cantor, Georg (1885) review of Frege (1884), Deutsche Literaturzeitung,
vol. 6, pp. 728–9. Carnap, Rudolf (1946) ‘‘Modalities and quantification,’’ Journal of Symbolic Logic vol. 11, pp. 33–64. (1947) Meaning and Necessity: A Study in Semantics and Modal Logic (Chicago:
University of Chicago Press). (1950) ‘‘Empiricism, semantics, and ontology,’’ Revue Internationale de Philosophie vol. 4, pp. 20–40. Chihara, Charles (1973) Ontology and the Vicious Circle Principle
(Ithaca, NY: Cornell University Press).
(1989) ‘‘Tharp’s ‘Myth and Mathematics,’’’ Synthese vol. 81, pp. 153–65. (1990) Constructibility and Mathematical Existence (Oxford: Oxford University Press). Chomsky, Noam (1959) review of Skinner
(1957) Language vol. 35, pp. 26–58. Church, Alonzo (1950) review of Fitch (1949) Journal of Symbolic Logic vol. 15, p. 63. Cocchiarella, Nino (1984) ‘‘Philosophical perspectives on quantification in
tense and modal logic,’’ in Gabbay and Guenthner (1984), pp. 309–53. Cohen, R. S. and Wartofsky, M. W. (1965) (eds.) Boston Studies in the Philosophy of Science, vol. II (New York: Humanities Press).
Copeland, B. J. (1979) ‘‘When is a semantics not a semantics: some reasons for disliking the Routley–Meyer semantics for relevance logic,’’ Journal of Philosophical Logic vol. 8, pp. 399–413.
Creswell, Max (1990) Entities and Indices (Dordrecht: Kluwer). Davidson, Donald (1967) ‘‘Truth and meaning,’’ Synthese vol. 17, pp. 304–23. Davidson, Donald and Harman, Gilbert (1972) (eds.)
Semantics of Natural Language (Dordrecht: Reidel). Davidson, Donald and Hintikka, Jaakko (1969) (eds.) Words and Objections: Essays on the Work of W. V. Quine (Dordrecht: Reidel). Demopoulos, William
(1995) (ed.) Frege’s Philosophy of Mathematics (Cambridge, MA: Harvard University Press). Detlefsen, Michael (1992) (ed.) Proof, Logic and Formalization (London: Routledge). Diogenes Laertius (1925)
Lives and Opinions of Eminent Philosophers, translated from the Greek by R. D. Hicks, Loeb Classical Library (Cambridge, MA: Harvard University Press). Dummett, Michael (1959) ‘‘Truth,’’ in Dummett
(1978), pp. 1–24. (1973a) ‘‘The philosophical basis of intuitionistic logic,’’ in Dummett (1978), pp. 215–47. (1973b) ‘‘The justification of deduction,’’ in Dummett (1978), pp. 290–318. (1973c) ‘‘The
significance of Quine’s indeterminacy thesis,’’ in Dummett (1978), pp. 375–419. (1977) Elements of Intuitionism (Oxford: Oxford University Press). (1978) Truth and Other Enigmas (Cambridge, MA:
Harvard University Press). Edgington, Dorothy (1985) ‘‘The paradox of knowability,’’ Mind vol. 94, pp. 557–68. Evans, Gareth and McDowell, John (1976) (eds.) Truth and Meaning: Essays in Semantics
(Oxford: Oxford University Press). Farber, M. (1950) (ed.) Philosophic Thought in France and the United States (Buffalo, NY: University of Buffalo Press). Feferman, Solomon (1977) ‘‘Theories of
finite type related to mathematical practice,’’ in Barwise (1977), pp. 913–72. Field, Hartry H. (1980) Science Without Numbers: A Defense of Nominalism (Princeton, NJ: Princeton University Press).
(1989) Realism, Mathematics and Modality (Oxford: Basil Blackwell).
Fine, A. and Lepin, J. (1990) PSA 88 [Proceedings of the 1988 Convention of the Philosophy of Science Association], vol. II (East Lansing, MI: Philosophy of Science Association). Fine, Kit (2002) The
Limits of Abstraction (Oxford: Oxford University Press). Fitch, Frederic (1949) ‘‘The problem of the morning star and the evening star,’’ Philosophy of Science vol. 16, pp. 137–41. (1950) ‘‘Attribute
and class,’’ in Farber (1950), pp. 640–7. (1963) ‘‘A logical analysis of some value concepts,’’ Journal of Symbolic Logic vol. 28, pp. 135–42. Føllesdal, Dagfinn (1961) ‘‘Referential opacity and
modal logic,’’ Harvard University doctoral dissertation, reprinted as Føllesdal (1966). (1965) ‘‘Quantification into causal contexts,’’ in Cohen and Wartofsky (1965), pp. 263–74; reprinted in Linsky
(1971a), pp. 52–62. (1966) Referential Opacity and Modal Logic, Filosofiske Problemer, vol. XXXII (Oslo: Oslo Universitetsforlaget). (1969) ‘‘Quine on modality,’’ in Davidson and Hintikka (1969), pp.
175–85. (1986) ‘‘Essentialism and reference,’’ in Hahn and Schlipp (1986), pp. 97–113. Forbes, Graeme (1995) review of Marcus (1993), Notre Dame Journal of Formal Logic vol. 36, pp. 336–9. Frege,
Gottlob (1879) Begriffsschrift: eine der arithmetischen nachgebildete Formelsprache des reinen Denkens (Halle: Louis Nebert). (1884) Die Grundlagen der Arithmetik: Eine logisch-mathematische
Untersuchung über den Begriff der Zahl (Breslau: Wilhelm Koebner). (1893/1903) Grundgesetze der Arithmetik, begriffsschriftlich abgeleitet, 2 vols. (Jena: Pohle). (1950) The Foundations of
Arithmetic, translation of Frege (1884) from the German by J. L. Austin (London: Blackwell). (1967) Begriffsschrift, translation from the German of Frege (1879) by S. Bauer-Mengelberg, in van
Heijenoort (1967), pp. 1–82. French, Peter A. and Wettstein, Howard K. (2001) (eds.) Midwest Studies in Philosophy XXV: Figurative Language (London: Blackwell). Gabbay, D. and Guenthner, F. (1984)
(eds.) Handbook of Philosophical Logic, vol. II: Extensions of Classical Logic (Dordrecht: Reidel). Gabbay, D., Rahman, S., Symons, J., and van Bendegen, J. P. (2004) (eds.) Logic, Epistemology, and
the Unity of Science (Dordrecht: Kluwer). Garson, James (1984) ‘‘Quantification in modal logic,’’ in Gabbay and Guenthner (1984), pp. 249–308. Goodman, Nelson (1956) ‘‘A world of individuals,’’ in
Bochenski et al. (1956), pp. 197–210; reprinted in Benacerraf and Putnam (1964). Goodman, Nelson and Quine, W. V. O. (1947) ‘‘Steps toward a constructive nominalism,’’ Journal of Symbolic Logic vol.
12, pp. 97–122. Goranko, Valentin (1994) ‘‘Refutation systems in modal logic,’’ Studia Logica vol. 53, pp. 299–324. Grandy, Richard (1982) review of Dummett (1973a), Prawitz (1977), etc. Journal of
Symbolic Logic vol. 47, pp. 689–94.
Grice, H. P. and Strawson, P. F. (1956) ‘‘In defense of a dogma,’’ Philosophical Review vol. 65, pp. 141–58. Grzegorczyk, Andrzej (1967) ‘‘Some relational systems and the associated topological
spaces,’’ Fundamenta Mathematicae vol. 60, pp. 223–31. Haack, Susan (1974) Deviant Logic (Cambridge: Cambridge University Press). Hahn, L. E. and Schlipp, P. A. (1986) The Philosophy of W. V. Quine
(LaSalle, IL: Open Court). Hájek, Petr and Pudlák, Pavel (1998) Metamathematics of First-Order Arithmetic (Berlin: Springer). Haldén, Søren (1963) ‘‘A pragmatic approach to modal theory,’’ Acta
Philosophica Fennica vol. 16, pp. 53–64. Hale, Bob and Wright, Crispin (2001) The Reason’s Proper Study: Essays towards a Neo-Fregean Philosophy of Mathematics (Oxford: Oxford University Press).
Hallett, Michael (1984) Cantorian Set Theory and Limitation of Size (Oxford: Clarendon Press). Harman, Gilbert (1982) ‘‘Conceptual role semantics,’’ Notre Dame Journal of Formal Logic vol. 23, pp.
242–56. Heck, Richard G., Jr. (1996) ‘‘On the consistency of predicative fragments of Frege’s Grundgesetze der Arithmetik,’’ History and Philosophy of Logic vol. 17, pp. 209–20. Heijenoort, Jean van
(1967) (ed.) From Frege to Gödel: A Sourcebook in Mathematical Logic, 1879–1931 (Cambridge, MA: Harvard University Press). Hempel, Carl G. (1950) ‘‘Problems and changes in the empiricist criterion
of meaning,’’ Revue Internationale de Philosophie vol. 4, pp. 41–63. Hersh, Reuben (1997) What is Mathematics, Really? (Oxford: Oxford University Press). Hilbert, David (1925/1967) ‘‘On the
infinite,’’ translated from the German by Stefan Bauer-Mengelberg, in van Heijenoort (1967), pp. 367–92. Hintikka, Jaakko (1963) ‘‘Modes of modality,’’ Acta Philosophica Fennica vol. 16, pp. 65–82.
(1982) ‘‘Is alethic modal logic possible?’’ Acta Philosophica Fennica vol. 35, pp. 89–105. Hintikka, Jaakko and Sandu, Gabriel (1995) ‘‘The fallacies of the new theory of reference,’’ Synthese vol.
104, pp. 245–83. Hofweber, Thomas and Everett, Anthony (2000) (eds.) Empty Names, Fiction and the Puzzles of Non-Existence (Chicago: CSLI). Horsten, Leon and Halbach, Volker (2002) (eds.) Principles
of Truth (Frankfurt: Ha¨nsel-Hohenhausen). Hrbacek, K. and Jech, T. (1999) Introduction to Set Theory, 3rd edn (New York: Marcel Dekker). Hughes, G. E. and Creswell, M. J. (1968) An Introduction to
Modal Logic (London: Methuen). Hull, D., Forbes, M., and Okruhlik, K. (1993) (eds.) PSA 92 [Proceedings of the 1992 Convention of the Philosophy of Science Association], vol. II (East Lansing, MI:
Philosophy of Science Association).
Humphreys, P. and Fetzer, J. (1998) (eds.) The New Theory of Reference, Synthese Library vol. CCLXX (Dordrecht: Kluwer). Irvine, Andrew (1989) (ed.) Physicalism in Mathematics (Dordrecht: Kluwer).
James, William (2000) ‘‘Pragmatism,’’ in Pragmatism and Other Writings, ed. G. Gunn (New York: Penguin), pp. 1–132. Jeffrey, Richard C. (1996) ‘‘Logicism 2000,’’ in Stich and Morton (2002), pp.
1–132. (2002) ‘‘Logicism lite,’’ Philosophy of Science vol. 69, pp. 447–51. Kahle, Reinhard (forthcoming) (ed.) Intensionality: An Interdisciplinary Discussion (Boston: A. K. Peters, Lecture Notes in
Logic). Katz, Jerrold (1985) (ed.) The Philosophy of Linguistics (Oxford: Oxford University Press). Kitcher, Philip (1978) ‘‘The plight of the platonist,’’ Noûs, vol. 12, pp. 119–36. Klibansky,
Raymond (1968) Contemporary Philosophy, 4 vols. (Florence: Editrice Nuova Italia). Kneale, William and Kneale, Mary (1962) The Development of Logic (Oxford: Clarendon Press). König, Julius (1905) ‘‘Über die Grundlagen der Mengenlehre und das Kontinuumproblem,’’ Mathematische Annalen vol. 61, pp. 156–60. (1967) ‘‘On the foundations of set theory and the continuum problem,’’ translation of König (1905) from the German by Stefan Bauer-Mengelberg, in van Heijenoort (1967), pp. 145–9. Krantz, D., Luce, R., Suppes, P., and Tversky, A. (1971) Foundations of Measurement (New York: Academic
Press). Kreisel, Georg (1967) ‘‘Informal rigour and completeness proofs,’’ in Lakatos (1967), pp. 138–57. Kripke, Saul (1963) ‘‘Semantical considerations on modal logic,’’ Acta Philosophica Fennica
vol. 16, pp. 83–94. (1972) ‘‘Naming and necessity: Lectures given to the Princeton University Philosophy Colloquium, January, 1970,’’ in Davidson and Harman (1972), pp. 253–355 and 763–9; reprinted
with a new preface as Kripke (1980). (1976) ‘‘Is there a problem about substitutional quantification?’’ in Evans and McDowell (1976), pp. 325–420. (1979) ‘‘A puzzle about belief,’’ in Margalit
(1977), pp. 239–83. (1980) Naming and Necessity (Cambridge, MA: Harvard University Press). (1982) Wittgenstein on Rules and Private Language (Cambridge, MA: Harvard University Press). Lavine,
Shaughan (1995) review of Marcus (1993), British Journal for the Philosophy of Science vol. 46, pp. 267–74. Lakatos, Imre (1967) (ed.) Proceedings of the International Colloquium in the Philosophy of
Science, London, 1965, vol. I (Amsterdam: North Holland). Lee, O. H. (1936) Philosophical Essays for A. N. Whitehead (New York: Longmans). Lewis, David K. (1969) Convention (Cambridge, MA: Harvard
University Press). (1970) ‘‘Anselm and actuality,’’ Noûs vol. 4, pp. 175–88. (1991) Parts of Classes (Oxford: Oxford University Press).
Linsky, Leonard (1971a) (ed.) Reference and Modality (Oxford: Oxford University Press). (1971b) ‘‘Essentialism, reference, and modality,’’ in Linsky (1971a), pp. 88–100. (1977) Names and
Descriptions (Chicago: University of Chicago Press). Lindström, Sten (forthcoming) (ed.) Logicism, Intuitionism, Formalism. What Has Become of Them? (Berlin: Springer). Maddy, Penelope (1980)
‘‘Perception and mathematical intuition,’’ Philosophical Review vol. 89, pp. 163–96. (1984) ‘‘Mathematical epistemology: what is the question?’’ The Monist vol. 67, pp. 46–55. (1990) ‘‘Mathematics
and Oliver Twist,’’ Pacific Philosophical Quarterly vol. 71, pp. 189–205. Makinson, David (1966) ‘‘How meaningful are modal operators?’’ Australasian Journal of Philosophy vol. 44, pp. 331–7. Manin,
Yuri (1977) A Course in Mathematical Logic, translated from the Russian by N. Koblitz (Berlin: Springer). Marcus, Ruth Barcan (1960) ‘‘Extensionality,’’ Mind vol. 69, pp. 55–62. (1963a) ‘‘Modalities
and intensional languages,’’ in Wartofsky (1963), pp. 77–96. (1963b) ‘‘Attribute and class in extended modal systems,’’ Acta Philosophica Fennica vol. 16, pp. 123–36. (1967) ‘‘Essentialism in modal logic,’’ Noûs vol. 1, pp. 90–6. (1968) ‘‘Modal logic,’’ in Klibansky (1968), pp. 87–101. (1978) Review of Linsky (1977), Philosophical Review vol. 87, pp. 497–504. (1990) ‘‘Some revisionary
proposals about belief and believing,’’ Philosophy and Phenomenological Research vol. 50 (Supplement), pp. 133–53. (1993) Modalities: Philosophical Essays (Oxford: Oxford University Press). Marcus,
R. B., Quine, W. V., Kripke, S. A. et al. (1963) Discussion of Marcus (1963a) in Wartofsky (1963), pp. 105–16. Margalit, Avishai (1977) Meaning and Use (Dordrecht: Reidel). Martinich, A. P. (1979)
The Philosophy of Language (Oxford: Oxford University Press). Matiyasevich, Yuri (1993) Hilbert’s Tenth Problem (Cambridge, MA: MIT Press). McKinsey, J. C. C. (1941) ‘‘A solution to the decision
problem for the Lewis systems S2 and S4, with an application to topology,’’ Journal of Symbolic Logic vol. 6, pp. 117–34. (1945) ‘‘On the syntactical construction of modal logic,’’ Journal of
Symbolic Logic vol. 10, pp. 83–96. Metakides, George (1982) (ed.) Proceedings of the First Patras Logic Symposion (Amsterdam: North Holland). Montagna, Franco and Mancini, Antonella (1994) ‘‘A
minimal predicative set theory,’’ Notre Dame Journal of Formal Logic vol. 35, pp. 186–203. Müller, Gert-Heinz (1976) Sets and Classes (Amsterdam: North Holland). Nabokov, Vladimir (1980) Lectures on
Literature, ed. F. Bowers (New York: Harcourt Brace Jovanovich).
Newman, J. R. (1956) (ed.) The World of Mathematics, 4 vols. (New York: Simon and Schuster). Parsons, Charles (1980) ‘‘Mathematical intuition,’’ Proceedings of the Aristotelian Society, vol. 80, pp.
145–68. Parsons, Terence (1969) ‘‘Essentialism and quantified modal logic,’’ Philosophical Review vol. 78, pp. 35–52; reprinted in Linsky (1971a), pp. 73–87. (1987) ‘‘On the consistency of the
first-order portion of Frege’s logical system,’’ Notre Dame Journal of Formal Logic vol. 28, pp. 61–8; reprinted in Demopoulos (1995), pp. 422–31. Pollard, Stephen (1996) ‘‘Sets, wholes, and limited
pluralities,’’ Philosophia Mathematica, vol. 4, pp. 42–58. Pour-El, M. and Richards, I. (1979) ‘‘A computable ordinary differential equation which possesses no computable solution,’’ Annals of
Mathematical Logic vol. 17, pp. 61–90. (1981) ‘‘A wave equation with computable initial data such that its unique solution is not computable,’’ Advances in Mathematics vol. 39, pp. 215–39. (1983)
‘‘Noncomputability in analysis and physics: a complete determination of the class of noncomputable linear operators,’’ Advances in Mathematics vol. 48, pp. 44–74. (1987) ‘‘The eigenvalues of an
effectively determined self-adjoint operator are computable, but the sequence of eigenvalues is not,’’ Advances in Mathematics vol. 63, pp. 1–41. Prawitz, Dag (1977) ‘‘Meaning and proofs: on the
conflict between classical and intuitionistic logic,’’ Theoria vol. 43, pp. 2–40. Prior, Arthur N. (1960) ‘‘The runabout inference-ticket,’’ Analysis vol. 21, pp. 38–9. (1963) ‘‘Is the concept of
referential opacity really necessary?’’ Acta Philosophica Fennica vol. 16, pp. 189–99. (1967a) ‘‘Logic, modal,’’ in Weiss (1967), vol. V, pp. 5–12. (1967b) Past, Present, and Future (Oxford:
Clarendon Press). Putnam, Hilary (1971) Philosophy of Logic (New York: Harper). (1975) ‘‘Truth and necessity in mathematics,’’ in Mathematics, Matter, and Method (Cambridge: Cambridge University
Press), pp. 1–11. Quine, W. V. O. (1936) ‘‘Truth by convention,’’ in Lee (1936), pp. 90–124. (1946) Review of Barcan (1946), Journal of Symbolic Logic vol. 11, pp. 96–7. (1947a) ‘‘The problem of
interpreting modal logic,’’ Journal of Symbolic Logic vol. 12, pp. 43–8. (1947b) Review of Barcan (1947), Journal of Symbolic Logic vol. 12, pp. 95–6. (1951a) ‘‘Carnap’s views on ontology,’’
Philosophical Studies vol. 2, pp. 65–72. (1951b) ‘‘Two dogmas of empiricism,’’ Philosophical Review vol. 60, pp. 20–43. (1953) From a Logical Point of View (Cambridge, MA: Harvard University Press).
(1960) Word and Object (New York: John Wiley and Sons). (1961) From a Logical Point of View, 2nd edn (Cambridge, MA: Harvard University Press). (1963) Comments on Marcus (1963a) in Wartofsky (1963),
pp. 97–104.
(1969) ‘‘Reply to Sellars,’’ in Davidson and Hintikka (1969), p. 338. (1970) Philosophy of Logic (Englewood Cliffs, NJ: Prentice-Hall). (1980) From a Logical Point of View, 3rd edn (Cambridge, MA:
Harvard University Press). (1981) ‘‘Response to David Armstrong,’’ in Theories and Things (Cambridge: Harvard University Press), pp. 182–4. Ramsey, Frank Plumpton (1925) ‘‘The foundations of
mathematics,’’ Proceedings of the London Mathematical Society vol. 25, pp. 338–84. Rayo, Agustín and Uzquiano, Gabriel (1999) ‘‘Towards a theory of second-order consequence,’’ Notre Dame Journal of
Formal Logic vol. 40, pp. 315–25. Rescher, Nicholas (1968) (ed.) Studies in Logical Theory (Oxford: Basil Blackwell). Richman, F. (1981) (ed.) Constructive Mathematics [Springer Lecture Notes in
Mathematics 873] (Berlin: Springer). (1975) Logic Colloquium ’73 (Amsterdam: North Holland). Rosen, Gideon and Burgess, John P. (2005) ‘‘Nominalism reconsidered,’’ in Shapiro (2005), pp. 460–82.
Rückert, Helge (2004) ‘‘A solution to Fitch’s paradox of knowability,’’ in Gabbay et al. (2004), pp. 351–80. Russell, Bertrand (1902/1967) letter to Frege, translated from the German by Beverly
Woodward, in van Heijenoort (1967), pp. 124–5. (1985) The Philosophy of Logical Atomism, ed. David Pears (LaSalle, IL: Open Court). Salerno, J. (2008) New Essays on the Knowability Paradox (Oxford:
Oxford University Press). Salmon, Nathan (1986) Frege’s Puzzle (Cambridge, MA: MIT Press). Schindler, Ralf-Dieter (1994) ‘‘A dilemma in the philosophy of set theory’’, Notre Dame Journal of Formal
Logic vol. 35, pp. 458–63. Schirn, Matthias (1998) (ed.) Philosophy of Mathematics Today (Oxford: Oxford University Press). Scroggs, Schiller Joe (1951) ‘‘Extensions of the Lewis system S5,’’ Journal
of Symbolic Logic vol. 16, pp. 112–20. Searle, John R. (1967) ‘‘Proper names and descriptions,’’ in Weiss (1967) vol. VI, pp. 487–91. (1979) ‘‘Metaphor,’’ in Martinich (1979), pp. 92–123. Shahan, R.
W. and Swoyer, C. (1979) (eds.) Essays on the Philosophy of W. V. Quine (Norman: University of Oklahoma Press). Shapiro, Stewart (1985) (ed.) Intensional Mathematics (Amsterdam: North Holland).
(1987) ‘‘Principles of reflection and second-order logic,’’ Journal of Philosophical Logic vol. 16, pp. 309–33. (1997) Philosophy of Mathematics: Structure, Ontology, Modality (Oxford: Oxford
University Press). (2005) (ed.) The Oxford Handbook of Philosophy of Mathematics and Logic (Oxford: Oxford University Press). Skinner, B. F. (1957) Verbal Behavior (New York: Appleton-Century-Crofts).
Skura, Tomasz (1995) ‘‘A Łukasiewicz-style refutation system for the modal logic S4,’’ Journal of Philosophical Logic vol. 24, pp. 573–82. Słupecki, Jerzy and Bryll, Grzegorz (1973) ‘‘Proof of the
L-decidability of Lewis system S5,’’ Studia Logica vol. 24, pp. 99–105. Smullyan, Arthur (1947) review of Quine (1947a), Journal of Symbolic Logic vol. 12, pp. 139–41. (1948) ‘‘Modality and
description,’’ Journal of Symbolic Logic vol. 13, pp. 31–7. Soames, Scott (1984) ‘‘What is a theory of truth?’’ Journal of Philosophy vol. 81, pp. 411–29. (1985) ‘‘Semantics and psychology,’’ in Katz
(1985), pp. 204–26. Stalnaker, Robert (1968) ‘‘A theory of conditionals,’’ in Rescher (1968), pp. 98–112. Stanley, Jason (2001) ‘‘Hermeneutic fictionalism,’’ in French and Wettstein (2001), pp.
36–71. Stich, S. and Morton, A. (2002) (eds.) Benacerraf and his Critics (London: Blackwell). Stigt, W. P. van (1979) ‘‘The rejected parts of Brouwer’s dissertation on the foundations of
mathematics,’’ Historia Mathematica vol. 6, pp. 385–404. Tarski, Alfred (1931) ‘‘Sur les ensembles définissables de nombres réels,’’ Fundamenta Mathematicae vol. 17, pp. 210–39. (1935) ‘‘Der Wahrheitsbegriff in den formalisierten Sprachen,’’ Studia Philosophica vol. 1, pp. 261–405. (1936) ‘‘Über den Begriff der logischen Folgerung,’’ in Actes du Congrès International de Philosophie Scientifique, vol. VII (Paris: Hermann), pp. 1–11. (1944) ‘‘The semantic conception of truth,’’ Philosophy and Phenomenological Research vol. 4, pp. 341–75. (1983a) Logic, Semantics, Metamathematics,
2nd edn, ed. J. Corcoran (Indianapolis: Hackett). (1983b) ‘‘On definable sets of real numbers,’’ translation of Tarski (1931) from the French by J. H. Woodger, in Tarski (1983a), pp. 110–42. (1983c)
‘‘The concept of truth in formalized languages,’’ translation of Tarski (1935) from the German by J. H. Woodger, in Tarski (1983a), pp. 152–278. (1983d) ‘‘On the concept of logical consequence,’’
translation of Tarski (1936) from the German by J. H. Woodger, in Tarski (1983a), pp. 409–20. Tarski, Alfred and Vaught, Robert (1956) ‘‘Arithmetical extensions of relational systems,’’ Compositio
Mathematica vol. 13, pp. 81–102. Tarski, A., Mostowski, A., and Robinson, R. M. (1953) Undecidable Theories (Amsterdam: North Holland). Thomas, Robert (2000) ‘‘Mathematics and fiction I:
identification,’’ Logique et Analyse vol. 43, pp. 301–40. (2002) ‘‘Mathematics and fiction II: analogy,’’ Logique et Analyse vol. 45, pp. 185–228. Thomason, R. H. (1984) ‘‘Combination of tense and
modality,’’ in Gabbay and Guenthner (1984), pp. 135–65. Thomason, S. K. (1973) ‘‘A new representation of S5,’’ Notre Dame Journal of Formal Logic vol. 14, pp. 281–7.
Thomson, J. J. (1987) On Being and Saying: Essays for Richard Cartwright (Cambridge, MA: MIT Press). Tomberlin, James E. (1994) (ed.) Logic and Language [Philosophical Perspectives, vol. VIII]
(Atascadero, CA: Ridgeview Publishing). Uzquiano, Gabriel (2003) ‘‘Plural quantification and classes,’’ Philosophia Mathematica, vol. 11, pp. 67–81. Wartofsky, Max (1963) (ed.) Proceedings of the
Boston Colloquium for the Philosophy of Science 1961/1962 (Dordrecht: Reidel). Wehmeier, Kai (forthcoming) ‘‘Modality, mood, and descriptions,’’ to appear in Kahle (forthcoming). Weiss, Paul (1967)
(ed.) Encyclopedia of Philosophy, 6 vols. (New York: Macmillan). Weyl, Hermann (1944) ‘‘David Hilbert and his Mathematical Work,’’ Bulletin of the American Mathematical Society vol. 50, pp. 612–54.
White, Leslie A. (1947) ‘‘The locus of mathematical reality: an anthropological footnote,’’ Philosophy of Science vol. 14, pp. 289–303, reprinted in Newman (1956), vol. IV. Whorf, Benjamin Lee (1956)
‘‘Science and linguistics,’’ in Language, Thought, and Reality: Selected Writings, ed. J. B. Carroll (Cambridge, MA: MIT Press), pp. 207–19. Williamson, Timothy (1987) ‘‘On the paradox of
knowability,’’ Mind vol. 96, pp. 256–61. Woods, John (2002) Paradox and Paraconsistency: Conflict Resolution in the Abstract Sciences (Cambridge: Cambridge University Press). Wright, Crispin (1980)
Wittgenstein on the Foundations of Mathematics (Cambridge: Cambridge University Press). (1983) Frege’s Conception of Numbers as Objects (Aberdeen: Scots Philosophical Monographs). Wright, G. H. von
(1951) An Essay in Modal Logic (Amsterdam: North Holland). Yablo, Steven (2000) ‘‘A paradox of existence,’’ in Hofweber and Everett (2000), pp. 275–311.
Absolute, the, see monism
abstractness versus concreteness, 24, 31
Ackermann, Diana (Felicia Nimue), 213
acquisition argument, 260
Addison, John W., 151
adjunction, axiom of, 138
aliases, problem of, 195
alpha symbol (α), 128
Alston, William, 4, 85, 87
analyticity, 6, 77–9, 80, 82, 84, 153, 206
Anderson, A. R., 16, 17, 246–8, 250, 252–5
anonymity, problem of, 194
Anscombe, Elizabeth, 229
anti-realism, see Dummett, Michael
Archytas of Tarentum, 52
Aristotle, 88, 209, 211
attitudes, de dicto and de re, 193–6
Azzouni, Jodi, 91–2
Bacon, John, 100
Balaguer, Mark, 63
Barcan, Ruth C., see Marcus, Ruth Barcan
Barcan formula, 216
Barker, John, 12, 167
Barwise, Jon, 278
behaviorism, 19, 79–80, 259–65, 270, 272
Bellarmine, Robert Cardinal, 59, 88, 256
Belnap, Nuel D., Jr., 16, 17, 246–8, 250, 252–5, 280
Benacerraf, Paul, 85, 86, 88, 264
Benthem, Johann van, 100
Berkeley, George, 98
Bernays, Paul, xii, 8, 9, 117, 119, 120, 124, 125, 128, 129, 134
Berry, G. G., 149–51
Bigfoot, 25–7
Birkhoff, Garrett, 180
Bishop, Errett, 274
Bleak House, 50, 60, 68, 69, 73, 79, 83
Boole, George, 127
Boolos, George, xii, 8–9, 53, 59, 68, 106, 112, 129, 130, 132, 134, 137, 172, 174
Boolos–Bernays set theory (BB), 119, 123, 124, 126–7, 134
Borges, Jorge Luis, 7, 98
Bouchard, Pierre, xiii
Bridges, Douglas, 274
Brouwer, L. E. J., 3, 11, 18, 55–6, 59, 81, 82, 258, 274–5
Bryll, Grzegorz, 181
Burgess, Alexi, 12, 167
Buss, Sam, 145
Byzantium and Istanbul, 238, 240
Cantor, Georg, 49, 104–5, 114, 115, 116, 117, 127, 129, 130, 143 Carnap, Rudolf, 77, 93–5, 220, 274 modal logic and, 170, 172, 177, 181, 215, 216 ontology and, 5–6, 59–64, 68–9, 85, 87 Quine and, 69,
71–2, 74–6, 78 Castañeda, Carlos, 64, 91 Cauchy, Augustin, 233 Chihara, Charles, 1, 3, 7, 12, 32, 33–4, 35, 36, 38–9, 46, 52, 53, 89, 167, 260 Chinese, 241–2, 261 choice, axiom of (AC), 116, 151
Chomsky, Noam, 19, 71, 79, 260, 270, 271 chronometry, 187, 192–3, 196–202 Church, Alonzo, 228 Church’s theorem, 12 Church’s thesis, 178 Chrysippus, 246, 253 Cicero and Tully, 238, 240 classes, 9,
112–13 Cocchiarella, Nino, 216 Cohen, Paul, 151, 277 compositionality, 237 comprehension, axiom of, 109, 135 concept (Begriff ), 114, 135 conceptualism, 3, 24–30 conditional logic, 283
conservativeness, 45, 270, 272, 280 constructibility, 124
continuum hypothesis (CH), 277 Convention T, 152–3, 163, 164, 167 Copeland, B. J., 161 Craig’s lemma, 178 Creswell, Max, 230 Curley, E. M., 17 cut-elimination, 19, 272 Davidson, Donald and
Davidsonianism, 162, 165–6, 260, 266, 270, 271 Davis, Martin, 142 definability, 149–51 definitions, status of in mathematics, 152–3 demonstrability, 13, 169–71, 172, 173, 177, 178–84 demonstratives
and indexicals, 196 Descartes, René, 2, 165, 166 Devitt, Michael, 234 dialethism, 186 disjunction, intensional versus extensional, 247 meaning of, 258, 269 Dickens, Charles, see Bleak House
discovery, principle of, 185–96, 201 Dixon, Thomas, 69 Diogenes, 28 dualism, 267–9, 270, 272 Dummett, Michael, 3, 12, 18–20, 63, 82, 85, 87, 256, 260, 264, 266, 268 Edgington, Dorothy, 198 Égré,
Paul, xiii Einstein, Albert, 55 epistemology, 5, 39–41, 71, 88–9 naturalized versus alienated, see naturalism epsilon symbol (ε), 128 equivalence, 278–9 essentialism, 209, 217 Euclid, 104 extension
(Umfang), 114–15, 135, 136 extensionality, axiom of, 110–11, 121, 123, 124, 135, 137 fables, 50 Fara, Michael, xii Feferman, Solomon, xii, 42, 43, 179 Fermat–(Wiles) theorem, 139 Fetzer, James, 215
Feynman, Richard, 96 fictionalism, 4, 5–6, 47, 48–51, 52–7, 58, 59, 72–4, 76, 83, 91 Field, Hartry, 1, 3, 7, 32, 33–4, 35, 36, 38–9, 40, 46, 47, 61, 72, 89, 229 figuralism, 91 Fine, Kit, 17, 137, 161
finitism, 140, 179
Fitch, Frederic, 14, 185–6, 187, 196, 197, 219, 223, 224, 228 Føllesdal, Dagfinn, 227, 228 formalism, 11, 135, 140, 268, 270, 273 Forbes, Graeme, 217 foundation, axiom of, 123, 124, 125 foundations
of mathematics, 7, 66–8 Frankel, Abraham, 116 Frege, Gottlob, 18, 81 anti-psychologism and, 3, 25, 48 logicism and, 10, 78, 135–6, 137, 143, 145 names and, 153, 210, 220–1, 226, 229, 231, 244 Frege’s
theorem, 66–8, 114, 137 French, 239–41, 242, 244 Friedman, Harvey, xii, 17, 125, 140, 278 Galilei, Galileo, 1, 55, 59, 68, 69, 93, 95 Ganea, Mihai, 140 Geach, Peter, 107, 229, 230, 234 Gell-Mann,
Murray, 55 generalized-quantifier logic, 164, 277, 281 general relativity, 35, 58, 73 Gentzen, Gerhard, 271, 272 God, 2, 6, 47–8, 63–4, 69–70, 71–2, 92–3, 94, 186, 189 Gödel, Kurt, 85, 89, 124–5,
151, 277 completeness theorem, 144, 155, 182 incompleteness theorems, 58, 63, 140, 142, 172, 270, 271 Goodman, Nelson, 31–2, 33, 37, 85, 90 Goranko, Valentin, 183 Grandy, Richard, 270, 272 Greece and
Hellas, 241 Grelling, Kurt, 150 Grice, H. P., 78, 79, 248 Grzegorczyk, Andrzej, 171 Gupta, Anil, 280 Gurevich, Yuri, 282 Haack, Susan, 263 Hadamard, Jacques, 151 Halldén, Sören, 171, 173 Hale, Bob,
61 haplism, 96, 97 Harman, Gilbert, xiii, 254, 266 Hazen, A. P., 139, 140, 230 Heck, Richard, 10, 11, 135, 140 Heidegger, Martin, 95 Heijenoort, Jean van, 104 Hellman, Geoffrey, 4, 7, 46, 89
heredity, 110, 111, 113, 123 hermeneuticists, 3–7, 16, 34, 51–7, 58, 90–2 Hersh, Ruben, xi
Herzberger, Hans, 280 Hesperus and Phosphorus, 15, 221, 226, 232, 238, 240 Heyting, Arend, 3, 81, 283 Hilbert, David, 11, 55–6, 82, 127, 140–3, 144, 268–9 Hintikka, Jaakko, 174–5, 203, 235, 281
Hodges, Wilfrid, 154, 155 holism, 11, 268, 270 Horsten, Leon, 214 Hume, David, 19, 58, 72, 93, 98, 135, 136 Hume’s principle (HP), 67–8, 76, 78, 83, 136, 137, 270, 271 Humphreys, Paul, 215 idealism,
3, 24–30, 98 ideology, 86, 101, 102 implication versus inference, 254 impredicativity, see predicativity and impredicativity incompleteness theorems, see Go¨del, Kurt independence-friendly (IF)
logic, 281 indiscernibility of identicals, 107, 109, 123 indispensability, 33–4, 101 infinitary logic, 278, 281 infinity, axiom of, 116, 121 instrumentalism, 4, 11, 41, 47 introduction and
elimination, 19–20 intuitionism and intuitionistic logic, 1, 11, 18–20, 81–2, 83, 174, 270, 274, 281, 283 Dummett and, 1, 3, 257, 258, 267 James, William, 92–3, 95, 102 Jarndyce and Jarndyce, see
Bleak House Jeffrey, Richard, xiii, 11, 135, 140–1, 144, 250 Jensen, Ronald, 277 Kamp, Hans, 282 Kant, Immanuel, 93–4 Kaplan, David, 107, 230, 234 Kepler, Johannes, 2, 69, 93, 95 Khayyam, Omar, 52
Kleene, S. C., 151 knowability, 14, 185, 196–202 Ko¨nig, Julius, 150–1 Korzybski, Alfred, 156 Kreisel, Georg, 132, 155, 161, 174, 273, 283 Kripke, Saul, 14, 16, 17, 35, 167, 214, 215, 223, 233, 235,
266, 280 models for modal logic, 13, 129, 161, 175, 216, 218, 283 names and, 15, 174–5, 229, 231–2, 233, 234, 238, 240, 242, 244 Kripke–Platek set theory (KP), 278 Kronecker, Leopold, 52
language of thought, 165 Laplace, Pierre-Simon Marquis de, 233 Lavine, Shaughn, xi Lesniewski, Stanislaw, 217 Levy, Azriel, 117 Lewis, C. I., 13, 170, 172, 230 Lewis, David, xiii, 59, 63, 230, 249,
266, 283 limitation of size, 114, 116, 117, 129 Lindemann, Ferdinand von, 52 Linsky, Leonard, 218, 231 literalness, 53, 54, 56–7 Locke, John, 220 logic, descriptive vs prescriptive, 16, 18 logicism,
10–11, 135–40, 142, 143, 145 London, see Puzzling Pierre Löwenheim–Skolem theorem, 155 Lucas, J. R., 179 lumpers and splitters, 112 Maddy, Penelope, xi, xii, 40, 47, 57 Makinson, David, 177 Malcolm,
Norman, 88, 260 Mancini, Antonella, 138 manifestation argument, 260 Manin, Yuri, 38, 274 Maoism, 275 Marcus, Ruth Barcan, 15, 215, 218, 219, 223, 224–5, 226, 233, 235 Matiyasevich, Yuri, 11, 142, 145
Mauldin, Daniel, 279 maximality, principle of, 116, 117 McKinsey, J. C. C., 171, 180 meaning, 12, 19, 78, 79, 80, 163–4, 257 descriptive versus prescriptive theories of, 267, 270 truth-conditional or
‘‘verist’’ theory of, 12, 162, 257, 258, 266, 267, 269 see also compositionality, disjunction, dualism, names, representationalism, semantics, translation, transparency, verificationism Menchú,
Rigoberta, 64 Mendel, Gregor, 80 Meyer, Robert, 161 Mill, John Stuart, 15, 210, 220, 236, 244 Millianism, 236–44 modality and modal logic, 13, 16, 157, 160, 169–84, 185, 203–4, 231 de dicto and de
re, 14, 204–5, 209 quantification and, 14–15, 204–27 models, 12–13, 16, 157–61, 174–6 monism, 7, 100–1 Montagna, Franco, 138 Moore, G. E., 53
Morning Star and Evening Star, 212, 221, 228 see also Hesperus and Phosphorus Mortensen, Chris, 17 Myhill, John, 219 mystery cards, game of, 249–50, 253 Nabokov, Vladimir, 50 Nading, Inga, 69 names,
proper, 14, 15, 221, 224, 231–3, 236–45 naturalism, 2, 7, 48, 74, 87 necessity, 13 as analyticity, 206 logical, 13–14, 169, 229–30, 234 metaphysical, 14, 169, 226–7, 229–30, 234 Nelson, Edward, 137,
138 neo-Fregeanism, 135, 137 neo-logicism, see logicism neo-intuitionism, see intuitionism Neumann, John von, 86, 116, 138 Nietzsche, Friedrich, 63 nihilism, 100–1 nominalism, 1, 3–7, 20, 23–30,
31–45, 46–7, 51, 52, 71, 72–4, 85–92, 95–7, 103 hermeneutic, see hermeneuticists instrumentalist, see instrumentalism revolutionary, see revolutionaries nonmonotonic logic, 283 Nootka, 98 numbers,
23–4, 27–8, 51, 52, 70–1, 86, 149 Ockhamism, 198–200, 282 ontology, 6, 86, 91–2, 94–5, 98, 101, 102 Paderewski, 245 pairing, axiom of, 116, 121 paradise, Cantor’s, 71, 127, 134 paradoxes, 10, 12, 14,
130, 150–1, 166, 185, 196, 200 Parsons, Charles, 6, 62, 76 Parsons, Terence, 137, 140, 209, 218 Peano postulates, 136, 137 Peirceanism, 200–1, 282 Penrose, Roger, 179 Pi-one (Π1) sentences, 139,
141–2, 145 Plato, 28, 57, 211, 274–5 Platonism, 69, 90, 95, 257, 267, 270 plurals and plural logic, 9, 106–9, 129–30 Poincaré, Henri, 150 Pollard, Stephen, 9 positivism, 271, 272 post-modernism, 102
Pour-El, Marian, 274 power, axiom of, 121 pragmatism, 47, 102 Prawitz, Dag, 19, 263 predicate-functor logic, 99–100
predicativity and impredicativity, 10, 11, 41, 42, 136, 137, 145, 179 Pressburger’s theorem, 142 primary versus secondary sentences, 268, 270, 271, 272 Prior, Arthur, 14, 78, 158, 189, 217, 220–1,
224, 225, 230, 270, 272, 282 probability, logic of, 282 provability, logic of, 160–1, 170, 171, 174, 177 purity, axiom of, 121–3, 124, 125 Putnam, Hilary, 33, 34, 36, 61, 101–2, 142, 234 Puzzling
Pierre, 15, 238–40, 244–5 quantification, generalized, see generalized-quantifier logic modality and, see modality, quantification and plural, see plurals and plural logic substitutional, 217 quantum
mechanics, 35, 58, 274 Quine, W. V., 19, 47–8, 52, 54, 68–9, 70, 74–5, 78, 80, 82, 85, 92, 99–100, 157–9, 262, 266, 268–9, 270–1 analyticity and, 76, 77–9, 82–3, 153 Carnap and, 6, 69, 71–2, 75, 78,
94–5 modality and, 14–15, 203–29 nominalism and, 32, 33, 34, 60, 61–2, 71–3, 85, 90, 101–2, 276 Ramsey, F. P., 136, 216, 220 Rayo, Augustin, 9 Read, Stephen, 17 realism, 1–2, 23–30, 47–8, 64, 95
metaphysical, 1, 46, 47, 72 naturalist, see naturalism reducibility, axiom of, 136, 137 reflection, principle of, 117–19, 120, 122, 133, 134 regimentation, 157 relativization, 118, 122, 124, 125
relevance, logic of, 16–18, 20, 246–55 replacement, axiom of, 112, 116, 121 representationalism, 270, 273–4 revolutionaries, 3, 7, 16, 18, 34, 51, 57–8, 59, 87–9 Richard, Jules, 150, 151 Richards,
Ian, 274 Riemann, Bernhard, 79 Robinson, Julia, 11, 142, 145 Robinson, Raphael, 137 Rorty, Richard, 101, 102 Rosen, Gideon, 3, 5, 46, 47, 51, 59, 60, 87, 90, 91 Ross, Arnold, 149 Rückert, Helge, xii
Russell, Bertrand, 10, 46, 64, 81, 136, 143, 150, 224, 225, 228, 235 Russellianism, 14, 220–1, 223, 228, 231, 232–3 Ryle, Gilbert, 19, 270, 272
S4 (modal system), 13 S5 (modal system), 13 Salmon, Nathan, 233 Sandu, Gabriel, 235 Sartre, Jean-Paul, 95 Schindler, Ralf-Dieter, 112 Schlick, Moritz, 94–5 Scroggs, S. J., 177 Searle, John, 19,
229 second-order logic, 131, 135, 156 Segerberg, Krister, 161, 281, 282 selection, measurable, 279 semantics, 12–13, 129–30, 159, 165, 166, 168, 216, 259 separation, axiom of, 8, 114–15, 134 set
theory, 104–29, 277–81 see also Zermelo–Frankel set theory Shapiro, Stuart, 5, 9, 57 Shelah, Saharon, 279 Silver, Jack, 278 skepticism, 19, 96, 97, 263 Skinner, B. F., 19, 80, 270, 271 Skura, Tomasz,
183 Slupecki, Jerzy, 181 Smielew, Wanda, 138–9 Smiley, T. J., 254 Smullyan, Arthur, and Smullyanism, 215, 219–20, 221, 223, 228, 233, 235 Soames, Scott, xiii, 260 Solovay, Robert, 137, 138, 161, 174,
277 Stalin and Djugashvili, 241 Stalnaker, Robert, 230, 283 Stanley, Jason, 52 Strawson, P. F., 79, 229, 248 supertransitivity, 120 Tait, William, 140, 174 Tarski, Alfred, 12–13, 129, 130, 138–9,
149–50, 151–2, 153–7, 161, 162, 163, 166–8, 266 Tarski–Kuratowski algorithm, 151 temporal logic, see tense logic Tennant, Neil, 17, 19
tense logic, 157–9, 170, 185–202, 281–2 Tharp, Leslie, 49 Thomason, S. K., 177 Tlön, 98 transitivity, 120 translation, 238–44 transparency, 237 truth, 12, 149, 151–2, 154, 280 union, axiom of, 121
Urquhart, Alasdair, 17 Uzquiano, Gabriel, 9, 112 validity, 13, 169–70, 172–4, 176–7 Van Fraassen, Bas, 47, 280 Vaught, Robert, 155, 278 verificationism, 201, 257, 267, 269, 270, 272 verism, see
meaning, truth-conditional theory of vicious circle principle, 136, 137 Visser, Albert, 140 Wang, Hao, 42 Wehmeier, Kai, xii Weinstein, Scott, 45 Whorf, Benjamin Lee, 97, 98 Wiles, Andrew, 49, 53
Wilkie, Alex, 139 Williamson, Timothy, xii, 193, 198 Wittgenstein, Ludwig, 88, 266 Woodin, Hugh, 151 Wright, Crispin, 60–1, 137, 263 Wright, G. H. von, 229 Yablo, Steve, 5, 49, 52, 53, 60, 87, 90, 91
Zanardo, Alberto, 230 Zeno of Elea, 150 Zermelo, Ernst, 86, 114, 116, 125, 138, 151 Zermelo–Frankel set theory (ZFC), 8–9, 11, 112, 116, 119, 124, 125, 156, 277, 278 Ziff, Paul, 229
E-Book Information
• Year: 2008
• Pages: 317
• Pages In File: 317
• Language: English
• Topic: 195
• Identifier: 0521880343,9780521880343,9780511388200
• Org File Size: 1,526,593
• Extension: pdf
• Tags: Philosophical disciplines, Philosophy of science
• Toc: Cover......Page 1
Half-title......Page 3
Title......Page 5
Copyright......Page 6
Dedication......Page 7
Contents......Page 9
Preface......Page 11
Source notes......Page 13
ABOUT ‘‘REALISM’’......Page 17
AGAINST HERMENEUTIC AND REVOLUTIONARY NOMINALISM......Page 19
AGAINST FICTIONALIST NOMINALISM......Page 21
FOUNDATIONS OF MATHEMATICS : SET THEORY......Page 23
FOUNDATIONS OF MATHEMATICS: LOGICISM......Page 26
MODELS AND MEANING......Page 28
MODELS AND MODALITY......Page 29
MODALITY AND REFERENCE......Page 30
HERMENEUTIC CRITICISM OF CLASSICAL LOGIC: RELEVANTISM......Page 32
REVOLUTIONARY CRITICISM OF CLASSICAL LOGIC: INTUITIONISM......Page 34
PART I Mathematics......Page 37
1 REALISM VS NOMINALISM......Page 39
2 BIGFOOT......Page 41
3 NUMBERS......Page 42
4 REALISM VS NOMINALISM REVISITED......Page 44
1 INSTRUMENTALIST NOMINALISM......Page 47
2 SCIENTIFIC DISPENSABILITY AND NONEXISTENCE......Page 49
3 HERMENEUTIC NOMINALISM......Page 50
4 REVOLUTIONARY NOMINALISM......Page 52
5 NOMINALISM, ONTOLOGICAL AND EPISTEMOLOGICAL......Page 55
A Chihara’s modal nominalism......Page 57
B Field’s spatiotemporal nominalism......Page 59
1 ‘‘NOMINALISM’’ AND ‘‘REALISM’’......Page 62
2 LITERARY GENRES......Page 64
3 NON-LITERAL LANGUAGE......Page 67
4 CONTRASTING CASES......Page 71
5 UNDECIDABLE QUESTIONS......Page 73
6 INTERMINABLE DEBATE......Page 75
7 MEANINGLESS QUESTIONS......Page 79
1 TWO SENSES OF ‘‘FOUNDATIONS OF MATHEMATICS’’......Page 82
2 QUINIANISM VS PLATONISM......Page 85
3 QUINIANISM VS ‘‘FICTIONALISM’’......Page 88
4 QUINIANISM VS CARNAPIANISM......Page 91
5 QUINIANISM VS CARNAPIANISM, BIS......Page 93
6 QUINIANISM VS INTUITIONISM......Page 96
1 A LOGICIAN LOOKS AT NOMINALISM......Page 101
2 REVOLUTIONARY RUMBLINGS......Page 103
3 HERMENEUTICAL HI-JINKS......Page 106
4 READING GOD’S MIND OR IMPOSING A SCHEME ON THE WORLD?......Page 108
5 ABSTRACT SKEPTICISM VS CONCRETE CREDULITY......Page 112
6 TALKING OF OBJECTS – OR NOT......Page 113
7 THE DARK SIDE......Page 117
1 THE ORIGIN OF SET THEORY......Page 120
2 PLURAL LOGIC......Page 123
3 EXTENSIONALITY......Page 125
4 LIMITATION OF SIZE......Page 130
5 THE REFLECTION PRINCIPLE......Page 133
6 DEDUCING EXISTENCE AXIOMS OF ZF......Page 135
7 EXTENSIONALITY REVISITED......Page 137
8 THE AXIOM OF CHOICE......Page 140
9 SET-THEORETIC MODELS FOR PLURAL LOGIC......Page 145
1 NEO-NEO-LOGICISM......Page 151
2 LITE LOGICISM......Page 156
PART II Models, modality, and more......Page 163
1 DEFINABILITY IN DISREPUTE......Page 165
2 MATHEMATICAL DEFINITION VS LINGUISTIC ANALYSIS......Page 168
3 TARSKI’S TRIUMPH AND TRESPASS......Page 170
4 MODAL MUDDLES: SPURIOUS COMMITMENTS......Page 173
5 MODAL MUDDLES: UNWARRANTED COMPLACENCY......Page 176
6 FROM TARSKI TO DAVIDSONIANISM......Page 178
7 THE ‘‘SEMANTIC’’ PARADOXES......Page 182
1 THE QUESTION......Page 185
2 SOUNDNESS......Page 188
3 COMPLETENESS FOR VALIDITY LOGIC......Page 190
4 DEMONSTRABILITY AND PROVABILITY......Page 193
5 AGAINST McK......Page 196
6 AGAINST S4.2......Page 198
1 THE DISCOVERY PRINCIPLE......Page 201
2 INEFFABLE TRUTHS......Page 203
3 EPHEMERAL TRUTHS......Page 204
4 A REFORMULATION......Page 205
6 EXAMPLES......Page 207
7 ‘‘NOW’’......Page 209
8 DE RE ATTITUDES......Page 210
9 AN IMPERFECT ANALOGY......Page 212
10 COMBINING TEMPORAL AND MODAL FEATURES......Page 214
11 VERIFICATIONISM......Page 216
1.1 Quine and his critique......Page 219
1.2 Non-trivial de re modality......Page 220
1.3 Strict necessity......Page 222
1.4 ‘‘Aristotelian essentialism’’......Page 224
1.5 The mathematical cyclist......Page 226
1.6 The morning star......Page 227
1.7 Coda......Page 229
2.1 Quine and his critics......Page 230
2.2 Potpourri......Page 231
2.3 Smullyanism or neo-Russellianism......Page 235
2.4 Quine’s rebuttal to neo-Russellianism on descriptions......Page 237
2.5 Neo-Russellianism on names......Page 239
2.6 Quine’s rebuttal......Page 240
3.1 Hints from Quine for the formal logic of modalities......Page 242
3.2 A hint from Quine for the theory of reference of names......Page 243
3.3 Formal differences between logical and metaphysical modality......Page 245
3.4 New alternatives in the theory of reference for names......Page 247
3.5 Have the lessons been learned?......Page 249
1 MILLIANISM AND ANTI-MILLIANISM......Page 252
2 ANTI-ANTI-MILLIANISM AND TRANSLATION......Page 254
3 ANTI-ANTI-ANTI-MILLIANISM......Page 256
4 ANTI-MILLIANISM AND ANTI-DESCRIPTIVISM......Page 259
5 ENVOI......Page 260
1 INTRODUCTION......Page 262
Example 1. Analysis......Page 265
Commentary......Page 266
Example 2b. Argument......Page 267
Example 2b. Analysis......Page 268
3 CONCLUSION......Page 269
1 TEXTS......Page 272
2 DUMMETT’S CASE AGAINST PLATONISM......Page 273
3 DUMMETT’S CASE AGAINST FORMALISM......Page 283
4 SUMMARY......Page 292
‘‘Consistency proofs in model theory’’ (Burgess 1978a)......Page 293
‘‘Equivalence relations generated by families of Borel sets’’ (Burgess 1978b)......Page 294
‘‘What are R-sets?’’ (Burgess 1982a) ‘‘Classical hierarchies from a modern standpoint, parts I & II’’ (Burgess 1983a and b)......Page 295
‘‘A remark on Henkin sentences and their contraries’’ (Burgess 2003b)......Page 296
‘‘The unreal future’’ (Burgess 1979d) ‘‘Decidability and branching time’’ (Burgess 1980c)......Page 297
‘‘Quick completeness proofs for some logics of conditionals’’ (Burgess 1981b)......Page 298
‘‘The completeness of intuitionistic propositional calculus for its intended interpretation’’ (Burgess 1981a)......Page 299
References......Page 300
Index......Page 313
gcc/ada/libgnat/a-crbtgk.adb - gcc - Git at Google
-- --
-- GNAT LIBRARY COMPONENTS --
-- --
-- ADA.CONTAINERS.RED_BLACK_TREES.GENERIC_KEYS --
-- --
-- B o d y --
-- --
-- Copyright (C) 2004-2022, Free Software Foundation, Inc. --
-- --
-- GNAT is free software; you can redistribute it and/or modify it under --
-- terms of the GNU General Public License as published by the Free Soft- --
-- ware Foundation; either version 3, or (at your option) any later ver- --
-- sion. GNAT is distributed in the hope that it will be useful, but WITH- --
-- OUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY --
-- or FITNESS FOR A PARTICULAR PURPOSE. --
-- --
-- As a special exception under Section 7 of GPL version 3, you are granted --
-- additional permissions described in the GCC Runtime Library Exception, --
-- version 3.1, as published by the Free Software Foundation. --
-- --
-- You should have received a copy of the GNU General Public License and --
-- a copy of the GCC Runtime Library Exception along with this program; --
-- see the files COPYING3 and COPYING.RUNTIME respectively. If not, see --
-- <http://www.gnu.org/licenses/>. --
-- --
-- This unit was originally developed by Matthew J Heaney. --
package body Ada.Containers.Red_Black_Trees.Generic_Keys is
pragma Warnings (Off, "variable ""Busy*"" is not referenced");
pragma Warnings (Off, "variable ""Lock*"" is not referenced");
-- See comment in Ada.Containers.Helpers
package Ops renames Tree_Operations;
   -------------
   -- Ceiling --
   -------------

   -- AKA Lower_Bound

   function Ceiling (Tree : Tree_Type; Key : Key_Type) return Node_Access is
      -- Per AI05-0022, the container implementation is required to detect
      -- element tampering by a generic actual subprogram.

      Lock : With_Lock (Tree.TC'Unrestricted_Access);
      Y : Node_Access;
      X : Node_Access;

   begin
      -- If the container is empty, return a result immediately, so that we do
      -- not manipulate the tamper bits unnecessarily.

      if Tree.Root = null then
         return null;
      end if;

      X := Tree.Root;
      while X /= null loop
         if Is_Greater_Key_Node (Key, X) then
            X := Ops.Right (X);
         else
            Y := X;
            X := Ops.Left (X);
         end if;
      end loop;

      return Y;
   end Ceiling;
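The Ceiling walk above is easy to restate outside Ada. The following Python sketch is illustrative only (the `Node` class and the plain unbalanced `insert` are hypothetical stand-ins for the generic tree machinery; no red-black balancing or tamper checking is modeled): `y` remembers the last node whose key was not less than the search key, which is exactly the lower-bound result.

```python
# Illustrative sketch of the Ceiling (lower-bound) walk on a minimal
# binary search tree. Not the GNAT implementation.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    # Plain unbalanced BST insert, just enough to demonstrate the search.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def ceiling(root, key):
    # Mirror of Ceiling: y remembers the last node not less than key,
    # i.e. the smallest node whose key is >= key.
    y = None
    x = root
    while x is not None:
        if key > x.key:          # Is_Greater_Key_Node
            x = x.right
        else:
            y = x
            x = x.left
    return y

root = None
for k in [10, 4, 17, 1, 6, 12, 25]:
    root = insert(root, k)

print(ceiling(root, 5).key)   # smallest key >= 5 -> 6
print(ceiling(root, 26))      # no such node -> None
```

The same loop with the comparison flipped yields Floor (the largest node not greater than the key), which is why the two Ada bodies differ only in which comparison and which child is taken.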
   ----------
   -- Find --
   ----------

   function Find (Tree : Tree_Type; Key : Key_Type) return Node_Access is
      -- Per AI05-0022, the container implementation is required to detect
      -- element tampering by a generic actual subprogram.

      Lock : With_Lock (Tree.TC'Unrestricted_Access);
      Y : Node_Access;
      X : Node_Access;

   begin
      -- If the container is empty, return a result immediately, so that we do
      -- not manipulate the tamper bits unnecessarily.

      if Tree.Root = null then
         return null;
      end if;

      X := Tree.Root;
      while X /= null loop
         if Is_Greater_Key_Node (Key, X) then
            X := Ops.Right (X);
         else
            Y := X;
            X := Ops.Left (X);
         end if;
      end loop;

      if Y = null or else Is_Less_Key_Node (Key, Y) then
         return null;
      else
         return Y;
      end if;
   end Find;
   -----------
   -- Floor --
   -----------

   function Floor (Tree : Tree_Type; Key : Key_Type) return Node_Access is
      -- Per AI05-0022, the container implementation is required to detect
      -- element tampering by a generic actual subprogram.

      Lock : With_Lock (Tree.TC'Unrestricted_Access);
      Y : Node_Access;
      X : Node_Access;

   begin
      -- If the container is empty, return a result immediately, so that we do
      -- not manipulate the tamper bits unnecessarily.

      if Tree.Root = null then
         return null;
      end if;

      X := Tree.Root;
      while X /= null loop
         if Is_Less_Key_Node (Key, X) then
            X := Ops.Left (X);
         else
            Y := X;
            X := Ops.Right (X);
         end if;
      end loop;

      return Y;
   end Floor;
   --------------------------------
   -- Generic_Conditional_Insert --
   --------------------------------

   procedure Generic_Conditional_Insert
     (Tree     : in out Tree_Type;
      Key      : Key_Type;
      Node     : out Node_Access;
      Inserted : out Boolean)
   is
      X : Node_Access;
      Y : Node_Access;

      Compare : Boolean;

   begin
      -- This is a "conditional" insertion, meaning that the insertion request
      -- can "fail" in the sense that no new node is created. If the Key is
      -- equivalent to an existing node, then we return the existing node and
      -- Inserted is set to False. Otherwise, we allocate a new node (via
      -- Insert_Post) and Inserted is set to True.

      -- Note that we are testing for equivalence here, not equality. Key must
      -- be strictly less than its next neighbor, and strictly greater than
      -- its previous neighbor, in order for the conditional insertion to
      -- succeed.

      -- Handle insertion into an empty container as a special case, so that
      -- we do not manipulate the tamper bits unnecessarily.

      if Tree.Root = null then
         Insert_Post (Tree, null, True, Node);
         Inserted := True;
         return;
      end if;

      -- We search the tree to find the nearest neighbor of Key, which is
      -- either the smallest node greater than Key (Inserted is True), or the
      -- largest node less or equivalent to Key (Inserted is False).

      declare
         Lock : With_Lock (Tree.TC'Unrestricted_Access);
      begin
         X := Tree.Root;
         Y := null;
         Inserted := True;
         while X /= null loop
            Y := X;
            Inserted := Is_Less_Key_Node (Key, X);
            X := (if Inserted then Ops.Left (X) else Ops.Right (X));
         end loop;
      end;

      if Inserted then

         -- Key is less than Y. If Y is the first node in the tree, then there
         -- are no other nodes that we need to search for, and we insert a new
         -- node into the tree.

         if Y = Tree.First then
            Insert_Post (Tree, Y, True, Node);
            return;
         end if;

         -- Y is the next nearest-neighbor of Key. We know that Key is not
         -- equivalent to Y (because Key is strictly less than Y), so we move
         -- to the previous node, the nearest-neighbor just smaller or
         -- equivalent to Key.

         Node := Ops.Previous (Y);

      else
         -- Y is the previous nearest-neighbor of Key. We know that Key is not
         -- less than Y, which means either that Key is equivalent to Y, or
         -- greater than Y.

         Node := Y;
      end if;

      -- Key is equivalent to or greater than Node. We must resolve which is
      -- the case, to determine whether the conditional insertion succeeds.

      declare
         Lock : With_Lock (Tree.TC'Unrestricted_Access);
      begin
         Compare := Is_Greater_Key_Node (Key, Node);
      end;

      if Compare then

         -- Key is strictly greater than Node, which means that Key is not
         -- equivalent to Node. In this case, the insertion succeeds, and we
         -- insert a new node into the tree.

         Insert_Post (Tree, Y, Inserted, Node);
         Inserted := True;

      else
         -- Key is equivalent to Node. This is a conditional insertion, so we
         -- do not insert a new node in this case. We return the existing node
         -- and report that no insertion has occurred.

         Inserted := False;
      end if;
   end Generic_Conditional_Insert;
   ------------------------------------------
   -- Generic_Conditional_Insert_With_Hint --
   ------------------------------------------

   procedure Generic_Conditional_Insert_With_Hint
     (Tree     : in out Tree_Type;
      Position : Node_Access;
      Key      : Key_Type;
      Node     : out Node_Access;
      Inserted : out Boolean)
   is
      Test    : Node_Access;
      Compare : Boolean;

   begin
      -- The purpose of a hint is to avoid a search from the root of the
      -- tree. If we have a hint it means we only need to traverse the
      -- subtree rooted at the hint to find the nearest neighbor. Note
      -- that finding the neighbor means merely walking the tree; this
      -- is not a search and the only comparisons that occur are with
      -- the hint and its neighbor.

      -- Handle insertion into an empty container as a special case, so that
      -- we do not manipulate the tamper bits unnecessarily.

      if Tree.Root = null then
         Insert_Post (Tree, null, True, Node);
         Inserted := True;
         return;
      end if;

      -- If Position is null, this is interpreted to mean that Key is large
      -- relative to the nodes in the tree. If Key is greater than the last
      -- node in the tree, then we're done; otherwise the hint was "wrong" and
      -- we must search.

      if Position = null then -- largest
         declare
            Lock : With_Lock (Tree.TC'Unrestricted_Access);
         begin
            Compare := Is_Greater_Key_Node (Key, Tree.Last);
         end;

         if Compare then
            Insert_Post (Tree, Tree.Last, False, Node);
            Inserted := True;
         else
            Conditional_Insert_Sans_Hint (Tree, Key, Node, Inserted);
         end if;

         return;
      end if;

      pragma Assert (Tree.Length > 0);

      -- A hint can either name the node that immediately follows Key,
      -- or immediately precedes Key. We first test whether Key is
      -- less than the hint, and if so we compare Key to the node that
      -- precedes the hint. If Key is both less than the hint and
      -- greater than the hint's preceding neighbor, then we're done;
      -- otherwise we must search.

      -- Note also that a hint can either be an anterior node or a leaf
      -- node. A new node is always inserted at the bottom of the tree
      -- (at least prior to rebalancing), becoming the new left or
      -- right child of a leaf node (which prior to the insertion must
      -- necessarily be null, since this is a leaf). If the hint names
      -- an anterior node then its neighbor must be a leaf, and so
      -- (here) we insert after the neighbor. If the hint names a leaf
      -- then its neighbor must be anterior and so we insert before the
      -- hint.

      declare
         Lock : With_Lock (Tree.TC'Unrestricted_Access);
      begin
         Compare := Is_Less_Key_Node (Key, Position);
      end;

      if Compare then
         Test := Ops.Previous (Position); -- "before"

         if Test = null then -- new first node
            Insert_Post (Tree, Tree.First, True, Node);
            Inserted := True;
            return;
         end if;

         declare
            Lock : With_Lock (Tree.TC'Unrestricted_Access);
         begin
            Compare := Is_Greater_Key_Node (Key, Test);
         end;

         if Compare then
            if Ops.Right (Test) = null then
               Insert_Post (Tree, Test, False, Node);
            else
               Insert_Post (Tree, Position, True, Node);
            end if;

            Inserted := True;
         else
            Conditional_Insert_Sans_Hint (Tree, Key, Node, Inserted);
         end if;

         return;
      end if;

      -- We know that Key isn't less than the hint so we try again, this time
      -- to see if it's greater than the hint. If so we compare Key to the
      -- node that follows the hint. If Key is both greater than the hint and
      -- less than the hint's next neighbor, then we're done; otherwise we
      -- must search.

      declare
         Lock : With_Lock (Tree.TC'Unrestricted_Access);
      begin
         Compare := Is_Greater_Key_Node (Key, Position);
      end;

      if Compare then
         Test := Ops.Next (Position); -- "after"

         if Test = null then -- new last node
            Insert_Post (Tree, Tree.Last, False, Node);
            Inserted := True;
            return;
         end if;

         declare
            Lock : With_Lock (Tree.TC'Unrestricted_Access);
         begin
            Compare := Is_Less_Key_Node (Key, Test);
         end;

         if Compare then
            if Ops.Right (Position) = null then
               Insert_Post (Tree, Position, False, Node);
            else
               Insert_Post (Tree, Test, True, Node);
            end if;

            Inserted := True;
         else
            Conditional_Insert_Sans_Hint (Tree, Key, Node, Inserted);
         end if;

         return;
      end if;

      -- We know that Key is neither less than the hint nor greater than the
      -- hint, and that's the definition of equivalence. There's nothing else
      -- we need to do, since a search would just reach the same conclusion.

      Node := Position;
      Inserted := False;
   end Generic_Conditional_Insert_With_Hint;
   -------------------------
   -- Generic_Insert_Post --
   -------------------------

   procedure Generic_Insert_Post
     (Tree   : in out Tree_Type;
      Y      : Node_Access;
      Before : Boolean;
      Z      : out Node_Access)
   is
   begin
      TC_Check (Tree.TC);

      if Checks and then Tree.Length = Count_Type'Last then
         raise Constraint_Error with "too many elements";
      end if;

      Z := New_Node;
      pragma Assert (Z /= null);
      pragma Assert (Ops.Color (Z) = Red);

      if Y = null then
         pragma Assert (Tree.Length = 0);
         pragma Assert (Tree.Root = null);
         pragma Assert (Tree.First = null);
         pragma Assert (Tree.Last = null);

         Tree.Root := Z;
         Tree.First := Z;
         Tree.Last := Z;

      elsif Before then
         pragma Assert (Ops.Left (Y) = null);

         Ops.Set_Left (Y, Z);

         if Y = Tree.First then
            Tree.First := Z;
         end if;

      else
         pragma Assert (Ops.Right (Y) = null);

         Ops.Set_Right (Y, Z);

         if Y = Tree.Last then
            Tree.Last := Z;
         end if;
      end if;

      Ops.Set_Parent (Z, Y);
      Ops.Rebalance_For_Insert (Tree, Z);
      Tree.Length := Tree.Length + 1;
   end Generic_Insert_Post;
   -----------------------
   -- Generic_Iteration --
   -----------------------

   procedure Generic_Iteration
     (Tree : Tree_Type;
      Key  : Key_Type)
   is
      procedure Iterate (Node : Node_Access);

      -------------
      -- Iterate --
      -------------

      procedure Iterate (Node : Node_Access) is
         N : Node_Access;
      begin
         N := Node;
         while N /= null loop
            if Is_Less_Key_Node (Key, N) then
               N := Ops.Left (N);
            elsif Is_Greater_Key_Node (Key, N) then
               N := Ops.Right (N);
            else
               Iterate (Ops.Left (N));
               Process (N);
               N := Ops.Right (N);
            end if;
         end loop;
      end Iterate;

   -- Start of processing for Generic_Iteration

   begin
      Iterate (Tree.Root);
   end Generic_Iteration;
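Because this container permits duplicate (equivalent) keys, the iteration above cannot stop at the first match: on equivalence it recurses into the left subtree for earlier duplicates, processes the node, then continues right. A hypothetical Python sketch of that pattern (the hand-shaped tree and `process` callback are illustrative, not the GNAT code):

```python
# Sketch of Generic_Iteration's pattern: visit, in order, every node
# whose key is equivalent to the search key.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def iterate_equal(node, key, process):
    n = node
    while n is not None:
        if key < n.key:
            n = n.left
        elif n.key < key:
            n = n.right
        else:
            # Equivalent node: recurse left for earlier duplicates,
            # process this node, then continue into the right subtree.
            iterate_equal(n.left, key, process)
            process(n)
            n = n.right

# Hand-shaped tree containing three nodes with key 5 (not rebalanced).
t = Node(5, Node(3, None, Node(5)), Node(8, Node(5), None))
hits = []
iterate_equal(t, 5, lambda n: hits.append(n.key))
print(hits)   # [5, 5, 5]
```

Generic_Reverse_Iteration is the mirror image: recurse right first, process, then continue left.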
   -------------------------------
   -- Generic_Reverse_Iteration --
   -------------------------------

   procedure Generic_Reverse_Iteration
     (Tree : Tree_Type;
      Key  : Key_Type)
   is
      procedure Iterate (Node : Node_Access);

      -------------
      -- Iterate --
      -------------

      procedure Iterate (Node : Node_Access) is
         N : Node_Access;
      begin
         N := Node;
         while N /= null loop
            if Is_Less_Key_Node (Key, N) then
               N := Ops.Left (N);
            elsif Is_Greater_Key_Node (Key, N) then
               N := Ops.Right (N);
            else
               Iterate (Ops.Right (N));
               Process (N);
               N := Ops.Left (N);
            end if;
         end loop;
      end Iterate;

   -- Start of processing for Generic_Reverse_Iteration

   begin
      Iterate (Tree.Root);
   end Generic_Reverse_Iteration;
   ----------------------------------
   -- Generic_Unconditional_Insert --
   ----------------------------------

   procedure Generic_Unconditional_Insert
     (Tree : in out Tree_Type;
      Key  : Key_Type;
      Node : out Node_Access)
   is
      Y : Node_Access;
      X : Node_Access;

      Before : Boolean;

   begin
      Y := null;
      Before := False;

      X := Tree.Root;
      while X /= null loop
         Y := X;
         Before := Is_Less_Key_Node (Key, X);
         X := (if Before then Ops.Left (X) else Ops.Right (X));
      end loop;

      Insert_Post (Tree, Y, Before, Node);
   end Generic_Unconditional_Insert;
-- Generic_Unconditional_Insert_With_Hint --
procedure Generic_Unconditional_Insert_With_Hint
(Tree : in out Tree_Type;
Hint : Node_Access;
Key : Key_Type;
Node : out Node_Access)
-- There are fewer constraints for an unconditional insertion
-- than for a conditional insertion, since we allow duplicate
-- keys. So instead of having to check (say) whether Key is
-- (strictly) greater than the hint's previous neighbor, here we
-- allow Key to be equal to or greater than the previous node.
-- There is the issue of what to do if Key is equivalent to the
-- hint. Does the new node get inserted before or after the hint?
-- We decide that it gets inserted after the hint, reasoning that
-- this is consistent with behavior for non-hint insertion, which
-- inserts a new node after existing nodes with equivalent keys.
-- First we check whether the hint is null, which is interpreted
-- to mean that Key is large relative to existing nodes.
-- Following our rule above, if Key is equal to or greater than
-- the last node, then we insert the new node immediately after
-- last. (We don't have an operation for testing whether a key is
-- "equal to or greater than" a node, so we must say instead "not
-- less than", which is equivalent.)
if Hint = null then -- largest
if Tree.Last = null then
Insert_Post (Tree, null, False, Node);
elsif Is_Less_Key_Node (Key, Tree.Last) then
Unconditional_Insert_Sans_Hint (Tree, Key, Node);
else
Insert_Post (Tree, Tree.Last, False, Node);
end if;
return;
end if;
pragma Assert (Tree.Length > 0);
-- We decide here whether to insert the new node prior to the
-- hint. Key could be equivalent to the hint, so in theory we
-- could write the following test as "not greater than" (same as
-- "less than or equal to"). If Key were equivalent to the hint,
-- that would mean that the new node gets inserted before an
-- equivalent node. That wouldn't break any container invariants,
-- but our rule above says that new nodes always get inserted
-- after equivalent nodes. So here we test whether Key is both
-- less than the hint and equal to or greater than the hint's
-- previous neighbor, and if so insert it before the hint.
if Is_Less_Key_Node (Key, Hint) then
declare
Before : constant Node_Access := Ops.Previous (Hint);
begin
if Before = null then
Insert_Post (Tree, Hint, True, Node);
elsif Is_Less_Key_Node (Key, Before) then
Unconditional_Insert_Sans_Hint (Tree, Key, Node);
elsif Ops.Right (Before) = null then
Insert_Post (Tree, Before, False, Node);
else
Insert_Post (Tree, Hint, True, Node);
end if;
end;
return;
end if;
-- We know that Key isn't less than the hint, so it must be equal
-- or greater. So we just test whether Key is less than or equal
-- to (same as "not greater than") the hint's next neighbor, and
-- if so insert it after the hint.
declare
After : constant Node_Access := Ops.Next (Hint);
begin
if After = null then
Insert_Post (Tree, Hint, False, Node);
elsif Is_Greater_Key_Node (Key, After) then
Unconditional_Insert_Sans_Hint (Tree, Key, Node);
elsif Ops.Right (Hint) = null then
Insert_Post (Tree, Hint, False, Node);
else
Insert_Post (Tree, After, True, Node);
end if;
end;
end Generic_Unconditional_Insert_With_Hint;
-- Upper_Bound --
function Upper_Bound
(Tree : Tree_Type;
Key : Key_Type) return Node_Access
is
Y : Node_Access;
X : Node_Access;
begin
X := Tree.Root;
while X /= null loop
if Is_Less_Key_Node (Key, X) then
Y := X;
X := Ops.Left (X);
else
X := Ops.Right (X);
end if;
end loop;
return Y;
end Upper_Bound;
end Ada.Containers.Red_Black_Trees.Generic_Keys; | {"url":"https://gnu.googlesource.com/gcc/+/refs/tags/basepoints/gcc-13/gcc/ada/libgnat/a-crbtgk.adb?autodive=0%2F%2F%2F%2F","timestamp":"2024-11-08T05:21:08Z","content_type":"text/html","content_length":"226373","record_id":"<urn:uuid:fcea96da-fcaa-4315-9d25-fab24f2dc415>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00391.warc.gz"} |
Correlation, Causation, and Change
Data journalists often have to assess shifts in quantities and the relationships between variables in the course of their analysis. Understanding how to represent those shifts, and what the correct terminology for describing relationships is, is therefore important for doing good reporting.
Understanding Correlations
The term “correlation” is often used colloquially to refer to the idea that two variables are related to each other in some fashion. However, “correlation” also has a statistical meaning and it can
be measured statistically.
In a statistical sense, the term “correlation” refers to the extent to which two variables are dependent on each other. So, when we talk about two variables being correlated, what we are saying is
that a change in X coincides with a change in Y because of that dependence.
Much of the time, correlations are measured through the linear relationship between variables and are described by Pearson’s Product-Moment Correlation Coefficient, which is typically denoted by an
italicized, lowercase r. This statistic is calculated by using data from continuous variables (e.g., interval or ratio values), such as the relationship among basketball players between the variable
for height (i.e., number of inches) and the variable for number of blocks.
However, it is possible to assess nonlinear relationships, or rank-order relationships involving ordinal data, using other statistical tests. (For example, I could measure the association between that same height variable and a yes/no variable for whether the player made an All-NBA team.)
As such, it is important to understand that “correlation” actually refers to a fairly complex statistic.
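Statistical software typically computes this in one call (e.g., SciPy's `pearsonr`), but the definition itself is short: the covariance of the two samples divided by the product of their standard deviations. A minimal Python sketch; the height and blocks numbers below are invented for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient for two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: player heights (inches) and blocks per season
heights = [72, 75, 78, 80, 83]
blocks = [10, 25, 40, 55, 70]
print(round(pearson_r(heights, blocks), 3))  # → 0.998, a very strong positive correlation
```

Note that a perfectly linear relationship gives r of exactly 1 (or -1), which is why the statistic is only appropriate for linear relationships.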
Features of Correlations
There are two key features of correlations that data journalists should be aware of: the strength of the correlation and the type of correlation.
You can get a sense of these two features by looking at a scatter plot, where the X axis represents one variable and the Y axis represents a second variable. The data points on that scatter plot
represent the respective values of the two variables for each observation in the dataset.
Strength of Correlation
The strength of the correlation refers to how closely dependent two variables are.
You can have no correlation when there is no dependence between two variables. This means that a change in X does not result in a corresponding change in Y. In this example, the Pearson r would be 0.
On the other end, you can have perfect correlation when a linear equation perfectly describes the relationship between X and Y, such that all the data points would appear on a line for which Y
changes as X changes. For example, a one-unit change in X (the basketball player grows by an inch) might always result in a three-unit change in Y (the basketball player has three more blocks). A
perfect correlation would be represented by a Pearson r of 1 (or -1, for anticorrelation, or a perfectly negative correlation).
You can also have low correlation, medium correlation, and high correlation. The parameters for what constitutes low, medium, and high are arbitrary and dependent on the phenomenon being studied.
However, a Pearson r above 0.7 is usually viewed as indicating high correlation, above 0.5 as moderate correlation, and above 0.3 as low correlation. A phenomenon with a Pearson r below 0.3 is
generally seen as having no real correlation.
Type of Correlation
The type of correlation refers to the nature of the relationship between two variables.
A positive correlation indicates that a positive change in X results in a positive change in Y, whereas a negative correlation indicates that a positive change in X results in a negative change in Y.
Because of this, Pearson r values range from -1 to +1.
You can also have a curvilinear correlation, where the relationship is not defined by a straight line but rather by some sort of curve (e.g., an exponential relationship). A curvilinear relationship may even oscillate between positive and negative (i.e., positive until a certain point and then negative, like a semicircle). A Pearson correlation coefficient would not be appropriate for this sort of relationship, which is an important thing to keep in mind if you are trying to quantify the strength of the relationship.
Finally, you can have a partial correlation where there is a strong (or moderate) correlation until a certain point. Past that point, however, the relationship weakens or becomes non-existent.
Reporting Correlation Coefficients
Data journalists will rarely include a correlation statistic in their news story. That’s because the statistic will be largely meaningless for the general public, which lacks a general sense of how
to interpret it.
Instead, the journalist will usually describe a “strong” or “clear” relationship between variables, or perhaps a “modest” one. They may even note that “there is no clear relationship”
between two variables, even if there is a small statistical relationship. (Note the use of “relationship” instead of “correlation” to not raise any formal statistical expectations.) Thus, when it
comes to ascertaining the strength and type of relationships between variables, the journalist will often put on their sense-making hat and simply describe the take-away message.
Because of this, it is often more helpful to turn to a scatter plot (like the ones above) in order to get a visual sense of the relationship between the variables. Not only can that help you get a
better sense of the data but it will allow you to spot potential relationships that might be missed by simple, linear correlation tests.
Correlation and Causation
It is also crucial that data journalists remember the popular idiom that “correlation does not imply causation.”
A causal inference would suggest that a change in X is responsible for a change in Y. However, it is not possible to make that statement from a simple correlation test or visual examination of a
scatter plot.
For example, there is a very strong correlation between U.S. spending on Science, Space and Technology and the number of suicides by hanging, strangulation and suffocation.
A causal interpretation might be that spending on STEM is responsible for suicides — so we should stop investing in STEM! However, there is no reason to believe that is the case. Instead, this is
most likely a spurious correlation, meaning that while there is a strong mathematical association between the variables, the reason for the relationship is either random or due to a confounding
factor (a third, unseen variable that is actually responsible for the causal change).
Additionally, even if there is a causal relationship, a correlation does not tell us which variable is the causal variable (the one driving the change). For that, we need to pair statistics with
reasoning. For example, in our earlier example, it is highly unlikely that the number of blocks a player has is driving the change in height. Instead, it is more probable that it is the change in
height that is driving the change in the number of blocks they have.
You shouldn’t just dismiss correlations, though. Even though they don’t directly allow for causal inferences, they often give good reason to explore a relationship further. This is where good human
sourcing can come in and fill in the gaps.
In general, researchers tend to use more advanced statistical modeling (such as regression analysis and structural equation modeling) to identify the causal variable(s) and isolate their impacts by
statistically controlling for other potentially relevant variables. Outside of specialty data journalism outlets, few data journalists tend to adopt this level of sophistication in their analyses,
though. Instead, they tend to use more limited original analyses and contextualize them with findings from research studies or interviews with subject experts.
So, remember, “correlation does not imply causation.”
Describing Change
Data journalists spend a lot of time assessing and describing changes in quantities. It is thus important to understand the difference between absolute change and relative change, and to know when to
use each.
Absolute Change
Absolute change refers to the presentation of the change between two values by using the same scale.
For example, the statement that “there were 211 assaults in 2021, or 15 fewer than in 2020,” would refer to change in an absolute sense.
The same is true for the statements, “in-state tuition and fees at UMass increased from $16,389 in 2019-2020 to $16,439 in 2020-2021” and “alpaca attacks increased from one in 2020 to three in 2021.”
Relative Change
Relative change refers to the presentation of the change between two values by making it relative to the original size, often through the use of a percentage.
For example, the statement that “there were 211 assaults in 2021, a 7.1 percent decrease from 2020,” would refer to change in a relative sense.
The same is true for the statements, “in-state tuition and fees at UMass increased by 0.3 percent from 2019-2020 to 2020-2021” and “alpaca attacks increased by 200 percent in 2021.”
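Both presentations can be computed from the same two numbers. A small Python sketch using the figures from the examples above (`describe_change` is a hypothetical helper, not a standard function):

```python
def describe_change(old, new):
    """Return the absolute change and the relative (percent) change between two values."""
    absolute = new - old
    relative = (new - old) / old * 100  # percent, relative to the original value
    return absolute, relative

# Alpaca attacks: one in 2020, three in 2021
print(describe_change(1, 3))          # (2, 200.0): +2 attacks, a 200 percent increase

# UMass in-state tuition and fees
print(describe_change(16389, 16439))  # +$50, roughly a 0.3 percent increase
```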
How to Describe Change
Neither of these presentation modes is inherently better and so there is no “right” choice that covers all situations. Instead, it is important to recognize that the way numbers are presented can
have a significant effect on how people react to and interpret information.
For example, if I report the absolute change in alpaca attacks (an increase of two attacks per year), it might not seem like a big deal. However, if I choose to report the relative change (an increase of 200 percent), then it sounds like we might have an alpacalypse on our hands.
This is an example of how statistics can be used to “lie,” in this case by taking a small change to a low base value and making it seem like a HUGE relative change.
The choice of which presentation mode to use thus comes down to understanding the context. In this case, even after a 200 percent increase, alpaca attacks remain quite rare, so it makes the most sense to focus on the absolute change. However, for huge figures (like changes in federal government spending), a relative change is typically more informative.
It is not uncommon for journalists to blend the two by reporting the percent change alongside an absolute value. For example, a data journalist may write that “tuition and fees at UMass totaled $16,439 this academic year, marking a 0.3 percent increase from the previous year.”
In short, keep in mind as you analyze relationships and change that there are particular terms that have statistical meaning (that may come with expectations of formal statistical tests) and that the
way in which you choose to present information can have an important impact on how people come to interpret it. | {"url":"https://dds.rodrigozamith.com/intermediate-exploratory-analysis/correlation-causation-and-change/","timestamp":"2024-11-05T14:07:40Z","content_type":"text/html","content_length":"68402","record_id":"<urn:uuid:8f63e27d-9d6c-4fda-b2c0-cc15e9593060>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00245.warc.gz"} |
General Trivia Quiz
What is the minimum depth of the hole in professional golf?
What do indefinite pronouns refer to?
Which of the following is a multihulled vessel?
Instead of hyperspace, "Asteroids Deluxe" had what special protective feature for your spaceship?
What describes both direction and how fast something is going?
Which list of numbers goes from largest to smallest?
What is actors Donald and Kiefer's last name?
How many hours are there in a day?
What scientific term is used for living in or on water?
Which of these is one of France's chief products?
| {"url":"https://weqyoua.net/quiz/6322","timestamp":"2024-11-07T06:12:37Z","content_type":"text/html","content_length":"86512","record_id":"<urn:uuid:e8891acc-a0b7-4bd7-83ff-2e99d7087671>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00744.warc.gz"}
The three easy steps for using the rounding calculator are:
Enter a number
Enter a given number in the search bar of the calculator.
Select unit
Select the million unit to convert the number.
Click on "Calculate" button
Click the “Calculate” button to round the number to the nearest million.
How does the rounding to the nearest million calculator work?
The round to the nearest million place calculator rounds off integers quickly. Enter a number into the input and select the unit to convert it to the nearest million place. Then press the “Calculate” button, which displays the approximate rounded result at the millions place.
How does the nearest million place calculator round off 325695436?
The rounding up to the million calculator rounds the value 325695436 automatically once the integer is entered. It checks the digit immediately to the right of the millions place and rounds accordingly. For example, the digit after the millions place of 325695436 is 6, which is greater than 5, so the millions digit rounds up and 325695436 becomes 326000000.
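The rule described above is easy to express in code. A minimal Python sketch (the function name is ours, not the site's); note that Python's built-in `round()` uses banker's rounding at exact halves, so an explicit integer formula is used here to make a tie of exactly 500,000 round up:

```python
def round_to_nearest_million(n):
    """Round a non-negative integer to the nearest million; ties round up."""
    return ((n + 500_000) // 1_000_000) * 1_000_000

print(round_to_nearest_million(325_695_436))  # 326000000 (next digit is 6, so round up)
print(round_to_nearest_million(21_400_000))   # 21000000 (next digit is 4, so round down)
```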
Does the rounding to the nearest million calculator give relevant results?
This rounding calculator gives accurate and appropriate results for the numbers provided. You can easily use the simple, short rounded numbers for further calculations. The rounded numbers are precise and approximately equal to the original number.
What is the round to the nearest million calculator?
The rounding calculator takes an input value and rounds it at the seventh digit (the millions place). Select the place unit and click the “Calculate” button to get the rounded result. If the digit to the right of the millions place is 5 or greater, the calculator rounds the value up; if it is 0-4, the value rounds down and the millions digit remains unchanged.
How does the nearest million calculator round 21543363?
The calculator rounds the integer 21543363 up because the digit right after the millions place is 5, which is not less than 5. So, rounding 21543363 to the nearest million gives a result of 22000000. | {"url":"https://roundingcalculator.io/round-to-the-nearest-million-calculator","timestamp":"2024-11-10T14:38:14Z","content_type":"text/html","content_length":"33185","record_id":"<urn:uuid:6aa0cdae-851c-4540-925b-9e6ee00cba31>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00196.warc.gz"}
Design of a micro-irrigation system based on the control volume method
since 05 February 2011 :
, Ahmed
& Gérard
Design of a micro-irrigation system based on the control volume method
(volume 10 (2006) — number 3)
Editor's Notes
Received 5 April 2005, accepted 24 January 2006
Design of a micro-irrigation system based on the control volume method. This study presents the design of a micro-irrigation system based on the control volume method and using the “back step” procedure. The proposed numerical method is simple and consists of isolating an elementary volume of the lateral, fitted with one emitter and of length equal to the spacing between two emitters. The fundamental conservation equations of fluid dynamics are applied to this control volume. The control volume method rests on an iterative, step-by-step computation of velocity and pressure over the whole network. Using a simple computer program, the resulting equations are solved and convergence toward the solution is fast. Once the water requirements of a crop are known, it is easy to choose the overall flow rate of the network on the basis of an average emitter discharge. This computation identifies the economical pipe dimensions of the network that ensure an acceptable uniformity of water distribution (95%). The program can design complex networks with thousands of emitters and makes it possible to select the optimal network. It gives the velocity and the pressure at any point of the network. This relatively simple program was tested for lateral design and the results are similar to those obtained by the finite element method.
Keywords: back step, control volume, design, network, uniformity
A micro-irrigation system design based on control volume method using the back step procedure is presented in this study. The proposed numerical method is simple and consists of delimiting an
elementary volume of the lateral equipped with an emitter, called the « control volume », to which the conservation equations of fluid hydrodynamics are applied. The control volume method is an iterative
method to calculate velocity and pressure step by step throughout the micro-irrigation network based on an assumed pressure at the end of the line. A simple microcomputer program was used for the
calculation and the convergence was very fast. When the average water requirement of plants was estimated, it is easy to choose the sum of the average emitter discharge as the total average flow rate
of the network. The design consists of exploring an economical and efficient network to deliver uniformly the input flow rate for all emitters. This program permitted the design of a large complex
network of thousands of emitters very quickly. Three subroutine programs calculate velocity and pressure in the lateral and submain pipes. The control volume method has already been tested for lateral design, and the results were validated by other methods such as the finite element method, so it makes it possible to determine the optimal design for such a micro-irrigation network.
Keywords : back step method, Control volume, design, micro-irrigation, network, uniformity
Table of content
The finite element method (FEM) is a systematic numerical procedure that has been used to analyse the hydraulics of the lateral pipe network. A finite element computer model was developed by Bralts
and Segerlind (1985) to analyse micro-irrigation submain units. The advantage of their technique included minimal computer storage and application to a large micro-irrigation network. Bralts and
Edwards (1986) used a graphical technique for field evaluation of micro-irrigation submain units and compared the results with calculated data. Micro-irrigation system design was analysed using the
microcomputer program by Bralts et al. (1991). This program provided the pressure head and flows at each emitter in the system. The program also gave several useful statistics and provided an
evaluation of hydraulic design based upon simple statistics and economics criteria. Since the number of laterals in such a system is large, Bralts et al. (1993) proposed a technique for incorporating
a virtual node structure, combining multiple emitters and lateral lines into virtual nodes. After developing these nodal equations, the FEM was used to numerically solve nodal pressure heads at all
emitters. This simplification of the node number reduced the number of equations and was easy to calculate with a personal computer.
Most numerical methods for analysing micro-irrigation systems utilise the back step procedure, an iterative technique to solve for flow rates and pressure heads in a lateral line based on an assumed
pressure at the end of the line. However, a micro-irrigation network program needs « large » computer memory, and a long computer calculation time due to the large matrix equations.
A mathematical model was also developed for a microcomputer by Hills and Povoa (1993) analysing hydraulic characteristics in a micro-irrigation system including emitter plugging. An iterative
procedure was used to locate the average pressure using the Newton-Secant Method. Kang and Nishiyama (1994) and Kang and Nishiyama (1996a) used the finite element method to analyse the pressure head
and discharge distribution along the lateral lines and submains. A golden section search was applied (Kang, Nishiyama, 1996b) to find the operating pressure heads of the lateral and submain lines
corresponding to the required uniformity of water application. A computer program was developed using the back step procedure.
The primary objective of the present study is to develop and implement a simple program for the hydraulic analysis of lateral pipes and the micro-irrigation system. Using the back step procedure and
control volume method (CVM), results in a non-linear system of algebraic equations, where pressure and velocity are coupled. The objective is to simultaneously solve these equations. The use of the
control volume method reduced computing time required in FEM and facilitated computations.
The principal computation program was developed in Fortran language using three subroutine control volume programs: a lateral pipe program and a submain program. This computation program analysed
the head pressure and discharge distribution along the lateral and submain pipes and gave total flow rate and total operating pressure of network.
The proposed computation model was based upon equations of conservation of mass and energy applied to an elementary control volume, containing one emitter on one lateral or submain pipe, and solved
by the use of the back step procedure. The first control volume is chosen at the end of the last lateral pipe of network to find pressure at the entrance to the lateral pipe H[Lmax], or pressure at
the end of the lateral pipe H[Lmin]. The iterative process based on the back step procedure was applied successively up to the other extremity of the lateral and then across the whole network (Figure 1). The calculation
was continued step-by-step using an iteration process for all the submain units.
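The back step march described above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' Fortran program: the friction factor uses the laminar 64/Re and turbulent Blasius formulas as stand-ins for the paper's Hazen-Williams form, and every parameter value (diameter, emitter constants, spacing) is hypothetical.

```python
import math

def lateral_back_step(h_end, n_emitters, dx=1.0, d=0.016,
                      a=3.5e-7, y=0.5, nu=1.0e-6, g=9.81):
    """March upstream from the closed end of a lateral (back step):
    given the head at the last emitter, return the inlet head and velocity."""
    area = math.pi * d * d / 4.0
    h, v = h_end, 0.0                          # closed end: zero velocity
    for _ in range(n_emitters):
        q = a * h ** y                         # emitter law q = a*H^y
        v += q / area                          # mass conservation over the control volume
        re = v * d / nu                        # Reynolds number
        f = 64.0 / re if re < 2300 else 0.316 * re ** -0.25
        h += f * (dx / d) * v * v / (2.0 * g)  # energy conservation: head grows upstream
    return h, v

h_in, v_in = lateral_back_step(h_end=10.0, n_emitters=100)
print(round(h_in, 2), round(v_in, 3))
```

Running the march for an assumed end pressure directly yields the required inlet head H[Lmax]; repeating it lateral by lateral gives the submain computation.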
Figure 1 shows the total average flow rate of network Q[avg] in m^3.s^-1, which is an input for the computation, the total flow rate Q[T] in m^3.s^-1 given after computation, the total head pressure
H[Tmax] in m and the velocity V[max] in m.s^-1 at network entrance.
The total network is formed by identical laterals, as presented in figure 2.
In figure 2, H[Lmax] represents the pressure at lateral entrance, Q[max] represents the total flow rate at lateral pipe entrance, V[max] the velocity at lateral pipe entrance, H[Lmin] the pressure
at the end of the lateral pipe (at x = L[L]), V[Lmin] the velocity at the end of the lateral pipe, Q = q[i] the discharge of the last emitter, and L[L] the length of the lateral pipe.
For the elementary volume (Figure 3), the principles of mass and energy conservation are applied.
The i^th emitter discharge q[i] in m^3.s^-1 is assumed to be uniformly distributed along the length between emitters ∆x[L], and is given by:

q[i] = aH^y, with H = (H[i] + H[i+1])/2

where a is an empirical constant; y is the emitter exponent; H[i] and H[i+1] are respectively the pressures at the i^th and (i+1)^th points; H is the average pressure along ∆x[L]. The mass conservation equation for the control volume gives:
M[i]/t = M[i+1]/t + ρq[i]    (3)
where M[i] in kg is the water mass at the entrance of the control volume, M[i+1] in kg the water mass at the exit of the control volume, and t the time in s (M = ρW; ρ is the volumic mass of water, W is the volume).
The energy conservation between i and i+1 is as follows:

E[i] = E[i+1] + ∆H    (4)
where E[i] is the flow energy (pressure head) at the input and E[i+1] the flow energy at the exit; ∆H, which includes the local head loss h[f] due to the emitter, is the head loss in m due to friction along ∆x[L]. The head losses ∆H are given by the following formula:

∆H = aV[L]^m ∆x[L] + h[f]
V[L], in m.s^-1, is the average velocity between i and (i+1); V[i] and V[i+1] are the velocities at the i^th and (i+1)^th lateral cross-sections, respectively. The value of the parameter α is given by the Hazen-Williams formula: for turbulent flow, R[e] > 2300, where R[e] is the Reynolds number,

and for laminar flow, R[e] < 2300,

where C is the Hazen-Williams coefficient; K a proportionality coefficient; m the flow-regime exponent (m = 1 for laminar flow, m = 1.852 for turbulent flow); A[L] the cross-sectional area of the lateral pipe in m^2; D[L] the interior lateral pipe diameter in m; ν the kinematic viscosity in m^2.s^-1; and g the gravitational acceleration in m.s^-2. H[L] and V[L] are the average pressure and average velocity, respectively, between the i^th and (i+1)^th emitters on the lateral. The calculation model for the lateral pipe solves simultaneously a system of two coupled, non-linear algebraic equations with two unknowns: V[i+1] and H[i+1].
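The paper's friction equations (5)-(8) are not reproduced in this text, so the following sketch uses the textbook SI form of the Hazen-Williams loss for turbulent flow, together with the Reynolds-number regime test; the constant 10.67 and the exponents are the standard values, which may differ from the authors' K and m.

```python
def hazen_williams_loss(q, length, diameter, c=150.0):
    """Friction head loss (m) over `length` m of pipe of interior `diameter` m
    carrying q m^3/s, with Hazen-Williams coefficient c.
    Standard SI form, used here as a stand-in for the paper's eqs (5)-(8)."""
    return 10.67 * length * q ** 1.852 / (c ** 1.852 * diameter ** 4.871)

def is_turbulent(v, diameter, nu=1e-6):
    """Regime test from the text: turbulent if Re = V*D/nu > 2300."""
    return v * diameter / nu > 2300.0
```

For example, a flow of 0.2·10^-3 m^3.s^-1 over one 5 m emitter spacing of the 15.2 mm lateral loses roughly half a metre of head under this form.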
Equations (3) and (4) become:

A[L]V[i] = A[L]V[i+1] + q[i]

and equations (9) and (10) become:

For the lateral, equations (11) and (12) become:

For the submain pipe, the equation system is

where Q[s] is the flow rate in the submain pipe, A[s] the cross-sectional area of the submain pipe, and V[s] and H[s] the velocity and pressure in the submain pipe, respectively. At the end of the lateral V[i] = 0, and H[Lmax], the inlet head pressure, is given at the entrance of the lateral pipe. The slopes of the lateral and submain pipes are assumed to be zero (flat level).
When H[Lmax] is fixed, the lateral computation program gives the distribution of velocity (or emitter discharge) and pressure along the lateral. The theoretical development leading to equations (11), (12) and (13), (14) was already solved, without the use of matrix algebra, through the CVM or Runge-Kutta methods presented in earlier papers (Zella et al., 2003; Zella, Kettab, 2002).
The numerical calculation is accomplished using a Fortran 77 program on a micro-computer. The details of the program are given in the flowchart in Figure 4.
Step 1: fix H[Lmin] and set V[Lmin] = 0. H[Lmin] and V[Lmin] are the pressure and velocity, respectively, at the end of the lateral.
Step 2: solve the equation systems (11) and (12), or (13) and (14), so that Q[L] = Q[s] corresponding to H[Lmin] is obtained and H[Lmax] is known.
Step 3: calculate V[s] using equation (15) and H[s] using equation (16).
Using linear approximation, convergence to the solution yields H[Lmax], H[smax] and V[smax]. The convergence test is based on the two equations (17) and (18), with ε the precision imposed on the solution:
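The steps above can be sketched for a single lateral as a back-step march from the closed end to the inlet. This is an illustrative Python translation, not the authors' Fortran 77 program: the friction term uses the standard SI Hazen-Williams form as a stand-in for equations (5)-(8), and all names are assumptions.

```python
import math

def lateral_back_step(h_min, n_emitters=50, dx=5.0, d=15.2e-3,
                      a=9.14e-7, y=0.5, c=150.0):
    """Step 1: start at the closed end with V = 0 and pressure h_min.
    Then march emitter by emitter toward the inlet, applying the mass
    balance and adding the friction loss over each spacing dx.
    Returns (H_Lmax, V_Lmax, Q_L) at the lateral entrance."""
    area = math.pi * d ** 2 / 4
    v, h = 0.0, h_min
    for _ in range(n_emitters):
        q_i = a * h ** y                   # emitter law, equation (1)
        v += q_i / area                    # mass balance across the emitter
        flow = v * area
        # friction loss over dx (standard Hazen-Williams, an assumption)
        h += 10.67 * dx * flow ** 1.852 / (c ** 1.852 * d ** 4.871)
    return h, v, v * area
```

Starting from H[Lmin] = 20 m, the sketch returns the inlet pressure H[Lmax], the inlet velocity and the lateral flow rate; an outer loop would then adjust the end conditions until the convergence tests (17) and (18) are satisfied to the precision ε.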
The discharge of any emitter on the lateral is given by the typical relation, equation (1); the average emitter discharge q[avg] is taken over all emitter discharges on the lateral (NG); the total discharge Q[L] (or Q[max]) at the lateral entrance and the total average discharge Q[Lavg], corresponding to the average pressure H[Lavg], are evaluated by the following equations, where q[n] is the nominal emitter discharge:

Q[max] = V[Lmax]A[L]

Q[Lavg] = NG·q[n]
The coefficients of variation for discharge and pressure are the quotients of the standard deviation by the average emitter discharge and the average pressure, respectively:

The coefficients of discharge uniformity (Cu[q]) and pressure uniformity (Cu[H]) are calculated by the following equations (Bralts et al., 1993):

Cu[q] = 100(1 − C[vq])

Cu[H] = 100(1 − C[vH])
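These statistics can be computed directly from a list of emitter discharges (or pressures); a minimal sketch, with illustrative names:

```python
import statistics

def uniformity(values):
    """Return (Cv, Cu): the coefficient of variation sigma/mean and the
    uniformity coefficient Cu = 100*(1 - Cv), as defined in the text."""
    mean = statistics.fmean(values)
    cv = statistics.pstdev(values) / mean   # population standard deviation
    return cv, 100.0 * (1.0 - cv)
```

Perfectly uniform discharges give Cu = 100%; the cases worked in the text report uniformities above 94%.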
A horizontal lateral pipe (slope = 0%) of length L[L] = 250 m and interior diameter D[L] = 15.2 mm is considered. The Hazen-Williams coefficients of polyethylene tubing are C = 150, m = 1.852 and K = 5.88, and the water kinematic viscosity is ν = 10^-6 m^2.s^-1. The emitter spacing Δx[L] is 5 m, so the number of emitters per lateral is NG = 50. The emitter constant and exponent are a = 9.14·10^-7 and y = 0.5, respectively. The required precision for the convergence test is ε = 10^-6.
The computation program using the CVM provided the distribution of discharge and pressure for different values of the head pressure H[Lmax] (Table 1). The uniformity coefficient increased with H[Lmax] while all other parameters (diameter, length, emitter, spacing) were held constant. As the pressure increased from 20 to 60 m, or 75% of the highest H[Lmax], the uniformity coefficient Cu[q] increased by only 0.56%. This shows that it is pointless to opt for elevated H[Lmax] values, since the uniformity of distribution does not improve. If an increase in Cu[q] is required, it is necessary to change the diameter of the lateral line or the type of emitter. An elevated value of H[max] is a waste of pumping energy. Water uniformity (≈ 95%) is guaranteed to satisfy the water needs of plants when the variations of pressure and emitter discharge are less than 20% and 10%, respectively.
The example illustrated in Figure 5 is a result obtained by the computation program based on the CVM; the curve gives the pressure along the lateral line and therefore at each emitter. This result is the same as the one obtained by Bralts et al. (1993) using the finite element method. Convergence is reached after 8 iterations and a computing time of 4 seconds for the FEM, compared to 3 iterations and one second for the CVM. The CVM model can be considered validated by reaching the same result as the FEM model of Bralts et al. (1993), which was itself validated against the "exact" method.
Case 1: A simple unit submain, as shown in Figure 1. A horizontal submain pipe (slope = 0%) was considered, with length L[s] = 50 m, diameter D[s] = 0.025 m, and number of laterals NR = 10. All the laterals are identical, with the characteristics defined in the previous example (L[L] = 250 m, D[L] = 1.52·10^-2 m). The emitters are also the same, so the total number of emitters is NGT = 500; the calculation precision is kept at ε = 10^-3.
After the network program computation, the total pressure found was H[Tmax] = 44.23 m, the maximal velocity V[max] = 4.80 m.s^-1, and the flow rate Q[T] = 2.357 l.s^-1.
These results of the network program were verified by executing the lateral program for the 10 laterals of the network. For the 10^th lateral, fixed at the extremity of the submain pipe, the total flow rate was Q[L] = 0.2172·10^-3 m^3.s^-1 for an input pressure H[Lmax] = 30 m; Figure 5 represents the distribution of the pressure along the lateral pipe. Between the 10^th and 9^th laterals, the pressure loss determined by the Darcy-Weisbach equation was ΔH[s] = 0.062 m. The calculation was carried out for the ten laterals and the results are grouped in Table 2. The difference between the average design flow rate Q[avg] and the total flow rate Q[T] given after computation was only 3.9·10^-3 l.s^-1. This means that the Q[avg] introduced as designer data was completely distributed over all the emitters.
Water and mineral elements are delivered to a localised place, at the level of each plant, by the emitters, whose discharge is a function of the lateral pressure. The precision of the irrigation application, which must exactly satisfy the crop requirement, depends fundamentally on the design of the network. The design takes into account the pressure variations, which are due not only to head losses in the pipes of the network but also to the land slope, the emitter characteristics, the water and air temperatures, and possible plugging of the emitter orifices.
Case 2: Inlet flow rate at the middle of the submain. The network (Figure 6) was composed of 2 symmetric submain pipes with the same data as case 1. H[Tmax] = 36.147 m, V[max] = 2.40 m.s^-1, Q[T] = 2.356 l.s^-1.
The network program was tested for this micro-irrigation network configuration. The results were correct and precise at the entrance of the submain pipe. The average flow rate Q[avg] was distributed completely among the emitters, ensuring zero velocity at the extremity of every lateral pipe and a distribution uniformity above 94.22%. The difference between the average design flow rate Q[avg] and the Q[T] given after computation was only 3.9·10^-3 l.s^-1. The results are delivered essentially instantaneously; the computing time was very short.
Case 3: A network as described in Figure 1 with the same data. The results after computation are shown for several values of NR (L[s] = 50 m) in Table 3.
For NR = 50 and NR = 100, the submain diameter is 0.08 m, because with D[s] = 0.025 m there is no solution: the flow rate is high, producing very large head losses in the submain and the laterals. In these cases the velocities are very high, and it is necessary to increase the submain diameter further in order to obtain values around 1.5 m.s^-1.
These results show that the computer program operates well and converges quickly toward the fixed (Q[avg]) solution at the desired uniformity. The uniformity of emitter distribution is above 94.22% for these test cases. The program gave precise results for networks covering an irrigated area of 12.5 ha totalling 5000 emitters. For such a large micro-irrigation system, the task of calculating the pressure and discharge of each emitter accurately and efficiently becomes enormous, so it is important to choose this computation method.
The control volume method was tested and validated for lateral design and was used in this paper for designing a simple micro-irrigation network. The model precisely describes the distribution of pressure and discharge over all the network emitters. For each design pattern, the total discharge, the total required pressure, and the uniformity of pressure and discharge are determined. The combination of network pipe sizing and distribution uniformity (plant water requirement) is applied to guarantee optimal exploitation, taking into account the limits imposed by the specific norms for micro-irrigation and the technical limits of velocity and pressure tolerance. Uniformity of water distribution is a main criterion for network design. A microcomputer program was developed that permits high-precision designs in order to optimise the water distribution uniformity at a reasonable investment cost. The proposed methodology is computationally efficient and can help irrigation consultants in the design of micro-irrigation systems. In arid and semi-arid regions, good design is important to increase yields and to conserve water and soil, as well as for the economical use of power.
a: empirical constant of emitter
ν: kinematic viscosity of water (m^2.s^-1)
ε: precision of convergence (%)
σ: standard deviation of average emitter discharge (%)
ΔH: head loss along the length of control volume Δx (m)
ΔH[L]: head loss along the length of the lateral pipe (L[L]) (m)
ΔH[s]: head loss along the length of the submain pipe (L[s]) (m)
Δx[L]: spacing between emitters (m)
Δx[s]: spacing between laterals (m)
α: coefficient of head loss (Hazen-Williams)
A[L]: cross-sectional area of lateral pipe (m^2)
A[s]: cross-sectional area of submain pipe (m^2)
C: Hazen-Williams coefficient
Cu[H]: coefficient of pressure uniformity at the lateral pipe (%)
Cu[q]: coefficient of discharge uniformity at the lateral pipe (%)
C[vH]: coefficient of variation of emitter pressure (%)
C[vq]: coefficient of variation of emitter discharge (%)
D[L]: interior lateral pipe diameter (m)
D[s]: interior diameter of submain pipe (m)
E[i]: water energy at entrance of control volume (m)
E[i+1]: water energy at exit of control volume (m)
g: gravitational acceleration (m.s^-2)
H[L1]: pressure of emitter number 1 on the lateral (≈ H[Lmax]) (m)
H[L50]: pressure of emitter number 50 on the lateral (= H[Lmin]) (m)
H[Lavg]: average pressure at the lateral (m), corresponding to the average discharge Q[Lavg] (m^3.s^-1)
H[Lmax]: pressure at the entrance to the lateral pipe (m)
H[Lmin]: pressure at the end of the lateral pipe (m)
H[s]: pressure in submain pipe (m)
H[smax]: pressure at the entrance of the submain pipe (m)
H[Tmax]: total pressure of the network (m)
Iter: number of computation iterations
K: coefficient of proportionality
L[L]: length of the lateral pipe (m)
L[s]: length of the submain pipe (m)
m: exponent of flow regime
M[i]: water mass at entrance of control volume (kg)
M[i+1]: water mass at exit of control volume (kg)
NG: number of emitters on a lateral
NGT: total number of emitters in the network
NR: number of lateral pipes in the network
q[avg]: average discharge of emitters (m^3.s^-1)
Q[avg]: total average flow rate of the network, input data (m^3.s^-1)
q[i]: discharge of emitter i (m^3.s^-1)
Q[max] (= Q[L]): total flow rate at the lateral entrance (m^3.s^-1)
q[n]: nominal discharge of emitter (m^3.s^-1)
Q[s]: flow rate in the submain (m^3.s^-1)
Q[T]: total average flow rate of the network given by computation, output data (m^3.s^-1)
R[e]: Reynolds number
t: time (s)
V[Lmax]: velocity at the lateral pipe entrance (m.s^-1)
V[Lmin]: velocity at the end of the lateral pipe (m.s^-1)
V[max]: velocity at the network entrance (m.s^-1)
V[s]: velocity in the submain (m.s^-1)
V[smax]: velocity at the entrance of the submain pipe (m.s^-1)
y: emitter exponent
Bralts VF., Segerlind LJ. (1985). Finite element analysis of drip irrigation submain units. Trans. Am. Soc. Agric. Eng. 28 (3), p. 809–814.
Bralts VF., Edwards DM. (1986). Field evaluation of submain units. Trans. Am. Soc. Agric. Eng. 29 (6), p. 1659–1664.
Bralts VF., Shayya WH., Driscoll MA., Cao L. (1991). An expert system for the hydraulic design of microirrigation systems. International Summer Meeting, ASAE, New Mexico, paper n° 91-2153, 12 p.
Bralts VF., Kelly SF., Shayya WH., Segerlind LJ. (1993). Finite element analysis of microirrigation hydraulics using a virtual emitter system. Trans. Am. Soc. Agric. Eng. 36 (3), p. 717–725.
Hills DJ., Povoa AF. (1993). Pressure sensitivity to microirrigation emitter plugging. International Summer Meeting, ASAE/CSAE, Washington, paper n° 93-2130, 17 p.
Kang Y., Nishiyama S. (1994). Finite element method analysis of microirrigation system pressure distribution. Trans. Jap. Soc. Irrig. Drain. Reclam. Eng. 169, p. 19–26.
Kang Y., Nishiyama S. (1996a). Analysis and design of microirrigation laterals. J. Irrig. Drain. Eng. 122 (2), March/April, ASCE, p. 75–82.
Kang Y., Nishiyama S. (1996b). Design of microirrigation submain units. J. Irrig. Drain. Eng. 122 (2), March/April, ASCE, p. 83–90.
Zella L., Kettab A. (2002). Numerical methods of microirrigation lateral design. Biotechnol. Agron. Soc. Environ. 6 (4), p. 231–235.
Zella L., Kettab A., Chasseriaux G. (2003). Hydraulic simulation of micro-irrigation lateral using control volume method. Agronomie 23 (1), p. 37–44.
To cite this article
Lakhdar Zella, Ahmed Kettab & Gérard Chasseriaux, "Design of a micro-irrigation system based on the control volume method", BASE [online], volume 10 (2006), issue 3, 163–171. URL: https://
University of Blida. BP 30 A Ouled Yaich. Blida. 09100 (Algérie). E-mail: lakhdarz@yahoo.fr
National Polytechnic School. Avenue Hacen Badi, 10. El Harrach, 16200. Alger (Algérie).
INH, INRA. UMR A-462 Sagah. Université d’Angers. F-49000 Angers (France).
Conference Program & Abstracts
Monday, July 1
08:40 - 09:40
Stemming the neural data deluge
Jan Rabaey
Room: Conference Hall
Chair: Yonina C. Eldar (Technion-Israel Institute of Technology, Israel)
The dynamic brain map initiative, recently announced by the Obama administration, calls for the simultaneous monitoring of 1 million neurons ("the million neuron march") in a single human brain.
While the precise meaning of this depends upon the phenomena to be observed or the resolution thereof, the design of interfaces that collect that amount of data in situ and transfer it to outside the
brain, is enormously challenging. Implanted neural interface circuits are extremely limited in size, energy and communication bandwidth. Scaling in both resolution and coverage requires innovative
approaches that simultaneously reduce the energy related to data acquisition, and process and format the data so that it can be transmitted through a very narrow data pipe. In this presentation, we
will present the overall challenges related to the design and realization of neural implants for scientific, neuro-prosthetic and brain-machine interface purposes. The roles of sampling, compression and feature extraction will be emphasized. One thing is for certain: the outcomes of initiatives such as the brain map are bound to have a profound impact on our understanding of the brain and on cyber-human interaction.
10:00 - 12:00
Compressed Sensing
Room: Conference Hall
Chair: Deanna Needell (Stanford University, USA)
10:00 Overcoming the coherence barrier in compressed sensing
We introduce a mathematical framework that bridges a substantial gap between compressed sensing theory and its current use in real-world applications. Although completely general, one of the
principal applications for our framework is the Magnetic Resonance Imaging (MRI) problem. Our theory provides a comprehensive explanation for the abundance of numerical evidence demonstrating the
advantage of so-called variable density sampling strategies in compressive MRI. Another important conclusion of our theory is that the success of compressed sensing is resolution dependent. At
low resolutions, there is little advantage over classical linear reconstruction. However, the situation changes dramatically once the resolution is increased, in which case compressed sensing can
and will offer substantial benefits.
pp. 1-4
10:20 On construction and analysis of sparse matrices and expander graphs with applications to CS
We revisit the probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random. These matrices have a
one-to-one correspondence with the adjacency matrices of lossless expander graphs. We present tail bounds on the probability that the cardinality of the set of neighbors for these graphs will be
less than the expected value. The bounds are derived through the analysis of collisions in unions of sets, using a dyadic splitting technique. This analysis leads to better constants, which allow quantitative theorems on the existence of lossless expander graphs (and hence of the sparse random matrices we consider), as well as quantitative compressed sensing (CS) sampling theorems for sparse, non-mean-zero measurement matrices.
pp. 5-8
10:40 OMP with Highly Coherent Dictionaries
Recovering signals that have a sparse representation from a given set of linear measurements has been a major topic of research in recent years. Most of the work on this subject focuses on reconstructing the signal's representation as the means to recover the signal itself. This approach forces the dictionary to be of low coherence and to have no linear dependencies between its columns. Recently, a series of contributions has shown that such dependencies can be allowed by aiming at recovering the signal itself. However, most of these recent works consider the analysis framework, and only a few discuss the synthesis model. This paper studies the synthesis model and introduces a new mutual-coherence definition for signal recovery, showing that a modified version of OMP can recover sparsely represented signals over a dictionary with very high correlations between pairs of columns. We show how the derived results apply to plain OMP.
pp. 9-12
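For reference, the plain OMP baseline that the abstract's modified algorithm builds on can be sketched as follows; this is a generic textbook implementation, not the authors' variant.

```python
import numpy as np

def omp(D, x, k):
    """Plain orthogonal matching pursuit: greedily select k columns of the
    dictionary D (columns assumed unit-norm), re-fitting the coefficients
    on the selected support by least squares at every step."""
    residual = x.astype(float).copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    z = np.zeros(D.shape[1])
    z[support] = coeffs
    return z
```

The selection step `argmax |D^T r|` is exactly where dictionary coherence matters: highly correlated columns can make the greedy choice ambiguous, which is the difficulty the abstract addresses.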
11:00 Recovery of cosparse signals with Gaussian measurements
This paper provides theoretical guarantees for the recovery of signals from undersampled measurements based on $\ell_1$-analysis regularization. We provide both nonuniform and stable uniform
recovery guarantees for Gaussian random measurement matrices when the rows of the analysis operator form a frame. The nonuniform result relies on a recovery condition via tangent cones and the
case of uniform recovery is based on an analysis version of the null space property.
pp. 13-16
11:20 q-ary compressive sensing
We introduce q-ary compressive sensing, an extension of 1-bit compressive sensing. The recovery properties of the proposed approach are analyzed both theoretically and empirically. Results in 1-bit compressive sensing are recovered as a special case. Our theoretical results suggest a trade-off between the quantization parameter q and the number of measurements m in the control of the error of the resulting recovery algorithm, as well as its robustness to noise.
pp. 17-20
11:40 Low-rank Tensor Recovery via Iterative Hard Thresholding
We study recovery of low-rank tensors from a small number of measurements. A version of the iterative hard thresholding algorithm (TIHT) for the higher order singular value decomposition (HOSVD)
is introduced. As a first step towards the analysis of the algorithm, we define a corresponding tensor restricted isometry property (HOSVD-TRIP) and show that Gaussian and Bernoulli random
measurement ensembles satisfy it with high probability.
pp. 21-24
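For context, the classical vector-case iterative hard thresholding that the tensor algorithm generalizes can be sketched as follows; the tensor version (TIHT) replaces the keep-k-largest-entries projection with a truncated HOSVD. A generic sketch, not the authors' code:

```python
import numpy as np

def iht(A, y, k, iters=100, step=1.0):
    """Classical IHT for sparse recovery: a gradient step on ||y - Ax||^2
    followed by keeping only the k largest-magnitude entries."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * (A.T @ (y - A @ x))   # gradient step
        idx = np.argsort(np.abs(g))[:-k]     # indices of all but the k largest
        g[idx] = 0.0                         # hard threshold
        x = g
    return x
```

The restricted isometry property controls when this iteration converges to the true sparse vector; the HOSVD-TRIP in the abstract plays the analogous role for low-rank tensors.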
Time-Frequency Analysis
Room: Conference Room
Chair: Hans Feichtinger (University of Vienna, Austria)
10:00 (Non-)Density Properties of Discrete Gabor Multipliers
This paper is concerned with the possibility of approximating arbitrary operators by multipliers for Gabor frames or more general Bessel sequences. It addresses the question of whether sets of
multipliers (whose symbols come from prescribed function classes such as $\ell^2$) constitute dense subsets of various spaces of operators (such as Hilbert-Schmidt class). We prove a number of
negative results that show that in the discrete setting subspaces of multipliers are usually not dense and thus too small to guarantee arbitrary good approximation. This is in contrast to the
continuous case.
pp. 25-28
10:20 Estimation of frequency modulations on wideband signals; applications to audio signal analysis
The problem of jointly estimating the power spectrum and the modulation from realizations of frequency-modulated stationary wideband signals is considered. The study is motivated by specific signal classes in which departures from stationarity can carry relevant information and have to be estimated. The estimation procedure is based upon explicit modeling of the signal as a wideband stationary Gaussian signal transformed by a time-dependent, smooth frequency modulation. Under such assumptions, an approximate expression for the second-order statistics of the transformed signal's Gabor transform is obtained, which leads to an approximate maximum-likelihood estimation procedure. The proposed approach is validated on numerical simulations.
pp. 29-32
10:40 Gabor dual windows using convex optimization
Redundant Gabor frames admit an infinite number of dual frames, yet only the canonical dual Gabor system, constructed from the minimal l2-norm dual window, is widely used. This window function, however, might lack desirable properties, such as good time-frequency concentration, small support, or smoothness. We employ convex optimization methods to design dual windows satisfying the Wexler-Raz equations and optimizing various constraints. Numerical experiments show that alternate dual windows with considerably improved features can be found.
11:00 Sparse Finite Gabor Frames for Operator Sampling
We derive some interesting properties of finite Gabor frames and apply them to the sampling or identification of operators with bandlimited Kohn-Nirenberg symbols, or equivalently those with
compactly supported spreading functions. Specifically we use the fact that finite Gabor matrices are full Spark for an open, dense set of window vectors to show the existence of periodically
weighted delta trains that identify simultaneously large operator classes. We also show that sparse delta trains exist that identify operator classes for which the spreading support has small
pp. 37-40
11:20 Optimal wavelet reconstructions from Fourier samples via generalized sampling
We consider the problem of computing wavelet coefficients of compactly supported functions from their Fourier samples. For this, we use the recently introduced framework of generalized sampling
in the context of compactly supported orthonormal wavelet bases. Our first result demonstrates that using generalized sampling one obtains a stable and accurate reconstruction, provided the
number of Fourier samples grows linearly in the number of wavelet coefficients recovered. We also present the exact constant of proportionality for the class of Daubechies wavelets. Our second
result concerns the optimality of generalized sampling for this problem. Under some mild assumptions generalized sampling cannot be outperformed in terms of approximation quality by more than a
constant factor. Moreover, for the class of so-called perfect methods, any attempt to lower the sampling ratio below a certain critical threshold necessarily results in exponential
ill-conditioning. Thus generalized sampling provides a nearly-optimal solution to this problem.
pp. 41-44
11:40 Wavelet Signs: A New Tool for Signal Analysis
We propose a new analysis tool for signals, called signature, that is based on complex wavelet signs. The complex-valued signature of a signal at some spatial location is defined as the
fine-scale limit of the signs of its complex wavelet coefficients. We show that the signature equals zero at sufficiently regular points of a signal whereas at salient features, such as jumps or
cusps, it is non-zero. We establish that signature is invariant under fractional differentiation and rotates in the complex plane under fractional Hilbert transforms. We derive an appropriate
discretization, which shows that wavelet signatures can be computed explicitly. This allows an immediate application to signal analysis.
pp. 45-48
13:20 - 15:20
Sampling and Frame Theory
Bernhard Bodmann, Peter Casazza, Matthew Fickus
Room: Conference Room
Chair: Matthew Fickus (AF Institute of Technology, USA)
13:20 Balayage and short time Fourier transform frames
Using his formulation of the potential theoretic notion of balayage and his deep results about this idea, Beurling gave sufficient conditions for Fourier frames in terms of balayage. The analysis
makes use of spectral synthesis, due to Wiener and Beurling, as well as properties of strict multiplicity, whose origins go back to Riemann. In this setting and with this technology, we formulate
and prove non-uniform sampling formulas in the context of the short time Fourier transform (STFT).
pp. 73-76
13:40 Fundamental Limits of Phase Retrieval
Recent advances in convex optimization have led to new strides in the phase retrieval problem over finite-dimensional vector spaces. However, certain fundamental questions remain: What sorts of
measurement vectors uniquely determine every signal up to a global phase factor, and how many are needed to do so? This paper presents several results that address these questions, specifically
in the less-understood complex case. In particular, we characterize injectivity, we identify that the complement property is indeed necessary, we pose a conjecture that 4M-4 generic measurement
vectors are necessary and sufficient for injectivity in M dimensions, and we describe how to prove this conjecture in the special cases where M=2,3. To prove the M=3 case, we leverage a new test
for injectivity, which can be used to determine whether any 3-dimensional measurement ensemble is injective.
pp. 77-80
14:00 On transformations between Gabor frames and wavelet frames
We describe a procedure that enables us to construct dual pairs of wavelet frames from certain dual pairs of Gabor frames. Applying the construction to Gabor frames generated by appropriate
exponential B-splines gives wavelet frames generated by functions whose Fourier transforms are compactly supported splines with geometrically distributed knot sequences. There is also a reverse
transform, which yields pairs of dual Gabor frames when applied to certain wavelet frames.
pp. 81-84
14:20 Perfect Preconditioning of Frames by a Diagonal Operator
Frames which are tight might be considered optimally conditioned in the sense of their numerical stability. This leads to the question of perfect preconditioning of frames, i.e., modification of a given frame to generate a tight frame. In this paper, we analyze perfect preconditioning of frames by a diagonal operator. We derive various functional-analytic and geometric characterizations of the class of frames which allow such perfect preconditioning.
pp. 85-88
14:40 Characterizing completions of finite frames
Finite frames are possibly-overcomplete generalizations of orthonormal bases. We consider the "frame completion" problem, that is, the problem of how to add vectors to an existing frame in order
to make it better conditioned. In particular, we discuss a new, complete characterization of the spectra of the frame operators that arise from those completions whose newly-added vectors have
given prescribed lengths. To do this, we build on recent work involving a frame's eigensteps, namely the interlacing sequence of spectra of its partial frame operators. We discuss how such
eigensteps exist if and only if our prescribed lengths are majorized by another sequence which is obtained by comparing our completed frame's spectrum to our initial one.
pp. 89-92
15:00 A note on scalable frames
We study the problem of determining whether a given frame is scalable, and when it is, understanding the set of all possible scalings. We show that for most frames this is a relatively simple
task in that the frame is either not scalable or is scalable in a unique way, and to find this scaling we just have to solve a linear system. We also provide some insight into the set of all
scalings when there is not a unique scaling. In particular, we show that this set is a convex polytope whose vertices correspond to minimal scalings.
pp. 93-96
Optical and RF Systems
Michael Gehm, Nathan Goodman
Room: Conference Hall
Chair: Nathan A Goodman (University of Oklahoma, USA)
13:20 Measurement Structures and Constraints in Compressive RF Systems
Compressive sensing (CS) is a powerful technique for sub-sampling of signals combined with reconstruction based on sparsity. Many papers have been published on the topic; however, they often fail
to consider practical hardware factors that may prevent or alter the implementation of desired CS measurement kernels. In particular, different compressive architectures in the RF domain either
sacrifice collected signal energy or create noise folding, both of which cause SNR reduction. In this paper, we consider valid signal models and other system aspects of RF compressive systems.
pp. 49-52
13:40 Calibration—An open challenge in creating practical computational- and compressive-sensing systems
The goal of this manuscript (and associated talk) is not to present any recent experimental results from my laboratory. Rather, the purpose is to elucidate why I believe that calibration is one
of the few remaining significant challenges in the struggle to create a wide range of practical computational sensing and compressive sensing (CS) systems. Toward this end, I briefly describe the
fundamental and implementation difficulties associated with calibration as well as the existing calibration approaches and their associated limitations before sketching the theoretical question
that must be addressed in order to solve the calibration challenge.
pp. 53-56
14:00 Compressive CFAR Radar Processing
In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell
Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Message Passing (CAMP) algorithm, we demonstrate that the behavior of the CFAR processor is
independent of the combination with the non-linear recovery and therefore its performance can be predicted using standard radar tools. We also compare the performance of the CS CFAR processor to
that of an L1-norm detector using an experimental data set.
pp. 57-60
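For readers unfamiliar with CFAR detection, the cell-averaging variant referred to above can be sketched in a few lines. This is an illustrative toy only (window sizes, scale factor, and test signal are invented for the example), not the paper's combined CS-CFAR processor:

```python
def ca_cfar(power, num_train=4, num_guard=1, scale=3.0):
    """Cell-Averaging CFAR: flag cell i when its power exceeds `scale`
    times the mean of the surrounding training cells (guards excluded)."""
    detections = []
    n = len(power)
    for i in range(n):
        train = []
        # training cells to the left of the guard band
        for j in range(i - num_guard - num_train, i - num_guard):
            if 0 <= j < n:
                train.append(power[j])
        # training cells to the right of the guard band
        for j in range(i + num_guard + 1, i + num_guard + num_train + 1):
            if 0 <= j < n:
                train.append(power[j])
        if train and power[i] > scale * sum(train) / len(train):
            detections.append(i)
    return detections

# Noise floor around 1.0 with a strong target at cell 10.
signal = [1.0] * 20
signal[10] = 12.0
print(ca_cfar(signal))  # → [10]
```

The threshold adapts to the local interference level, which is what keeps the false-alarm rate constant across varying clutter.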
14:20 Sampling Techniques for Improved Algorithmic Efficiency in Electromagnetic Sensing
Ground-penetrating radar (GPR) and electromagnetic induction (EMI) sensors are used to image and detect subterranean objects; for example, in landmine detection. Compressive sampling at the
sensors is important for reducing the complexity of the acquisition process. However, there is a second form of sampling done in the imaging-detection algorithms where a parametric forward model
of the EM wavefield is used to invert the measurements. This parametric model includes all the features that need to be extracted from the object; for subterranean targets this includes but is
not limited to type, 3D location, and 3D orientation. As parameters are added to the model, the dimensionality increases. Current sparse recovery algorithms employ a dictionary created by
sampling the entire parameter space of the model. If uniform sampling is done over the high-dimensional parameter space, the size of the dictionary and the complexity of the inversion algorithms
grow rapidly, exceeding the capability of real-time processors. This paper shows that strategic sampling practices can be exploited in both the parameter space and the acquisition process to dramatically improve the efficiency and scalability of these EM sensor systems.
pp. 61-64
14:40 Coding and sampling for compressive tomography
This paper discusses sampling system design for estimation of multidimensional objects from lower dimensional measurements. We consider examples in geometric, diffractive, coherence, spectral and
temporal tomography. Compressive tomography reduces or eliminates conventional tradeoffs between temporal and spatial resolution.
pp. 65-68
15:00 Challenges in Optical Compressive Imaging and Some Solutions
The theory of compressive sensing (CS) has opened up new opportunities in the field of optical imaging. However, its implementation in this field is often not straightforward. We list the
implementation challenges that might arise in compressive imaging and present some solutions to overcome them.
pp. 69-72
15:40 - 16:40
Sampling theory and applications: developments in the last 20 years and future perspectives
Hans-Georg Feichtinger
Room: Conference Hall
Chair: Paul Butzer (RWTH Aachen, Germany)
It is by now 20 years since a group of visionaries (including Paul Butzer, Abdul Jerri, Farokh Marvasti, and others, among them the organizer of the first meeting) organized the first SampTA conference in Riga. Thanks to the efforts of many members of our community (e.g. Paulo Ferreira, who was willing to organize the second conference in Aveiro), the SampTA series of biennial conferences was created, and it by now has a well-established position on the international conference scene. It has kept the spirit of a meeting place between mathematicians and applied people, exchanging ideas at a theoretical level as well as sharing experiences about efficient algorithms and their implementations.
In my talk I will try to summarize the cornerstones of sampling theory as it stands now, indicating some of the key developments of those past 20 years. The concept of frames is now absolutely well established in the community, while the value of Banach frames is still often underestimated. Function-space methods have developed further, and various new settings have been introduced, in particular the setting of shift-invariant spaces. Although locally compact Abelian groups are the natural setting for the description of the (regular or irregular) sampling theorem, among other reasons because versions of Poisson's formula are valid in this context, many new settings (band-limited functions over other domains, as e.g. in the work of I. Pesenson) have been considered in those last 20 years. Localization theory has come into place, providing tools to describe the locality of reconstruction, which is important if only local data are available or for real-time applications (the user does not have access to "future" samples).
As an outlook into the future I will indicate a few areas, where I expect still much more to come. Of course it will be the younger generation who will now shape the future, but I am confident that
sampling theory has not just established itself as a distinguished research field for a small group of specialists, but will stay a meeting ground for different disciplines. We have to continue to
keep contact between the different communities, to refresh ideas, keep the assumptions of our theorems realistic and useful, check their validity in practical situations, and work jointly towards
something that I would like to call "consumer reports", telling the (experienced and non-experienced) users, from within and outside of our community, which methods might be most promising for
certain application areas.
Tuesday, July 2
08:40 - 09:40
How to best sample a solution manifold?
Wolfgang Dahmen
Room: Conference Hall
Chair: Gitta Kutyniok (Technical University Berlin, Germany)
Many design or optimization tasks in scientific computation require a frequent (even online) evaluation of solutions to parameter dependent families of partial differential equations describing the
underlying model. This is often only feasible when the model is suitably reduced. The so called Reduced Basis Method is a model reduction paradigm that has recently been attracting considerable
attention since it aims at certifying the quality of the reduced model through a-posteriori error bounds. The central idea is to approximate the solution manifold, comprised of all solutions obtained
when the parameter ranges over the given parameter domain, by the linear hull of possibly few snapshots from that manifold so as to still guarantee that the maximal error stays below a given
target tolerance. This talk highlights the basic ideas behind this method revolving around greedy strategies for the construction of reduced bases.
Moreover, some recent developments are indicated which address the optimal performance of new stabilized variants for problem classes that cannot be treated well by conventional techniques, such
as unsymmetric singularly perturbed problems. A crucial conceptual ingredient is shown to be a way of "preconditioning" the involved parameter dependent operators on the infinite dimensional level.
10:00 - 12:00
Sampling and Quantization
Holger Boche, Sinan Güntürk, Özgür Yilmaz
Room: Conference Hall
Chair: Sinan Gunturk (New York University, USA)
10:00 Finite-power spectral analytic framework for quantized sampled signals
To be accurate, the theoretical spectral analysis of quantized sequences requires that the deterministic definition of power spectral density be used. We establish the functional-space foundations for this analysis, which, remarkably, appear to have been missing until now. With them, we then shed some new light on quantization error spectra in PCM and ΣΔ modulation.
pp. 97-100
10:40 Non-Convex Decoding for Sigma Delta Quantized Compressed Sensing
Recently Güntürk et al. showed that $\Sigma\Delta$ quantization is more effective than pulse-code modulation (PCM) when applied to compressed sensing measurements of sparse signals as long as
the step size of the quantizer is sufficiently fine. PCM with the $l^1$ decoder recovers an approximation to the original sparse signal with an error proportional to the quantization step size.
For an $r$-th order $\Sigma\Delta$ scheme the reconstruction accuracy can be improved by a factor of $(m/k)^{\alpha(r-1/2)}$ for any $0 < \alpha < 1$ if $m \gtrsim k(\log N)^{1/(1-\alpha)}$, with
high probability on the measurement matrix. In this paper, we make the observation that the sparsest minimizer subject to a $\Sigma\Delta$-type quantization constraint would approximate the
original signal from the $\Sigma\Delta$ quantized measurements with a comparable reconstruction accuracy. Then we show that the less intractable non-convex $l^\tau$ minimization for $\tau$
sufficiently small can also be used as an alternative recovery method to achieve a comparable reconstruction accuracy. In both cases, we remove the requirement that the quantization alphabet be
fine. Finally, we show using these results that root-exponential accuracy in the bitrate can be achieved for compressed sensing of sparse signals using $\Sigma\Delta$ quantization as the encoder
and $l^\tau$ minimization as the decoder.
pp. 101-104
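As background for the Sigma-Delta abstracts in this session, a generic first-order ΣΔ quantizer with a two-element alphabet can be sketched as follows. This is the textbook scheme only, not the r-th order compressed-sensing encoder analyzed in the paper, and the constant test input is an invented example:

```python
def sigma_delta(samples):
    """First-order Sigma-Delta: q_n = sign(u_{n-1} + y_n),
    u_n = u_{n-1} + y_n - q_n.  The state u stays bounded, so the
    running average of the output bits tracks the input average."""
    u = 0.0
    bits = []
    for y in samples:
        q = 1.0 if u + y >= 0 else -1.0
        bits.append(q)
        u = u + y - q
    return bits

# Constant input: the bit-stream average converges to the input value.
ys = [0.25] * 400
bits = sigma_delta(ys)
print(round(sum(bits) / len(bits), 2))  # → 0.25
```

Because the state is bounded, the accumulated quantization error over n samples is O(1), which is the noise-shaping property that higher-order schemes sharpen further.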
11:00 Quantized Iterative Hard Thresholding: Bridging 1-bit and High-Resolution Quantized Compressed Sensing
In this work, we show that reconstructing a sparse signal from quantized compressive measurements can be achieved in a unified formalism whatever the (scalar) quantization resolution, i.e., from the 1-bit to the high-resolution regime. This is achieved by generalizing the iterative hard thresholding (IHT) algorithm and its binary variant (BIHT) introduced in previous works to enforce the
consistency of the reconstructed signal with respect to the quantization model. The performance of this algorithm, simply called quantized IHT (QIHT), is evaluated in comparison with other
approaches (e.g., IHT, basis pursuit denoise) for several quantization scenarios.
pp. 105-108
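The unquantized IHT algorithm that QIHT generalizes is simple enough to sketch: iterate x <- H_k(x + step * A^T (y - A x)), where H_k keeps the k largest-magnitude entries. The pure-Python toy below (matrix, sparsity level, and step size all invented for illustration) is not the paper's QIHT:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    keep = sorted(range(len(x)), key=lambda i: -abs(x[i]))[:k]
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht(A, y, k, iters=100, step=0.5):
    At = [list(col) for col in zip(*A)]
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - zi for yi, zi in zip(y, matvec(A, x))]  # residual y - Ax
        g = matvec(At, r)                                  # gradient step direction
        x = hard_threshold([xi + step * gi for xi, gi in zip(x, g)], k)
    return x

# 3 measurements of a 1-sparse signal in R^4.
A = [[1.0, 0.5, 0.3, 0.1],
     [0.2, 1.0, 0.4, 0.3],
     [0.1, 0.2, 1.0, 0.5]]
y = matvec(A, [0.0, 0.0, 2.0, 0.0])
print([round(v, 2) for v in iht(A, y, k=1)])  # → [0.0, 0.0, 2.0, 0.0]
```

QIHT replaces the plain residual with a term enforcing consistency with the quantizer output, but the projection step H_k is the same.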
11:20 Sigma-Delta quantization of sub-Gaussian compressed sensing measurements
Recently, it has been shown for the setup of compressed sensing with Gaussian measurements that Sigma-Delta quantization can be effectively incorporated into the sensing mechanism [1]. In contrast to independently quantized measurements, the resulting schemes yield better reconstruction accuracy with a higher number of measurements even at a constant number of bits per signal. The
original analysis of this method, however, crucially depends on the rotation invariance of the Gaussian measurements and hence does not directly generalize to other classes of measurements. In
this note, we present a refined analysis that allows for a generalization to arbitrary sub-Gaussian measurements.
pp. 109-112
11:40 Stable Recovery with Analysis Decomposable Priors
In this paper, we investigate in a unified way the structural properties of solutions to inverse problems. These solutions are regularized by the generic class of semi-norms defined as a
decomposable norm composed with a linear operator, the so-called analysis type decomposable prior. This encompasses several well-known analysis-type regularizations such as the discrete total
variation (in any dimension), analysis group-Lasso or the nuclear norm. Our main results establish sufficient conditions under which uniqueness and stability to a bounded noise of the regularized
solution are guaranteed. Along the way, we also provide a necessary and sufficient uniqueness result that is of independent interest and goes beyond the case of decomposable norms.
pp. 113-116
Finite Rate of Innovation
Chandra Seelamantula
Room: Conference Room
Chair: Chandra Seelamantula (Indian Institute of Science, India)
10:00 FRI-based Sub-Nyquist Sampling and Beamforming in Ultrasound and Radar
Signals consisting of short pulses are present in many applications including ultrawideband communication, object detection and navigation (radar, sonar) and medical imaging. The structure of
such signals, effectively captured within the finite rate of innovation (FRI) framework, allows for significant reduction in sampling rates, required for perfect reconstruction. In this work we
consider two applications, ultrasound imaging and radar, where the FRI signal structure allows to reduce both sampling and processing rates. Furthermore, we show how the FRI framework inspires
new processing techniques, such as beamforming in the frequency domain and Doppler focusing. In both applications a pulse of a known shape or a stream of such pulses is transmitted into the
respective medium, and the received echoes are sampled and digitally processed in a way referred to as beamforming. Applied either spatially or temporally, beamforming allows to improve
signal-to-noise ratio. In radar applications it also allows for target Doppler frequency estimation. Using FRI modeling both for detected and beamformed signals, we are able to reduce sampling
rates and to perform digital beamforming directly on the low-rate samples.
pp. 117-120
10:20 Robust Spike Train Recovery from Noisy Data by Structured Low Rank Approximation
We consider the recovery of a finite stream of Dirac pulses at nonuniform locations, from noisy lowpass-filtered samples. We show that maximum-likelihood estimation of the unknown parameters amounts to solving a difficult, believed to be NP-hard, matrix problem of structured low-rank approximation. We propose a new heuristic iterative optimization algorithm to solve it. Although it comes, in the absence of convexity, with no convergence proof, it converges in practice to a local solution, and even to the global solution of the problem when the noise level is not too high. Thus, our method improves upon the classical Cadzow denoising method, with the same implementation ease and speed.
pp. 121-124
10:40 Multichannel ECG Analysis using VPW-FRI
In this paper, we present an application of Variable Pulse Width Finite Rate of Innovation (VPW-FRI) in dealing with multi-channel Electrocardiogram (ECG) data using a common annihilator. By
extending the conventional FRI model to include additional parameters such as pulse width and asymmetry, VPW-FRI has been able to deal with a more general class of pulses. The common annihilator,
which is introduced in the annihilating filter step, shows a common support in multi-channel ECG data, which provides interesting possibilities in compression. A model based de-noising method
will be presented which is fast and non-iterative. Also, an application to detect QRS complexes in ECG signals will be demonstrated. The results will show the robustness of the common annihilator
and the QRS detection even in the presence of noise.
pp. 125-128
11:00 Recovery of bilevel causal signals with finite rate of innovation using positive sampling kernels
A bilevel signal $x$ with maximal local rate of innovation $R$ is a continuous-time signal that takes only the two values $0$ and $1$ and has at most one transition in any time period of $1/R$. In this note, we introduce a recovery method for bilevel causal signals $x$ with maximal local rate of innovation $R$ from their uniform samples $x*h(nT)$, $n\ge 1$, where the sampling kernel $h$ is causal and positive on $(0, T)$, and the sampling rate $\tau:=1/T$ is at (or above) the maximal local rate of innovation $R$. We also discuss stability of the bilevel signal recovery procedure in the presence of bounded noise.
pp. 129-132
11:20 Approximate FRI with Arbitrary Kernels
In recent years, several methods have been developed for sampling and exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI).
This is achieved by using adequate sampling kernels and reconstruction schemes, for example the exponential reproducing kernels. Proper linear combinations of this type of kernel with its shifted
versions may reproduce polynomials or exponentials exactly. In this paper we briefly review the ideal FRI sampling and reconstruction scheme and some of the existing techniques to combat noise.
We then present an alternative perspective of the FRI retrieval step, based on moments and approximate reproduction of exponentials. Allowing for a controlled model mismatch, we propose a unified
reconstruction stage that addresses two current limitations in FRI: the number of degrees of freedom and the stability of the retrieval. Moreover, the approach is universal in that it can be used
with any sampling kernel from which enough information is available.
pp. 133-136
11:40 Algebraic signal sampling, Gibbs phenomenon and Prony-type systems
Systems of Prony type appear in various reconstruction problems such as finite rate of innovation, superresolution and Fourier inversion of piecewise smooth functions. By keeping the number of
equations small and fixed, we demonstrate that such "decimation" can lead to practical improvements in the reconstruction accuracy. As an application, we provide a solution to the so-called
Eckhoff's conjecture, which asked for reconstructing jump positions and magnitudes of a piecewise-smooth function from its Fourier coefficients with maximal possible asymptotic accuracy -- thus
eliminating the Gibbs phenomenon.
pp. 137-140
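To give a flavor of Prony-type systems: in the simplest one-term case f_k = a * z**k, two consecutive samples already determine the node and the amplitude, since z = f_1/f_0 and a = f_0. The sketch below covers only this trivial case with invented values (the general method recovers several terms via an annihilating polynomial):

```python
import cmath

def recover_one_term(f0, f1):
    """One exponential term f_k = a * z**k: two samples suffice."""
    z = f1 / f0   # node (e.g. encodes a jump position)
    a = f0        # amplitude (e.g. a jump magnitude)
    return a, z

# Samples of f_k = 3 * z_true**k with z_true on the unit circle.
z_true = cmath.exp(2j * cmath.pi * 0.2)
f = [3 * z_true ** k for k in range(2)]
a, z = recover_one_term(f[0], f[1])
print(abs(a - 3) < 1e-9, abs(z - z_true) < 1e-9)  # → True True
```

The "decimation" idea in the abstract amounts to choosing which samples f_k enter such a system so that the resulting equations are well conditioned.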
13:20 - 15:00
Super Resolution
Laurent Demanet
Room: Conference Hall
Chair: Laurent Demanet (MIT, USA)
13:20 Super-resolution via superset selection and pruning
We present a pursuit-like algorithm that we call the "superset method" for recovery of sparse vectors from consecutive Fourier measurements in the super-resolution regime. The algorithm has a
subspace identification step that hinges on the translation invariance of the Fourier transform, followed by a removal step to estimate the solution's support. The superset method is always
successful in the noiseless regime (unlike $\ell_1$ minimization) and generalizes to higher dimensions (unlike the matrix pencil method). Relative robustness to noise is demonstrated numerically.
pp. 141-144
13:40 Support detection in super-resolution
We study the problem of super-resolving a superposition of point sources from noisy low-pass data with a cut-off frequency f. Solving a tractable convex program is shown to locate the elements of
the support with high precision as long as they are separated by 2/f and the noise level is small with respect to the amplitude of the signal.
pp. 145-148
14:00 Using Correlated Subset Structure for Compressive Sensing Recovery
Compressive sensing is a methodology for the reconstruction of sparse or compressible signals using far fewer samples than required by the Nyquist criterion. However, many of the results in
compressive sensing concern random sampling matrices such as Gaussian and Bernoulli matrices. In common physically feasible signal acquisition and reconstruction scenarios such as
super-resolution of images, the sensing matrix has a non-random structure with highly correlated columns. Here we present a compressive sensing recovery algorithm that exploits this correlation
structure. We provide algorithmic justification as well as empirical comparisons.
pp. 149-152
14:20 Sub-Wavelength Coherent Diffractive Imaging based on Sparsity
We propose and experimentally demonstrate a method of performing single-shot sub-wavelength resolution Coherent Diffractive Imaging (CDI), i.e. algorithmic object reconstruction from Fourier
amplitude measurements. The method is applicable to objects that are sparse in a known basis. The prior knowledge of the object's sparsity compensates for the loss of phase information, and the
loss of all information at the high spatial frequencies occurring in every microscope and imaging system due to the physics of electromagnetic waves in free-space.
pp. 153-155
14:40 Robust Polyhedral Regularization
In this paper, we establish robustness to noise perturbations of polyhedral regularization of linear inverse problems. We provide a sufficient condition that ensures that the polyhedral face
associated to the true vector is equal to that of the recovered one. This criterion also implies that the $\ell^2$ recovery error is proportional to the noise level for a range of parameter. Our
criterion is expressed in terms of the hyperplanes supporting the faces of the unit polyhedral ball of the regularization. This generalizes to an arbitrary polyhedral regularization results that
are known to hold for sparse synthesis and analysis $\ell^1$ regularization which are encompassed in this framework. As a byproduct, we obtain recovery guarantees for $\ell^\infty$ and $\ell^1-\ell^\infty$ regularization.
pp. 156-159
Sampling and Learning
Albert Cohen
Room: Conference Room
Chair: Albert Cohen (Universite Pierre et Marie Curie, France)
13:20 On the Performance of Adaptive Sensing for Sparse Signal Inference
In this short paper we survey recent results characterizing the fundamental benefits and limitations of adaptive sensing for sparse signal inference. We consider two different adaptive sensing
paradigms, based either on single-entry or linear measurements. Signal magnitude requirements for reliable inference are shown for two different inference problems, namely signal detection and
signal support estimation.
pp. 160-163
14:00 Reconstruction of solutions to the Helmholtz equation from punctual measurements
We analyze the sampling of solutions to the Helmholtz equation (e.g. sound fields in the harmonic regime) using a least-squares method based on approximations of the solutions by sums of
Fourier-Bessel functions or plane waves. This method compares favorably to others such as Orthogonal Matching Pursuit with a Fourier dictionary. We show that using a significant proportion of
samples on the border of the domain of interest improves the stability of the reconstruction, and that using cross-validation to estimate the model order yields good reconstruction results.
pp. 164-167
14:20 A priori convergence of the Generalized Empirical Interpolation Method
In an effort to extend the classical Lagrangian interpolation tools, new interpolating methods that use general interpolating functions are explored. The Generalized Empirical Interpolation Method (GEIM) belongs to this class of new techniques. It generalizes the plain Empirical Interpolation Method by replacing the evaluation at interpolating points by the application of a class of interpolating linear functionals. Since its efficiency depends critically on the choice of the interpolating functions (which are chosen by a Greedy selection procedure), the purpose of this paper
is therefore to provide a priori convergence rates for the Greedy algorithm that is used to build the GEIM interpolating spaces.
pp. 168-171
14:40 Test-size Reduction Using Sparse Factor Analysis
Consider a large database of questions that test the knowledge of learners (e.g., students) about a range of different concepts. While the main goal of personalized learning is to obtain accurate
estimates of each learner's concept understanding, it is additionally desirable to reduce the number of questions to minimize each learner's workload. In this paper, we propose a novel method to
extract a small subset of questions (from a large question database) that still enables the accurate estimation of a learner's concept understanding. Our method builds upon the SPARse Factor
Analysis (SPARFA) framework and chooses a subset of questions that minimizes the entropy of the error in estimating the level of concept understanding. We approximate the underlying combinatorial
optimization problem using a mixture of convex and greedy methods and demonstrate the efficacy of our approach on real educational data.
pp. 172-175
15:00 - 16:20
Poster Session I
coffee served
Chair: Goetz Pfander (Jacobs University Bremen, Germany)
Special Frames
We will present three classes of special frames: special Fourier-type frames, special Gabor frames and special wavelet frames. Then we will give one example for each class: Fourier-Bessel frames,
Gabor frames with Hermite functions and wavelet frames with Laguerre functions. Some results about these three classes of special frames, currently under investigation, will be outlined.
pp. 176-177
Variation and approximation for Mellin-type operators
Mellin analysis is of extreme importance in approximation theory, also for its wide applications: among them, for example, it is connected with problems of Signal Analysis, such as the
Exponential Sampling. Here we study a family of Mellin-type integral operators defined as $$ (T_w f)(\mathtt{s})=\int_{\mathbb{R}_+^N} K_w(\mathtt{t})\, f(\mathtt{st})\, \frac{d\mathtt{t}}{\langle\mathtt{t}\rangle}, \quad \mathtt{s}\in \mathbb{R}_+^N,\ w>0, \eqno{\rm (I)} $$ where $\{K_w\}_{w>0}$ are approximate identities, $\langle\mathtt{t}\rangle:=\prod_{i=1}^N t_i$, $\mathtt{t}=(t_1,\dots,t_N)\in \mathbb{R}^N_+$, and $f:\mathbb{R}_+^N\rightarrow \mathbb{R}$ is a function of bounded $\varphi$-variation. We use a new concept of multidimensional $\varphi$-variation inspired by the Tonelli approach, which preserves some of the main properties of the classical variation. For the family of operators (I), besides several estimates and a result of approximation for the $\varphi$-modulus of smoothness, the main convergence result that we obtain proves that $$ \lim_{w\to +\infty} V^{\varphi}[\lambda(T_w f-f)]=0, $$ for some $\lambda>0$, provided that $f$ is $\varphi$-absolutely continuous. Moreover, the problem of the rate of approximation is studied, taking also into consideration the particular case of Fejér-type kernels.
pp. 178-181
Iterative Methods for Random Sampling Recovery and Compressed Sensing Recovery
In this paper, an iterative sparse recovery method based on sampling and interpolation is suggested. The proposed method exploits the sparsity of the embedded signal to recover it from random
samples. Simulation results indicate that the proposed method outperforms IMAT (a random sampling recovery method) and OMP (compressed sensing recovery method) in the case of image compression.
Also an iterative method with adaptive thresholding for compressed sensing (IMATCS) recovery is proposed. Unlike its similar counterpart, iterative hard thresholding (IHT), the thresholding function of the proposed method is adaptive, i.e., the threshold value changes with the iteration number, which enables IMATCS to reconstruct the sparse signal without any knowledge of the sparsity level. The simulation results indicate that IMATCS outperforms IHT in both computational complexity and quality of the recovered signal. Compared to orthogonal matching pursuit
(OMP), the proposed method is computationally more efficient with similar recovery performance.
pp. 182-185
A Review of the Invertibility of Frame Multipliers
In this paper we give a review of recent results on the invertibility of frame multipliers $M_{m,\Phi,\Psi}$. In particular we give sufficient, necessary or equivalent conditions for the
invertibility of such operators, depending on the properties of the sequences $\Psi$, $\Phi$ and $m$. We consider Bessel sequences, frames and Riesz sequences.
pp. 186-188
Hybrid Regularization and Sparse Reconstruction of Imaging Mass Spectrometry Data
Imaging mass spectrometry (IMS) is a technique to visualize the molecular distributions from biological samples without the need of chemical labels or antibodies. The underlying data is taken
from a mass spectrometer that ionizes the sample at spots on a grid of a certain size. Mathematical postprocessing methods have been investigated both for better visualization and for reducing the huge amount of data. We propose a first model that applies compressed sensing to reduce the number of measurements needed in IMS. At the same time we apply peak picking in spectra using the l1-norm and denoising on the m/z-images via the TV-norm, which are both general procedures of mass spectrometry data postprocessing but have always been done separately and not simultaneously. This
is realized by using a hybrid regularization approach for a sparse reconstruction of both the spectra and the images. We show reconstruction results for a given rat brain dataset in spectral and
spatial domain.
pp. 189-192
Level crossing sampling of strongly monoHölder functions
We address the problem of quantifying the number of samples that can be obtained through a level-crossing sampling procedure for applications to mobile systems. We specifically investigate the link between the smoothness properties of the signal and the number of samples, both from a theoretical and a numerical point of view.
pp. 193-196
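Level-crossing sampling itself is easy to state in code: a sample is recorded whenever the signal passes one of a fixed set of levels, so smoother signals generate fewer samples. A minimal sketch, with levels and test signals invented for illustration:

```python
def level_crossings(samples, levels):
    """Return (index, level) pairs where consecutive samples straddle a level."""
    out = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        for lev in levels:
            if (a < lev <= b) or (b <= lev < a):
                out.append((i, lev))
    return out

# A ramp from 0 to 1 crosses each level exactly once; a flat signal never does.
ramp = [k / 10 for k in range(11)]
levels = [0.25, 0.5, 0.75]
print(len(level_crossings(ramp, levels)))        # → 3
print(len(level_crossings([0.4] * 11, levels)))  # → 0
```

The Hölder smoothness of the input bounds how fast the signal can move between levels, hence the sample count studied in the paper.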
MAP Estimators for Self-Similar Sparse Stochastic Models
We consider the reconstruction of multi-dimensional signals from noisy samples. The problem is formulated within the framework of the theory of continuous-domain sparse stochastic processes. In
particular, we study the fractional Laplacian as the whitening operator specifying the correlation structure of the model. We then derive a class of MAP estimators where the priors are confined
to the family of infinitely divisible distributions. Finally, we provide simulations where the derived estimators are compared against total-variation (TV) denoising.
pp. 197-199
From variable density sampling to continuous sampling using Markov chains
Since its discovery over the last decade, Compressed Sensing (CS) has been successfully applied to Magnetic Resonance Imaging (MRI). It has been shown to be a powerful way to reduce scanning time
without sacrificing image quality. MR images are actually strongly compressible in a wavelet basis, the latter being largely incoherent with the k-space or spatial Fourier domain where
acquisition is performed. Nevertheless, since its first application to MRI [1], the theoretical justification of actual k-space sampling strategies is questionable. Indeed, the vast majority of
k-space sampling distributions have been heuristically designed (e.g., variable density) or driven by experimental feasibility considerations (e.g., random radial or spiral sampling to achieve
smooth k-space trajectories). In this paper, we try to reconcile very recent CS results with the MRI specificities (magnetic field gradients) by enforcing the measurements, i.e. samples of
k-space, to fit continuous trajectories. To this end, we propose random walk continuous sampling based on Markov chains and we compare the reconstruction quality of this scheme to the
state-of-the art.
pp. 200-203
A Comparison of Reconstruction Methods for Compressed Sensing of the Photoplethysmogram
Compressed sensing has the possibility to significantly decrease the power consumption of wireless medical devices. The photoplethysmogram (PPG) is a device which can greatly benefit from
compressed sensing due to the large amount of power needed to capture data. The aim of this paper is to determine if the least absolute shrinkage and selection operator (LASSO) optimization
algorithm is the best approach for reconstructing a compressively sampled PPG across varying physiological states. The results show that LASSO reconstruction approaches, but does not surpass, the
reliability of constrained optimization.
pp. 204-207
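The LASSO reconstruction evaluated in this paper can be approximated with the classical ISTA iteration (soft-thresholded gradient steps on 0.5*||Ax - y||^2 + lam*||x||_1). The sketch below is a generic pure-Python toy on an invented 3x4 system, not the authors' PPG pipeline:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def soft(v, t):
    """Soft-thresholding: shrink each entry toward zero by t."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, y, lam=0.01, step=0.2, iters=500):
    At = [list(col) for col in zip(*A)]
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [zi - yi for zi, yi in zip(matvec(A, x), y)]  # residual A x - y
        g = matvec(At, r)                                  # gradient A^T r
        x = soft([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

A = [[1.0, 0.4, 0.2, 0.1],
     [0.3, 1.0, 0.5, 0.2],
     [0.2, 0.3, 1.0, 0.4]]
y = matvec(A, [0.0, 1.5, 0.0, 0.0])  # measurements of a 1-sparse vector
x_hat = ista(A, y)
print([round(v, 2) for v in x_hat])  # close to [0.0, 1.49, 0.0, 0.0]
```

Note the small bias on the recovered coefficient (about lam divided by the column energy), a known property of the l1 penalty; the step size must stay below the reciprocal of the largest eigenvalue of A^T A for convergence.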
Generalized sampling in $U$-invariant subspaces
In this work we carry out some results in sampling theory for $U$-invariant subspaces of a separable Hilbert space $\mathcal{H}$, also called atomic subspaces: \[ \mathcal{A}_a=\big\{\sum_{n\in\mathbb{Z}}a_nU^na:\, \{a_n\} \in \ell^2(\mathbb{Z}) \big\}\,, \] where $U$ is a unitary operator on $\mathcal{H}$ and $a$ is a fixed vector in $\mathcal{H}$. These spaces are a generalization of
the well-known shift-invariant subspaces in $L^2(\mathbb{R})$; here the space $L^2(\mathbb{R})$ is replaced by $\mathcal{H}$, and the shift operator by $U$. Having as data the samples of some
related operators, we derive frame expansions allowing the recovery of the elements in $\mathcal{A}_a$. Moreover, we include a frame perturbation-type result for the case in which the samples are affected by a jitter error.
pp. 208-211
Iterative Hard Thresholding with Near Optimal Projection for Signal Recovery
Recovering signals that have sparse representations under a given dictionary from a set of linear measurements has received much attention in the recent decade. However, most of the work has focused on
recovering the signal's representation, forcing the dictionary to be incoherent and with no linear dependencies between small sets of its columns. A series of recent papers show that such
dependencies can be allowed by aiming at recovering the signal itself. However, most of these contributions focus on the analysis framework. One exception to these is the work reported in
[Davenport, Needell and Wakin, 2012], proposing a variant of CoSaMP for the synthesis model and showing that signal recovery is possible even in high-coherence cases. In the theoretical
study of this technique the existence of an efficient near optimal projection scheme is assumed. In this paper we extend the above work, showing that under very similar assumptions, a variant of
IHT can recover the signal in cases where regular IHT fails.
pp. 212-215
The Design of Non-redundant Directional Wavelet Filter Bank Using 1-D Neville Filters
In this paper, we develop a method to construct non-redundant directional wavelet filter banks. Our method uses a special class of filters called Neville filters and can construct non-redundant
wavelet filter banks in any dimension for any dilation matrix. The resulting filter banks have directional analysis highpass filters and thus can be used to extract directional content in
multi-D signals such as images. Furthermore, one can custom-design the directions of highpass filters in the filter banks.
pp. 216-219
Sparse Approximation of Ion-Mobility Spectrometry Profiles by Minutely Shifted Discrete B-splines
Employing discrete B-splines instead of the Gaussian distribution, we construct an algorithm for the analysis of ion-mobility spectrometry profiles. The algorithm is suitable for hardware
implementation because the discrete B-splines are supported by a simple digital filter to compute their weighted sum and their correlations with a given signal. Minutely shifted discrete
B-splines are deployed whose weighted sum, with non-negative weights, approximates a given profile. Closely neighboring discrete B-splines are almost linearly dependent, so they may
cause numerical instability in the approximation process, but numerical experiments dispel this concern, at least for the final results. Varying the width of discrete B-splines, we obtain a number
of different approximations. Out of sufficiently precise approximations, we choose the sparsest one in the sense that it comprises few discrete B-splines with large weights.
pp. 220-223
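The non-negative weighted-sum fit described above can be sketched with scipy's non-negative least squares; the atom width, shift spacing, and synthetic profile below are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy.optimize import nnls

# discrete cubic B-spline atom: 4-fold convolution of a length-4 box filter
atom = np.ones(4)
for _ in range(3):
    atom = np.convolve(atom, np.ones(4))
atom /= atom.max()

n = 120
shifts = list(range(0, n - len(atom) + 1, 2))   # closely ("minutely") shifted copies
D = np.zeros((n, len(shifts)))                  # dictionary of shifted atoms
for j, s in enumerate(shifts):
    D[s:s + len(atom), j] = atom

w_true = np.zeros(len(shifts))                  # synthetic profile built from a few
w_true[[15, 16, 30]] = [2.0, 1.0, 3.0]          # overlapping atoms
y = D @ w_true

w, resid = nnls(D, y)                           # non-negative least-squares weights
```

Neighboring atoms are indeed almost linearly dependent, so the recovered weights need not coincide with `w_true`, but the fit itself is essentially exact (`resid` is near zero), echoing the abstract's observation that the near-dependence does not spoil the final approximation.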
Tracking Dynamic Sparse Signals with Kalman Filters: Framework and Improved Inference
The standard Kalman filter performs optimally for conventional signals but tends to fail when it comes to recovering dynamic sparse signals. In this paper a method to solve this problem is
proposed. The basic idea is to model the system dynamics with a hierarchical Bayesian network which successfully captures the inherent sparsity of the data, in contrast to the traditional
state-space model. This probabilistic model provides all the necessary statistical information needed to perform sparsity-aware predictions and updates in the Kalman filter steps. A set of
theorems show that a properly scaled version of the associated cost function can lead to less greedy optimisation algorithms, unlike the ones previously proposed. It is demonstrated empirically
that the proposed method outperforms the traditional Kalman filter for dynamic sparse signals, and also how the redesigned inference algorithm, termed here Bayesian Subspace Pursuit (BSP), greatly
improves the inference procedure.
pp. 224-227
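For contrast with the sparsity-aware approach proposed here, the standard Kalman predict/update cycle that the paper takes as its baseline looks as follows (a generic textbook sketch with an illustrative scalar tracking example, not the paper's model):

```python
import numpy as np

def kalman_step(x, P, y, F, Q, H, R):
    """One predict/update cycle of the standard (non-sparse) Kalman filter."""
    x_pred = F @ x                        # state prediction
    P_pred = F @ P @ F.T + Q              # covariance prediction
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new

# illustrative use: track a nearly constant scalar state from noisy observations
rng = np.random.default_rng(2)
F, H, Q, R = np.eye(1), np.eye(1), 1e-4 * np.eye(1), np.eye(1)
x, P = np.zeros(1), np.eye(1)
for _ in range(300):
    y = np.array([5.0]) + rng.standard_normal(1)   # noisy observation of the value 5
    x, P = kalman_step(x, P, y, F, Q, H, R)
```

Nothing in these updates encodes sparsity of the state, which is the gap the paper's hierarchical Bayesian model and BSP inference are designed to fill.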
The Variation Detracting Property of some Shannon Sampling Series and their Derivatives
In this paper we consider some generalized Shannon sampling operators which preserve the total variation of functions and of their derivatives. For that purpose we use the averaged kernel
functions of certain even band-limited kernel functions.
pp. 228-231
Jointly filtering and regularizing seismic data using space-varying FIR filters
Array forming in seismic data acquisition can be likened to FIR filtering. Misplacement of the receivers used to record seismic waves can lead to degraded performance with respect to the
filtering characteristics of the array. We propose two methods for generating linear space-varying filters that take receiver misplacements into account and demonstrate their performance on
synthetic data.
pp. 232-235
Non-uniform sampling pattern recognition based on atomic decomposition
Non-uniform sampling is an interesting scheme that can outperform uniform sampling for low-activity signals. With such signals, it generates fewer samples, which means less data to process
and lower power consumption. In addition, it is well-known that asynchronous logic is a low power technology. This paper deals with the coupling between a non-uniform sampling scheme and a
pattern recognition algorithm implemented with an event-driven logic. This non-uniform analog-to-digital conversion and the specific processing have been implemented on an Altera FPGA platform.
This paper reports the first results of this low-activity pattern recognition system and its ability to recognize specific patterns with very few samples. The objectives of this work target
the future ultra-low power integrated systems.
pp. 236-239
Particle Filter Acceleration Using Multiscale Sampling Methods
We present a multiscale-based method that accelerates the computation of particle filters. The particle filter is a powerful method that tracks the state of a target based on non-linear
observations. Unlike the conventional approach, which calculates weights over all particles in each cycle of the algorithm, we sample a small subset from the source particles using matrix
decomposition methods. Then, we apply a function extension algorithm that uses the particle subset to recover the density function for all the remaining particles. The computational effort is
substantial, especially when tracking multiple objects. The proposed algorithm significantly reduces the computational load. We demonstrate our method on both simulated data and real data such as
tracking in video sequences.
pp. 240-243
Analysis of Multistage Sampling Rate Conversion for Potential Optimal Factorization
Digital multistage sampling rate conversion has many engineering applications in signal and image processing, where it adapts the sampling rates to the flows of diverse audio and video signals.
The FIR (Finite Impulse Response) polyphase sampling rate converter is one of the typical schemes suitable for interpolation or decimation by an integer factor. It also guarantees stability, with
a stable gain margin and phase margin. The big challenge occurs upon implementation when a very high-order filter is needed with large values of L (positive integer factor of the interpolator)
and/or M (positive integer factor of the decimator), since narrowband linear-phase filter specifications are hard to achieve. This leads to extra storage space, additional computational expense,
and detrimental finite word-length effects. The multistage sampling rate converter has been introduced to factorize the L and M ratio into a product of ratios of integers or prime numbers. The
optimal number of stages and the optimal converting factors are both critical to minimizing the computation time and storage requirements. Filter structure analysis is conducted in this study to
search for the potential factors that could have a remarkable impact on optimizing the sampling rate conversion.
pp. 244-247
Sparse 2D Fast Fourier Transform
This paper extends the concepts of the Sparse Fast Fourier Transform (sFFT) algorithm to work with two-dimensional (2D) data. The 2D algorithm requires generalizations of multiple key
concepts of the 1D sparse Fourier transform algorithm. Furthermore, several parameters needed in the algorithm are optimized for the reconstruction of sparse image spectra. This paper addresses
the case of the exact k-sparse Fourier transform but the underlying concepts can be applied to the general case of finding a k-sparse approximation of the Fourier transform of an arbitrary
signal. The proposed algorithm can further be extended to even higher dimensions. Simulations illustrate the efficiency and accuracy of the proposed algorithm when applied to real images.
pp. 248-251
GESPAR: Efficient Sparse Phase Retrieval with Application to Optics
The problem of phase retrieval, namely, the recovery of a signal from the magnitude of its Fourier transform, is ill-posed since the Fourier phase information is lost. Therefore, prior information on
the signal is needed in order to recover it. In this work we consider the case in which the prior information on the signal is that it is sparse, i.e., it consists of a small number of nonzero
elements. We propose GESPAR: A fast local search method for recovering a sparse signal from measurements of its Fourier transform magnitude. Our algorithm does not require matrix lifting, unlike
previous approaches, and therefore is potentially suitable for large scale problems such as images. Simulation results indicate that the proposed algorithm is fast and more accurate than existing
techniques. We demonstrate applications in optics where GESPAR is generalized and used for finding sparse solutions to sets of quadratic measurements.
pp. 252-255
16:20 - 17:20
Fast algorithms for sparse Fourier transform
Piotr Indyk
Room: Conference Hall
Chair: Farokh Marvasti (Sharif university of Technology, Iran)
The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm
plays an important role in many areas. It is not known whether its running time can be improved. However, in many applications the output of the transform is (approximately) sparse. In this case, it
is known that one can compute the set of non-zero coefficients faster than in O(n log n) time.
In this talk, I will describe a new set of efficient algorithms for the sparse Fourier Transform. One of the algorithms has the running time of O(k log n), where k is the number of non-zero Fourier
coefficients of the signal. This improves over the runtime of the FFT for any k = o(n). If time allows, I will also describe some of the applications, to spectrum sensing and GPS locking, as well as
mention a few outstanding open problems.
The talk will cover the material from the joint papers with Fadel Adib, Badih Ghazi, Haitham Hassanieh, Dina Katabi, Eric Price and Lixin Shi. The papers are available at http://groups.csail.mit.edu/
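A small numpy demonstration of the aliasing ("bucketization") idea at the heart of these algorithms: subsampling in time folds an exactly k-sparse spectrum into a few buckets, each computable with a short FFT. The sizes and frequencies below are illustrative, not drawn from the talk:

```python
import numpy as np

n, B = 256, 16                        # signal length, number of buckets
X = np.zeros(n, dtype=complex)        # exactly 3-sparse spectrum
X[7], X[100], X[201] = 1.0, 2.0, 1.5

x = np.fft.ifft(X) * n                # time signal: x[t] = sum_f X[f] e^{2πi f t / n}

y = x[:: n // B]                      # keep every (n/B)-th sample: B samples total
Y = np.fft.fft(y) / B                 # B-point FFT, O(B log B) instead of O(n log n)

# bucket b collects exactly the coefficients X[f] with f ≡ b (mod B)
buckets = {b: Y[b] for b in range(B) if abs(Y[b]) > 1e-6}
```

Here the three frequencies 7, 100, 201 land in distinct buckets (7, 4, 9 mod 16), so their values can be read off directly; the full algorithms add random spectral permutations and filtering to make collisions unlikely and to resolve them when they occur.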
17:30 - 18:30
Compressed Sensing
Room: Conference Hall
Chair: Felix Krahmer (University of Göttingen, Germany)
17:30 Sparse Signal Reconstruction from Phase-only Measurements
We demonstrate that the phase of complex linear measurements of signals preserves significant information about the angles between those signals. We provide stable angle embedding guarantees,
akin to the restricted isometry property in classical compressive sensing, that characterize how well the angle information is preserved. They also suggest that a number of measurements linear in
the sparsity and logarithmic in the dimensionality of the signal contains sufficient information to acquire and reconstruct a sparse signal up to a positive scalar factor. We further show that
the reconstruction can be formulated and solved using standard convex and greedy algorithms taken directly from the CS literature. Even though the theoretical results only provide approximate
reconstruction guarantees, our experiments suggest that exact reconstruction is possible.
pp. 256-259
17:50 Optimal Sampling Rates in Infinite-Dimensional Compressed Sensing
The theory of compressed sensing studies the problem of recovering a high dimensional sparse vector from its projections onto lower dimensional subspaces. The recently introduced framework of
infinite-dimensional compressed sensing [1], to some extent generalizes these results to infinite-dimensional scenarios. In particular, it is shown that the continuous-time signals that have
sparse representations in a known domain can be recovered from random samples in a different domain. The range M and the minimum number m of samples for perfect recovery are limited by a
balancing property of the two bases. In this paper, by considering Fourier and Haar wavelet bases, we experimentally show that M can be optimally tuned to minimize the number of samples m that
guarantee perfect recovery. This study does not have any parallel in finite-dimensional CS.
pp. 260-263
18:10 Deterministic Binary Sequences for Modulated Wideband Converter
The modulated wideband converter (MWC) is a promising spectrum blind, sub-Nyquist multi-channel sampling scheme for sparse multi-band signals. In an MWC, the input analog signal is modulated by a
bank of periodic binary waveforms, low-pass filtered and then downsampled uniformly at a low rate. One important issue in the design and implementation of an MWC system is the selection of
binary waveforms, which impacts the stability of sparse reconstruction. In this paper, we propose to construct the binary pattern with a circulant structure, in which each row is a random cyclic
shift of a single deterministic sequence or a pair of complementary sequences. Such operators have hardware friendly structures and fast computation in recovery. They are incoherent with the FFT
matrix and the corresponding sampling operators satisfy the restricted isometry property with sub-optimal bounds. Some simulation results are included to demonstrate the validity of the proposed
sampling operators.
pp. 264-267
Harmonic Analysis
Room: Conference Room
Chair: Anna Rita Sambucini (University of Perugia, Italy)
17:30 Fractional Prolate Spheroidal Wave Functions
An important problem in communication engineering is the energy concentration problem, that is, the problem of finding a signal bandlimited to $[-\sigma, \sigma]$ with maximum energy concentration
in the interval $[-\tau, \tau]$, $0<\tau$, in the time domain or, equivalently, finding a signal that is time-limited to the interval $[-\tau, \tau]$ with maximum energy concentration in $[-\sigma,
\sigma]$ in the frequency domain. This problem was solved by a group of mathematicians at Bell Labs in the early 1960's. The solution involves the prolate spheroidal wave functions, which are
eigenfunctions of a differential and an integral equation. The main goal of this talk is to solve the energy concentration problem in the fractional Fourier transform domain, that is, to find a
signal that is bandlimited to $[-\sigma, \sigma]$ in the fractional Fourier transform domain with maximum energy concentration in the interval $[-\tau, \tau]$, $0<\tau$, in the time domain. The
solution involves a generalization of the prolate spheroidal wave functions which we call fractional prolate spheroidal wave functions.
pp. 268-270
17:50 Absolute Convergence of the Series of Fourier-Haar Coefficients
We give some sharp statements on absolute convergence of the series of Fourier-Haar coefficients. These are two-dimensional analogs of one-dimensional results.
pp. 271-273
18:10 Mellin analysis and exponential sampling. Part I: Mellin fractional integrals
The Mellin transform and the associated convolution integrals are intimately connected with the exponential sampling theorem, so it is very important to develop the various tools of Mellin
analysis. In this part we pave the way to sampling analysis by studying basic theoretical properties, including Mellin-type fractional integrals, and give a new approach to, and version of, these
integrals, specifying their basic semigroup property. In particular, their domain and range need to be studied in detail.
pp. 274-276
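For orientation (standard definitions added here, not drawn from the papers): the Mellin transform of $f$ is \[ (\mathcal{M}f)(s)=\int_0^\infty f(x)\,x^{s-1}\,dx\,, \] and the semigroup property referred to above is the identity $I^{\alpha}I^{\beta}f=I^{\alpha+\beta}f$ for the Mellin fractional integrals $I^{\alpha}$, the analogue of the classical semigroup law for Riemann-Liouville integrals. Exponential sampling then reconstructs suitable functions from their samples at exponentially spaced points $x_k=e^{k/T}$.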
18:20 Mellin analysis and exponential sampling. Part II: Mellin differential operators and sampling
Here, we introduce a notion of strong fractional derivative and study its connection with the pointwise fractional derivative, which is defined by means of Hadamard-type integrals. The main
result is a fractional version of the fundamental theorem of integral and differential calculus in the Mellin frame. Finally, we present the first of several theorems in the sampling area, the
highlight being the reproducing kernel theorem as well as its approximate version for non-bandlimited functions in the Mellin sense, both being new.
pp. 277-280
Wednesday, July 3
08:40 - 09:40
Sampling and high-dimensional convex geometry
Roman Vershynin
Room: Conference Hall
Chair: Holger Rauhut (University of Bonn, Germany)
This expository talk is about intuition in modern high dimensional convex geometry and random matrix theory. We will explore connections with sampling and quantization, in particular with problems
arising in compressed sensing and high dimensional statistics.
10:00 - 12:00
Sampling in Bio Imaging
Brigitte Forster, Hagai Kirshner, Michael Unser
Room: Conference Hall
Chair: Hagai Kirshner (EPFL, Switzerland)
10:00 From super-resolution microscopy towards quantitative single-molecule biology
In the fluorescence microscopy field, much interest has focused on new super-resolution techniques (collectively known as PALM/STORM, STED, SIM and others) [1] that have been demonstrated to
bypass the diffraction limit and provide a spatial resolution reaching a near-molecular level. With these techniques, it has become possible to image cellular structures in far greater detail than ever
before. Single-molecule based methods such as photoactivation-localization microscopy (PALM) [1], stochastic optical reconstruction microscopy (STORM) [2] and directSTORM (dSTORM) [3] employ
photoswitchable fluorophores and single-molecule localization to generate a super-resolution image. These methods are uniquely suited not only to resolve small cellular structures, but also to
provide quantitative information on the number of molecules or stoichiometries [6]. This talk will summarize our recent efforts in single-molecule based super-resolution imaging, including
experimental developments such as 3D tissue imaging [4] and new labeling strategies for cellular structures [5].
10:20 Optimisation and control of sampling rate in localisation microscopy
Localisation microscopy (PALM/ STORM, etc.) involves sampling sparse subsets of fluorescently labelled molecules, so that the density of bright ("active") molecules in a single frame is low
enough to allow single-molecule sub-diffraction-limited localisation. The sampling rate, i.e. the mean number of active molecules per unit time, is controlled by the illumination intensity of a
"photoactivation" UV laser. Two key sampling problems are inherent in any localisation microscopy measurement: 1. What is the maximum sampling rate before sub-diffraction limited resolution is
lost? The maximum sampling rate determines the temporal resolution of the technique. Clearly, the absolute maximum sampling rate is for all molecules to be active (conventional microscopy).
However, the maximum usable sampling rate is largely determined by the localisation algorithm used. Our algorithm, DAOSTORM, is a high-density localisation algorithm which allows an order of
magnitude increase in sampling rate compared to traditional low-density algorithms. 2. Can we automatically maintain optimal sampling rate during data acquisition? A careful balance in sampling
rate is required: if sampling rate is too high, spatial resolution is reduced; if sampling rate is too low, temporal resolution is reduced. Traditionally, sampling rate is controlled by
continuous manual assessment of the density of molecules in any single frame, and manual adjustment of photoactivation laser intensity. This is tedious, and incompatible with automation. To
resolve this, we present AutoLase, an algorithm for real-time closed-loop measurement and control of sampling rate.
pp. 281-284
10:40 STORM by compressed sensing
In super-resolution microscopy methods based on single-molecule switching, each camera snapshot samples a random, sparse subset of probe molecules in the samples. The final super-resolution image
is assembled from thousands of such snapshots. The rate of accumulating single-molecule activation events often limits the time resolution. We have developed a sparse-signal recovery technique
using compressed sensing to analyze camera images with highly overlapping fluorescent spots. This method allows an activated fluorophore density an order of magnitude higher than what
conventional single-molecule fitting methods can handle. Combination of compressed sensing with Bayesian statistics over the entire image sequence further enabled us to improve the spatial
precision of determining fluorescent probe positions.
11:00 Video sampling and reconstruction using linear or non-linear Fourier measurements
The theory of compressed sensing (CS) predicts that structured images can be sampled in a compressive manner with very few non-adaptive linear measurements, made in a proper adjacent domain.
However, is such a recovery still possible with nonlinear measurements, such as optical-based Fourier modulus? Here, we investigate how phase retrieval methods can be extended to solve the
problem of recovering a video signal from a subset of Fourier modulus samples, taking advantage of some relevant sparse prior assumptions on the signal of interest. We compare this recovery
technique to the usual convex reconstruction method encountered when dealing with linear CS measurements. We present some simulation results obtained on real video sequences coming from
biological imaging experiments.
11:20 Fast Maximum Likelihood High-density Low-SNR Super-resolution Localization Microscopy
Localization microscopy such as STORM/PALM achieves the super-resolution by sparsely activating photo-switchable probes. However, to make the activation sparse enough to obtain reconstruction
images using conventional algorithms, only small set of probes need to be activated simultaneously, which limits the temporal resolution. Hence, to improve temporal resolution up to a level of
live cell imaging, high-density imaging algorithms that can resolve several overlapping PSFs are required. In this paper, we propose a maximum likelihood algorithm under Poisson noise model for
the high-density low-SNR STORM/PALM imaging. Using a sparsity promoting prior with concave-convex procedure (CCCP) optimization algorithm, we achieved high performance reconstructions with
ultra-fast reconstruction speed of 5 second per frame under high density low SNR imaging conditions. Experimental results using simulated and real live-cell imaging data demonstrate that proposed
algorithm is more robust than conventional methods in terms of both localization accuracy and molecular recall rate.
pp. 285-288
11:40 Analogies and differences in optical and mathematical systems and approaches
We review traditions and trends in optics and imaging that have recently arisen from the application of programmable optical devices and of sophisticated approaches to data evaluation and image
reconstruction.
Furthermore, a short overview is given about modeling of well-known classical optical elements, and vice versa, about optical realizations of classical mathematical transforms, as in particular
Fourier, Hilbert, and Riesz transforms.
pp. 289-292
Sampling and Geometry
Stephen Casey, Michael Robinson
Room: Conference Room
Chairs: Stephen D. Casey (American University & NWC at the University of Maryland, USA), Michael Robinson (American University, USA)
10:00 The Nyquist theorem for cellular sheaves
We develop a unified sampling theory based on sheaves and show that the Shannon-Nyquist theorem is a cohomological consequence of an exact sequence of sheaves. Our theory indicates that there are
additional cohomological obstructions for higher-dimensional sampling problems. Using these obstructions, we also present conditions for perfect reconstruction of piecewise linear functions on
graphs, a collection of non-bandlimited functions on topologically nontrivial domains.
pp. 293-296
10:20 Frames of eigenspaces and localization of signal components
We present a construction of frames adapted to a given time-frequency cover and study certain computational aspects of it. These frames are based on a family of orthogonal projections that can be
used to localize signals in the time-frequency plane. We compare the effect of the corresponding orthogonal projections to the traditional time-frequency masking.
pp. 297-300
10:40 A Lie group approach to diffusive wavelets
The aim of this paper is to give an overview of diffusive wavelets on compact groups, homogeneous spaces and the Heisenberg group. This approach is based on Lie groups and representation theory
and generalizes well-known constructions of wavelets on the sphere. The key idea of diffusive wavelets is to generate a dilation from a diffusive semigroup, whereas the translation is the action
of a compact group. We give examples for the construction of diffusive wavelets.
pp. 301-304
11:00 Shannon Sampling and Parseval Frames on Compact Manifolds
The paper contains several generalizations of the classical Sampling Theorem for band-limited functions, constructed using a self-adjoint second-order elliptic differential operator on compact
homogeneous manifolds.
pp. 305-308
11:20 Signal Analysis with Frame Theory and Persistent Homology
A basic task in signal analysis is to characterize data in a meaningful way for analysis and classification purposes. Time-Frequency transforms are powerful strategies for signal decomposition,
and important recent generalizations have been achieved in the setting of frame theory. In parallel recent developments, tools from algebraic topology, traditionally developed in purely abstract
settings, have provided new insights in applications to data analysis. In this report, we investigate some interactions of these tools, both theoretically and with numerical experiments in order
to characterize signals and their corresponding adaptive frames. We explain basic concepts in persistent homology as an important new subfield of computational topology, as well as formulations
of time-frequency analysis in frame theory. Our objective is to use persistent homology for constructing topological signatures of signals in the context of frame theory for classification and
analysis purposes. The main motivation for studying these interactions is to combine the strength of frame theory as a fundamental signal analysis methodology, with persistent homology as a novel
perspective in data analysis.
pp. 309-312
11:40 Signal Adaptive Frame Theory
The projection method is an atomic signal decomposition designed for adaptive frequency band (AFB) and ultra-wide-band (UWB) systems. The method first windows the signal and then decomposes the
signal into a basis via a continuous-time inner product operation, computing the basis coefficients in parallel. The windowing systems are key, and we develop systems that have variable
partitioning length, variable roll-off and variable smoothness. These include systems developed to preserve orthogonality of any orthonormal systems between adjacent blocks, and almost orthogonal
windowing systems that are more computable/constructible than the orthogonality preserving systems. The projection method is, in effect, an adaptive Gabor system for signal analysis. The natural
language to express this structure is frame theory.
pp. 313-316
Thursday, July 4
08:40 - 09:40
Signal recognition and filter identification
Nikolai Nikolskii
Room: Conference Hall
Chair: Bruno Torrésani (Aix-Marseille Université, France)
Given a linear space of stationary filters A = {F}, S -> R = S*F, the following two problems are briefly surveyed. (1) The stable signal recognition problem consists in deciding whether it is
possible to control the upper bound c in the signal recognition estimate ||S|| <= c||R|| = c||S*F|| in terms of the lower bound of the energy spectrum only. The classical Wiener and
Beurling-Sobolev algebras are considered. In particular, it is explained why there exists no constructive proof of the Wiener 1/F theorem. (2) We say that Weak Filter Identification (WFI) holds on
a space X for a sampling (observation) grid T (a subset of the real line R) if there is a test signal S such that F*S(t) = 0 for all t in T implies F = 0. Discrete and continuous time filters are
considered; in particular, for X = L^{p}(R) and T = Z (or any other discrete subgroup of R), the WFI holds iff p < 2. If time permits, we discuss a special choice of X and T which leads to the
entire dilation problem, the cyclic vectors in the Hardy space on the infinite-dimensional Hilbert multi-disc, and the Riemann hypothesis.
10:00 - 12:00
Sampling of Bandlimited Functions
Room: Conference Room
Chair: Gerhard Schmeisser (Erlangen-Nürnberg University, Germany)
10:00 Generalized oversampling with missing samples
It is well known that in the classical Shannon sampling theory of band-limited signals, any finitely many missing samples can be recovered when the signal is oversampled at a rate higher than the
minimum Nyquist rate. In this work, we consider the problem of recovering missing samples from multi-channel oversampling in a general shift-invariant space. We find conditions under which any
finite or infinite number of missing samples can be recovered.
pp. 337-340
10:20 Identification of Rational Transfer Functions from Sampled Data
We consider the task of estimating an operator from sampled data. The operator, which is described by a rational transfer function, is applied to continuous-time white noise and the resulting
continuous-time process is sampled uniformly. The main question we are addressing is whether the stochastic properties of the time series that originates from the sample values of the process
allow one to determine the operator. We focus on the autocorrelation property of the process and identify cases for which the sampling operator is injective. Our approach relies on sampling
properties of almost periodic functions, which together with exponentially decaying functions, provide the building blocks of the autocorrelation measure. Our results indicate that it is
possible, in principle, to estimate the parameters of the rational transfer function from sampled data, even in the presence of prominent aliasing.
pp. 341-343
10:40 Reconstruction of Signals from Highly Aliased Multichannel Samples by Generalized Matching Pursuit
This paper considers the problem of reconstructing a bandlimited signal from severely aliased multichannel samples. Multichannel sampling in this context means that the samples are available
after the signal has been filtered by various linear operators. We propose the method of Generalized Matching Pursuit to solve the reconstruction problem. We illustrate the potential of the
method using synthetic data that could be acquired using multimeasurement towed-streamer seismic data acquisition technology. A remarkable observation is that high-fidelity reconstruction is
possible even when the data are uniformly and coarsely sampled, with the order of aliasing significantly exceeding the number of channels.
pp. 344-347
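The abstract does not spell out the Generalized Matching Pursuit itself; as a point of reference, the standard orthogonal matching pursuit it generalizes can be sketched in a few lines of numpy (the sizes and the Gaussian operator are illustrative stand-ins, not the seismic acquisition operators of the paper):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick atoms, refit by least squares."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))           # most correlated atom
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # refit on chosen atoms
        r = y - A[:, idx] @ coef                              # update residual
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(3)
n, m, k = 100, 40, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the multichannel operator
x_hat = omp(A, A @ x_true, k)
```

In the multichannel setting of the paper, the single matrix A is replaced by a stack of filtered sampling operators, and the pursuit is generalized accordingly.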
11:00 Joint Signal Sampling and Detection
In this paper, we examine the joint signal sampling and detection problem when noisy samples of a signal are collected in a sequential fashion. In such a scheme, at each observation time point
we wish to decide whether the observed data record represents a signal of the assumed target form. Moreover, we are simultaneously able to recover a signal when it departs from the target class.
For this joint signal detection and recovery setup, we introduce a novel algorithm relying on the smooth correction of linear sampling schemes. Given a finite frame of noisy samples of the
signal, we design a detector able to test for a departure from a target signal as quickly as possible. Our detector is represented as a continuous-time normalized partial-sum stochastic process,
for which we obtain a functional central limit theorem under weak assumptions on the correlation structure of the noise. The established limit theorems allow us to design monitoring algorithms
with the desired probability of false alarm that are able to detect a change with probability approaching one.
pp. 348-351
11:20 On Optimal Sampling Trajectories for Mobile Sensing
We study the design of sampling trajectories for stable sampling and reconstruction of bandlimited spatial fields using mobile sensors. As a performance metric we use the path density of a set of
sampling trajectories, defined as the total distance traveled by the moving sensors per unit spatial volume of the spatial region being monitored. We obtain new results for the problem of
designing stable sampling trajectories with minimal path density that admit perfect reconstruction of bandlimited fields. In particular, we identify the set of parallel lines with minimal path
density that contains a stable sampling set for isotropic fields.
pp. 352-355
11:40 Phase Retrieval via Structured Modulations in Paley-Wiener Spaces
This paper considers the recovery of continuous-time signals from the magnitudes of their samples. It uses a combination of structured modulation and oversampling and provides sufficient conditions on the signal and the sampling system under which signal recovery is possible. In particular, it is shown that an average sampling rate of four times the Nyquist rate is sufficient to reconstruct a signal from its magnitude measurements.
pp. 356-359
Sampling for Imaging Science
Jalal Fadili and Gabriel Peyré
Room: Conference Hall
Chair: Gabriel Peyré (CNRS and Université Paris-Dauphine, France)
10:00 Joint reconstruction of misaligned images from incomplete measurements for cardiac MRI
We present a novel method for robust reconstruction of the image of a moving object from incomplete linear measurements. We assume that only a few measurements of this object can be acquired at
different instants and model the correlation between measurements using global geometric transformations represented by few parameters. Then, we design a method that is able to jointly estimate
these transformation parameters and an image of the object, while taking into account possible occlusions of parts of the object during the acquisitions. The reconstruction algorithm minimizes a
non-convex functional and generates a sequence of estimates converging to a critical point of this functional. Finally, we show how to apply this algorithm on a real cardiac acquisition for free
breathing coronary magnetic resonance imaging.
pp. 317-320
10:40 Localization of point sources in wave fields from boundary measurements using new sensing principle
We address the problem of localizing point sources in 3-D from boundary measurements of a wave field. Recently, we proposed a sensing principle which allows extracting volumetric samples of the unknown source distribution from the boundary measurements. The extracted samples admit a non-iterative reconstruction algorithm that can recover the parameters of the source distribution projected onto a 2-D plane in the continuous domain without any discretization. Here we extend the method to the 3-D localization of multiple point sources by combining multiple 2-D planar projections. In particular, we propose a three-step algorithm to retrieve the locations by means of a multiplanar application of the sensing principle. First, we find the projections of the locations onto several 2-D planes. Second, we propose a greedy algorithm to pair the solutions in each plane. Third, we retrieve the 3-D locations by least-squares regression.
pp. 321-324
11:00 Compressive Acquisition of Sparse Deflectometric Maps
Schlieren deflectometry aims at measuring deflections of light rays from transparent objects, which is subsequently used to characterize the objects. With each location on a smooth object surface
a sparse deflection map (or spectrum) is associated. In this paper, we demonstrate the compressive acquisition and reconstruction of such maps, and the usage of deflection information for object
characterization, using a schlieren deflectometer. To this end, we exploit the sparseness of deflection maps and we use the framework of spread spectrum compressed sensing. Further, at a second
level, we demonstrate how to use the deflection information optimally to reconstruct the distribution of refractive index inside an object, by exploiting the sparsity of refractive index maps in
gradient domain.
pp. 325-328
11:20 Fourier-Laguerre transform, convolution and wavelets on the ball
We review the Fourier-Laguerre transform, an alternative harmonic analysis on the three-dimensional ball to the usual Fourier-Bessel transform. The Fourier-Laguerre transform exhibits an exact
quadrature rule and thus leads to a sampling theorem on the ball. We study the definition of convolution on the ball in this context, showing explicitly how translation on the radial line may be
viewed as convolution with a shifted Dirac delta function. We review the exact Fourier-Laguerre wavelet transform on the ball, coined flaglets, and show that flaglets constitute a tight frame.
pp. 329-332
11:40 Truncation Error in Image Interpolation
Interpolation is a fundamental issue in image processing. In this short paper, we communicate ongoing results concerning the accuracy of two landmark approaches: the Shannon expansion and the DFT
interpolation. Among all sources of error, we focus on the impact of spatial truncation. Our estimations are expressed in the form of upper bounds on the Root Mean Square Error as a function of
the distance to the image border. The quality of these bounds is appraised through experiments performed on natural images.
pp. 333-336
13:20 - 15:00
Room: Conference Room
Chair: Pavel Zheltov (Jacobs University Bremen, Germany)
13:20 Optimal Interpolation Laws for Stable AR(1) Processes
In this paper, we focus on the problem of interpolating a continuous-time AR(1) process with stable innovations using a minimum average error criterion. Stable innovations can be either Gaussian or
non-Gaussian. In the former case, the optimality of the exponential splines is well understood. For non-Gaussian innovations, however, the problem has been all too often addressed through Monte
Carlo methods. In this paper, based on a recent non-Gaussian stochastic framework, we revisit the AR(1) processes in the context of stable innovations and we derive explicit expressions for the
optimal interpolator. We find that the interpolator depends on the stability index of the innovation and is linear for all stable laws, including the Gaussian case. We also show that the solution
can be expressed in terms of exponential splines.
pp. 380-383
13:40 Hierarchical Tucker Tensor Optimization - Applications to Tensor Completion
In this work, we develop an optimization framework for problems whose solutions are well-approximated by Hierarchical Tucker tensors, an efficient structured tensor format based on recursive
subspace factorizations. Using the differential geometric tools presented here, we construct standard optimization algorithms such as Steepest Descent and Conjugate Gradient, for interpolating
tensors in HT format. We also empirically examine the importance of one's choice of data organization in the success of tensor recovery by drawing upon insights from the Matrix Completion
literature. Using these algorithms, we recover various seismic data sets with randomly missing source pairs.
pp. 384-387
14:00 Estimation of large data sets on the basis of sparse sampling
We propose a new technique which allows us to estimate any random signal from a large set of noisy observed data on the basis of samples of only a few reference signals.
pp. 388-391
14:20 Analysis of Hierarchical Image Alignment with Descent Methods
We present a performance analysis for image registration with gradient descent methods. We consider a multiscale registration setting where the global 2-D translation between a pair of images is
estimated by smoothing the images and minimizing the distance between their intensity functions with gradient descent. We focus in particular on the effect of low-pass filtering on the alignment
performance. We adopt an analytic representation for images and analyze the well-behavedness of the distance function by estimating the neighborhood of translations for which the distance
function is free of undesired local minima. This corresponds to the set of translation vectors that are correctly computable with a simple gradient descent minimization. We show that the area of
this neighborhood increases at least quadratically with the filter size, which justifies the use of smoothing in image registration with local optimizers. We finally use our results in the design
of a regular multiscale grid in the translation parameter domain that has perfect alignment guarantees.
pp. 392-395
14:40 Spectrum Reconstruction from Sub-Nyquist Sampling of Stationary Wideband Signals
In light of the ever-increasing demand for new spectral bands and the underutilization of those already allocated, the new concept of Cognitive Radio (CR) has emerged. Opportunistic users could
exploit temporarily vacant bands after detecting the absence of activity of their owners. One of the most crucial tasks in the CR cycle is therefore spectrum sensing and detection which has to be
precise and efficient. Yet, CRs typically deal with wideband signals whose Nyquist rates are very high. In this paper, we propose to reconstruct the spectrum of such signals from sub-Nyquist
samples in order to perform detection. We consider both sparse and non-sparse signals, as well as blind and non-blind detection in the sparse case. For each of these scenarios, we derive the minimal sampling rate allowing perfect reconstruction of the signal spectrum in a noise-free environment and provide recovery techniques. The simulations show spectrum recovery at the minimal rate in noise-free settings.
pp. 396-399
Advances in Compressive Sensing
Holger Rauhut, Joel Tropp
Room: Conference Hall
Chair: Holger Rauhut (University of Bonn, Germany)
13:20 Energy-aware adaptive bi-Lipschitz embeddings
We propose a dimensionality-reducing matrix design based on training data with constraints on its Frobenius norm and number of rows. Our design criterion aims at preserving the distances between the data points in the dimensionality-reduced space as much as possible relative to their distances in the original data space. This approach can be considered a deterministic Bi-Lipschitz
embedding of the data points. We introduce a scalable learning algorithm, dubbed AMUSE, and provide a rigorous estimation guarantee by leveraging game theoretic tools. We also provide a
generalization characterization of our matrix based on our sample data. We use compressive sensing problems as an example application of our problem, where the Frobenius norm design constraint
translates into the sensing energy.
pp. 360-363
13:40 Randomized Singular Value Projection
Affine rank minimization algorithms typically rely on calculating the gradient of a data error followed by a singular value decomposition at every iteration. Because these two steps are
expensive, heuristic approximations are often used to reduce computational burden. To this end, we propose a recovery scheme that merges the two steps with randomized approximations, and as a
result, operates on space proportional to the degrees of freedom in the problem. We theoretically establish the estimation guarantees of the algorithm as a function of approximation tolerance.
While the theoretical approximation requirements are overly pessimistic, we demonstrate that in practice the algorithm performs well on the quantum tomography recovery problem.
pp. 364-367
14:00 On Sparsity Averaging
Recent developments in [1] and [2] introduced a novel regularization method for compressive imaging in the context of compressed sensing with coherent redundant dictionaries. The approach relies
on the observation that natural images exhibit strong average sparsity over multiple coherent frames. The associated reconstruction algorithm, based on an analysis prior and a reweighted L1
scheme, is dubbed Sparsity Averaging Reweighted Analysis (SARA). We review these advances and extend associated simulations establishing the superiority of SARA to regularization methods based on
sparsity in a single frame, for a generic spread spectrum acquisition and for a Fourier acquisition of particular interest in radio astronomy.
pp. 368-371
14:20 Conditions for Dual Certificate Existence in Semidefinite Rank-1 Matrix Recovery
We study the existence of dual certificates in convex minimization problems where a rank-1 matrix X0 is to be recovered under semidefinite and linear constraints. We provide an example where such
a dual certificate does not exist. We prove that dual certificates are guaranteed to exist if the linear measurement matrices cannot be recombined to form a positive semidefinite matrix orthogonal to
X0. If the measurements can be recombined in this way, the problem is equivalent to one with additional linear constraints. That augmented problem is guaranteed to have a dual certificate at the
minimizer, providing the form of an optimality certificate for the original problem.
pp. 372-375
14:40 The restricted isometry property for random convolutions
We present significantly improved estimates for the restricted isometry constants of partial random circulant matrices as they arise in the matrix formulation of subsampled convolution with a
random pulse. We show that the required condition on the number $m$ of rows in terms of the sparsity $s$ and the vector length $n$ is $m > C s \log^2 s \log^2 n$.
pp. 376-379
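The restricted isometry property asserts that $\|Ax\|_2^2 \approx \|x\|_2^2$ for all $s$-sparse $x$. The following sketch builds a partial random circulant matrix from a Rademacher pulse, as in the subsampled-convolution setting above, and checks norm preservation on a single sparse vector; the dimensions are illustrative and far too small to verify the stated $m > C s \log^2 s \log^2 n$ bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, s = 256, 64, 5                          # length, measurements, sparsity

# Random pulse with +-1 (Rademacher) entries; convolution with it is
# multiplication by a circulant matrix.
pulse = rng.choice([-1.0, 1.0], size=n)
rows = rng.choice(n, size=m, replace=False)   # subsample m rows

# Partial random circulant matrix, scaled so that E||Ax||^2 = ||x||^2.
C = np.stack([np.roll(pulse, k) for k in range(n)], axis=1)
A = C[rows] / np.sqrt(m)

# An s-sparse unit-norm test vector.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
x /= np.linalg.norm(x)

ratio = np.linalg.norm(A @ x) ** 2            # should concentrate near 1
print(ratio)
```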
15:00 - 16:20
Poster Session II
coffee served
Chair: Goetz Pfander (Jacobs University Bremen, Germany)
Multivariate sampling Kantorovich operators: approximation and applications to civil engineering
In this paper, we present the theory and some new applications of linear, multivariate, sampling Kantorovich operators. By means of the above operators, we are able to reconstruct pointwise,
continuous and bounded signals (functions), and to approximate uniformly, uniformly continuous and bounded functions. Moreover, the reconstruction of signals belonging to Orlicz spaces is also
considered. In the latter case, we show how our operators can be used to approximate not necessarily continuous signals/images, and an algorithm for image reconstruction is developed. Several
applications of the theory in civil engineering are obtained. Thermographic images, such as masonries images, are processed to study the texture of the buildings, thus to separate the stones from
the mortar and finally a real-world case-study is analyzed in terms of structural analysis.
pp. 400-403
On the Number of Degrees of Freedom of Band-Limited Functions
The concept of the number of degrees of freedom of band-limited signals is discussed. Classes of band-limited signals obtained as a result of successive application of the truncated direct and
truncated inverse Fourier transforms are shown to possess a finite number of degrees of freedom.
pp. 404-407
Tracing Sound Objects in Audio Textures
This contribution presents first results on two proposed methods to trace sound objects within texture sounds. We first discuss what we mean by these two notions and explain how the properties of
a sound that is known to be textural are exploited in order to detect changes which suggest the presence of a distinct sound event. We introduce two approaches, one is based on Gabor multipliers
mapping consecutive time-segments of the signal to each other, the other one on dictionary learning. We present the results of simulations based on real data.
pp. 408-411
An Uncertainty Principle for Discrete Signals
By use of window functions, time-frequency analysis tools like short time Fourier transform overcome a shortcoming of the Fourier transform and enable us to study the time-frequency
characteristics of signals which exhibit transient oscillatory behaviours. Since the resulting representations depend on the choice of the window functions, it is important to know how they
influence the analyses. One crucial question about a window function is how accurately it permits us to analyze the signals in the time and frequency domains. In the continuous domain (for functions defined on the real line), the limit on the accuracy is well established by Heisenberg's uncertainty principle when the time-frequency spread is measured in terms of variance measures. However, for finite discrete signals (where we consider the discrete Fourier transform), the uncertainty relation is not as well understood. Our work fills in some of this gap and states an uncertainty relation for a subclass of finite discrete signals. Interestingly, the result is a close parallel to that of the continuous domain: the time-frequency spread measure is, in some sense, a natural generalization of the variance measure in the continuous domain, the lower bound for the uncertainty is close to that of the continuous domain, and the lower bound is achieved approximately by the `discrete Gaussians'.
pp. 412-415
Efficient Simulation of Continuous Time Digital Signal Processing RF Systems
A new simulation method for continuous time digital signal processing RF architectures is proposed. The approach is based on a discrete time representation of the input signal combined with a
linear interpolation. Detailed theoretical calculations are presented, which prove the efficiency of the simulation when dealing with RF signals. We show that, compared to a discrete time
simulation, for the same simulation error a decrease of almost two orders of magnitude is expected in the necessary number of input samples.
pp. 416-419
Shift-Variance and Cyclostationarity of Linear Periodically Shift-Variant Systems
We study shift-variance and cyclostationarity of linear periodically shift-variant (LPSV) systems. Both input and output spaces are assumed to be of continuous-time. We first determine how far an
LPSV system is away from the space of linear shift-invariant systems. We then consider cyclostationarity of a random process based on its autocorrelation operator. The results allow us to
investigate properties of output of an LPSV system when its input is a random process. Finally, we analyze shift-variance and cyclostationarity of generalized sampling-reconstruction processes.
pp. 420-423
Constructive sampling for patch-based embedding
To process high-dimensional big data, we assume that sufficiently small patches (or neighborhoods) of the data are approximately linear. These patches represent the tangent spaces of an
underlying manifold structure from which we assume the data is sampled. We use these tangent spaces to extend the scalar relations that are used by many kernel methods to matrix relations, which
encompass multidimensional similarities between local neighborhoods in the data. The incorporation of these matrix relations improves the utilization of kernel-based data analysis methodologies.
However, they also result in a larger kernel and a higher computational cost of its spectral decomposition. We propose a dictionary construction that approximates the oversized kernel in this
case and its associated patch-to-tensor embedding. The performance of the proposed dictionary construction is demonstrated on a super-kernel example that utilizes the Diffusion Maps methodology
together with linear-projection operators between tangent spaces in the manifold.
pp. 424-427
The Constrained Earth Mover Distance Model, with Applications to Compressive Sensing
Sparse signal representations have emerged as powerful tools in signal processing theory and applications, and serve as the basis of the now-popular field of compressive sensing (CS). However,
several practical signal ensembles exhibit additional, richer structure beyond mere sparsity. Our particular focus in this paper is on signals and images where, owing to physical constraints, the
positions of the nonzero coefficients do not change significantly as a function of spatial (or temporal) location. Such signal and image classes are often encountered in seismic exploration,
astronomical sensing, and biological imaging. Our contributions are threefold: (i) We propose a simple, deterministic model based on the Earth Mover Distance that effectively captures the
structure of the sparse nonzeros of signals belonging to such classes. (ii) We formulate an approach for approximating any arbitrary signal by a signal belonging to our model. The key idea in our
approach is a min-cost max-flow graph optimization problem that can be solved efficiently in polynomial time. (iii) We develop a CS algorithm for efficiently reconstructing signals belonging to
our model, and numerically demonstrate its benefits over state-of-the-art CS approaches.
pp. 428-431
Orlicz Modulation Spaces
In this work we extend the definition of modulation spaces associated to Lebesgue spaces to Orlicz spaces and mixed-norm Orlicz spaces. We give the definition of the Orlicz spaces $L^\Phi$, a generalisation of the Lebesgue spaces $L^p$. We then characterise the Young function $\Phi$ and give some basic properties of these spaces. We collect the facts about these spaces that we need for time-frequency analysis, and then introduce the Orlicz modulation spaces. Finally we present a discretisation of the Orlicz space and mixed-norm Orlicz space and a characterisation of the modulation space by discretisation.
pp. 432-435
Binary Reduced Row Echelon Form Approach for Subspace Segmentation
This paper introduces a subspace segmentation and data clustering method for a set of data drawn from a union of subspaces. The proposed method works perfectly in the absence of noise, i.e., it can find the number of subspaces, their dimensions, and an orthonormal basis for each subspace. The effect of noise on this approach depends on the noise level and the relative positions of the subspaces. We provide a comprehensive performance analysis in the presence of noise and outliers.
pp. 436-439
Missing Entries Matrix Approximation and Completion
We describe several algorithms for matrix completion and matrix approximation when only some of its entries are known. The approximation constraint can be any whose approximated solution is known
for the full matrix. For low rank approximations, similar algorithms appear recently in the literature under different names. In this work, we introduce new theorems for matrix approximation and
show that these algorithms can be extended to handle different constraints such as nuclear norm, spectral norm, orthogonality constraints and more that are different than low rank approximations.
As the algorithms can be viewed from an optimization point of view, we discuss their convergence to global solution for the convex case. We also discuss the optimal step size and show that it is
fixed in each iteration. In addition, the derived matrix completion flow is robust and does not require any parameters. This matrix completion flow is applicable to different spectral
minimizations and can be applied to physics, mathematics and electrical engineering problems such as data reconstruction of images and data coming from PDEs such as Helmholtz's equation used for
electromagnetic waves.
pp. 440-443
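One generic completion flow of the kind described above alternates a projection enforcing the approximation constraint with a reset of the known entries. A minimal sketch, assuming the constraint is low rank (enforced by a truncated SVD); the rank, sizes and iteration count are illustrative choices, not the paper's algorithm.

```python
import numpy as np

def complete_low_rank(M_obs, mask, rank, n_iter=200):
    """Fill missing entries by alternating between (i) a rank-r truncated
    SVD of the current estimate and (ii) resetting the observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * sv[:rank]) @ Vt[:rank]   # best rank-r approximation
        X[mask] = M_obs[mask]                        # keep observed entries
    return X

# Toy example: a random rank-2 matrix with roughly 40% of entries missing.
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(M.shape) > 0.4                     # True = observed
X = complete_low_rank(np.where(mask, M, 0.0), mask, rank=2)
err = np.linalg.norm(X - M) / np.linalg.norm(M)
print(err)
```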
Using Affinity Perturbations to Detect Web Traffic Anomalies
The initial training phase of machine learning algorithms is usually computationally expensive as it involves the processing of huge matrices. Evolving datasets are challenging from this point of
view because changing behavior requires updating the training. We propose a method for updating the training profile efficiently and a sliding window algorithm for online processing of the data
in smaller fractions. This assumes the data is modeled by a kernel method that includes spectral decomposition. We demonstrate the algorithm with a web server request log where an actual
intrusion attack is known to happen. Updating the kernel dynamically using a sliding window technique prevents the problem of a single initial training and can process evolving datasets more efficiently.
pp. 444-447
Finite Rate of Innovation Signals: Quantization Analysis with Resistor-Capacitor Acquisition Filter
Finite rate of innovation or FRI signals, which are usually not bandlimited, have been studied as an alternate model for signal sampling and reconstruction. Sampling and perfect reconstruction of
FRI signals was first presented by Vetterli, Marziliano, and Blu. A typical FRI reconstruction algorithm requires solving for FRI signal parameters from a power-sum series. This in turn requires
annihilation filters and root-finding techniques. These non-linear steps complicate the analysis of FRI signal reconstruction in the presence of quantization. In this work, we introduce a
resistor-capacitor filter bank for sample acquisition of FRI signals and an associated signal reconstruction scheme which uses much simpler operations than those of the existing techniques. This
simplification allows us to analyze the effect of quantization noise. However, the sampling-rate required for our scheme is larger than the minimum sampling-rate of FRI signals.
pp. 448-451
Tangent space estimation bounds for smooth manifolds
Many manifold learning methods require the estimation of the tangent space of the manifold at a point from locally available data samples. Local sampling conditions such as (i) the size of the
neighborhood and (ii) the number of samples in the neighborhood affect the performance of learning algorithms. In this paper, we propose a theoretical analysis of local sampling conditions for
the estimation of the tangent space at a point P lying on an m-dimensional Riemannian manifold S in R^n. Assuming a smooth embedding of S in R^n, we estimate the tangent space by performing a
Principal Component Analysis (PCA) on points sampled from the neighborhood of P on S. Our analysis explicitly takes into account the second order properties of the manifold at P, namely the
principal curvatures as well as the higher order terms. Considering a random sampling framework, we leverage recent results from random matrix theory to derive local sampling conditions for an
accurate estimation of the tangent subspace. Our main results state that the width of the sampling region in the tangent space guaranteeing an accurate estimation is inversely proportional to the
manifold dimension, curvature, and the square root of the ambient space dimension. At the same time, we show that the number of samples increases quadratically with the manifold dimension and
logarithmically with the ambient space dimension.
pp. 452-455
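The estimation step analyzed above, PCA on samples drawn from a neighborhood of P, can be sketched as follows. The circle example, the neighborhood radius and the sample count are illustrative and are not the sampling conditions derived in the paper.

```python
import numpy as np

def tangent_space_pca(points, p, radius, dim):
    """Estimate the tangent space at p via PCA on the centred set of
    samples lying within a ball of the given radius around p."""
    local = points[np.linalg.norm(points - p, axis=1) < radius]
    centred = local - local.mean(axis=0)
    # Leading right singular vectors = principal directions of the patch.
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    return Vt[:dim]                     # rows span the estimated tangent space

# Toy example: a unit circle (a 1-D manifold) embedded in R^3.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
p = np.array([1.0, 0.0, 0.0])
T = tangent_space_pca(circle, p, radius=0.2, dim=1)
print(T)    # approximately +-(0, 1, 0), the true tangent direction at p
```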
A null space property approach to compressed sensing with frames
An interesting topic in compressive sensing concerns problems of sensing and recovering signals with sparse representations in a dictionary. In this note, we study conditions of sensing matrices
$A$ for the $\ell^1$-synthesis method to accurately recover sparse, or nearly sparse, signals in a given dictionary $D$. In particular, we propose a dictionary-based null space property (D-NSP)
which, to the best of our knowledge, is the first sufficient and necessary condition for the success of the $\ell^1$ recovery. This new property is then utilized to detect some of those
dictionaries whose sparse families cannot be compressed universally. Moreover, when the dictionary is full spark, we show that $AD$ being NSP, which is well-known to be only sufficient for stable
recovery via $\ell^1$-synthesis method, is indeed necessary as well.
pp. 456-459
Irregular Sampling of the Radon Transform of Bandlimited Functions
We provide conditions for exact reconstruction of a bandlimited function from irregular polar samples of its Radon transform. First, we prove that the Radon transform is a continuous L2-operator
for certain classes of bandlimited signals. We then show that the Beurling-Malliavin condition for the radial sampling density ensures existence and uniqueness of a solution. Moreover, Jaffard's
density condition is sufficient for stable reconstruction.
pp. 460-463
Spline-based frames for image restoration
We present a design scheme to generate tight and semi-tight frames in the space of discrete-time periodic signals, which are originated from four-channel perfect reconstruction periodic filter
banks. The filter banks are derived from interpolating and quasi-interpolating polynomial splines. Each filter bank comprises one linear phase low-pass filter (in most cases interpolating) and
one high-pass filter, whose magnitude response mirrors that of a low-pass filter. In addition, these filter banks comprise two band-pass filters. In the semi-tight frames case, all the filters
have linear phase and (anti)symmetric impulse response, while in the tight frame case, some of band-pass filters are slightly asymmetric. We introduce the notion of local discrete vanishing
moments (LDVM). In the tight frame case, analysis framelets coincide with their synthesis counterparts. However, in the semi-tight frames, we have the option to swap LDVM between synthesis and
analysis framelets. The design scheme is generic and it enables us to design framelets with any number of LDVM. The computational complexity of the framelet transforms, which consists of
calculation of the forward and the inverse fast Fourier transforms and simple arithmetic operations, practically does not depend on the number of LDVM or on the size of the impulse response of the filters. The designed frames are used for the restoration of images degraded by blurring, random noise and missing pixels. The images were restored by application of the Split Bregman Iterations method.
pp. 464-467
On the Noise-Resilience of OMP with BASC-Based Low Coherence Sensing Matrices
In Compressed Sensing (CS), measurements of a sparse vector are obtained by applying a sensing matrix. By means of CS, it is possible to reconstruct the sparse vector from a small number of
such measurements. In order to provide reliable reconstruction also for less sparse vectors, sensing matrices are desired to be of low coherence. Motivated by this requirement, it was recently
shown that low coherence sensing matrices can be obtained by Best Antipodal Spherical Codes (BASC). In this paper, the noise-resilience of the Orthogonal Matching Pursuit (OMP) used in
combination with low coherence BASC-based sensing matrices is investigated.
pp. 468-471
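For reference, a minimal sketch of the standard OMP iteration investigated above, using a generic column-normalized Gaussian sensing matrix in place of a BASC-based one; the problem sizes are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily select the column of A most
    correlated with the residual, then re-fit all selected coefficients
    by least squares."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(A.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy example: recover a 3-sparse vector from 50 noiseless measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.linalg.norm(x_hat - x_true))
```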
Tight frames in spiral sampling
The paper deals with the construction of Parseval tight frames for the space of square integrable functions whose domain is the ball of radius R and centered at the origin. The focus is on
Fourier frames on a spiral. Starting with a Fourier frame on a spiral, a Parseval tight frame that spans the same space can then be obtained by a symmetric approximation of the original Fourier frame.
pp. 472-475
16:20 - 17:20
Seeing the invisible; predicting the unexpected
Michal Irani
Room: Conference Hall
Chair: Abdul Jerri (Clarkson University, USA)
Small image patches tend to repeat abundantly within a natural image, both at the original scale, as well as at coarser scales of the image. Similarly, small space-time patches recur abundantly
within a video sequence, both within and across temporal scales. In this talk I will show how complex visual inference tasks can be performed by exploiting this inherent property of patch redundancy
within and across different parts of the visual data. Comparing and integrating local pieces of visual information gives rise to complex notions of visual similarity and to a general "Inference by Composition" approach. This makes it possible to infer the likelihood of new visual data never seen before, and to make inferences about complex static and dynamic visual information without any prior examples or prior training. I will demonstrate the power of this approach on several example problems (as time permits):
1. Spatial super-resolution from a single image & Temporal super-resolution from a single video.
2. Prediction of missing visual information.
3. Inferring the "likelihood" of "never-before-seen" visual data.
4. Detecting the "irregular" and "unexpected".
5. Detecting complex objects and actions.
6. Segmentation of complex visual data.
7. Generating visual summaries (of images and video).
17:30 - 18:10
Harmonic Analysis
Room: Conference Room
Chair: Rowland Higgins (Anglia Polytechnic University, Cambridge, United Kingdom)
17:30 Measure-based diffusion kernel methods
A commonly used approach for analyzing massive high dimensional datasets is to utilize diffusion-based kernel methods. The kernel in these methods is based on a Markovian diffusion process, whose
transition probabilities are determined by local similarities between data points. When the data lies on a low dimensional manifold, the diffusion distances according to this kernel encompass the
geometry of the manifold. In this paper, we present a generalized approach for defining diffusion-based kernels by incorporating measure-based information, which represents the density or
distribution of the data, together with its local distances. The generalized construction does not require an underlying manifold to provide a meaningful kernel interpretation, but relies on the more
relaxed assumption that the measure and its support are related to a locally low dimensional nature of the analyzed phenomena.
pp. 489-492
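The basic building block behind such diffusion kernels — a local affinity matrix row-normalised into Markov transition probabilities — can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the point set, Gaussian affinity, and bandwidth `eps` are illustrative choices.

```python
import math

# Five 1-D points forming two clusters; eps is the kernel bandwidth (both illustrative).
data = [0.0, 0.1, 0.2, 1.0, 1.1]
eps = 0.1

# Gaussian affinity kernel between all pairs of points.
K = [[math.exp(-(a - b) ** 2 / eps) for b in data] for a in data]

# Row-normalise so each row gives Markov transition probabilities.
P = [[k / sum(row) for k in row] for row in K]

print([round(p, 3) for p in P[0]])
```

Points in the same cluster receive far higher transition probability than points across clusters, which is what lets diffusion distances capture the geometry of the data.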
17:50 Spectral properties of dual frames
We study spectral properties of dual frames of a given finite frame. We give a complete characterization for which spectral patterns of dual frames are possible for a fixed frame. For many cases,
we provide simple explicit constructions for dual frames with a given spectrum, in particular, if the constraint on the dual is that it be tight.
pp. 493-496
17:30 - 18:30
Advances in Compressive Sensing
Holger Rauhut, Joel Tropp
Room: Conference Hall
Chair: Holger Rauhut (University of Bonn, Germany)
17:30 Local coherence sampling for stable sparse recovery
Exact recovery guarantees in compressive sensing often assume incoherence between the sensing basis and sparsity basis, a strong assumption that is often unattainable in practice. Here we discuss
the notion of local coherence, and show that by resampling from the sensing basis according to the local coherence function, stable and robust sparse recovery guarantees extend to a rich new
class of sensing problems beyond incoherent systems. We discuss particular applications to compressive MRI imaging and polynomial interpolation.
pp. 476-480
17:50 Structured-signal recovery from single-bit measurements
1-bit compressed sensing was introduced by Boufounos and Baraniuk in 2008 as a model of extreme quantization; only the sign of each measurement is retained. Recent theoretical and algorithmic
advances, combined with the ease of hardware implementation, show that it is an effective method of signal acquisition. Surprisingly, in the high-noise regime there is almost no information loss
from 1-bit quantization. We review and revise recent results, and compare to closely related statistical problems: sparse binary regression and binary matrix completion.
pp. 481-484
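The 1-bit measurement model described here is easy to sketch: only the sign of each linear measurement of a sparse signal is retained. The following toy example is a hypothetical illustration (the signal length, sparsity, measurement count, and random seed are illustrative assumptions, not from the paper):

```python
import random

random.seed(0)

def sign(v):
    return 1.0 if v >= 0 else -1.0

n, m = 8, 20                  # signal length, number of 1-bit measurements
x = [0.0] * n
x[2], x[5] = 1.0, -0.5        # a 2-sparse signal

# Random Gaussian sensing matrix A; each measurement keeps only its sign.
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [sign(sum(a_ij * x_j for a_ij, x_j in zip(row, x))) for row in A]

print(y[:5])  # each measurement carries exactly one bit
```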
18:10 Dictionary Identification Results for K-SVD with Sparsity Parameter 1
In this talk we summarise part of the results from our recent work \cite{sc13arxiv} and \cite{sc13b}. We give theoretical insights into the performance of K-SVD, a dictionary learning algorithm
that has gained significant popularity in practical applications, by answering the question when a dictionary $\dico$ can be recovered as local minimum of the minimisation criterion underlying
K-SVD from a set of training signals $y_n=\dico x_n$. Assuming the training signals are generated from a tight frame with coefficients drawn from a random symmetric distribution, then in
expectation the generating dictionary can be recovered as a local minimum of the K-SVD criterion if the coefficient distribution exhibits sufficient decay. This decay can be characterised by the
coherence of the dictionary and the $\ell_1$-norm of the coefficients. Further it is demonstrated that given a finite number of training samples $N$ with probability $O(\exp(-N^{1-4q}))$ there is
a local minimum of the K-SVD criterion within a radius $O(N^{-q})$ of the generating dictionary.
pp. 485-488
Friday, July 5
08:40 - 09:40
Event-driven sampling and continuous-time digital signal processing
Yannis Tsividis
Room: Conference Hall
Chair: Laurent Fesquet (TIMA Laboratory, France)
Many new and emerging applications require extremely low power dissipation in order to preserve scarce energy resources; such applications include sensor networks and wearable/implantable/ingestible
biomedical devices. In such cases, uniform sampling, as used in conventional, clocked circuits, represents undesirable and unnecessary energy waste. We review techniques in which the signal itself
dictates when it needs to be sampled and processed, thus tightly coupling energy use to signal activity. Methods for implementing event-driven A/D converters and DSPs in this context, without using
any clock, are reviewed. It is shown that, compared to traditional, clocked techniques, the techniques reviewed here produce circuits that completely avoid aliasing, respond immediately to input
changes, result in better error spectral properties, and exhibit dynamic power dissipation that goes down when the input activity decreases. Experimental results from recent test chips, operating at
kHz to GHz signal frequencies, fully confirm these properties.
10:00 - 12:00
Sampling of Bandlimited Functions
Room: Conference Room
Chair: David Walnut (George Mason University, USA)
10:00 Sampling and Reconstruction of Bandlimited BMO-Functions
Functions of bounded mean oscillation (BMO) play an important role in complex function theory and harmonic analysis. In this paper a sampling theorem for bandlimited BMO-functions is derived for
sampling points that are the zero sequence of some sine-type function. The class of sine-type functions is large and, in particular, contains the sine function, which corresponds to the special
case of equidistant sampling. It is shown that the sampling series is locally uniformly convergent if oversampling is used. Without oversampling, the local approximation error is bounded.
pp. 521-524
10:20 Reconstruction of band-limited random signals from local averages
We consider the problem of reconstructing a wide sense stationary band-limited random signal from its local averages taken at the Nyquist rate or above. Success of the perfect reconstruction
depends on the length of intervals on which the averages are taken. The resulting average sampling expansions converge in mean square and are the same as the original signal with probability 1.
pp. 525-527
10:40 Bandlimited Signal Reconstruction From the Distribution of Unknown Sampling Locations
We study the reconstruction of bandlimited fields from samples taken at unknown but statistically distributed sampling locations. The setup is motivated by distributed sampling where precise
knowledge of sensor locations can be difficult. Periodic one-dimensional bandlimited fields are considered for sampling. Perfect samples of the field at independent and identically distributed
locations are obtained. The statistical realization of sampling locations is not known. First, it is shown that a bandlimited field cannot be uniquely determined with samples taken at
statistically distributed but unknown locations, even if the number of samples is infinite. Next, it is assumed that the order of sample locations is known. In this case, using insights from
order-statistics, an estimate for the field with useful asymptotic properties is designed. Distortion (mean-squared error) bounds and a central-limit result are established for this estimate.
pp. 528-531
11:00 Sampling aspects of approximately time-limited multiband and bandpass signals
We provide an overview of recent progress regarding the role of sampling in the study of signals that are in the image of a bandpass or multiband frequency limiting operation and have most of
their energies concentrated in a given time interval. First we address the question of approximation of a time- and band-limited signal on its essential time support by a finite sinc series. Next
we consider a method by which essentially time limited multiband signals can be approximated as superpositions of eigenfunctions of time- and band-limiting to each separate band. Finally we
consider a means to approximate essentially time-limited bandpass signals. In this case we present a new phase-locking metric that arises in the study of EEG signals.
pp. 532-535
11:20 Recovery of Bandlimited Signal Based on Nonuniform Derivative Sampling
The paper focuses on the perfect recovery of band- limited signals from nonuniform samples of the signal and its derivatives. The main motivation to address signal recovery using nonuniform
derivative sampling is a reduction of mean sampling frequency under Nyquist rate which is a critical issue in event-based signal processing chains with wireless link. In particular, we introduce
a set of reconstructing functions for nonuniform derivative sampling as an extension of relevant set of reconstructing functions derived by Linden and Abramson for uniform derivative sampling. An
example of signal recovery using the first derivative is finally reported.
pp. 536-539
11:40 Approximation by Shannon sampling operators in terms of an averaged modulus of smoothness
The aim of this paper is to study the approximation properties of generalized sampling operators in L^p(R) in terms of an averaged modulus of smoothness.
pp. 540-543
Compressive Sensing and Applications
Room: Conference Hall
Chair: Rachel Ward (University of Texas, USA)
10:00 Sparse Recovery with Fusion Frames via RIP
We extend ideas from compressed sensing to a structured sparsity model related to fusion frames. We present theoretical results concerning the recovery of sparse signals in a fusion frame from
undersampled measurements. We provide both nonuniform and uniform recovery guarantees. The novelty of our work is to exploit an incoherence property of the fusion frame which allows us to reduce
the number of measurements needed for sparse recovery.
pp. 497-500
10:20 Blind Sensor Calibration in Sparse Recovery Using Convex Optimization
We investigate a compressive sensing system in which the sensors introduce a distortion to the measurements in the form of unknown gains. We focus on {\em blind} calibration, using measurements
performed on a few unknown (but sparse) signals. We extend our earlier study on real positive gains to two generalized cases (signed real-valued gains; complex-valued gains), and show that the
recovery of unknown gains together with the sparse signals is possible in a wide variety of scenarios. The simultaneous recovery of the gains and the sparse signals is formulated as a convex
optimization problem which can be solved easily using off-the-shelf algorithms. Numerical simulations demonstrate that the proposed approach is effective provided that sufficiently many (unknown,
but sparse) calibrating signals are provided, especially when the sign or phase of the unknown gains are not completely random.
pp. 501-504
10:40 Sampling by blocks of measurements in Compressed Sensing
Various acquisition devices impose sampling blocks of measurements. A typical example is parallel magnetic resonance imaging (MRI) where several radio-frequency coils simultaneously acquire a set
of Fourier modulated coefficients. We study a new random sampling approach that consists in selecting a set of blocks that are predefined by the application of interest. We provide theoretical
results on the number of blocks that are required for exact sparse signal reconstruction. We finish by illustrating these results on various examples, and discuss their connection to the
literature on CS.
pp. 505-508
11:00 Travelling salesman-based variable density sampling
Compressed sensing theory indicates that selecting a few measurements independently at random is a near optimal strategy to sense sparse or compressible signals. This is infeasible in practice
for many acquisition devices that acquire samples along continuous trajectories (e.g., radial, spiral, ...). Examples include magnetic resonance imaging (MRI) or radio-interferometry. In this
paper, we propose to generate continuous sampling trajectories by drawing a small set of measurements independently and joining them using a travelling salesman problem solver. Our contribution
lies in the theoretical derivation of the appropriate probability density of the initial drawings. Preliminary simulation results show that this strategy is as efficient as independent drawings
while being implementable on real acquisition systems.
pp. 509-512
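The sampling strategy outlined above — drawing measurement locations independently and then joining them into one continuous trajectory — can be sketched with a greedy nearest-neighbour heuristic standing in for a full travelling salesman solver. This is a hypothetical illustration; the point count and uniform density are illustrative assumptions (the paper's contribution is deriving the appropriate non-uniform density):

```python
import math
import random

random.seed(1)

# Draw sample locations independently (uniform here purely for illustration).
points = [(random.random(), random.random()) for _ in range(30)]

def nearest_neighbour_tour(pts):
    """Greedy stand-in for a travelling salesman solver: always visit the closest unvisited point."""
    tour = [pts[0]]
    remaining = pts[1:]
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

tour = nearest_neighbour_tour(points)
length = sum(math.dist(a, b) for a, b in zip(tour, tour[1:]))
print(round(length, 3))  # total length of the continuous sampling trajectory
```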
11:20 Incremental Sparse Bayesian Learning for Parameter Estimation of Superimposed Signals
This work discusses a novel algorithm for joint sparse estimation of superimposed signals and their parameters. The proposed method is based on two concepts: a variational Bayesian version of the
incremental sparse Bayesian learning (SBL) - fast variational SBL - and a variational Bayesian approach to parameter estimation of superimposed signal models. Both schemes estimate the unknown
parameters by minimizing the variational lower bound on model evidence; also, these optimizations are performed incrementally with respect to the parameters of a single component. It is
demonstrated that these estimations can be naturally unified under the framework of variational Bayesian inference. This allows, on the one hand, for an adaptive dictionary design for FV-SBL
schemes, and, on the other hand, for a fast superresolution approach to parameter estimation of superimposed signals. The experimental evidence collected with synthetic data as well as with
estimation results for measured multipath channels demonstrate the effectiveness of the proposed algorithm.
pp. 513-516
11:40 Sparse MIMO Radar with Random Sensor Arrays and Kerdock Codes
We derive a theoretical framework for the recoverability of targets in the azimuth-range-Doppler domain using random sensor array and tools developed in the area of compressive sensing. In one
manifestation of our theory we use Kerdock codes as transmission waveforms and exploit some of their peculiar properties in our analysis. Not only is our result the first rigorous mathematical
theory for the detection of moving targets using random sensor arrays, but the transmitted waveforms also satisfy a variety of properties that are very desirable and important from a practical standpoint.
pp. 517-520
13:20 - 15:00
FFT and Related Algorithms
Room: Conference Room
Chair: Peter Massopust (Helmholtz Zentrum München, Germany)
13:20 Phase retrieval using time and Fourier magnitude measurements
We discuss the reconstruction of a finite-dimensional signal from the absolute values of its Fourier coefficients. In many optical experiments the signal magnitude in time is also available. We
combine time and frequency magnitude measurements to obtain closed reconstruction formulas. Random measurements are discussed to reduce the number of measurements.
pp. 564-567
13:40 Fast Ewald summation under 2d- and 1d-periodic boundary conditions based on NFFTs
Ewald summation has become established as a basic element of fast algorithms evaluating the Coulomb interaction energy of charged systems subject to periodic boundary conditions. In this context particle
mesh routines, as the P3M method, and the P2NFFT, which is based on nonequispaced fast Fourier transforms (NFFT), should be mentioned. In this paper we present a new approach for the efficient
calculation of the Coulomb interaction energy subject to mixed boundary conditions based on NFFTs.
pp. 568-571
14:00 A sparse Prony FFT
We describe the application of Prony-like reconstruction methods to the problem of the sparse Fast Fourier transform (sFFT). In particular, we adapt both important parts of the sFFT, quasi random
sampling and filtering techniques, to Prony-like methods.
pp. 572-575
14:20 Taylor and rank-1 lattice based nonequispaced fast Fourier transform
The nonequispaced fast Fourier transform (NFFT) allows the fast approximate evaluation of trigonometric polynomials with frequencies supported on full box-shaped grids at arbitrary sampling
nodes. Due to the curse of dimensionality, the total number of frequencies and thus, the total arithmetic complexity can already be very large for small refinements at medium dimensions. In this
paper, we present an approach for the fast approximate evaluation of trigonometric polynomials with frequencies supported on symmetric hyperbolic cross index sets at arbitrary sampling nodes.
This approach is based on Taylor expansion and rank-1 lattice methods. We prove error estimates for the approximation and present numerical results.
pp. 576-579
14:40 Decoupling of Fourier Reconstruction System for Shifts of Several Signals
We consider the problem of ``algebraic reconstruction'' of linear combinations of shifts of several signals $f_1,\ldots,f_k$ from the Fourier samples. For each $r=1,\ldots,k$ we choose sampling
set $S_r$ to be a subset of the common set of zeroes of the Fourier transforms ${\cal F}(f_\ell), \ \ell \ne r$, on which ${\cal F}(f_r)\ne 0$. We show that in this way the reconstruction system is
reduced to $k$ separate systems, each including only one of the signals $f_r$. Each of the resulting systems is of a ``generalized Prony'' form. We discuss the problem of unique solvability of
such systems, and provide some examples.
pp. 580-583
Circuit Design for Analog to Digital Converters
Yun Chiu
Room: Conference Hall
Chair: Yun Chiu (University of Texas at Dallas, USA)
13:20 Digital Calibration of SAR ADC
Four techniques for digital background calibration of SAR ADC are presented and compared. Sub-binary redundancy is the key to the realization of these techniques. Some experimental and simulation
results are covered to support the effectiveness of these techniques.
pp. 544-547
13:40 Trend of High-Speed SAR ADC towards RF Sampling
One emerging trend of high-speed low-power ADC design is to leverage the successive approximation (SAR) topology. It has successfully advanced the power efficiency by orders of magnitude over the
past decade. Given the nature of SAR algorithm, the conversion speed is intrinsically slow compared to other high-speed ADC architectures, and yet minimal static power is required due to the
mostly digital implementation. This paper examines various speed enhancement techniques that enable SAR ADCs towards RF sampling, i.e. >GS/s sampling rate with >GHz input bandwidth, while
maintaining low power and area consumption. The SAR topology is expected to play a crucial role in future energy-constrained wideband systems.
pp. 548-551
14:00 Multi-Step Switching Methods for SAR ADCs
This paper presents multi-step capacitor switching methods for SAR ADCs based on precharge with floating capacitors and charge sharing. The proposed switching methods further reduce the transient
power of the split monotonic switching method (an improved version of the monotonic switching method). Compared to the split monotonic switching, adding charge sharing achieves around 50%
reduction in switching power. Using precharge with floating capacitors and charge sharing simultaneously, the switching power reduces around 75%. The proposed switching methods do not require
additional intermediate reference voltages.
pp. 552-555
14:20 On the use of redundancy in successive approximation A/D converters
In practical realizations of sequential (or pipelined) A/D converters, some form of redundancy is typically employed to help absorb imperfection in the underlying circuit. The purpose of this
paper is to review the various ways in which redundancy has been used in successive approximating register (SAR) ADCs, and to connect findings from the information theory community to ideas that
drive modern hardware realizations.
pp. 556-559
14:40 Design Considerations of Ultra-Low-Voltage Self-Calibrated SAR ADC
This paper discusses the design of 0.5V 12bit successive approximation register (SAR) analog-to-digital converter (ADC) with focus on the considerations of self calibration at low supply voltage.
Relationships among noises of comparators and overall ADC performance are studied. Moreover, an ultra-low-leakage switch is demonstrated in a 0.13μm CMOS process and an improved process of
measuring mismatch is proposed to alleviate the charge injection of sampling switch. Simulation shows the ADC achieves an ENOB of 11.4b and a SFDR of 90dB near Nyquist rate with capacitor
mismatch up to 3%. At 12b 1MS/s, the ADC exhibits an FOM of 13.2fJ/step under 0.5V supply voltage.
pp. 560-563
15:20 - 16:20
Robust subspace clustering
Emmanuel Candes
Room: Conference Hall
Chair: Hans Feichtinger (University of Vienna, Austria)
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. Subspace clustering can be regarded as a
generalization of PCA in which points do not lie around a single lower-dimensional subspace but rather around a union of subspaces. It can also be seen as a
nonstandard clustering problem in which neighbors are not close according to a pre-defined notion of metric but rather belong to the same lower dimensional structure.
We introduce an algorithm inspired by sparse subspace clustering (SSC) to cluster noisy data, and develop some novel theory demonstrating its correctness. In particular, the theory uses ideas from
geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. We
present synthetic as well as real data experiments illustrating our approach and demonstrating its effectiveness.
Descriptive statistics assignment help
Introduction to Descriptive statistics
Before working through the descriptive statistics assignment help and descriptive statistics homework help sections, students are expected to have grasped the fundamentals of descriptive statistics. The discussion below should provide students with assistance in dealing with these sections.
What are the uses of Descriptive statistics?
Descriptive statistics are used to describe the basic features of the data considered in a statistical study. They provide a simple summary of the sample data and its measures. Along with simple graphical analysis, they form the basis of almost every facet of quantitative data analysis.
Descriptive statistics are summary coefficients of a given data set, which can be either a representation of the entire population or a sample of it. Descriptive statistics are
typically broken down into measures of central tendency and measures of variation. Measures of central tendency comprise the mean, median, and mode, whereas measures of
variability consist of the standard deviation, variance, skewness, and kurtosis.
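These measures can be computed directly with Python's standard library; a minimal sketch (the sample data are purely illustrative):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # a small illustrative sample

# Measures of central tendency
mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # middle value of the sorted data
mode = statistics.mode(data)      # most frequent value

# Measures of variability
variance = statistics.variance(data)  # sample variance (n - 1 denominator)
stdev = statistics.stdev(data)        # sample standard deviation

print(mean, median, mode)
print(round(stdev, 3))
```

For this sample the mean is 5, the median 4.5, and the mode 4; skewness and kurtosis are not in the standard library and typically come from a statistics package.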
Descriptive Statistics vs Inferential Statistics
Descriptive statistics differ from inferential statistics. Using descriptive statistics, an analyst can easily describe what the data represent. With inferential statistics, it is
possible to reach conclusions that extend beyond the immediate data set. For example, we use inferential statistics to infer an attribute of a population from sample data, or to make
judgments about the probability of an observed difference between two or more groups. Hence, we use inferential statistics to make inferences from our data to a more general
scenario, whereas descriptive statistics describe the observations in a data set.
Discuss some instances of Descriptive Statistics
One of the many uses of descriptive statistics is to present quantitative descriptions in a manageable form. A research study may involve a large number of measures, or may measure a large number of people on any of their attributes. Descriptive statistics allow us to represent a large amount of data in a meaningful manner.
Let us look at one of the simplest forms of descriptive statistics. In sports, we consider a batsman's average as an indicator of his gameplay. This batting average is a single number computed from the player's performance, and it summarizes a large number of discrete events in the game. Likewise, a student's GPA is a single number that describes, in general, the student's performance across a wide range of courses.
Every attempt to describe a large data set with a single indicator carries the possibility of distorting the original data or, even worse, losing important details. In the example above, the GPA provides no information on the difficulty of the courses the student took, nor on whether the courses were in their major field or in other disciplines. Despite these drawbacks, descriptive statistics still provide analysts with a concise summary that enables easy comparisons between different data sets and measures.
How did our House prediction do?
We offered some predictions about House elections in earlier posts (see here, here, and here). We based our predictions on a model that included some national factors like presidential approval and
the state of the economy, plus some district variables like the district presidential vote and incumbency. Now that we have something close to the final House results, how did we do?
The bottom line: our model performed quite well, predicting only 7 fewer seats for the Democrats than they actually won, and miscalling only 21 races out of 435.
Now for the details.
Before we can get any further, we have to decide which model to evaluate. Our first prediction was a purely “fundamentals” model that used only incumbency and the district presidential vote to
distinguish one district from another. Nothing about the relative strength of the candidates was included. This model proved to have a large amount of error, suggesting that candidate strength does
make a difference. Adding campaign spending to the model brought down this error a lot, though for our forecast it forced us to use fundraising in the summer as an indicator of likely spending in
the fall. We couldn’t be certain how well that would work. But since summer fundraising still falls far before the election, this approach stuck to information that was publicly available before
the most intense period of the campaign season. That comes close enough to a “fundamentals” model for us, so we’re going to use it as our final prediction.*
There is more than one way to evaluate the model’s performance. The first is to see how close it came to the topline vote and seat share. In this respect, we were a little too hard on the
Democrats. Based on current results at the New York Times’s “big board,” the Democrats have won 195 seats for sure, and six of the remaining seven are leaning their way. That’s a total of 201
seats. By contrast, our model predicts 194 Democratic wins,** missing the actual result by 7 seats. Our model also predicted a Democratic two-party vote share of 48.9%, or about 1.9% below the
actual result of 50.7%. The model expected a vote share at least as high as the actual one about 36% of the time, and it expected a seat share that high about 33% of the time. So both fall in a
comfortable range for the model’s error.
The second way to look at our model is to see how well it predicted each individual race. Below is a scatter plot of the predicted vote share against the actual vote share for all 435 races. The
diagonal line is equivalence: if the predictions were exactly accurate all the points would fall along that line. The red data points are cases where the model missed the winner: there were 21
such cases overall.
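Counting seats and miscalled races from predicted and actual district vote shares is straightforward; a minimal sketch with hypothetical numbers for five districts (illustrative only, not the real 435-district data):

```python
# Hypothetical predicted and actual Democratic two-party vote shares per district.
predicted = [0.42, 0.55, 0.61, 0.48, 0.53]
actual    = [0.45, 0.52, 0.58, 0.51, 0.49]

# A district counts as a Democratic win when the two-party share exceeds 50%.
pred_seats = sum(p > 0.5 for p in predicted)
actual_seats = sum(a > 0.5 for a in actual)

# A miscall is a district where the predicted winner differs from the actual winner.
misses = sum((p > 0.5) != (a > 0.5) for p, a in zip(predicted, actual))

print(pred_seats, actual_seats, misses)  # 3 3 2
```

Note that, as in the two miscalls above, a model can get the seat total exactly right while still miscalling individual races that offset each other.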
Perhaps the most striking aspect of this graph is the curved relationship between our prediction and the outcome. The model is too hard on Democrats at the low end and a little too easy on them at
the high end, producing a sort of s-curve. This curvature is entirely a function of using early fundraising as one of our predictors. The model without campaign money has a more linear
relationship, but it also gets the actual outcome wrong more often. In other words, early money misses some of the dynamics of the race, but it does a good job of discriminating between winners and losers.
So our model was bearish on Democrats, was somewhat off on district vote shares, and missed the actual winner in 21 cases. How does this compare with other predictions?
In terms of total seats, our prediction was closer to the final number than Charlie Cook’s, and two seats worse than Larry Sabato’s. It was also two seats better than Sam Wang’s generic ballot
prediction (which considered 201 to be a highly unlikely outcome), and three seats worse than his Bayesian combination of the generic ballot and the handicappers. So on this score, Wang’s hybrid
beats all other forecasts by a nose.
What about predictions for individual seats? Here we have to drop Wang, since he didn’t offer such predictions. But we can still look at Sabato and Cook. Compared to these two handicappers, our
21 misses were the highest of the bunch. Sabato was the best, miscalling only 13 races, while Cook fell in between at 17.
However, one can think about this a different way. Our model only used information publicly available by the end of the summer (when the last primaries were decided). The handicappers had lots of
information (some of it proprietary to the campaigns) up to election day. Yet our miss rate wasn’t that much higher. At most, all that extra information amounted to 8 correctly called seats. The
outcome of the rest could have been known far earlier.
To be fair, the races that distinguish between these forecasts were incredibly close. None of Sabato’s missed predictions were decided by more than 10 percentage points. Only two of our missed
predictions fit this description, and one of Cook’s. Most of the misses were far closer. Many of these races could have gone the other way, meaning the differences between the success rates might
be due to chance alone. There should be some credit all around for correctly identifying the races that were likely to be close in the first place.
On balance, there is much that we could improve about our model, and we hope to keep working with it in the future. Its first time out of the gate, it got the lay of the land quite well, and
actually came within spitting distance of forecasters who had a lot more information at their disposal. We think that’s pretty good.
*We also did a prediction with Super PAC money, but that was more for explanatory purposes than anything else. It certainly violated our rule of using only the fundamentals.
**Our original forecast was 192 seats, but in conducting this post-mortem, we discovered an error in our code that mis-classified some uncontested races. When the error was fixed, the prediction
bumped to 194. Other aspects of the prediction were basically the same.
What is Strike Rate in Cricket and How is it Calculated? - Cricket Resolved
Since cricket is a technical sport, statistics are frequently used to determine the importance of a bowler or a batsman. Average and strike rate statistics typically provide a fairly clear picture of
cricketers’ careers, and they are most frequently utilised.
Batting and bowling strike rates are both relatively recent statistics in cricket, having been developed following the introduction of One Day International cricket roughly five decades ago.
What is the Batting Strike Rate in cricket?
Powerful batters who like to hit big shots typically have a higher strike rate. It’s also worth noting that batting strike rates differ considerably between Tests and limited-overs cricket (ODI and T20I), because it is harder to score runs quickly in a Test match.
In Test cricket, a batsman’s skill and temperament are usually put to the test. They usually have to be calm and face a number of balls before trying to make strokes. Some batters, like Virender
Sehwag, Brendon McCullum, and Rishabh Pant, are not afraid to do things their own way, even in Test cricket, and they score runs quickly.
In ODI and T20I cricket, batters whose strike rates are higher are more valuable. This factor is also used to measure a batter’s ability to score runs against different types of bowling (like spin
bowling or fast bowling) to see how comfortable he is against a certain type of bowling.
Now that you understand what a batting strike rate is, let’s look at how to compute it.
How to Calculate the Batting Strike Rate?
Batting strike rate is simply the ratio of runs scored to balls faced, commonly expressed per 100 deliveries. It is a metric that indicates how frequently a batter achieves his primary goal: scoring runs.
It is computed by dividing the total number of runs scored in an inning by the number of deliveries faced by a batter. Multiplying this ratio by 100 yields the batting strike rate.
The Formula | Batting SR = (Number of runs scored in an innings) / (Number of balls faced) x 100.
Let’s look at the example:
In a T20 game between Mumbai and Delhi, Player “A” hit 125 runs off 62 balls. Here, we get 2.01 by dividing 125 runs by 62 balls faced. When we multiply it by 100, we find that Player “A”‘s strike
rate for their innings in the match was 201.61.
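The formula and the worked example above translate directly into a couple of lines of code. This is just a sketch; the function name is my own:

```python
def batting_strike_rate(runs: int, balls_faced: int) -> float:
    """Runs scored per 100 balls faced."""
    return runs / balls_faced * 100

# Player "A" from the example: 125 runs off 62 balls
print(round(batting_strike_rate(125, 62), 2))  # 201.61
```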
What is the Bowling Strike Rate in Cricket?
The bowling strike rate is another important number that tells us how well a bowler is doing, especially in the longer formats. The strike rate of a bowler is the average number of balls it takes to
get rid of a batter. For bowlers, a lower strike rate means success because it takes the bowler fewer balls to get rid of the batsman.
In comparison to the batting strike rate, which measures how quickly a batsman scores, the bowling strike rate measures how quickly a bowler can dismiss a batsman.
The bowling strike rate also differs from the batting strike rate in an important way: the batting strike rate matters more in limited-overs cricket than in Tests, whereas in Test cricket, taking wickets is more important than conceding runs.
On the other hand, bowlers in T20Is and ODIs must keep a good economy rate, which means giving up fewer runs per ball, even if they don’t take many wickets.
How to Calculate the Bowling Strike Rate?
To calculate the bowling strike rate, you need to divide the number of balls a bowler has delivered in an innings by the number of wickets they have taken.
The Formula | Bowling SR = Number of balls bowled / Number of Wickets Taken
Here is one example:
Ravindra Jadeja took two wickets in 14 overs during a Test match between India and New Zealand. Here, we get Jadeja’s bowling strike rate of 42 by dividing the 84 balls bowled by the two wickets taken.
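A matching sketch for the bowling side (again, the function name is mine):

```python
def bowling_strike_rate(balls_bowled: int, wickets: int) -> float:
    """Average number of balls needed per wicket taken (lower is better)."""
    return balls_bowled / wickets

# Jadeja from the example: 14 overs = 84 balls, 2 wickets
print(bowling_strike_rate(14 * 6, 2))  # 42.0
```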
Algorithm Stuff
My previous post was designed to give you some intuition when working with quaternions. If you already have a library that does multiplications and inverses for you, you should be fine. If you'd like to learn more details or don't have a quaternion library, this blog post is for you.
How do we rotate something by some axis $V=(v_x, v_y, v_z)$ and some angle $\theta_V$?
To answer that, let's start with something simpler first: we have a ball with a stick in it, and the stick is pointing in direction $D=(d_x, d_y, d_z)$. We want to rotate the ball via $V$ to get a new stick direction $R=(r_x, r_y, r_z)$.
We can do this by using
Rodrigues' rotation formula
$R = \cos(\theta_V) D + \sin(\theta_V) (V \times D) + (1-\cos(\theta_V))(V \cdot D) V$
where $\times$ is the cross product (returns a vector)
$(V \times D)_x = (v_y d_z - v_z d_y)$
$(V \times D)_y = (v_z d_x - v_x d_z)$
$(V \times D)_z = (v_x d_y - v_y d_x)$
and $\cdot$ is the dot product (returns a single value)
$(V \cdot D) = v_xd_x + v_yd_y + v_zd_z$
This formula comes from splitting D into two pieces: $D_{\Vert}$, the part that is parallel to V, and $D_{\perp}$, the part that is perpendicular to V. Assuming $V$ and $D$ are normalized, $D_{\Vert}=(V \cdot D)V$ is the
vector projection
of $D$ onto $V$, and then $D_{\perp}=D-D_{\Vert}$ (also known as the
vector rejection
of $D$ onto $V$).
Then we can find $R_{\Vert}$ and $R_{\perp}$ and get $R=R_{\Vert}+R_{\perp}$. (This isn't quite right; there are weighting terms on $R_{\Vert}$ and $R_{\perp}$ based on $\theta_V$, but it is a good intuition.)
$D_{\Vert}$ is unaffected by the rotation, so $R_{\Vert}=D_{\Vert}$.
$D_{\perp}$ is rotated $\theta_V$ degrees around $V$. If $\theta_V=90$ degrees, $R_{\perp}$ will be perpendicular to $D_{\perp}$ and $V$. The cross product of any two vectors is known to return a vector that is perpendicular to them both, so if $\theta_V=90$ degrees, $R_{\perp} = V \times D_{\perp}$. Likewise, if $\theta_V=-90$ degrees, $R_{\perp}= -(V \times D_{\perp})$, and if $\theta_V=180$ degrees then $R_{\perp} = -D_{\perp}$.
The math for getting $R_{\perp}$ is then identical to what happens when you rotate the 2D vector $(1,0)$ by $\theta_V$ degrees around the origin, just replace $(1,0)$ with $D_{\perp}$ and $(0,1)$
with $V \times D_{\perp}$. See the
Rodrigues' Rotation Formula wiki page
for the full details.
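As a sketch, the formula above drops straight into code. The helper names are mine, the axis `v` is assumed to be normalized, and the angle is in radians:

```python
import math

def cross(v, d):
    """Cross product of two 3D vectors."""
    return (v[1]*d[2] - v[2]*d[1],
            v[2]*d[0] - v[0]*d[2],
            v[0]*d[1] - v[1]*d[0])

def dot(v, d):
    """Dot product of two 3D vectors."""
    return v[0]*d[0] + v[1]*d[1] + v[2]*d[2]

def rodrigues(d, v, theta):
    """Rotate vector d around the unit axis v by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    vxd = cross(v, d)
    k = (1 - c) * dot(v, d)
    return tuple(c * d[i] + s * vxd[i] + k * v[i] for i in range(3))

# Rotating the x-axis 90 degrees around the z-axis gives the y-axis
print(rodrigues((1, 0, 0), (0, 0, 1), math.pi / 2))  # ≈ (0, 1, 0)
```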
If I have axis angle $V=(v_x, v_y, v_z), \theta_V$ and another axis angle $U=(u_x, u_y, u_z), \theta_U$, intuitively how we combine them is easy: we apply V to an object, then apply U to it. But can we make a new axis angle $W=(w_x, w_y, w_z), \theta_W$ so that when we apply $W$, it is the same as applying $V$ and then applying $U$?
Whenever the topic of Quaternions comes up, programmers I talk to are always annoyed by them. Quaternions seem useful, but tutorials that give nice intuitive pictures are very hard to find. Just look
at the Wikipedia page and you'll see what I mean.
Let's fix that :)
The takeaway here is this: a single quaternion is equivalent to an axis (a 3D unit vector) and a rotation of $\theta$ degrees around that axis. Quaternions are just a weird encoding of these 4 values that makes some computations nice.
How do we get the angle axis representation? Given a quaternion $Q=(q_x,q_y,q_z,q_w)$, our axis of rotation $V=(v_x, v_y, v_z)$ is:
$v_x = q_x / \sqrt{1-q_w*q_w}$
$v_y = q_y / \sqrt{1-q_w*q_w}$
$v_z = q_z / \sqrt{1-q_w*q_w}$
and our rotation angle $\theta_V$
$\theta_V = 2 * acos(q_w)$
You can just call this $\theta$, I just use this notation to make it clear that this is the axis $V$'s angle and not some other axis $W=(w_x, w_y, w_z)$'s angle (which would be $\theta_W$).
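Those conversion formulas can be sketched like this; the guard for the identity quaternion (where $q_w = \pm 1$ and the axis is arbitrary) is my addition:

```python
import math

def quat_to_axis_angle(qx, qy, qz, qw):
    """Convert a unit quaternion to (axis, angle in radians)."""
    s = math.sqrt(max(1.0 - qw * qw, 0.0))
    if s < 1e-9:                    # identity rotation: any axis works
        return (1.0, 0.0, 0.0), 0.0
    return (qx / s, qy / s, qz / s), 2.0 * math.acos(qw)

# The quaternion (0, 0, sin 45°, cos 45°) encodes a 90-degree turn about z
axis, theta = quat_to_axis_angle(0.0, 0.0, math.sin(math.pi/4), math.cos(math.pi/4))
print(axis, math.degrees(theta))  # ≈ (0, 0, 1) 90.0
```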
Cool so that's nice. Intuitively, rotating objects with these is nice: we stick the axis $V$ pointing out from the center of that object, then rotate the object $\theta_V$ around it. But how do we
represent this in code? If you have a library (such as Unity) that has quaternions, just use their code. If not, see my follow up blog post.
The important takeaway here is that every time you use a quaternion, just imagine it as an angle axis in your head.
For example, given two quaternions $Q_V$ and $Q_U$ that by using the formula above map to angle axes $V=(v_x, v_y, v_z), \theta_V$ and $U=(u_x, u_y, u_z), \theta_U$, the product $Q_V*Q_U$ is equivalent to a new angle axis that combines the rotations by V and U.
What order do these happen in? Well, rotations are weird in that they are associative (you can swap parentheses, so $(Q_V*Q_U)*Q_W=Q_V*(Q_U*Q_W)$), but they aren't commutative, so $Q_V*Q_W$ isn't always equal to $Q_W*Q_V$. So if we want to rotate some $M$, $Q_U*Q_V*M$ means: rotate M by V, and then by U, while $Q_V*Q_U*M$ means: rotate M by U, and then by V.
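To make the ordering concrete, here is a sketch of quaternion multiplication (the standard Hamilton product; the component order `(x, y, z, w)` and all names are my choices, not anything from this post):

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions, components ordered (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw,
            aw*bw - ax*bx - ay*by - az*bz)

def axis_angle_to_quat(v, theta):
    """Encode a rotation of theta radians about the unit axis v."""
    s = math.sin(theta / 2)
    return (v[0]*s, v[1]*s, v[2]*s, math.cos(theta / 2))

q_x90 = axis_angle_to_quat((1, 0, 0), math.pi / 2)  # 90 deg about x
q_z90 = axis_angle_to_quat((0, 0, 1), math.pi / 2)  # 90 deg about z
print(qmul(q_x90, q_z90) == qmul(q_z90, q_x90))  # False: order matters
```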
The main reason we use quaternions (as far as I know) is that to rotate positions and vectors using angle-axis, you end up using sines, cosines, dot products, etc. With quaternions you can rotate positions and vectors via linear functions that you can represent as a 4x4 matrix.
1. Topic: Introduction to Logic Minimization
Same Logic Function, Different Circuit Implementations
A digital logic circuit consists of a collection of logic gates, the input signals that drive them, and the output signals they produce. The behavioral requirements of a logic circuit are best expressed through truth tables or logic equations, and any design problem that can be addressed with a logic circuit can be expressed in one of these forms. Both of these formalisms define the behavior of a logic circuit, that is, how inputs are combined to drive outputs, but they do not specify how to build a circuit that meets these requirements.
Only one truth table exists for any particular logic relationship, but many different logic equations and logic circuits can be found to describe and implement the same relationship. Different (but
equivalent) logic equations and circuits exist for a given truth table because it is always possible to add unneeded, redundant logic gates to a circuit without changing its logical output. Consider, for example, the logic system introduced in the previous module (reproduced in Fig. 1 below). The system’s behavior is defined by the truth table in the center of the figure, and it can be implemented by any of the logic equations and related logic circuits shown.
Figure 1. Six different circuit implementations of same logic function
All six circuits shown are equivalent, meaning they share the same truth table, but they have different physical structures. Imagine a black box with three input buttons, two LEDs, and two
independent circuits driving the LEDs. Any of the six circuits in Fig. 1 shown above could drive either LED, and an observer pressing buttons in any combination could not identify which circuit drove
which LED. For every possible combination of button presses, the LEDs would be illuminated in exactly the same manner regardless of which circuit was used. If you have a choice of logic circuits for
any given logic relationship, you should first define which circuit is the best and develop a method to ensure you find it.
The circuits in the blue boxes above are said to be canonical because they contain all required minterms and maxterms. Canonical circuits typically use resources inefficiently, but they are conceptually simple. Below the canonical circuits are standard POS and SOP circuits; these two circuits behave identically to the canonical circuits, but they use fewer resources. It would clearly be less wasteful of resources to build the standard POS or SOP circuits. Furthermore, replacing logic gates in the standard circuits with transistor-minimum gate equivalents (by taking advantage of NAND/NOR logic) results in the minimized POS and SOP circuits shown in the green boxes.
Find the Most Efficient Circuit
As engineers, one of our primary goals is to implement circuits efficiently. The most efficient circuit can use the fewest transistors, or operate at the highest speed, or use the least power. Often, these three measures of efficiency cannot all be optimized at the same time, and designers must trade off circuit size for speed of operation, or speed for power, or power for size, etc. In this case, the most efficient circuit is the one that uses the minimum number of transistors, leaving speed and power for later consideration. Because the minimum-transistor measure of efficiency was chosen, the focus is on minimum circuits. The best method of determining which of several circuits is the minimum is to count the needed transistors. For now, a simpler method can be used: the minimal circuit will be defined as the one that uses the fewest logic gates (or, if two forms use the same number of gates, then the one that uses the fewest total inputs to all gates will be considered the simplest). The following examples show circuits with the gate/input number shown below. Inverters are not included in the gate or input count because they are often absorbed into the logic gates themselves.
Figure 2. Count the number of logic gate and inputs to evaluate the efficiency of logic circuits
A minimal logic equation for a given logic system can be obtained by eliminating all non-essential or redundant inputs. Any input that can be removed from the equation without changing the input/
output relationship is redundant. To find minimal equations, all redundant inputs must be identified and removed. In the truth table, Fig. 1 above, note the SOP terms generated by rows 1 and 3. The A
input is ‘0’ in both rows, and the C input is ‘1’ in both rows, but the B input is ‘0’ in one row and ‘1’ in the other. Thus, for these two rows, the output is a ‘1’ whether B is a ‘0’ or ‘1’ and B
is therefore redundant.
The goal in minimizing logic systems is to find the simplest form by identifying and removing all redundant inputs. For a logic function of N inputs, there are $2^{2^N}$ logic functions, and for each
of these functions, there exists a minimum SOP form and a minimum POS form. The SOP form may be more minimal than the POS form, or the POS form may be more minimal, or they may be equivalent (i.e.,
they may both require the same number of logic gates and inputs). In general, it is difficult to identify the minimum form by simply staring at a truth table. Several methods have evolved to assist
with the minimization process, including the application of Boolean algebra, the use of logic graphs, and the use of searching algorithms. Although any of these methods can be employed using pen and
paper, it is far easier (and more productive) to implement searching algorithms on computer.
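As an illustration, SymPy ships such a search in `SOPform` (a Quine–McCluskey-based minimizer). The three-variable truth table below is a made-up example, not the table from Fig. 1:

```python
from sympy import symbols
from sympy.logic import SOPform

A, B, C = symbols('A B C')
# Rows (A, B, C) of a hypothetical truth table where the output is 1
minterms = [[0, 0, 1], [0, 1, 1], [1, 0, 1]]

# SOPform returns a minimal sum-of-products covering exactly those rows
expr = SOPform([A, B, C], minterms)
print(expr)
```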
Important Ideas
• Only one truth table exists for any particular logic relationship, but many different logic equations and logic circuits can be found to describe and implement the same relationship.
• For now, you can use a simpler method: the minimal circuit will be defined as the one that uses the fewest number of logic gates (or, if two forms use the same number of gates, then the one that uses the fewest number of total inputs to all gates will be considered simplest).
The Square Magnabicupolic Ring
The square magnabicupolic ring is a member of the family of 4D bicupolic rings, CRF polychora that consist of a 3-membered ring containing two cupolae and one prism or antiprism, with various
pyramids and prisms filling up the remaining gaps to form a closed 4D shape. This particular example consists of a ring of two square cupolae and an octagonal prism, with 4 square pyramids and 4
triangular prisms around the side, for a total of 11 cells, 35 polygons (16 triangles, 17 squares, 2 octagons), 44 edges, and 20 vertices. It is classified under the category of wedges in Dr. Richard
Klitzing's list of 4D segmentochora as octagon||square cupola (K4.105).
The structure of the square magnabicupolic ring is quite simple. We shall explore it using its parallel projections into 3D:
This first image shows one of the two square cupola cells. It appears slightly flattened, because it's being seen from an angle; in 4D, it has perfectly regular polygonal faces.
Here's the second square cupola, in addition to the first one:
Between these two cupolae are 4 square pyramids:
For clarity, we have omitted the square cupola cells. There are also 4 triangular prisms, which link with square pyramids to form a ring of alternating cells:
The visible square gap in the middle of the ring is where the two square cupolae meet, sharing a square face.
The outward-facing faces of these cells, together with the octagonal bases of the two square cupolae, are where they meet the last cell, the octagonal prism:
We've hidden the other cells for clarity's sake. To see more clearly how this octagonal prism is connected to the other cells, the next image shows their edge outlines:
These are all the cells in the square magnabicupolic ring.
Side View
Looking at the square magnabicupolic ring's projections in the previous section from our handicapped 3D point of view, we may be misled into thinking that it is a rather wide, circular shape in 4D.
However, this is not accurate, because for something to be truly wide in the 4D sense, it must be wide in three directions, not just two. Here's the square magnabicupolic ring, as seen from a
different 4D viewpoint:
Whoa! What happened here? Nothing strange, really. We're just looking at the same object from an angle that shows us how narrow it actually is, contrary to the impressions we may have had from
looking at the previous images.
For the sake of clarity, we have omitted the cells that lie on the far side of the polytope. Here's the same view without hidden surface removal:
The octagonal prism now projects to the narrow base of the projection, as shown below:
It appears flattened, because now it is being seen from a 90° angle; but in 4D, it is still the same, perfectly uniform, octagonal prism. The green square cupola now projects to a trapezium-like shape:
Likewise with the yellow cupola:
The ring of two cupolae and one octagonal prism is now clearly seen as a ring.
The ring of alternating square pyramids and triangular prisms is now orthogonal to the 4D viewpoint, and so only 3 cells are visible on the near side, two square pyramids and one triangular prism:
The remaining 2 square pyramids and 3 triangular prisms lie on the far side:
This side view of the square magnabicupolic ring shows us the rationale behind Klitzing's classification of it under the category of wedges. In 3D, it's impossible for something to be both circular
and wedge-shaped at the same time; but in 4D, the extra dimension makes this possible.
Geometric Properties
A fact that may not be very obvious is that the dichoral angle (the 4D analogue of the dihedral angle) between the two square cupola cells is exactly 90°. A consequence of this fact is that the
square magnabicupolic ring actually occurs as segments of the cantellated tesseract. In fact, the cantellated tesseract may be thought of as the augmentation of the 8,8-duoprism with 8 square
magnabicupolic rings. The square pyramids from adjacent augments are coplanar, and merge into regular octahedra.
If two pairs of antipodal square magnabicupolic rings are cut off from the cantellated tesseract, rotated 45° in the plane of the octagonal faces, and glued back on, the result is the biparabigyrated
cantellated tesseract, another CRF polychoron.
The Cartesian coordinates of the square magnabicupolic ring with edge length 2 are:
• (±1, ±(1+√2), ±1, 0)
• (±(1+√2), ±1, ±1, 0)
• (±1, ±1, 0, 1)
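These coordinates are easy to sanity-check numerically: the 20 points should have minimum pairwise distance 2 (the edge length), attained by exactly the 44 edges. A quick sketch:

```python
import itertools
import math

s = 1 + math.sqrt(2)
verts  = [(sx, sy * s, sz, 0) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
verts += [(sx * s, sy, sz, 0) for sx in (1, -1) for sy in (1, -1) for sz in (1, -1)]
verts += [(sx, sy, 0, 1) for sx in (1, -1) for sy in (1, -1)]

# The edges are exactly the vertex pairs at the minimum distance, 2
edges = [p for p in itertools.combinations(verts, 2)
         if abs(math.dist(*p) - 2) < 1e-9]
print(len(verts), len(edges))  # 20 44
```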
Assignment 8 CS 750/850 Machine Learning
Problem 1 [33%]
Describe, in words, the results that you would expect if you performed K-means clustering of the eight
shoppers in Figure 10.14 in ISL, on the basis of their sock and computer purchases, with K = 2. Give three
answers, one for each of the variable scalings displayed. Explain.
Problem 2 [33%]
In this problem, you will compute principal components for the Auto dataset. First remove qualitative
features, which cannot be handled by PCA. Then:
1. Compute principal components without scaling features. Plot the result (you can use
2. Compute principal components after scaling features to have unit variance. Plot the result (you can use
3. How do the principal components computed in parts 1 and 2 compare?
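For intuition about what to expect, here is a minimal NumPy sketch on synthetic stand-in data (not the actual Auto dataset): PCA on unscaled features with very different variances is dominated by the large-scale feature, while scaling to unit variance evens things out.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two uncorrelated features on very different scales, e.g. "acceleration" vs "weight"
X = np.column_stack([rng.normal(0, 1, 500), rng.normal(0, 100, 500)])

def pc_variance_ratio(data):
    """Fraction of total variance captured by each principal component."""
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(data, rowvar=False)))[::-1]
    return eigvals / eigvals.sum()

print(pc_variance_ratio(X))                  # first PC captures nearly all variance
print(pc_variance_ratio(X / X.std(axis=0)))  # roughly an even split
```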
Problem 3 [33%]
1. Apply PCA to a subset of the MNIST dataset. Plot the first few principal vectors. Describe what you
observe and interpret the results.
2. Optional(+10 points): Apply k-means clustering to the MNIST dataset both with the original features,
and using at least two different subsets of the principal components. Which approach works best?
Capital Budgeting Alternatives to NPV and IRR Analysis - Finance Train
The discounting cash flow methods of net present value and internal rate of return analysis are common for capital project analysis, but other methods exist.
• Economic Income: applies the same after tax cash flow analysis as NPV modeling, but adds an adjustment to account for the change in the market value of the asset.
• Accounting Income: this method represents the income that would be reported under local accounting regulations. Depreciation will reflect historic asset values (unlike economic income) and interest expense will be deducted rather than captured as part of the discounting of future cash flows.
• Economic Profit (EP) or Economic Value Added (EVA): this method takes net operating profit after tax (NOPAT) and deducts a capital charge based on the weighted average cost of capital (WACC). The following formula represents the EP/EVA method:
EP = EBIT (1 – tax rate) – WACC (capital invested)
• Residual Income (RI): similar to EP, RI starts with net income and deducts an equity charge based on the prior accounting period’s book value of equity.

RI_t = Net Income_t – (required return on common equity × BV of Equity_{t-1})
The NPV of a project is equal to the sum of its residuals incomes discounted by the required return on common equity.
• Claims Valuation: this method is similar to standard NPV modeling. Under the claims valuation method, the analyst discounts the cash flows to the different claimants (debt holders versus equity holders) at their respective costs of capital.
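The EP and RI formulas above reduce to one-liners; the numbers here are made up purely for illustration:

```python
def economic_profit(ebit, tax_rate, wacc, invested_capital):
    """EP = EBIT * (1 - t) - WACC * invested capital (NOPAT minus a capital charge)."""
    return ebit * (1 - tax_rate) - wacc * invested_capital

def residual_income(net_income, req_return_on_equity, book_equity_prev):
    """RI_t = NI_t - r_e * BV_{t-1} (net income minus an equity charge)."""
    return net_income - req_return_on_equity * book_equity_prev

print(economic_profit(1000, 0.25, 0.125, 5000))  # 750 - 625 = 125.0
print(residual_income(400, 0.125, 2400))         # 400 - 300 = 100.0
```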
Some Hockey Stats (DEL Quarterfinal 2018 Game 5 Nürnberg Ice Tigers vs Cologne Sharks)
The Nürnberg Ice Tigers are my favourite hockey team. They have never won the championship, were in the final twice, and have in recent years always played well in the regular season but frequently
lost the first round (quarter final) of the play-offs.
Tomorrow, they will play game five of this year's quarter final against the Cologne Sharks. Both teams have won both of their away games so far, and the Ice Tigers are trying to win their first home
game tomorrow.
This notebook looks into some statistics related to tomorrow's event. The ideas came from Allan Downey.
I am following the wonderful reporting of Sebastian Böhm / Nürnberger Nachrichten and Frank Strube.
For my students, this is in English.
The statistics related to both teams, in this fairly coarse, averaging analysis, seem to suggest that the series is very close. Guess what: this is also what I saw when I watched it. Despite this similarity, the numbers favour Cologne slightly but consistently.
In [1]:
import json
import numpy as np
import scipy.stats as sst
import matplotlib.pylab as plt
%matplotlib inline
import thinkplot
import del_bayes as delba
Some Data¶
Let's look at some basic data.
First, the current play-off quarter final results:

Game                  Home  Away
Ice Tigers - Sharks    1     4
Sharks - Ice Tigers    2     3 (OT)
Ice Tigers - Sharks    2     4
Sharks - Ice Tigers    2     3
In [2]:
NIT_PO_scores = [1, 3, 2, 3]
KEC_PO_scores = [4, 2, 4, 2]
n_goals_NIT_PO18 = 9
n_goals_KEC_PO18 = 12
fname_scores = './data/DEL Saison 2017_2018.txt'
In [3]:
home_scores_NIT, away_scores_NIT = delba.get_goals_per_season(fname_scores, team_id=16)
home_scores_KEC, away_scores_KEC = delba.get_goals_per_season(fname_scores, team_id=15)
Let's look at the priors, i.e., the distribution of goals scored by both teams in the season 2017/2018.
In [4]:
bins = np.linspace(-0.5, 10.5, 12)
things_to_plot = [home_scores_NIT, away_scores_NIT, home_scores_KEC, away_scores_KEC]
labels = ['NIT home', 'NIT away', 'KEC home', 'KEC away']
colors=['cyan', 'blue', 'pink', 'red']
size = [3, 2.5, 2, 1.5]
for i, item in enumerate(things_to_plot):
    hi, bi = np.histogram(item, bins)
    plt.plot(0.5 * (bi[:-1] + bi[1:]), hi, color=colors[i], lw=size[i], label=labels[i])
plt.legend()
plt.xlabel('number of goals in game')
plt.ylabel('absolute number of occurrence \nof given number of goals in season 17/18')
plt.ylim(0.0, None)
In [5]:
avg_goals_per_game_NIT = np.hstack((home_scores_NIT, away_scores_NIT)).mean()
std_goals_per_game_NIT = np.sqrt(np.hstack((home_scores_NIT, away_scores_NIT)).var())
avg_goals_per_game_KEC = np.hstack((home_scores_KEC, away_scores_KEC)).mean()
std_goals_per_game_KEC = np.sqrt(np.hstack((home_scores_KEC, away_scores_KEC)).var())
print("avg., std. of number of goals per games")
print("NIT={:3.2f}, {:3.2f}".format(avg_goals_per_game_NIT, std_goals_per_game_NIT))
print("KEC={:3.2f}, {:3.2f}".format(avg_goals_per_game_KEC, std_goals_per_game_KEC))
avg., std. of number of goals per games
NIT=2.92, 1.69
KEC=2.85, 1.74
On average, Nürnberg scores slightly more goals than KEC, but the means and variances are fairly similar.
Thought 1¶
Suppose that goal scoring in hockey is well modeled by a Poisson process, and that the long-run goal-scoring rate of the Nürnberg Ice Tigers against the Cologne Sharks is 2.9 goals per game. In their
next game, what is the probability that the Ice Tigers score exactly 3 goals? Plot the PMF of k, the number of goals they score in a game.
In [6]:
sst.poisson.pmf(3, avg_goals_per_game_NIT)
In [7]:
possible_goals = np.linspace(0,10,11)
p_goals = sst.poisson.pmf(possible_goals, avg_goals_per_game_NIT)
In [8]:
plt.bar(possible_goals, p_goals)
plt.xlabel('number of goals')
plt.ylabel('P of scoring x goals')
Text(0,0.5,'P of scoring x goals')
Thought 2¶
Assuming the same goal scoring rate, what is the probability of scoring a total of n_goals_NIT_PO18 (=9) goals in four games (this is what they have done in the recent past, in the last four games)
In [9]:
possible_goals = np.linspace(0,30,31)
p_goals = sst.poisson.pmf(possible_goals, avg_goals_per_game_NIT)
In [10]:
temp = delba.AddPmf(p_goals, p_goals)
temp2 = delba.AddPmf(temp, p_goals)
final = delba.AddPmf(temp2, p_goals)
In [11]:
xs = np.arange(final.shape[0])
plt.bar(xs, final)
plt.xlim(0., 30.)
plt.xlabel("number of goals in 4 games")
plt.ylabel("P of scoring x goals in 4 games")
Text(0,0.5,'P of scoring x goals in 4 games')
In [13]:
sst.poisson.pmf(9, 4*avg_goals_per_game_NIT)
The probability of scoring 9 goals in four games is ~0.094 (given that we model using Poisson and that the long term average scoring during the season is representative for an average scoring in the
post season)
The "good news" is that it would have been more likely if they had scored 10, 11, 12, or 13 goals (the Sharks, for comparison, scored 12).
So either Cologne's defense and/or goalie was much improved compared to the regular season in the first four games, or the NIT offense was weaker, or both.
Statistically interesting, the result of the convolution (final[9]) is identical to the analytical result from the poisson distribution (sst.poisson.pmf...).
Also interesting, we're seeing here already some effect of the central limit theorem.
Thought 3¶
Suppose that the long-run goal-scoring rate of the Sharks against the Ice Tigers is 2.6 goals per game. Plot the distribution of t, the time until the Sharks score their first goal.
In their next game, what is the probability that the Sharks score during the first period (that is, the first third of the game)?
In [14]:
time_ax = np.linspace(0., 2.5, 21)
In [15]:
p_t_scoring = sst.expon.cdf(time_ax, scale = 1./2.6)
In [16]:
plt.plot(time_ax, p_t_scoring)
plt.xlim(0., 2.5)
plt.xlabel('time between goals (1=60minutes)')
Text(0.5,0,'time between goals (1=60minutes)')
In [17]:
sst.expon.cdf(1/3, scale=1/2.6)
Given the average scoring rate, which is assumed to be constant over all games and within each game (no difference between any of the periods, although we know that in practice this assumption is violated), there is a slightly better than 50:50 chance that the Sharks score within the first period.
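The first-period probability follows directly from the exponential CDF; here is a dependency-free sketch, with the 2.6 goals-per-game rate taken as the assumed long-run rate from the problem statement:

```python
import math

rate = 2.6  # assumed long-run goals per game
# Exponential CDF F(t) = 1 - exp(-rate * t); one period = 1/3 of a game
p_first_period = 1.0 - math.exp(-rate / 3.0)
print(p_first_period)  # slightly above 0.5, matching the discussion
```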
Thought 4:
Assuming again that the goal scoring rate is 2.6, what is the probability that the Ice Tigers get shut out (that is, don't score for an entire game)?
Answer this question two ways, using the CDF of the exponential distribution and the PMF of the Poisson distribution. Again, this is a statistically interesting thing!
In [18]:
1.0 - sst.expon.cdf(1., scale=1./2.6)
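The two ways of answering are equivalent because the exponential survival function at t = 1 game and the Poisson pmf at k = 0 goals are both exp(-rate); a minimal stdlib check:

```python
import math

rate = 2.6
# Way 1: the exponential waiting time to the first goal exceeds a full game
p_shutout_expon = math.exp(-rate * 1.0)
# Way 2: Poisson pmf at zero goals: exp(-rate) * rate**0 / 0!
p_shutout_poisson = math.exp(-rate) * rate ** 0 / math.factorial(0)
print(p_shutout_expon, p_shutout_poisson)  # both ≈ 0.0743
```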
Adventures into Bayes
First let's look at the prior, given the data of the season 2017/2018. We will start with the same prior distribution for both hockey clubs: both are Gaussian with a mean of 2.9 (that means both teams start with the same prior; we could use a more informative prior, but in the limited time available I did not manage to implement one).
In [20]:
import importlib
suite1 = delba.Hockey('NIT')
suite2 = delba.Hockey('KEC')
thinkplot.Config(xlabel='Goals per game',
And we can update each suite with the scores from the first 4 games.
In [21]:
In [22]:
In [23]:
thinkplot.Config(xlabel='Goals per game',
suite1.Mean(), suite2.Mean()
(2.5931581264473484, 2.7979813677731227)
Using the play-off goal scoring to date to update the priors shows that the teams are really close: the most likely goals-per-game rate for NIT is 2.59, for KEC 2.80.
To predict the number of goals scored in the next game we can compute, for each hypothetical value of $\lambda$, a Poisson distribution of goals scored, then make a weighted mixture of Poissons:
In [24]:
goal_dist1 = delba.MakeGoalPmf(suite1)
goal_dist2 = delba.MakeGoalPmf(suite2)
xlim=[-0.7, 11.5])
goal_dist1.Mean(), goal_dist2.Mean()
(2.59171397162217, 2.7955988586976046)
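The mixture step can be sketched independently of the notebook helpers. Here a toy posterior over $\lambda$ (hypothetical weights, not the fitted suite) is mixed into a predictive goal distribution:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson(lam) random variable."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Hypothetical posterior over the scoring rate lambda (weights sum to 1);
# the real notebook uses the updated suite instead of these toy numbers.
posterior = {2.0: 0.2, 2.6: 0.5, 3.2: 0.3}

# Predictive goal distribution: weighted mixture of Poisson pmfs
goal_dist = [sum(w * poisson_pmf(k, lam) for lam, w in posterior.items())
             for k in range(15)]

# The mixture mean equals the posterior mean of lambda (up to truncation)
mean_goals = sum(k * p for k, p in enumerate(goal_dist))
posterior_mean = sum(lam * w for lam, w in posterior.items())
```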
Now we can compute the probability that the NIT win, lose, or tie in regulation time.
In [25]:
diff = goal_dist1 - goal_dist2
p_win = diff.ProbGreater(0)
p_loss = diff.ProbLess(0)
p_tie = diff.Prob(0)
print('P(win)= {:3.2f}, \nP(loss)={:3.2f}, \nP(tie)= {:3.2f}'.format(p_win, p_loss, p_tie))
P(win)= 0.38,
P(tie)= 0.17
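Subtracting two independent PMFs (what goal_dist1 - goal_dist2 does internally) is itself a convolution; here is a self-contained sketch with toy distributions rather than the fitted ones:

```python
def sub_pmf(p, q):
    """PMF of X - Y for independent X ~ p, Y ~ q (dicts mapping value -> prob)."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x - y] = out.get(x - y, 0.0) + px * qy
    return out

# Toy goal distributions, not the fitted predictive ones
goals_a = {0: 0.2, 1: 0.5, 2: 0.3}
goals_b = {0: 0.3, 1: 0.4, 2: 0.3}

diff = sub_pmf(goals_a, goals_b)
p_win = sum(pr for d, pr in diff.items() if d > 0)
p_loss = sum(pr for d, pr in diff.items() if d < 0)
p_tie = diff.get(0, 0.0)
```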
If the game goes into overtime, we have to compute the distribution of t, the time until the first goal, for each team. For each hypothetical value of $\lambda$, the distribution of t is exponential,
so the predictive distribution is a mixture of exponentials.
Here's what the predictive distributions for t look like.
In [26]:
time_dist1 = delba.MakeGoalTimePmf(suite1)
time_dist2 = delba.MakeGoalTimePmf(suite2)
thinkplot.Config(xlabel='Games until goal',
time_dist1.Mean(), time_dist2.Mean()
(0.3890322826246101, 0.3608324093454575)
In overtime the first team to score wins, so the probability of winning is the probability of generating a smaller value of t:
In [27]:
p_win_in_overtime = time_dist1.ProbLess(time_dist2)
p_adjust = time_dist1.ProbEqual(time_dist2)
p_win_in_overtime += p_adjust / 2
print('p_win_in_overtime', p_win_in_overtime)
p_win_in_overtime 0.4810635311241746
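The tie-splitting logic above (adding half of ProbEqual) can be sketched as follows, with toy discretized first-goal-time distributions standing in for the fitted ones:

```python
def prob_less(p, q):
    """P(X < Y) for independent X ~ p, Y ~ q (dicts mapping value -> prob)."""
    return sum(px * qy for x, px in p.items() for y, qy in q.items() if x < y)

def prob_equal(p, q):
    """P(X = Y) for independent X ~ p, Y ~ q."""
    return sum(px * q.get(x, 0.0) for x, px in p.items())

# Toy discretized first-goal-time distributions (stand-ins for the fitted ones)
t1 = {0.1: 0.3, 0.3: 0.4, 0.5: 0.3}
t2 = {0.1: 0.2, 0.3: 0.5, 0.5: 0.3}

# The first team to score wins; ties from the discretization are split evenly
p_win_ot = prob_less(t1, t2) + prob_equal(t1, t2) / 2
print(p_win_ot)
```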
Finally, we can compute the overall chance that the Ice Tigers win, either in regulation or overtime.
In [28]:
p_win_overall = p_win + p_tie * p_win_in_overtime
print('p_win_overall', p_win_overall)
p_win_overall 0.46317227261461386
Exercise: To make the model of overtime more correct, we could update both suites with 0 goals in one game, before computing the predictive distribution of t. Make this change and see what effect it
has on the results.
In [29]:
time_dist1 = delba.MakeGoalTimePmf(suite1)
time_dist2 = delba.MakeGoalTimePmf(suite2)
p_win_in_overtime = time_dist1.ProbLess(time_dist2)
p_adjust = time_dist1.ProbEqual(time_dist2)
p_win_in_overtime += p_adjust / 2
print('p_win_in_overtime', p_win_in_overtime)
p_win_overall = p_win + p_tie * p_win_in_overtime
print('p_win_overall', p_win_overall)
p_win_in_overtime 0.4793124781132104
p_win_overall 0.46287182347893885
The Stacks project
Lemma 42.68.24. Let $R$ be a local ring with residue field $\kappa $. Let $M$ be a finite length $R$-module. Let $\alpha , \beta , \gamma $ be endomorphisms of $M$. Assume that
1. $I_\alpha = K_{\beta \gamma }$, and similarly for any permutation of $\alpha , \beta , \gamma $,
2. $K_\alpha = I_{\beta \gamma }$, and similarly for any permutation of $\alpha , \beta , \gamma $.
Then
1. The triple $(M, \alpha , \beta \gamma )$ is an exact $(2, 1)$-periodic complex.
2. The triple $(I_\gamma , \alpha , \beta )$ is an exact $(2, 1)$-periodic complex.
3. The triple $(M/K_\beta , \alpha , \gamma )$ is an exact $(2, 1)$-periodic complex.
4. We have
\[ \det \nolimits _\kappa (M, \alpha , \beta \gamma ) = \det \nolimits _\kappa (I_\gamma , \alpha , \beta ) \det \nolimits _\kappa (M/K_\beta , \alpha , \gamma ). \]
How to double commands on PSC05/TW523
So I'm working on a project that involves X10 interfacing using the PSC05 and referencing technicalnote.pdf, which says that ordinary (non-dimming, non-extended) X10 commands should be sent twice
with three cycles in between. When I do this with the PSC05, though, my CM11A detects the commands as being sent twice instead of once like it does with an ordinary plug-in X10 controller.
Is technicalnote.pdf wrong? I've tried this both with "cycle" being defined as a single zero crossing (i.e. three zero crossings in total) and with "cycle" being defined as a full period of the sine
wave (i.e. six zero crossings in total) and both give the same result.
The PSC05's receiver detects the doubling and suppresses the first of the two copies of the command, so I can't use it to figure out what's going on, and I have no other means of looking at the raw
power line.
Anyone have any idea what I should be doing? Thanks in advance.
GED Mathematical Reasoning Formulas
Taking the GED with only a few weeks or even a few days to study?
First and foremost, you should understand that the 2019 GED® Mathematical Reasoning test contains a formula sheet, which displays formulas relating to geometric measurement and certain algebra
concepts. Formulas are provided to test-takers so that they may focus on application, rather than the memorization, of formulas.
However, the test does not provide a list of all basic formulas that will be required to know for the test. This means that you will need to be able to recall many math formulas on the GED.
Below you will find the 2019 GED® Mathematics Formula Sheet followed by a complete list of all Math formulas you MUST have learned before test day, as well as some explanations for how to use them
and what they mean. Keep this list around for a quick reminder when you forget one of the formulas.
Review them all, then take a look at the math topics to begin applying them!
Good luck!
fundiversity provides a lightweight package to compute common functional diversity indices. To get a glimpse of what fundiversity can do, refer to the introductory vignette. The package is built using clear, public design principles inspired by our own experience and user feedback.
You can install the stable version from CRAN with:
Alternatively, you can install the development version with:
fundiversity lets you compute six functional diversity indices: Functional Richness with fd_fric(), intersection between convex hulls with fd_fric_intersect(), Functional Divergence with fd_fdiv(), Rao’s Quadratic Entropy with fd_raoq(), Functional Dispersion with fd_fdis(), and Functional Evenness with fd_feve(). You can get a brief overview of the indices in the introductory vignette.
All indices can be computed either using global trait data or at the site-level:
# If only the trait dataset is specified, considers all species together
# by default
#> site FRic
#> 1 s1 230967.7
# We can also compute diversity across sites
fd_fric(traits_birds, site_sp_birds)
#> site FRic
#> 1 elev_250 171543.730
#> 2 elev_500 185612.548
#> 3 elev_1000 112600.176
#> 4 elev_1500 66142.748
#> 5 elev_2000 20065.764
#> 6 elev_2500 18301.176
#> 7 elev_3000 17530.651
#> 8 elev_3500 3708.735
To compute Rao’s Quadratic Entropy, the user can also provide a distance matrix between species directly:
dist_traits_birds = as.matrix(dist(traits_birds))
fd_raoq(traits = NULL, dist_matrix = dist_traits_birds)
#> site Q
#> 1 s1 170.0519
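Rao's quadratic entropy, computed above from the distance matrix, is commonly defined as the abundance-weighted mean pairwise trait distance; with relative abundances $p_i$ and pairwise distances $d_{ij}$ over $S$ species:

$$Q = \sum_{i=1}^{S}\sum_{j=1}^{S} d_{ij}\, p_i\, p_j$$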
Function Summary
fd_fric() FRic ✅ ✅
fd_fric_intersect() FRic_intersect ✅ ✅
fd_fdiv() FDiv ✅ ✅
fd_feve() FEve ✅ ❌
fd_fdis() FDis ✅ ❌
fd_raoq() Rao’s Q ❌ ❌
Thanks to the future.apply package, all functions (except fd_raoq()) within fundiversity support parallelization through the future backend. To toggle parallelization follow the future syntax:
For more details please refer to the parallelization vignette or use vignette("fundiversity_1-parallel", package = "fundiversity") within R.
Available functional diversity indices
According to Pavoine & Bonsall (2011)’s classification, functional diversity indices can be classified in three “domains” that assess different properties of the functional space: richness, divergence, and regularity. We made sure that the computations in the package are correct in our correctness vignette. fundiversity provides functions to compute indices that assess these three facets at the site scale:

|  | Richness | Divergence | Regularity |
|---|---|---|---|
| α-diversity (= among sites) | FRic with fd_fric() | FDiv with fd_fdiv(), Rao’s QE with fd_raoq(), FDis with fd_fdis() | FEve with fd_feve() |
| β-diversity (= between sites) | FRic pairwise intersection with fd_fric_intersect(); alternatives available in betapart | available in entropart, betapart or hillR | available in BAT |
Several other packages exist that compute functional diversity indices. We did a performance comparison between related packages. We mention some of them here (but do not mention the numerous wrappers around these packages):
adiv Functional Entropy, Functional Redundancy ✅ ❌ ❌
BAT β-diversity indices, Richness, divergence, and evenness with hypervolumes ❌ ❌ ✅
betapart Functional β-diversity ❌ ❌ ❌
entropart Functional Entropy ✅ ✅ ✅
FD FRic, FDiv, FDis, FEve, Rao’s QE, Functional Group Richness ❌ ❌ ❌
hilldiv Dendrogram-based Hill numbers for functional diversity ❌ ❌ ✅
hillR Functional Diversity Hill Numbers ❌ ✅ ✅
hypervolume Hypervolume measure of functional diversity (~FRic) ✅ ❌ ✅
mFD Functional α- and β-diversity indices, including FRic, FDiv, FDis, FEve, FIde, FMPD, FNND, FOri, FSpe, Hill Numbers ✅ ❌ ✅
TPD FRic, FDiv, FEve but for probability distributions ✅ ❌ ❌
vegan Only dendrogram-based FD (treedive()) ✅ ✅ ✅
How to get the $i\epsilon$ prescription for a Faddeev-Popov ghost propagator?
In path integral formalism, for a physical field there will be an $i\epsilon$ term in the action, which comes from identifying the in and out vacuum, and in turn this $i\epsilon$ (with the correct
sign) will naturally appear in the denominator of the corresponding propagator. However for FP ghost, it is only introduced to rewrite the functional determinant in an exponential form, and the issue
of identifying an in and out ghost vacuum never enters the picture, thus no $i\epsilon$ term in the ghost part of the action. Yet all ghost propagators I've seen do have an $i\epsilon$ in the
denominator, so where does it come from?
This post imported from StackExchange Physics at 2014-04-07 12:30 (UCT), posted by SE-user Jia Yiyang
UPDATE(27-Jun-2015): I recently came across the following paragraph in Faddeev and Slavnov's book (Gauge Fields: An Introduction to Quantum Theory, 2nd Edition), page 94:
I take it that they are claiming that for ghosts (and other non-propagating fields) it doesn't matter whether you use $+i\epsilon$ or $-i\epsilon$. The claim would be a very desirable one if true, but I cannot prove it in any simple-minded way by inspecting the Feynman graphs; for example, in the following graph (where wavy lines represent gluons and dashed lines ghosts)
the sign in front of $i\epsilon$ of the gluon propagator must be kept fixed, and changing the sign for that of the ghosts seems to induce a very nontrivial change of the loop integral.
(I would've uploaded the whole page of the book in case someone wants to see the context, but the upload limit is only 1MB.)
EDIT: Let me add some context here. I met Professor Faddeev on a conference meeting and asked him this question during a coffee break. He promptly agreed that the $i\epsilon$ for ghost doesn't appear
naturally in the path integral, since there's no in and out states for them. But due to the limited time window of the coffee break, he only pointed me to his book with Slavnov. So far I've only
found the quoted paragraph which vaguely makes the assertion that the boundary condition doesn't matter, which seems to be suggesting either sign for $i\epsilon$ is fine.
The $i\epsilon$ prescription doesn't seem to depend on which propagator you are talking about. It is naturally introduced when calculating the free Feynman propagator for any field. We don't need to
refer to in and out states at all. It arises when writing (scalar field example) $\langle 0 |T\{ \phi_1(x) \phi_2(y) \}| 0\rangle$ as a Fourier transform of the momentum space result. That is, you
calculate in position space and rearrange to get it in the form $\int \frac{d^4k}{(2\pi)^4} (propagator)$.
What I am referring to applies for the operator approach to QFT - I'm not sure how you get the $i\epsilon$ in the path integral, but given that they are equivalent methods, you should be able to get
the same result, somehow? This seems like a fun little paradox.
@Will - In the Path Integral approach, you do in fact get the $i \epsilon$ prescription as a contribution from the In and Out states. The two methods are equivalent and therefore we should be able to
deduce the $i \epsilon$ prescription for the ghosts without having to invoke the operator approach at all, right?
@Prahar the OP's problem is that there shouldn't be ghost in and out states. Well at least, that's what I think the problem is?
(Because they aren't physical particles).
Hmmm... even in the operator approach we are assuming that the ghosts are in and out states in the free theory. It seems that the only way to get the $i\epsilon$ term is to do this, but make the restriction that ghosts are never in and out states in the full vacuum (that is, don't use them as external states). Can others comment on this?
Related physics.stackexchange.com/q/44250
Bosonic path integrals :
$$Z = \int D\phi ~e^{-i \large \int ~ dx [\frac{1}{2}\phi (\square+m^2)\phi]}$$
or Femionic path integrals (like Fadeev-Popov ghosts) :
$$Z = \int D\eta D \tilde \eta ~e^{-i \large \int ~ dx [\tilde \eta^a \square \eta^a]}$$
are not mathematically well-defined, because of the presence of the imaginary unit in the exponential.
To ensure convergence and meaning of these expressions, the prescription is then: $$\square + m^2 \rightarrow \square + m^2 - i\epsilon$$ When $m=0$, this simply gives the prescription: $$\square \rightarrow \square - i\epsilon$$
Obviously, the form of the propagators comes direcly from this prescription.
Ah!!! I had just worked that out and was about to write my solution. :) +1
I don't think that's right. While it's true that the $i \epsilon$ prescription ensures convergence, it is not introduced ad hoc just to ensure convergence. In fact, the In and Out states precisely provide the extra contribution of $+i \epsilon$ which in the end makes it all work. Now, when doing the ghost path integral it is not clear where a similar contribution of $+i \epsilon$ should come from, since one does not have In and Out ghost states. My argument for this was that we do indeed have In and Out ghost states but that they do not contribute to any physical amplitudes. Any comment?
Oh no! This seems right on the surface, but I agree with Prahar in that you are effectively using in and out states to get this $i\epsilon$ prescription as defined by the path integral. I think a
precise answer will require a careful derivation from the ground up, beginning with the method FP gauge fixing.
@Prahar : "While it's true that the $i\epsilon$ prescription ensures convergence, it is not introduced ad hoc just to ensure convergence". Not at all: it is precisely to ensure convergence that the $i\epsilon$ prescription is introduced. Without that, the coherence of QFT would just be wrong. This is the same trick as the Wick rotation, which brings you to a Euclidean action $S_E$ that has to be positive.
@Trimok - I agree that the $i\epsilon$ prescription is required for the path integral to converge. I am not contesting that. Further, Wick rotation to a Euclidean action is also possible only due to
the presence of the $i\epsilon$. However, I don't think it is introduced "by hand". It follows from the derivation of the path integral from the operator formalism. It's the time ordering in the
operator side that tells us exactly which prescription of $i\epsilon$ to use and a derivation of this prescription can be done. So, no ad hoc introduction of $i\epsilon$ is required.
@Trimok - In fact, I think that is precisely the OP's question. While the $i\epsilon$ prescription can be derived for usual fields, it does not seem to come out naturally from the FP procedure. Either we are not being careful or it must be introduced by hand this time. The second option does not sound too appealing to me. But maybe that's what's required to be done. Note that one often DEFINES the theory using the gauge-fixed path integral (with the correct $i\epsilon$ prescription) without any reference to the original action. In this case, this question does not arise.
@Trimok: Prahar understands me correctly. Besides, I'm a bit skeptical about the convergence argument: for bosonic fields of course no problem, but for Grassmann fields I'm not sure how one defines convergence.
@JiaYiyang : The propagator for ghosts is $\square^{-1}$, while it is $(\square + m^2)^{-1}$ for a scalar field, so it is the same kind of problem.
@Prahar : The path integral formalism is the more fundamental one, while it is true, that the operator formalism is more practical in a lot of cases. The presentation by Zee (Quantum Field Theory in
a nutshell) is very clear and very impressive about that.
@Trimok: I thought you were talking about the convergence of the Gaussian integral, but if you were talking about the convergence of the propagator, then both ${(\square+i\epsilon)}^{-1}$ and ${(\square-i\epsilon)}^{-1}$ look equally valid to me.
@JiaYiyang : It is not the same thing, see Feynman propagators, because the idea is to be allowed to perform a Wick rotation. If you define the Wick rotation to be a rotation by a positive angle of 90°, it is possible with the prescription I gave. With the other prescription, you are stuck.
@Trimok: Let me put it this way: what's the reason for performing a Wick rotation? Does the same reason apply to ghost fields?
In fact, the prescription $-i\epsilon$ allows a Wick rotation, and a Wick rotation corresponds to Euclidean path integrals: $Z = \int D\phi ~e^{- \int dx\, [\frac{1}{2}\phi (P_0^2+\vec P^2+m^2)\phi]}$ and $Z = \int D\eta D \tilde \eta ~e^{- \int dx\, [\tilde \eta^a (P_0^2+\vec P^2) \eta^a]}$, where $P_0, \vec P$ are operators.
@JiaYiyang: The point is that since ghosts do not appear as an external line, the sign of the ghost propagator $i\epsilon$ doesn't seem to matter. But for the purpose of analytic continuation to
imaginary time, the standard sign is needed. (The opposite sign would lead to an analytic continuation across the branch cut, and one would end up in the nonphysical second sheet.)
Faddeev and Slavnov's ''it is convenient'' possibly refers to just this advantage of the ''correct'' sign.
The point is that since ghosts do not appear as an external line, the sign of the ghost propagator iϵ doesn't seem to matter.
But the loop integral I drew seems to change significantly if I change the sign, or is there another sense of "doesn't matter" here?
The opposite sign would lead to an analytic continuation across the branch cut, and one would end up in the nonphysical second sheet
Why would this necessarily be the case?
Faddeev and Slavnov's ''it is convenient'' possibly refers to just this advantage of the ''correct'' sign.
The "it's convenient" doesn't bother me, what bothers me is the "but not necessary", in that I desire it but cannot prove it. For physical fields it would be necessary to have the right sign because
it's naturally embedded in the derivation of the path integral, when selecting in and out states.
@JiaYiyang: I don't know about ''not necessary'' - can't follow the argument in detail. Approximate calculations in QFT are rarely completely transparent unless they are done by yourself or in
endless places.
That one necessarily ends up in the nonphysical sheet is because of the way the analytic continuation has to be done. Frequencies make physical sense only if their imaginary part is negative, and the
continuation to imaginary time must go through the physical domain to be meaningful. If you now do the same analytic continuation on the Green's function with the wrong sign then it is clear that one
immediately passes the branch cut and then is automatically (by continuity on the Riemann surface) in the unphysical sheet.
Because of this I also wouldn't trust the statement that the choice of the sign for ghosts is irrelevant.
Because of this I also wouldn't trust the statement that the choice of the sign for ghosts is irrelevant.
Well, the statement is vague, and that the sign doesn't matter is my own interpretation of it, which might not be what Faddeev meant. However I still think it's a reasonable guess, since the ghosts were introduced to represent a functional determinant $\det M$, which has no $\epsilon$ dependence at all, unlike physical fields.
@JiaYiyang: Always remember that functional integrals are themselves ill-defined and need a concrete interpretation to give proper sense to them.
Even though the sense is approximate only (except for quadratic actions), it does not justify arbitrary formal manipulations - it is easy to give ''sensible'' recipes that produce arbitrarily wrong
results. One must always select the correct manipulations to extract a physical meaning. Unfortunately, without a proper mathematical foundation that doesn't yet exist, what is correct is difficult
to tell before you finished a calculation and can compare with experiment or alternative approximate methods such as lattice simulations.
Earthquake Hazards: Next big one? - Incorporated Research Institutions for Seismology
How can we model earthquakes in the classroom?
Geoscientists use probability to describe potential earthquake effects in a given location. This exercise will explore seismic hazards for various regions, which can be described by the likelihood of
a certain level of ground shaking for a particular region. Once the seismic hazard is quantified, the seismic risk can be estimated by determining the potential effects of the shaking on buildings
and other structures. Students begin by finding the probability of an earthquake of a particular magnitude occurring during different periods in different regions, and comparing these results. Next,
students investigate the probability that the ground in each region will shake by a certain amount, during a given length of time and compare those results. Finally, students consider the societal
implications of these hazards and how this seismic hazard information might be used to improve community resilience.
The development of this resources was funded by the National Science Foundation via Award # 0942518
Students will be able to:
1. Compare and contrast the probability of an earthquake occurring in different regions and relate that probability to the seismic hazard of the regions.
2. Explain in a written essay how a region can have a high seismic hazard but have a low seismic risk.
3. Describe at least three factors that affect the intensity of an earthquake at a given location.
Breaking generalized Diffie-Hellman modulo a composite is no easier than factoring
Authors: E. Biham, D. Boneh, and O. Reingold
The Diffie-Hellman key-exchange protocol may naturally be extended to k>2 parties. This gives rise to the generalized Diffie-Hellman assumption (GDH-Assumption). Naor and Reingold have recently shown
an efficient construction of pseudo-random functions and proved its security based on the GDH-Assumption. In this note, we prove that breaking this assumption modulo a so called Blum-integer would
imply an efficient algorithm for factorization. Therefore, both the key-exchange protocol and the pseudo-random functions are secure as long as factoring Blum-integers is hard. Our reduction
strengthen a previous ``worst-case'' reduction of Shmuely.
In Information Processing Letters (IPL), Vol. 70, 1999, pp. 83--87
Full paper: gzipped-PostScript
Engineering Reference — EnergyPlus 8.6
Simple Hot Water Boiler
The input object Boiler:HotWater provides a simple model for boilers that only requires the user to supply the nominal boiler capacity and thermal efficiency. An efficiency curve can also be used to
more accurately represent the performance of non-electric boilers but is not considered a required input. The fuel type is input by the user for energy accounting purposes.
The model is based on the following three equations:
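The three relationships referred to here can be sketched as follows (variable names are taken from the surrounding text):

$$OperatingPartLoadRatio = \frac{BoilerLoad}{BoilerNomCapacity}$$

$$TheoreticalFuelUse = \frac{BoilerLoad}{NominalThermalEfficiency}$$

$$FuelUsed = \frac{TheoreticalFuelUse}{BoilerEfficiencyCurveOutput} = \frac{BoilerLoad}{NominalThermalEfficiency \times BoilerEfficiencyCurveOutput}$$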
The final equation above includes the impact of the optional boiler efficiency performance curve. To highlight the use of the normalized boiler efficiency curve, the fuel use equation is also shown
in an expanded format. The normalized boiler efficiency curve represents the changes in the boiler’s nominal thermal efficiency due to loading and changes in operating temperature. If the optional
boiler efficiency curve is not used, the boiler’s nominal thermal efficiency remains constant throughout the simulation (i.e., BoilerEfficiencyCurveOutput = 1).
When a boiler efficiency performance curve is used, any valid curve object with 1 or 2 independent variables may be used. The performance curves are accessed through EnergyPlus’ built-in performance
curve equation manager (curve objects). The linear, quadratic, and cubic curve types may be used when boiler efficiency is solely a function of boiler loading, or part-load ratio (PLR). These curve
types are used when the boiler operates at the specified setpoint temperature throughout the simulation. Other curve types may be used when the boiler efficiency can be represented by both PLR and
boiler operating temperature. Examples of valid single and dual independent variable equations are shown below. For all curve types, PLR is always the x independent variable. When using curve types
with 2 independent variables, the boiler water temperature (Twater) is always the y independent variable and can represent either the inlet or outlet temperature depending on user input.
Single independent variable:[LINK]
• BoilerEfficiencyCurve = C1 + C2(PLR) (Linear)
• BoilerEfficiencyCurve = C1 + C2(PLR) + C3(PLR)^2 (Quadratic)
• BoilerEfficiencyCurve = C1 + C2(PLR) + C3(PLR)^2 + C4(PLR)^3 (Cubic)
Dual independent variables:[LINK]
• BoilerEfficiencyCurve = C1 + C2(PLR) + C3(PLR)^2 + (C4 + C5(PLR) + C6(PLR)^2)(Twater) (QuadraticLinear)
• BoilerEfficiencyCurve = C1 + C2(PLR) + C3(PLR)^2 + C4(Twater) + C5(Twater)^2 + C6(PLR)(Twater) (Biquadratic)
• BoilerEfficiencyCurve = C1 + C2(PLR) + C3(PLR)^2 + C4(Twater) + C5(Twater)^2 + C6(PLR)(Twater) + C7(PLR)^3 + C8(Twater)^3 + C9(PLR)^2(Twater) + C10(PLR)(Twater)^2 (Bicubic)
When a boiler efficiency curve is used, a constant efficiency boiler may be specified by setting C1 = 1 and all other coefficients to 0. A boiler with an efficiency proportional to part-load ratio or
which has a non-linear relationship of efficiency with part-load ratio will typically set the coefficients of a linear, quadratic, or cubic curve to non-zero values. Using other curve types allows a
more accurate simulation when boiler efficiency varies as a function of part-load ratio and as the boiler outlet water temperature changes over time due to loading or as changes occur in the water
temperature setpoint.
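As a rough illustration of how such a curve might be evaluated, consider a biquadratic efficiency modifier sketched in Python. This is not EnergyPlus source code, and the coefficient values in the example are made up:

```python
def biquadratic_efficiency(coeffs, plr, t_water):
    """Evaluate a biquadratic boiler efficiency modifier:
    C1 + C2*PLR + C3*PLR^2 + C4*Tw + C5*Tw^2 + C6*PLR*Tw."""
    c1, c2, c3, c4, c5, c6 = coeffs
    return (c1 + c2 * plr + c3 * plr ** 2
            + c4 * t_water + c5 * t_water ** 2
            + c6 * plr * t_water)

# Constant-efficiency case: C1 = 1 and all other coefficients zero,
# so the modifier is 1 regardless of loading or water temperature.
assert biquadratic_efficiency((1, 0, 0, 0, 0, 0), 0.5, 60.0) == 1.0
```

The boiler's nominal thermal efficiency would then be multiplied by this modifier at each time step.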
The parasitic electric power is calculated based on the user-defined parasitic electric load and the operating part load ratio calculated above. The model assumes that this parasitic power does not
contribute to heating the water.
Pparasitic = parasitic electric power (W), average for the simulation time step
Pload = parasitic electric load specified by the user (W)
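The relation between the two quantities defined above can be sketched as a simple proportional model. The function name is illustrative and mirrors the description in the text, not verified EnergyPlus internals:

```python
def parasitic_electric_power(p_load_watts, part_load_ratio):
    """Average parasitic electric power over the time step, assumed
    proportional to the operating part load ratio (none of this power
    heats the water)."""
    return p_load_watts * part_load_ratio

# e.g. a 100 W parasitic load at 40% part load draws 40 W on average
assert abs(parasitic_electric_power(100.0, 0.4) - 40.0) < 1e-9
```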
Description of Model[LINK]
A steam boiler is the essential part of a building steam heating system and can be described as the primary driver of the steam loop. It is the component that maintains the desired loop temperature.
The emphasis in EnergyPlus was on developing a building simulation model for the steam boiler with the ability to model detailed boiler performance without the cost of exhaustive user inputs to the boiler model. The Boiler:Steam input object is used on the plant loop supply side of EnergyPlus with the primary purpose of supplying steam to the heating coils, which constitute the demand side of
the loop.
The steam boiler is a variable mass flow rate device. The mass flow rate of steam through the boiler is determined by the heating demand on the loop, which in turn is determined by the equipment hooked to the demand side of the loop, namely the steam coils and the hot water heater. In short, each steam coil determines the mass flow rate of steam required to heat its zone to the required
setpoint, and the mixer sums up the total steam demanded by the individual coils and reports it to the boiler via the pump.
The figure describes the rudimentary loop structure with steam flowing from coils to boiler. It is essential to mention that it is the coils that determine the mass of steam required; the boiler simply delivers the required mass flow at the desired temperature, provided it is adequately sized. The algorithm for determining the mass flow rate is structured on the demand side, and the variable-flow
boiler has no role in determining the steam mass flow.
The figure outlines the simple steam boiler model. Subcooled water enters the variable-flow boiler through the pump; the boiler adds energy to the water stream by consuming fuel, with boiler losses accounted for via the boiler efficiency. The boiler delivers steam at a quality of 1.0, i.e., at the saturated condition.
The advantage of steam heating systems over hot water is the high latent heat carrying capacity of steam, which reduces the mass flow rate of the fluid required. The amount of superheat and subcooling heat transfer in steam heating systems is negligible; latent heat transfer accounts for almost all of the heat exchange into the zones via steam-to-air heat exchangers.
Boiler load is a summation of the sensible and latent heat added to the water stream, as described by the following equation. The mass flow rate through the boiler is known, while ΔT is the temperature difference between the boiler inlet and boiler outlet. The latent heat of steam is calculated at the loop operating temperature.
Theoretical fuel used is calculated with the following equation. Boiler efficiency is a user input and accounts for all the losses in the steam boiler.
The operating part load ratio is calculated with the following equation; it is the ratio of boiler load to boiler nominal capacity and is later used to calculate the actual fuel consumption.
The actual fuel consumption by the boiler is calculated using the following equation, where C1, C2, and C3 are the part load ratio coefficients.
Essentially the boiler model provides a first order approximation of performance for fuel oil, gas, and electric boilers. Boiler performance is based on theoretical boiler efficiency and a single
quadratic fuel use-part load ratio curve represented in the equation above. This single curve accounts for all combustion inefficiencies and stack losses.
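The chain of calculations described above (boiler load, theoretical fuel use, part-load-adjusted fuel use) can be sketched as follows. All names are illustrative, and the sketch assumes the quadratic curve scales the theoretical fuel use; the exact form applied by EnergyPlus should be checked against the source:

```python
def steam_boiler_fuel_use(load_w, nominal_capacity_w, efficiency, c1, c2, c3):
    """Sketch of the steam boiler fuel-use chain described above."""
    plr = min(load_w / nominal_capacity_w, 1.0)   # operating part load ratio
    theoretical = load_w / efficiency             # theoretical fuel used
    curve = c1 + c2 * plr + c3 * plr ** 2         # quadratic part-load curve
    return theoretical * curve

# With c1 = 1 and c2 = c3 = 0 the curve is neutral and only the
# theoretical boiler efficiency matters: 50 kW load at 80% efficiency.
assert steam_boiler_fuel_use(50000.0, 100000.0, 0.8, 1.0, 0.0, 0.0) == 62500.0
```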
The control algorithm for a steam boiler is an important issue. The user may want the boiler to be undersized, in which case it will not be able to meet the demand-side steam flow request: the boiler load exceeds the boiler nominal capacity, so the boiler operates at its nominal capacity but is unable to meet the plant heating demand. Pseudo code from EnergyPlus is used below
to describe the control logic of the steam boiler simulation.
*********************PSEUDO CODE SECTION STARTS***********************
At start of simulation an initial value of steam mass flow rate is calculated. This is required to start the flow of steam around the loop.
Calculate the boiler supply steam mass flow rate at start of simulation.
ELSE ! Not first time through
Steam boiler calculations rely heavily on the variable ṁb, the boiler mass flow rate. For the preliminary calculations this variable is assigned equal to the mass flow at the boiler inlet node.
The boiler delta temperature, the difference between the inlet and outlet nodes, is calculated next. This calculation is used to determine the various boiler control situations.
If the temperature difference calculated with the previous equation is zero, then the boiler only needs to supply latent heat to the steam; otherwise the boiler performs its normal load calculation, providing both sensible and latent heat to the inlet stream.
Sometimes the boiler load QB is greater than the load requested by the demand side at the current time step. This may occur because the boiler inlet conditions are from the previous time step and there has been a sudden fall in the steam mass flow requested by the demand side. The boiler then recalculates its mass flow and adjusts to the new conditions.
Boiler load is set equal to the new boiler heating demand and steam mass flow rate is recalculated.
The requested load may instead exceed the boiler nominal capacity, i.e., its maximum heating capacity. In this case the requested steam mass flow is not met and the zone is not heated adequately. This happens if the boiler is undersized. The steam mass flow rate is recalculated at nominal capacity.
Boiler load is set equal to boiler nominal capacity and steam mass flow rate recalculated.
End If statement for the boiler load control algorithm. This algorithm covers all control conditions that might occur while simulating a system in EnergyPlus.
*********************PSEUDO CODE SECTION ENDS***********************
If the boiler operating pressure exceeds the maximum allowable boiler pressure, the simulation trips and outputs a warning, notifying the user about potential system pressure
sizing problems.
Integration of the steam boiler simulation model in EnergyPlus required developing a number of subroutines, which operate in sequence. These subroutines read inputs from the input file, initialize the variables used in the boiler simulation model, simulate the boiler performance, update the node connections, and report the required variables. In case the user has difficulty with
boiler inputs, provisions have been made to autosize the boiler nominal capacity and maximum steam flow rate; these two values play an important role in sizing the boiler.
Model Assumptions[LINK]
The EnergyPlus boiler model is “simple” in the sense that it requires the user to supply the theoretical boiler efficiency. The combustion process is not considered in the model. The model is independent of the fuel type, which is input by the user for energy accounting purposes only. This makes it well suited to a building simulation program: it uses only a modest amount of
simulation run time, yet provides fairly good sizing parameters for an actual boiler.
It is assumed that the steam boiler operates to maintain a desired temperature, that temperature being the saturation temperature of steam; corresponding to this saturation temperature there exists a single value of saturation pressure at which the loop operates. Hence the boiler could be either saturation-pressure controlled or temperature controlled. Since users typically have a better idea of
steam temperatures than pressures, the boiler inputs are designed for temperature control.
Nomenclature for Steam Loop[LINK]
Steam Loop Nomenclature
QB Boiler heat transfer. W.
QB,N Boiler nominal capacity. W.
OPLR Boiler operating part load ratio.
ΔTsc Degree of subcooling in coil.
ΔTinout Temperature difference across the steam boiler. ºC.
ρw Density of condensate entering the pump. kg/m³.
QDes Design load on the steam coil. W.
hf,n Enthalpy of fluid at point n on the T-s diagram. J/kg.
PFrac Fraction of pump full load power. W.
Fm,f Fractional motor power lost to fluid. W.
Qa,l Heating load on the air loop steam coil. W.
Qz,c Heating load on the zone steam coil. W.
hfg,Tloop Latent heat of steam at loop operating temperature. J/kg.
hfg Latent heat of steam. J/kg.
QL,H Latent heat part of the heating coil load. W.
ΔQloss Loop losses in steam coil. W.
ΔTloop Loop temperature difference.
ṁa Mass flow rate for steam coil. kg/s.
ṁin Mass flow rate of steam entering the steam coil. kg/s.
ṁa,l Mass flow rate of steam for air loop steam coil. kg/s.
ṁz,c Mass flow rate of steam for zone steam coil. kg/s.
ṁs Mass flow rate of steam. kg/s.
ṁloop Mass flow rate of steam for the steam loop. kg/s.
ṁ Mass of condensate entering the pump. kg/s.
ṁa,max Maximum allowed mass flow rate of air. kg/s.
ṁS,max Maximum mass flow rate of steam. kg/s.
ṁB,Supply Maximum steam mass flow rate supplied by boiler. kg/s.
V̇w,max Maximum volume flow rate of condensate in pump. m³/s.
V̇w,loop Maximum volume flow rate of condensate in steam loop. m³/s.
Ta,in,min Minimum inlet air temperature possible. ºC.
Pn Nominal power capacity for condensate pump. W.
Pnom Nominal power of the pump. W.
Hn Nominal pump head. m.
V̇nom Nominal volume flow rate through the condensate pump. m³/s.
PLR Part load ratio for condensate pump.
ηp Pump efficiency.
ηm Pump motor efficiency.
P Pump power. W.
QS,H Sensible heat part of the heating coil load. W.
Tsp Setpoint temperature of the zone. ºC.
Ta,out,SP Setpoint air outlet temperature for the steam coil. ºC.
PS Shaft power of the pump. W.
cp,a Specific heat capacity for air. J/kg·K.
cp,w Specific heat capacity for water. J/kg·K.
ηB Steam boiler efficiency.
Ta,in Temperature of air entering the coil. ºC.
Ta Temperature of air entering the steam coil. ºC.
Ta,out Temperature of air leaving the coil. ºC.
Ts,in Temperature of steam entering the coil. ºC.
Fin Theoretical fuel consumption by the steam boiler. W.
ṁcoils,R Total mass flow rate requested by all the steam coils. kg/s.
V̇ Volume flow rate of condensate entering the pump. m³/s.
Tw,out Water outlet temperature from pump. ºC.
ASHRAE Handbook. 1996. HVAC Systems and Equipment, Air Conditioning and Heating Systems. Chapter 10, Steam Systems. pp. 10.1-10.16. 1996.
BLAST 3.0 Users Manual. 1999. Building Systems Laboratory. Urbana-Champaign: Building Systems Laboratory, Department of Mechanical and Industrial Engineering, University of Illinois.
Chillar, R.J. 2005. “Development and Implementation of a Steam Loop In The Building Energy Simulation Program EnergyPlus,” M.S. Thesis, Department of Mechanical and Industrial Engineering, University
of Illinois at Urbana-Champaign.
TRNSYS 16 User Manual. 2004. A Transient System Simulation Program. Solar Energy Laboratory, Madison. University of Wisconsin-Madison.
El-Wakil, M. M. 1984. Power Plant Technology, McGraw Hill, New York, pp. 30-72.
Babcock & Wilcox. 1978. Steam: Its Generation and Use, The Babcock & Wilcox Company, New York, Sections I, II, IV, and VII.
S.A. Klein. 2004. Engineering Equation Solver EES. University of Wisconsin-Madison.
After a particularly annoying update today - D•Scribe
□ While it’s true that a progress bar is guessing (since it doesn’t know what will take more time on your computer), it should still finish when at 100%
☆ This. The guessing part comes from the time it takes to do the tasks, but you know the number of tasks. So a progress bar should only reach 100% when all the tasks are completed.
For example, you might have a big process that performs 3 other small tasks and then finishes. You could reasonably assume that each small task is 33% of the big process, so after the
first finishes you get 33% progress, then 66% after the second and 100% after the third. When the bar reaches 100%, the third task has finished, so your process has finished too.
What you don’t know is how much time each small task takes, so if the first task needs 20 seconds and the following tasks take just 5, you’ll spend 2/3 of the time on the first 33% of the
progress bar, and then the remaining 66% gets done in 1/3 of the time.
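The weighting idea in the replies above can be sketched as a toy calculation (task counts and durations are made up):

```python
def progress_points(num_tasks):
    """Percentage shown after each task completes: equal weight per task,
    reaching exactly 100% only when the last task finishes."""
    return [100 * (i + 1) // num_tasks for i in range(num_tasks)]

# Three tasks taking 20 s, 5 s and 5 s still show 33%, 66%, 100% --
# the first 33% of the bar simply takes most of the wall-clock time.
print(progress_points(3))  # -> [33, 66, 100]
```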
* Introduction
Rheology is a general theory of the relations between tension, deformation, and rate of deformation in real materials. It grows out of the elasticity theory of solids and the theory of viscous
liquids. The sharp line between solids and liquids becomes blurred in rheology, which is expressed in the Greek slogan "panta rhei" ("all flows"); this slogan is reflected in the name
"rheology" of the new discipline.
* Main terms
The expressions tension, deformation, and rate of deformation will be thoroughly studied. The large deformation tensor will also be introduced, and the necessity of distinguishing between non-deformed
and deformed states in the theory of large deformations will be emphasized.
* Rheological classification of materials
The main categories of materials will be given: Hooke's (classical elastic) material, Newtonian and non-Newtonian viscous liquids, viscoelastic materials, and plastic materials. The graphic models of these
materials will be introduced. It will also be shown how more complex rheological materials may be described by combinations of these simple models.
* Rheological behaviour of special materials
This part of the lecture may be modified according to the speciality of the students who attend it. Originally it was prepared for students of polymer science, but it can also be devoted
to the study of biomaterials (i.e., biorheology) or to the study of non-Newtonian liquids and their technological treatment. For several years it has been taught to postgraduate students of
geophysics and geology.
TTE simulation data manipulations
Keaven Anderson
We attempt to provide a big-picture view of what is involved in a clinical trial simulation with individual time-to-event data generated for each patient. Primary interest is in group sequential
trials, usually with a single endpoint, although extensions could be made.
Results data table
At the time of simulation planning, an analysis plan for the trial is needed. For a group sequential design, a data table to store results is generated; the dimensions and variables planned
for storage are generally decided up front. As a simple example, for a group sequential design with 3 analyses planned, 15 data items per analysis, and 10,000 simulations planned, a data table
with 30,000 rows and 15 columns can be used to store summary results. As each trial simulation proceeds, a row is updated with results for each analysis.
Simulated trial dataset generation
For each simulated trial, an initial table is generated with information at the patient level. If trials are generated sequentially, the space needed for this data table can be re-used, never
requiring allocation of more space. Each row contains data for a single patient. As an example, we could simulate a trial with 500 patients and 10 data items per patient; the data items would be in
columns, the patients in rows.
Dataset manipulations for analysis
Simulated trial data need to be manipulated to do any individual analysis (interim or final) for a clinical trial. The following operations are needed:
1. Ordering data
2. Selecting a subset for analysis
3. Calculating individual patient results for the subset at the time of analysis.
4. Performing statistical tests and computing treatment effect estimates as well as other summaries that will be included in the results summary dataset described above.
Types of computations needed:
1. Number of subjects by treatment group
2. Number of events by treatment group
3. Kaplan-Meier estimation of survival curves
4. Observed minus expected computations as well as weighting for logrank, weighted logrank calculations.
5. Using the survival package to compute hazard ratio estimates.
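Item 3 above, Kaplan-Meier estimation, can be sketched without any packages. This is a plain-Python illustration of the estimator, not the implementation used by simtrial (which relies on R's survival machinery):

```python
def kaplan_meier(times, events):
    """times: observed times; events: 1 = event, 0 = censored.
    Returns (time, survival estimate) pairs at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t and e == 1)
        if deaths > 0:
            surv *= 1 - deaths / at_risk          # KM product-limit step
            curve.append((t, surv))
        # drop everyone (events and censored) observed at time t
        removed = sum(1 for tt, _ in data if tt == t)
        at_risk -= removed
        i += removed
    return curve

print(kaplan_meier([1, 2, 2, 3], [1, 1, 0, 1]))  # -> [(1, 0.75), (2, 0.5), (3, 0.0)]
```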
Flow for simulating group sequential: one scenario algorithm
Group sequential design simulation flow:
• Generate a trial.
• Analyze repeatedly.
• Summarize across simulated trials.
USACO 2020 US Open Contest, Silver
Problem 3. The Moo Particle
Quarantined for their protection during an outbreak of COWVID-19, Farmer John's cows have come up with a new way to alleviate their boredom: studying advanced physics! In fact, the cows have even
managed to discover a new subatomic particle, which they have named the "moo particle".
The cows are currently running an experiment involving $N$ moo particles ($1 \leq N \leq 10^5$). Particle $i$ has a "spin" described by two integers $x_i$ and $y_i$ in the range $-10^9 \ldots 10^9$
inclusive. Sometimes two moo particles interact. This can happen to particles with spins $(x_i, y_i)$ and $(x_j, y_j)$ only if $x_i \leq x_j$ and $y_i \leq y_j$. Under these conditions, it's possible
that exactly one of these two particles may disappear (and nothing happens to the other particle). At any given time, at most one interaction will occur.
The cows want to know the minimum number of moo particles that may be left after some arbitrary sequence of interactions.
INPUT FORMAT (file moop.in):
The first line contains a single integer $N$, the initial number of moo particles. Each of the next $N$ lines contains two space-separated integers, indicating the spin of one particle. Each particle
has a distinct spin.
OUTPUT FORMAT (file moop.out):
A single integer, the smallest number of moo particles that may remain after some arbitrary sequence of interactions.
-1 0
0 -1
One possible sequence of interactions:
• Particles 1 and 4 interact, particle 1 disappears.
• Particles 2 and 4 interact, particle 4 disappears.
• Particles 2 and 3 interact, particle 3 disappears.
Only particle 2 remains.
-1 3
Particle 3 cannot interact with either of the other two particles, so it must remain. At least one of particles 1 and 2 must also remain.
• Test cases 3-6 satisfy $N\le 1000.$
• Test cases 7-12 satisfy no additional constraints.
Problem credits: Dhruv Rohatgi
More Precise Estimation of Lower-Level Interaction Effects in Multilevel Models
Version 2 2018-05-02, 17:58
Version 1 2018-03-20, 18:39
posted on 2018-05-02, 17:58 authored by Tom Loeys, Haeike Josephy, Marieke Dewitte
In hierarchical data, the effect of a lower-level predictor on a lower-level outcome may often be confounded by an (un)measured upper-level factor. When such confounding is left unaddressed, the
effect of the lower-level predictor is estimated with bias. Separating this effect into a within- and between-component removes such bias in a linear random intercept model under a specific set of
assumptions for the confounder. When the effect of the lower-level predictor is additionally moderated by another lower-level predictor, an interaction between both lower-level predictors is included
into the model. To address unmeasured upper-level confounding, this interaction term ought to be decomposed into a within- and between-component as well. This can be achieved by first multiplying
both predictors and centering that product term next, or vice versa. We show that while both approaches, on average, yield the same estimates of the interaction effect in linear models, the former
decomposition is much more precise and robust against misspecification of the effects of cross-level and upper-level terms, compared to the latter.
This work was not supported.
Frequently Asked Questions
If you find GPAW useful in your research please cite this GPAW review:
Jens Jørgen Mortensen, Ask Hjorth Larsen, Mikael Kuisma et al.
J. Chem. Phys. 160, 092503 (2024)
together with the ASE review (see How should I cite ASE?).
You are welcome to cite also the original GPAW reference and an earlier GPAW review:
J. J. Mortensen, L. B. Hansen, and K. W. Jacobsen
Phys. Rev. B 71, 035109 (2005)
J. Enkovaara, C. Rostgaard, J. J. Mortensen et al.
J. Phys.: Condens. Matter 22, 253202 (2010)
Please also cite those of the following that are relevant to your work:
(updated on 18 Mar 2021)
The total number of citations above is the number of publications citing at least one of the other papers, not the sum of all citation counts.
BibTex (doc/GPAW.bib):
author = {Mortensen, J. J. and Hansen, L. B. and Jacobsen, K. W.},
title = {Real-space grid implementation of the projector augmented wave method},
journal = {Phys. Rev. B},
volume = {71},
number = {3},
pages = {035109},
year = {2005},
doi = {10.1103/PhysRevB.71.035109}
author = {Enkovaara, J. and Rostgaard, C. and Mortensen, J. J. and
Chen, J. and Du{\l}ak, M. and Ferrighi, L. and
Gavnholt, J. and Glinsvad, C. and Haikola, V. and
Hansen, H. A. and Kristoffersen, H. H. and Kuisma, M. and
Larsen, A. H. and Lehtovaara, L. and Ljungberg, M. and
Lopez-Acevedo, O. and Moses, P. G. and Ojanen, J. and
Olsen, T. and Petzold, V. and Romero, N. A. and
Stausholm-M{\o}ller, J. and Strange, M. and
Tritsaris, G. A. and Vanin, M. and Walter, M. and
Hammer, B. and H{\"a}kkinen, H. and Madsen, G. K. H. and
Nieminen, R. M. and N{\o}rskov, J. K. and Puska, M. and
Rantala, T. T. and Schi{\o}tz, J. and Thygesen, K. S. and
Jacobsen, K. W.},
title = {Electronic structure calculations with {GPAW}: a real-space implementation of the projector augmented-wave method},
journal = {J. Phys.: Condens. Matter},
volume = {22},
number = {25},
pages = {253202},
year = {2010},
doi = {10.1088/0953-8984/22/25/253202}
author = {Susi Lehtola and Conrad Steigemann and Micael
J. T. Oliveira and Miguel A. L. Marques},
title = {Recent developments in libxc -- A comprehensive library of functionals for density functional theory},
journal = {SoftwareX},
volume = {7},
pages = {1-5},
year = {2018},
issn = {2352-7110},
url = {https://www.sciencedirect.com/science/article/pii/S2352711017300602},
keywords = {Density functional theory, Exchange–correlation, Local
density approximations, Generalized gradient
approximations, meta-GGA approximations},
abstract = {libxc is a library of exchange–correlation functionals
for density-functional theory. We are concerned with
semi-local functionals (or the semi-local part of
hybrid functionals), namely local-density
approximations, generalized-gradient approximations,
and meta-generalized-gradient
approximations. Currently we include around 400
functionals for the exchange, correlation, and the
kinetic energy, spanning more than 50 years of
research. Moreover, libxc is by now used by more
than 20 codes, not only from the atomic, molecular,
and solid-state physics, but also from the quantum
chemistry communities.},
doi = {10.1016/j.softx.2017.11.002}
author = {Walter, Michael and H{\"a}kkinen, Hannu and Lehtovaara, Lauri and
Puska, Martti and Enkovaara, Jussi and Rostgaard, Carsten and
Mortensen, Jens J{\o}rgen},
title = {Time-dependent density-functional theory in the projector augmented-wave method},
journal = {J. Chem. Phys.},
volume = {128},
number = {24},
pages = {244101},
year = {2008},
doi = {10.1063/1.2943138}
author = {Larsen, A. H. and Vanin, M. and Mortensen, J. J. and
Thygesen, K. S. and Jacobsen, K. W.},
title = {Localized atomic basis set in the projector augmented wave method},
journal = {Phys. Rev. B},
volume = {80},
number = {19},
pages = {195112},
year = {2009},
doi = {10.1103/PhysRevB.80.195112}
author = {Yan, Jun and Mortensen, Jens J. and Jacobsen, Karsten W. and
Thygesen, Kristian S.},
title = {Linear density response function in the projector augmented wave method: Applications to solids, surfaces, and interfaces},
journal = {Phys. Rev. B},
volume = {83},
number = {24},
pages = {245122},
year = {2011},
doi = {10.1103/PhysRevB.83.245122}
author = {H\"user, Falco and Olsen, Thomas and Thygesen, Kristian S.},
title = {Quasiparticle GW calculations for solids, molecules, and two-dimensional materials},
journal = {Phys. Rev. B},
volume = {87},
number = {23},
pages = {235132},
year = {2013},
doi = {10.1103/PhysRevB.87.235132}
author = {Held, Alexander and Walter, Michael},
title = {Simplified continuum solvent model with a smooth cavity based on volumetric data},
journal = {J. Chem. Phys.},
volume = {141},
number = {17},
pages = {174108},
year = {2014},
doi = {10.1063/1.4900838}
author = {Kuisma, M. and Sakko, A. and Rossi, T. P. and Larsen, A. H. and Enkovaara, J. and Lehtovaara, L. and Rantala, T. T.},
title = {Localized surface plasmon resonance in silver nanoparticles: Atomistic first-principles time-dependent density-functional theory calculations},
journal = {Phys. Rev. B},
volume = {91},
number = {11},
pages = {115431},
year = {2015},
doi = {10.1103/PhysRevB.91.115431}
author = {Rossi, Tuomas P. and Kuisma, Mikael and Puska, Martti J. and
Nieminen, Risto M. and Erhart, Paul},
title = {Kohn--Sham Decomposition in Real-Time Time-Dependent Density-Functional Theory: An Efficient Tool for Analyzing Plasmonic Excitations},
journal = {J. Chem. Theory Comput.},
volume = {13},
number = {10},
pages = {4779-4790},
year = {2017},
doi = {10.1021/acs.jctc.7b00589}
author = {Kastlunger, Georg and Lindgren, Per and Peterson, Andrew A.},
title = {Controlled-Potential Simulation of Elementary Electrochemical Reactions: Proton Discharge on Metal Surfaces},
journal = {The Journal of Physical Chemistry C},
volume = {122},
number = {24},
pages = {12771-12781},
year = 2018,
doi = {10.1021/acs.jpcc.8b02465},
In English: “geepaw” with a long “a”.
In Danish: first the letter “g”, then “pav”: “g-pav”.
In Finnish: in genuinely Finnish fashion, “kee-pav”.
In Polish: “gyeh” as in “Gierek”, “pav” as in “paw” (peacock): “gyeh-pav”.
For architecture dependent settings see the Platforms and architectures page.
Compilation of the C part failed:
[~]$ python2.4 setup.py build_ext
building '_gpaw' extension
pgcc -fno-strict-aliasing -DNDEBUG -O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -D_GNU_SOURCE -fPIC -fPIC -I/usr/include/python2.4 -c c/localized_functions.c -o build/temp.linux-x86_64-2.4/c/localized_functions.o -Wall -std=c99
pgcc-Warning-Unknown switch: -fno-strict-aliasing
PGC-S-0040-Illegal use of symbol, _Complex (/usr/include/bits/cmathcalls.h: 54)
You are probably using a different compiler than the one used to compile Python. Undefine the environment variables CC, CFLAGS, and LDFLAGS with:
# sh/bash users:
unset CC; unset CFLAGS; unset LDFLAGS
# csh/tcsh users:
unsetenv CC; unsetenv CFLAGS; unsetenv LDFLAGS
and try again.
If you are doing a spin-polarized calculation for an isolated molecule, then you should set the Fermi temperature to a low value.
You can also try to set the number of grid points to be divisible by 8. Consult the Notes on performance page.
Computing roots of polynomials by quadratic clipping
We present an algorithm which is able to compute all roots of a given univariate polynomial within a given interval. In each step, we use degree reduction to generate a strip bounded by two quadratic
polynomials which encloses the graph of the polynomial within the interval of interest. The new interval(s) containing the root(s) is (are) obtained by intersecting this strip with the abscissa axis.
In the case of single roots, the sequence of the lengths of the intervals converging towards the root has convergence rate 3. For double roots, the convergence rate is still superlinear (3/2).
We show that the new technique compares favorably with the classical technique of Bézier clipping.
• Bézier clipping
• Polynomial
• Root finding
ASJC Scopus subject areas
• Modeling and Simulation
• Automotive Engineering
• Aerospace Engineering
• Computer Graphics and Computer-Aided Design
EVP_KDF-KRB5KDF(7) OpenSSL EVP_KDF-KRB5KDF(7)
EVP_KDF-KRB5KDF - The RFC3961 Krb5 KDF EVP_KDF implementation
Support for computing the KRB5KDF KDF through the EVP_KDF API.
The EVP_KDF-KRB5KDF algorithm implements the key derivation function
defined in RFC 3961, section 5.1 and is used by Krb5 to derive session
keys. Three inputs are required to perform key derivation: a cipher,
(for example AES-128-CBC), the initial key, and a constant.
"KRB5KDF" is the name for this implementation; it can be used with the
EVP_KDF_fetch() function.
Supported parameters
The supported parameters are:
"properties" (OSSL_KDF_PARAM_PROPERTIES) <UTF8 string>
"cipher" (OSSL_KDF_PARAM_CIPHER) <UTF8 string>
"key" (OSSL_KDF_PARAM_KEY) <octet string>
These parameters work as described in "PARAMETERS" in EVP_KDF(3).
"constant" (OSSL_KDF_PARAM_CONSTANT) <octet string>
This parameter sets the constant value for the KDF. If a value is
already set, the contents are replaced.
A context for KRB5KDF can be obtained by calling:
EVP_KDF *kdf = EVP_KDF_fetch(NULL, "KRB5KDF", NULL);
EVP_KDF_CTX *kctx = EVP_KDF_CTX_new(kdf);
The output length of the KRB5KDF derivation is specified via the keylen
parameter to the EVP_KDF_derive(3) function, and MUST match the key
length for the chosen cipher or an error is returned. Moreover, the
constant's length must not exceed the block size of the cipher. Since
the KRB5KDF output length depends on the chosen cipher, calling
EVP_KDF_CTX_get_kdf_size(3) to obtain the requisite length returns the
correct length only after the cipher is set. Prior to that
EVP_MAX_KEY_LENGTH is returned. The caller must allocate a buffer of
the correct length for the chosen cipher, and pass that buffer to the
EVP_KDF_derive(3) function along with that length.
This example derives a key using the AES-128-CBC cipher:
EVP_KDF *kdf;
EVP_KDF_CTX *kctx;
unsigned char key[16] = "01234...";
unsigned char constant[] = "I'm a constant";
unsigned char out[16];
size_t outlen = sizeof(out);
OSSL_PARAM params[4], *p = params;
kdf = EVP_KDF_fetch(NULL, "KRB5KDF", NULL);
kctx = EVP_KDF_CTX_new(kdf);
*p++ = OSSL_PARAM_construct_utf8_string(OSSL_KDF_PARAM_CIPHER,
"AES-128-CBC", 0);
*p++ = OSSL_PARAM_construct_octet_string(OSSL_KDF_PARAM_KEY,
key, (size_t)16);
*p++ = OSSL_PARAM_construct_octet_string(OSSL_KDF_PARAM_CONSTANT,
constant, strlen((char *)constant));
*p = OSSL_PARAM_construct_end();
if (EVP_KDF_derive(kctx, out, outlen, params) <= 0)
/* Error */
RFC 3961
EVP_KDF(3), EVP_KDF_CTX_free(3), EVP_KDF_CTX_get_kdf_size(3),
EVP_KDF_derive(3), "PARAMETERS" in EVP_KDF(3)
This functionality was added in OpenSSL 3.0.
Copyright 2016-2021 The OpenSSL Project Authors. All Rights Reserved.
Licensed under the Apache License 2.0 (the "License"). You may not use
this file except in compliance with the License. You can obtain a copy
in the file LICENSE in the source distribution or at
3.0.12 2023-05-07 EVP_KDF-KRB5KDF(7)
M.S. Program in Secondary School Science and Mathematics Education
Program descriptions
Head of Department: Aysenur Yontar Togrol
Professors: Fusun Akarsu, Dilek Ardac, Ali Baykal
Associate Professor: Aysenur Yontar Togrol
Assistant Professor: Buket Yakmaci Guzel
Instructors: Emine Adadan, Nizamettin Engin Ader, Fatih C. Mercan
The Department of Secondary School Science and Mathematics Education coordinates several teacher training programs in Teaching Chemistry, Teaching Mathematics and Teaching Physics at the secondary
school level.
The department aims to produce science and mathematics educators able to meet the increasing demand for high quality teaching. Graduates are expected to be reflective and competent educators,
sensitive to the demands of their job, responsive to a developing education system and technological requirements, committed to continuing their own professional development and ready to play a
leading role in the field of science and mathematics education.
All three programs include courses offered by the departments of the Faculty of Arts and Sciences (predominantly in the first three years) and the Faculty of Education. Courses offered by the Faculty
of Education cover curriculum and professional studies introducing various aspects of education, psychology, guidance, and classroom management. Courses offered by the Department of Secondary School
Science and Mathematics Education integrate theoretical considerations in the field with methodology and practical applications in the selected secondary schools. The five year program leads to a
master's degree without thesis (M. Ed.).
The M.S. program in Secondary School Science and Mathematics Education comprises a minimum of 24 credits of course work, a non-credit seminar and a thesis. The minimum number of 24 credits consist of
4 required and 4 elective courses. Elective courses may be chosen from among the courses offered by the program, the Institute or the university, subject to the approval of the advisors. Students
with a B.S. or B.A. degree from the faculties of Education, Arts and Sciences, Engineering and related fields can apply for the M.S. Program in Secondary School Science and Mathematics Education. The
candidates must complete the course work in two successive semesters; however, students with a non-education background may be allowed to extend their course work to three semesters at the discretion
of the advisor and the Institute.
First Semester
SCED 511 Instructional Science for Sc./Math. Inst.
SCED 541 Meas.of Constr. Rel. to Sc./Math. Ed.
-- -- Elective
-- -- Elective
Second Semester
SCED 522 Int. Perspectives in Sc./Math. Ed.
SCED 544 Sc./Math. Curriculum Develop. Study
SCED 579 Graduate Seminar
-- -- Elective
-- -- Elective
SCED 510 Educational Imperatives of Science/Mathematics in Social Context
(Fen/Matematik Egitiminin Toplumsal Boyutu) (3+0+0) 3
The impact of knowledge explosion; curriculum development as a contraction of knowledge; economics of science and math education; scientific literacy and science for all; science and technology in
everyday life and in immediate environment; affordable science programs for the layman.
SCED 511 Instructional Science for Science/Mathematics Instruction (3+0+0) 3
(Fen/Matematik Ogretimi Icin Ogretim Bilimi)
The nature of theory in sciences; the need for a theory of instruction, criteria for a formal theory; philosophical foundations of developmental, instrumental and cognitive theories and their
practical applications in science/mathematics education; specific techniques for developing science and math concepts; domain-specific and general problem solving skills at secondary level;
development of educational materials and modules.
SCED 522 International Perspectives in Science/Mathematics Education (3+0+0) 3
(Fen/Matematik Egitimine Uluslararasi Yaklasimlar)
Critical review of the secondary school science/mathematics curricula in developed and in developing countries; analysis of social change and its impact on education in general and on science
education in particular. In depth case studies of the science, math and technology curricula in the European Union, North America, Japan, Israel, Ghana, India etc.
SCED 524 Individually Prescribed Science/Mathematics Instruction (3+0+0) 3
(Bireysellestirilmis Fen Egitimi)
Mainstreaming students with special educational needs in science and mathematics, dealing with under achievement in science-shys; environmental enhancement for science-prones, nurturing creativity
and high achievement, audio-visual-tutorial courseware, single subject assessment research and case study.
SCED 531 World Agenda in Science/Mathematics Education (3+0+0) 3
(Fen/Matematik Egitiminde Dunya Gundemi)
Universal framework of science/mathematics curricula for every citizen of the world; current trends and recent progress in science/mathematics curricula all around the world; projects and innovations
by UNESCO, ICASE, IOSTE, WOCATE and other steering organizations.
SCED 532 Science/Mathematics Education in the Information Society (3+0+0) 3
(Bilgi Toplumunda Fen/Matematik Egitimi)
Creative curriculum design as a response to the possible technical developments such as controlled nuclear fusion, gravity control, manned landing on planets, superperformance structural materials,
weather control, matter transmission, three dimensional interactive TV, automated information exchange, artificial vision and hearing, organic computers, artificial enhancement of intelligence etc.
SCED 540 Improvement of Mental Skills in Science/Mathematics Education
(Fen/Matematik Egitiminde Bilissel Surec Becerilerinin Gelistirilmesi) (3+0+0) 3
Content analysis of mental constructs; mind and brain; the biological basis of learning and individuality; development of procedures and practices necessary to implement various modes of thinking and
cognitive styles.
SCED 541 Measurement of Constructs Relevant to Science/Mathematics Education (Fen/Matematik Egitimine Ozgu Yapilarin Olcumu) (3+0+0) 3
Measurement of aptitudes, abilities, attitudes and mental skills relevant to science/mathematics education; assessment devices for interest, achievement and creativity in science/mathematics courses;
norm referenced and criterion referenced evaluation methodologies; multi-stage evaluation.
SCED 544 Science/Mathematics Curriculum Development Study (3+0+0) 3
(Fen/Matematik Egitiminde Program Gelistirme)
Assessment of educational needs; parameters and constraints in curricula; design and development of taxonomies; work space, equipment and tool design; materials, equipment, facilities, personnel,
time and cost in curriculum.
SCED 551 Media and Methods in Science/Mathematics Education (3+0+0) 3
(Fen/Matematik Egitiminde Ortam ve Yontemler)
Perception and communication; behaviorist and cognitive approaches to learning; expository, discovery and inquiry strategies in courseware; interactive learning environments; computer-human
interface; distance education; hypertext and hypermedia; simulators for skill development; virtual reality and cyberspace; design and development of audio-visual learning resources.
SCED 552 Validation and Implementation of Science/Mathematics Courseware
(Fen/Matematik Ogretim Yazilimlarinin Gecerlenmesi ve Uygulanmasi) (3+0+0) 3
Philosophical (axiological, ontological and epistemological), psychological (motivational, cognitive and affective), sociological (nomothetic and idiosyncratic), economical analysis of components
which constitute science/mathematics curriculum; approval of innovations; teacher training; setting stage for diffusing curriculum change; termination of curricula.
SCED 570 Research in Science/Mathematics Education (3+0+0) 3
(Fen/Matematik Egitiminde Arastirma)
Experimental, correlational, ex-post facto designs, case studies, surveys; developing data collection instruments; using relevant parametric and non-parametric techniques to analyze data;
interpretation of results and writing reports.
SCED 571 Quantitative Methods in Experimental Research (3+0+0) 3
(Deneysel Arastirmada Sayisal Yontemler)
Critical review of classical test theory; assessment of reliability and validity; experimental and quasi-experimental research designs; statistical inference by parametric and non-parametric
techniques; computer applications of ANOVA, ANCOVA, MANOVA and factor analysis.
SCED 579 Graduate Seminar in Science/Mathematics Education (0+1+0) 0 P/F
(Yuksek Lisans Semineri)
The widening of student's perspectives and awareness of topics of interest to science/mathematics educators through seminars offered by faculty, graduate students, and invited guests from industry,
government, business and academia.
SCED 580-589, 591-598 Special Topics in Science/Mathematics Education
(Fen/Matematik Egitiminde Ozel Konular) (3+0+0) 3
Directed readings and study aiming at producing a substantial project on a specific area of interest.
SCED 599 Guided Research I (0+4+0) 0 (ECTS:8) P / F
(Yonlendirilmis Calismalar I )
Research in the field of Science/Mathematics Education, supervised by faculty.
SCED 59A Guided Research II (0+4+0) 0 (ECTS: 8) P / F
(Yonlendirilmis Calismalar II)
Research in the field of Science/Mathematics Education, supervised by faculty.
SCED 601 Science/Mathematics Education in Social Context (3+0+2) 4
(Bilim, Toplum ve Fen/Matematik Egitimi)
Impact of science and technology on the quality of lives of individuals and society. Economical and epistemological imperatives for scientific literacy. Life-long learning in science to prosper and
to make informed decisions about the interrelated scientific, social, and educational issues. The practice in creative contraction of expanding knowledge into viable curricular units.
SCED 611 Research Design in Science/Mathematics Education (2+0+2) 3
(Fen/Matematik Egitiminde Arastirma Tasarimi)
Principles of effective use of research designs in science/mathematics education. Instructional experiments, classroom-based research, action research in curriculum design. Small-scale research projects.
SCED 621 Material Design in Science/Mathematics Education (2+0+4) 4
(Fen/Matematik Egitiminde Materyal Tasarimi)
Development of authentic and simulated learning materials, teachings aids and assessment tools for a variety of settings and educational media. Design, application and evaluation of materials to
improve basic skills in science/mathematics.
SCED 622 Learning and Teaching Processes in Science/Mathematics Education (Fen/Matematik Egitiminde Ogrenme ve Ogretim Surecleri) (4+0+0) 4
Models of science/mathematics teaching and learning. Teaching methods and strategies that invest on cumulative, constructive, self-regulated and situated attributes of learning science/mathematics.
Meaningful learning and problem solving in science/mathematics. Strategies for constructing and organizing knowledge. Recent findings on mind and brain research.
SCED 624 Multivariate Research in Science/Mathematics Education (2+0+4) 4
(Fen/Matematik Egitiminde Cok Degiskenli Arastirma)
Measurements in school and in nonschool settings. General design classifications. Complex independent group designs. Practical work in mixed factorial designs, multiple analysis of variance (MANOVA)
etc. Practical and ethical issues in the research process. Techniques of writing research reports and dissertations.
SCED 630 Fostering Creativity in Science/Mathematics Instruction (3+0+2) 4
(Fen/Matematik Ogretiminde Yaraticiligin Gelistirilmesi)
The characteristics of creativity and divergent thinking. Metaphors in creative scientific thought. Mental processes in scientific creativity exercises. Exercises in creative writing, problem
finding, problem solving and creative drama. Research on developing of giftedness and talent.
SCED 631 Applied Studies in Program Design (2+0+4) 4
(Program Tasariminda Uygulamali Calismalar)
Instructional design and applications in Science/Mathematics education. Construction of instructional kits for future use in formal or nonformal environments. Development of teaching strategies.
SCED 634 Developing Object Oriented Courseware in Science/Mathematics (Fen/Matematikte Nesne Yonelimli Ders Yazilimi Gelistirme) (2+0+4) 4
Principles of designing and developing educational programs with object oriented programming languages. Understanding the concepts and principles of authoring languages, and their applications in the
internet as well as in other instructional environments.
SCED 640 Human-Technology Interaction (Insan-Teknoloji Etkilesimi) (3+0+0) 3
Teaching and learning experiences of technology and computer usage. Interdisciplinary approach to system design and evaluation. Ergonomic principles, current viewpoints and activities in the field of
human-technology interaction. Trends in next-generation system design.
SCED 644 Methods And Media in Science/Mathematics Education (3+0+2) 4
(Fen/Matematik Egitiminde Yontem ve Donanimlar)
Developing a vision for schools of the 21st century. Learning as a communication process. Technology-rich classrooms. Intranet and the internet in the learning environment. Simulators for skill
development. Interactive television and video as educational resources. Specialized technology for science/mathematics skills. Practice in authentic assessment and portfolio evaluation.
SCED 651 Project Management in Science/Mathematics Education (2+0+2) 3
(Fen/Matematik Egitiminde Proje Yonetimi)
Principles of project management. Role of project managers in timing, budgeting, achieving, and delivering in science/mathematics education projects. Developing small group projects and training
through the use of project software.
SCED 660 Analysis and Design of Interactive Environments (2+0+2) 3
(Ag Tabanli Etkilesimli Ortamlarin Analiz ve Tasarimi)
Fundamentals and architecture of computer networks in instructional institutions. Electronic mail, computer conferencing, distance education and their applications in science/mathematics education.
Procedures of web-based learning environments. Creating an online interaction environment to communicate, deliver and manage instructional information. Establishing knowledge communication platforms
through portals, web sites and intranets. Design and use of online tools to support learning in sciences/mathematics.
SCED 670 E-Learning in Science/Mathematics Education (2+0+2) 3
(Fen/Matematik Ogretiminde e-Ogrenme)
Issues related to e-learning in science/mathematics education. Market demand and educational supply for e-learning. Integration and differentiation of formal and non-formal aspects of e-learning.
Process control and quality assurance in e-learning practices. Exercises in refinement of content and methodology to promote e-learning in science/mathematics education.
SCED (680-689, 691-698) Special Topics in Science/Mathematics Education (Fen/Matematik Egitiminde Ozel Konular) (4+0+0) 4
Advanced study of special subjects in Science/Mathematics education.
SCED 699 Guided Research I (2+0+4) 4 (ECTS: 8 )
(Yonlendirilmis Calismalar )
Research in the field of Science/Mathematics Education, supervised by faculty; preparation and presentation of a research proposal.
SCED 69A Guided Research II (0+4+0) 0 (ECTS: 8) P / F
(Yonlendirilmis Calismalar II )
Continued research in the field of Science/Mathematics Education, supervised by faculty; preparation and presentation of a research proposal.
SCED 69B Guided Research III (0+4+0) 0 (ECTS: 8) P / F
(Yonlendirilmis Calismalar III )
Continued research in the field of Science/Mathematics Education, supervised by faculty; preparation and presentation of a research proposal.
SCED 69C Guided Research IV (0+4+0) 0 (ECTS: 8) P / F
(Yonlendirilmis Calismalar IV )
Continued research in the field of Science/Mathematics Education, supervised by faculty; preparation and presentation of a research proposal.
SCED 69D Guided Research V (0+4+0) 0 (ECTS: 8) P / F
(Yonlendirilmis Calismalar V)
Continued research in the field of Science/Mathematics Education supervised by faculty; preparation and presentation of a research proposal.
SCED 690 Master's Thesis in Secondary School Science and Mathematics Education (Ortaogretim Fen ve Matematik Alanlari Egitimi Yuksek Lisans Tezi)
CHEM 125a - Lecture 7 - Quantum Mechanical Kinetic Energy
Freshman Organic Chemistry I
CHEM 125a - Lecture 7 - Quantum Mechanical Kinetic Energy
Chapter 1. Limits of the Lewis Bonding Theory [00:00:00]
Professor Michael McBride: Okay, so we just glimpsed this at the end last time. This is a crystal structure of a complicated molecule that was performed by these same Swiss folks that we’ve talked
about, and notice how very precise it is. The bond distances between atoms are reported to ±1/2000th of an angstrom. The bonds are like one and a half angstroms. So it’s like a part in a thousand, is
the precision to which the positions of the atoms is known. Okay? But those positions are average positions, because the atoms are constantly in motion, vibrating. In fact, the typical vibration
amplitude, which depends on — an atom that’s off on the end of something is more floppy than one that’s held in by a lot of bonds in various directions, in a cage sort of thing. But typically they’re
vibrating by about 0.05 angstroms, which is twenty-five times as big as the precision to which the position of the average is known. Okay? So no molecule looks like that at an instant, the atoms are
all displaced a little bit.
Now how big is that? Here, if you look at that yellow thing, when it shrinks down, that’s how big it is, that’s how big the vibration is. It’s very small. But these are very precise measurements.
Right? Now why did they do so precise measurements? Did they really care to know bond distances to that accuracy? Maybe for some purposes they did, but that wasn’t the main reason they did the work
very carefully. They did it carefully in order to get really precise positions for the average atom, so they could subtract spherical atoms and see the difference accurately. Okay? Because if you
have the wrong position for the atom that you’re subtracting, you get nonsense. Okay, and what this is going to reveal is some pathologies of bonding, from the point of view of Lewis concept of
shared electrons. Okay, so here’s a picture of this molecule. And remember, we had — rubofusarin, which we looked at last time, had the great virtue that it was planar. So you could cut a slice that
went through all the atoms. This molecule’s definitely not planar, so you have to cut various slices to see different things. So first we’ll cut a slice that goes through those ten atoms. Okay? And
here is the difference electron density. What does the difference density show? Somebody? Yes Alex?
Student: It’s the electron density minus the spherical —
Professor Michael McBride: It’s the total electron density minus the atoms; that is, how the electron density shifted when the molecule was formed from the atoms. Okay, and here we see just exactly
what we expected to see, that the electrons shifted in between the carbon atoms, the benzene ring, and other pairs of carbon atoms as well. It also shows the C-H bonds, because in this case the
hydrogen atoms were subtracted. We showed one last time where the hydrogen atoms weren’t subtracted. Okay, so this is — there’s nothing special here, everything looks the way you expect it to be;
although it’s really beautiful, as you would expect from these guys who do such great work. Now we’ll do a different slice. This is sort of the plane of the screen which divides the molecule
symmetrically down the middle, cuts through some bonds, cuts through some atoms and so on. So here’s the difference density map that appears on that slice.
Now the colored atoms, on the right, are the positions of the atoms through which the plane slices, but the atoms are subtracted out; so what you see is the bonding in that plane. So you see those
bonds, because both ends of the bond are in the plane, so the bonds are in the plane, and you see just what you expect to see. But there are other things you see as well. You see the C-H bonds,
although they don’t have nearly as much electron density as the C-C bonds did. Right? You also see that lump, which is the unshared pair on nitrogen. Right? But you see these two things, which are
bonds, but they’re cross-sections of bonds, because this particular plane cuts through the middle of those bonds. Everybody see that? Okay, so again that’s nothing surprising. But here is something
surprising. There’s another bond through which that plane cuts, which is the one on the right, through those three-membered rings, right? And what do you notice about that bond?
Student: It isn’t there.
Professor Michael McBride: It isn’t there. There isn’t any electron density for that bond. So it’s a missing bond. This is what we’ll refer to as pathological bonding, right? It’s not what Lewis
would have expected, maybe; we don’t have Lewis to talk to, so we don’t know what he would’ve thought about this particular molecule. So here’s a third plane to slice, which goes through those three
atoms, and here’s the picture of it. And again, that bond is missing, that we saw before. Previously we looked at a cross-section. Here we’re looking at a plane that contains the bond, and again
there’s no there there. Okay? But there’s something else that’s funny about this slice. Do you see what? What’s funny about the bonds that you do see? Corey? Speak up so I can hear you.
Student: They’re connected; they’re not totally separate.
Professor Michael McBride: What do you mean they’re connected separately?
Student: Usually you see separate electron densities, but they’re connected.
Professor Michael McBride: Somebody say it in different words. I think you got the idea but I’m not sure everybody understood it. John, do you have an idea?
Student: The top one seems to be more dense than the bottom one.
Professor Michael McBride: One, two, three, four; one, two, three four five. That’s true, it is a little more dense. That kind of thing could be experimental error, because even though this was done
so precisely, you’re subtracting two numbers that are very large, so that any error you make in the experimental one, or in positioning things for the theoretical position of the atoms, any error you
make will really be amplified in a map like this. But it’s true, you noticed. But there’s something I think more interesting about those — that. Yes John?
Student: The contour lines, they’re connected, the contour lines between the top and the bottom bonds are connected. So maybe the electrons, maybe — I don’t know if they’re connected.
Professor Michael McBride: Yes, they sort of overlap one another. But of course if they’re sort of close to one another, that doesn’t surprise you too much because as you go out and out and out,
ultimately you’ll get rings that do meet, if you go far enough down. Yes Chris?
Student: The center of density on the bonds doesn’t intercept the lines connecting the atoms.
Professor Michael McBride: Ah. The bonds are not centered on the line that connects the nuclei. These bonds are bent. Okay? So again, pathological bonding; and in three weeks you’ll understand this,
from first principles, but you’ve got to be patient. Okay, so Lewis pairs and octets provide a pretty good bookkeeping device for keeping track of valence, but they’re hopelessly crude when it comes
to describing the actual electron distribution, which you can see experimentally here. There is electron sharing. There’s a distortion of the spheres of electron density that are the atoms, but it’s
only about 5% as big as Lewis would’ve predicted, had he predicted that two electrons would be right between. Okay? And there are unshared pairs, as Lewis predicted. And again they’re less — but in
this case they’re even less than 5% of what Lewis would’ve predicted. But you can see them.
Chapter 2. Introduction to Quantum Mechanics [00:08:35]
Now this raises the question, is there a better bond theory than Lewis theory, maybe even one that’s quantitative, that would give you numbers for these things, rather than just say there are pairs
here and there. Right? And the answer, thank goodness, is yes, there is a great theory for this, and what it is, is chemical quantum mechanics. Now you can study quantum mechanics in this department,
you can study quantum mechanics in physics, you can probably study it other places, right? And different people use the same quantum mechanics but apply it to different problems. Right? So what we’re
going to discuss in this course is quantum mechanics as applied to bonding. So it’ll be somewhat different in its flavor — in fact, a lot different in its flavor — from what you do in physics, or
even what you do in physical chemistry, because we’re more interested — we’re not so interested in getting numbers or solving mathematical problems, we’re interested in getting insight to what’s
really involved in forming bonds. We want it to be rigorous but we don’t need it to be numerical. Okay? So it’ll be much more pictorial than numerical.
Okay, so it came with the Schrödinger wave equation that was discovered in, or invented perhaps we should say, in — I don’t know whether — it’s hard to know whether to say discovered or invented; I
think invented is probably better — in 1926. And here is Schrödinger the previous year, the sort of ninety-seven-pound weakling on the beach. Right? He’s this guy back here with the glasses on. Okay?
He was actually a well-known physicist but he hadn’t done anything really earthshaking at all. He was at the University of Zurich. And Felix Bloch, who was a student then — two years before he had
come as an undergraduate to the University of Zurich to study engineering, and after a year and a half he decided he would do physics, which was completely impractical and not to the taste of his
parents. But anyhow, as an undergraduate he went to these colloquia that the Physics Department had, and he wrote fifty years later — see this was 1976, so it’s the 50th anniversary of the discovery
of, or invention, of quantum mechanics. So he said:
“At the end of a colloquium I heard Debye”(there’s a picture of Debye)”say something like, ‘Schrödinger, you’re not working right now on very important problems anyway. Why don’t you tell us
something about that thesis of de Broglie?’ So in one of the next colloquia, Schrödinger gave a beautifully clear account of how de Broglie associated a wave with a particle. When he had
finished, Debye casually remarked that he thought this way of talking was rather childish. He had learned that, to deal properly with waves, one had to have a wave equation. It sounded rather
trivial and did not seem to make a great impression, but Schrödinger evidently thought a bit more about the idea afterwards, and just a few weeks later he gave another talk in the colloquium,
which he started by saying, ‘My colleague Debye suggested that one should have a wave equation. Well I have found one.’”
And we write it now: HΨ=EΨ. He actually wrote it in different terms, but that’s his way, the Schrödinger equation. The reason the one we write is a little different from his is he included time as a
variable in his, whereas we’re not interested in, for this purpose, in changes in time. We want to see how molecules are when they’re just sitting there. We’ll talk about time later. So within, what?
Seven years, here’s Schrödinger looking a good deal sharper. Right? And where is he standing? He’s standing at the tram stop in Stockholm, where he’s going to pick up his Nobel Prize for this. Right?
And he’s standing with Dirac, with whom he shared the Nobel Prize, and with Heisenberg, who got the Nobel Prize the previous year but hadn’t collected it yet, so he came at the same time. Okay? So
the Schrödinger equation is HΨ=EΨ. H and E you’ve seen, but Ψ may be new to you. It’s a Greek letter. We can call it Sigh or P-sigh; some people call it Psee. Right? I’ll probably call it Psi. Okay.
And it’s a wave function. Well what in the world is a wave function? Okay? So this is a stumbling block for people that come into the field, and it’s not just a stumbling block for you, it was a
stumbling block for the greatest minds there were at the time.
So, for example, this is five years later in Leipzig, and it’s the research group of Werner Heisenberg, who’s sitting there in the front, the guy that — this was about the time he was being nominated
or selected for the Nobel Prize. Right? So he’s there with his research group, and right behind him is seated Felix Bloch, who himself got the Nobel Prize for discovering NMR in 1952. So he’s quite a
young guy here, and he’s with these other — there’s a guy who became famous at Oxford and another one who became the head of the Physics Department at MIT. Bloch was at Stanford. So these guys know
they’re pretty hot stuff, so they’re looking right into the camera, to record themselves for posterity, as part of this distinguished group; except for Bloch. What’s he thinking about? [Laughter]
What in the world is Ψ? Right? Now, in fact, in that same year, it was in January that Schrödinger announced the wave equation, and Ψ. Right?
And that summer these smart guys, who were hanging around Zurich at that time, theoretical physicists, the young guys went out on an excursion, on the lake of Zurich, and they made up doggerel rhymes
for fun about different things that were going on, and the one that was made up by Bloch and Erich Hückel, whom we’ll talk about next semester, was about Ψ. “Gar Manches rechnet Erwin schon, Mit seiner Wellenfunktion. Nur wissen möcht man gerne wohl, Was man sich dabei vorstell’n soll.” Which means: “Erwin with his Psi can do calculations, quite a few. We only wish that we could glean an
inkling of what Psi could mean.” Right? You can do calculations with it, but what is it? — was the question. Okay? And it wasn’t just these young guys who were confused. Even Schrödinger was never
comfortable with what Ψ really means. Now if we’re lucky, this’ll play this time. So this is a lecture by Schrödinger, “What is Matter”, from 1952.
[Short film clip is played]
Schrödinger’s voice: “etwa so wie Cervantes einmal den Sancho Panza, sein liebes Eselchen auf dem er zu reiten pflegte, verlieren läßt. Aber ein paar Kapitel später hat der Autor das vergessen und das gute Tier ist wieder da. Nun werden sie mich vielleicht zuletzt fragen, ja was sind denn nun aber wirklich diese Korpuskeln, diese Atome - Moleküle. Ehrlich müßte ich darauf bekennen, ich weiß es sowenig, als ich weiß, wo Sancho Panzas zweites Eselchen hergekommen ist.” [Roughly: “…much as Cervantes once lets Sancho Panza lose the dear little donkey he used to ride. But a few chapters later the author has forgotten this, and the good animal is back again. Now you may finally ask me: but what, then, really are these corpuscles, these atoms, these molecules? Honestly, I would have to confess that I know as little about that as I know where Sancho Panza’s second little donkey came from.”]
Chapter 3. Understanding Psi as a Function of Position [00:16:36]
Professor Michael McBride: So twenty-six years later Schrödinger still didn’t really know what Ψ was. Okay? So don’t be depressed when it seems a little curious what Ψ might be. Okay? First we’ll —
like Schrödinger and like these other guys — first we’ll learn how to find Ψ and use it, and then later we’ll learn what it means. Okay? So Ψ is a function, a wave function. What do you want to know,
from what I’ve shown here? What is a function?
Student: A relationship.
Professor Michael McBride: Like sine is a function; what does that mean? Yes? I can’t hear very well.
Student: You put an input into a function and you get an output.
Professor Michael McBride: Yes, it’s like a little machine. You put a number in, or maybe several numbers, and a number comes out. Right? That’s what the function does; okay, you put in ninety
degrees and sin says one. Okay? So what do you want to know about Ψ first?
Student: What does it do?
Professor Michael McBride: What’s it a function of? What are the things you have to put in, in order to get a number out? Okay? So it’s different from the name. The wave functions have names. That’s
not what they’re a function of. Right? You can have sine, sine^2, cosine. Those are different functions, but they can be functions of the same thing, an angle. Right? So we’re interested in what’s it
a function of; not what the function is but what’s it a function of? So you can have different Ψs. They have names and quantum numbers give them their names. For example, you can have n, l, and m.
You’ve seen those before, n, l, m, to name wave functions. Those are just their names. It’s not what they’re a function of. Or you can have 1s or 3d[xy], or σ, or π, or π*. Those are all names of
functions. Right? But they’re not what it’s a function of. What it’s a function of is the position of a particle, or a set of particles. It’s a function of position, and it’s also a function of time,
and sometimes of spin; some particles have spin and it could be a function of that too. But you’ll be happy to know that for purposes of this course we’re not so interested in time and spin. So for
our purposes it’s just a function of position. So if you have N particles, how many positions do you have to specify to know where they all are? How many numbers do you need? You need x, y, z for
every particle. Right? So you need 3N arguments for Ψ. So Ψ is a function that when you tell it where all these positions are, it gives you a number. Now curiously enough, the number can be positive,
it can be zero, it can be negative, it can even be complex, right, although we won’t talk about cases where it’s complex. The physicists will tell you about those, or physical chemists. Okay? And
sometimes it can be as many as 4N+1 arguments. How could it be 4N+1?
Student: [inaudible]
Professor Michael McBride: Because if each particle also had a spin, then it would be x, y, z and spin; that’d be four. And if time is also included, it’s plus one. Okay, so how are we going to go
through this? First we’ll try — this is unfamiliar territory, I bet, to every one of you. Okay? So first we’re going to talk about just one particle and one dimension, so the function is fairly
simple. Okay? And then we’ll go on to three dimensions, but still one particle, the electron in an atom; so a one-electron atom, but now three dimensions, so it’s more complicated. Then we’ll go on
to atoms that have several electrons. So you have now more than three variables, because you have at least two electrons; that would be six variables that you have to put into the function to get a
number out. Then we’ll go into molecules — that is, more than one atom — and what bonding is. And then finally we get to the payoff for organic chemistry, which is talking about what makes a group a
functional group and what does it mean to be functional, what makes it reactive? That’s where we’re heading. But first we have to understand what quantum mechanics is.
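The idea that Ψ takes 3N position arguments and returns a single number can be made concrete with a toy sketch. The function below is entirely made up for illustration; it is not a solution of any real Schrödinger equation:

```python
import math

def toy_psi(x1, y1, z1, x2, y2, z2):
    """A made-up wave function for N = 2 particles in 3 dimensions:
    3N = 6 position arguments in, one number out."""
    r1 = math.sqrt(x1**2 + y1**2 + z1**2)  # distance of particle 1 from the origin
    r2 = math.sqrt(x2**2 + y2**2 + z2**2)  # distance of particle 2 from the origin
    return math.exp(-r1) * math.exp(-r2)   # decays with distance, just to give a number

value = toy_psi(0.1, 0.0, 0.0, 0.0, 0.5, 0.0)  # positions in, one number out
```

With spin and time included it would take 4N + 1 arguments instead, but for this course only the 3N positions matter.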
So here’s the Schrödinger equation, ΗΨ=EΨ, and we’re talking about the time-independent Schrödinger equation, so time is not a variable, and that means what we’re talking about is stationary states.
We don’t mean that the atoms aren’t moving, but just that they’re in a cloud and we’re going to find how is the cloud distributed. If a molecule reacts, the electrons shift their clouds and so on; it
changes. We’re not interested in reaction now, we’re just interested in understanding the cloud that’s sitting there, not changing in time. Okay, now the right part of the equation is E times Ψ,
right? And E will turn out to be the energy of the system; maybe you won’t be surprised at that. So that’s quite simple. What’s the left? It looks like H times Ψ. If that were true, what could you do
to simplify things? Knock out Ψ. But HΨ is not H times Ψ. H is sort of a recipe for doing something with Ψ; we’ll see that right away. So you can’t just cancel out the Ψ, unfortunately. Okay, so ΗΨ=EΨ.
Oops sorry, what did I do, there we go. Now we can divide, you can divide the right by Ψ, and since it was E times Ψ, the Ψ goes away. But when you divide on the left, you don’t cancel the Ψs,
because the top doesn’t mean multiplication.
Now I already told you the right side of this equation is the total energy. So when you see a system, what does the total energy consist of? Potential energy and kinetic energy. So somehow this part
on the left, ΗΨ/Ψ, must be kinetic energy plus potential energy. That recipe, H, must somehow tell you how to work with Ψ in order to get something which, divided by Ψ, gives kinetic energy plus
potential energy. So there are two parts of it. There’s the part that’s potential energy, of the recipe, and there’s the part that’s kinetic energy. Now, the potential energy part is in fact easy
because it’s given to you. Right? What’s Ψ a function of?
Students: Position.
Professor Michael McBride: Position of the particles. Now if you know the charges of the particles, and their positions, and know Coulomb’s Law, then you know the potential energy, if Coulomb’s Law
is right. Is everybody with me on that? If you know there’s a unit positive charge here, a unit negative charge here, another unit positive charge here and a unit negative charge here, or something
like that, you — it might be complicated, you might have to write an Excel program or something to do it — but you could calculate the distances and the charges and so on, and what the energy is, due
to that. So that part is really given to you, once you know what system you’re dealing with, the recipe for finding the potential energy. So that part of HΨ/Ψ is no problem at all. But hold your
breath on kinetic energy. Sam?
Student: Didn’t we just throw away an equation? There was an adjusted Coulomb’s Law equation.
Professor Michael McBride: Yes, that was wrong. That was three years earlier, remember? 1923 Thomson proposed that. But it was wrong. This is what was right.
Student: How did they prove it wrong?
Professor Michael McBride: How did what?
Student: Did they prove it wrong or just —
Professor Michael McBride: Yes, they proved this right, that Coulomb’s Law held, because it agreed with a whole lot of spectroscopic evidence that had been collected about atomic spectra, and then
everything else that’s tried with it works too. So we believe it now. So how do you handle kinetic energy? Well that’s an old one, you did that already in high school, right? Forget kinetic energy,
here it is. It’s some constant, which will get the units right, depending on what energy units you want, times the sum over all the particles of the kinetic energy of each particle. So if you know
the kinetic energy of this particle, kinetic energy of this particle, this particle, this particle; you add them all up and you get the total kinetic energy. No problem there, right? Now what is the
kinetic energy that you’re summing up over each particle? It’s ½mv^2. Has everybody seen that before? Okay, so that’s the sum of classical kinetic energy over all the particles of interest in the
problem, and the constant is just some number you put in to get the right units for your energy, depending on whether you use feet per second or meters per year or whatever, for the velocity.
Okay, but it turned out that although this was fine for our great-grandparents, it’s not right when you start dealing with tiny things. Right? Here’s what kinetic energy really is. It’s a constant.
This is the thing that gets it in the right units: h^2/(8π^2) times a sum over all the particles — it’s looking promising, right? — of one over the mass — not the mass mv^2, but one over the mass
of each particle — and here’s where we get it — [Students react] — times second derivatives of a wave function. That’s weird. I mean, at least it has twos in it, like v^2, right? [Laughter] That’s
something. And in fact it’s not completely coincidental that it has twos in it. There was an analogy that was being followed that allowed them to formulate this. And you divide it by the number Ψ. So
that’s a pretty complicated thing. So if we want to get our heads around it, we’d better simplify it. And oh also there’s a minus sign; it’s minus, the constant is negative that you use. Okay, now
let’s simplify it by using just one particle, so we don’t have to sum over a bunch of particles, and we’ll use just one dimension, x; forget y and z. Okay? So now we see something simpler. So it’s a
negative constant times one over the mass of the particle, times the second derivative of the function, the wave function, divided by Ψ. That’s kinetic energy really, not ½mv^2. Okay? Or here it is,
written just a little differently. So there’s a constant, C, over the mass, right? And then we have the important part, the second derivative. Does everybody know that the second derivative is the curvature of a function? Right? What’s the first derivative?
Students: Slope.
Professor Michael McBride: Slope, and the second derivative is how curved it is. It can be curving down, that’s negative curvature; or curving up, that’s positive curvature. So it can be positive or
negative; it can be zero if the line is straight. Okay. So note that the kinetic energy involves the shape of Ψ, how curved it is, not just what the value of Ψ is; although it involves that too.
Maybe it’s not too early to point out something interesting about this. So suppose that’s the kinetic energy. What would happen if you multiplied Ψ by two? Obviously the denominator would get twice
as large, if you made Ψ twice as large. What would happen to the curvature? What happens to the slope? Suppose you have a function and you make it twice as big and look at the slope at a particular
point? How does the slope change if you’ve stretched the paper on which you drew it?
Student: It’s sharper.
Professor Michael McBride: The slope will double, right, if you double the scale. How about the curvature, the second derivative? Does it go up by four times? No it doesn’t go up by four times, it
goes up by twice. So what would happen to the kinetic energy there if we doubled the size of Ψ every place?
Student: Stay the same.
Professor Michael McBride: It would stay the same. The kinetic energy doesn’t depend on how you scale Ψ, it only depends on its shape, how curved it is. Everybody see the idea? Curvature divided by
the value. Okay, now solving a quantum problem. So if you’re in a course and you’re studying quantum mechanics, you get problems to solve. A problem means you’re given something, you have to find
something. You’re given a set of particles, like a certain nuclei of given mass and charge, and a certain number of electrons; that’s what you’re given. Okay, the masses of the particles and the
potential law. When you’re given the charge, and you know Coulomb’s Law, then you know how to calculate the potential energy; remember that’s part of it. Okay? So that part’s easy, okay? Now what do
you need to find if you have a problem to solve? Oh, for example, you can have one particle in one dimension; so it could be one atomic mass unit is the weight of the particle, and Hooke’s Law could
be the potential, right? It doesn’t have to be realistic, it could be Hooke’s Law, it could be a particle held by a spring, to find a Ψ. You want to find the shape of this function, which is a
function of what?
Student: Positions of the particles.
Professor Michael McBride: Positions of the particles, and if you’re higher, further on than we are, time as well; maybe spin even. But that function has to be such that HΨ/Ψ is the total energy, and
the total energy is the same, no matter where the particle is, right, because the potential energy and the kinetic energy cancel out. It’s like a ball rolling back and forth. The total energy is
constant but it goes back and forth between potential and kinetic energy, right? Same thing here. No matter where the particles are, you have to get the same energy. So Ψ has to be such that when you
calculate the kinetic energy for it, changes in that kinetic energy, in different positions, exactly compensate for the changes in potential energy. When you’ve got that, then you’ve got a correct Ψ;
maybe. It’s also important that Ψ remain finite, that it not go to infinity. And if you’re a real mathematician, it has to be single valued; you can’t have two values for the same positions. It has
to be continuous, you can’t get a sudden break in Ψ. And Ψ^2 has to be integrable; you have to be able to tell how much area is under Ψ^2, and you’ll see why shortly. But basically what you need is
to find a Ψ such that the changes in kinetic energy compensate changes in potential energy.
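The “curvature over value” recipe above can be checked numerically. This is an illustrative sketch only: the constant C/m is arbitrarily set to 1, and the second derivative is estimated by a central finite difference rather than computed analytically:

```python
import math

def local_ke(psi, x0, h=1e-4, c_over_m=1.0):
    """Estimate -(C/m) * psi''(x0) / psi(x0), the local kinetic energy,
    using a central finite difference for the curvature psi''."""
    curvature = (psi(x0 + h) - 2 * psi(x0) + psi(x0 - h)) / h**2
    return -c_over_m * curvature / psi(x0)

# For psi = sin(x), the curvature is -sin(x), so -psi''/psi = 1 everywhere.
t1 = local_ke(math.sin, 1.0)
# Doubling psi doubles the curvature AND the value, so their ratio is unchanged:
t2 = local_ke(lambda x: 2.0 * math.sin(x), 1.0)
# t1 and t2 agree: kinetic energy depends on the shape of psi, not on its scale.
```

This is exactly the point made in lecture: scaling Ψ everywhere by two doubles both the curvature and the amplitude, leaving the kinetic energy untouched.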
Chapter 4. Understanding Negative Kinetic Energy and Finding Potential Energy [00:33:24]
Now what’s coming? Let’s just rehearse what we did before. So first there’ll be one particle in one dimension; then it’ll be one-electron atoms, so one particle in three dimensions; then it will be
many electrons and the idea of what orbitals are; and then it’ll be molecules and bonds; and finally functional groups and reactivity. Okay, but you’ll be happy to hear that by a week from Friday
we’ll only get through one-electron atoms. So don’t worry about the rest of the stuff now. But do read the parts on the webpage that have to do with what’s going to be on the exam. Okay, so normally
you’re given a problem, the mass and the charges — that is, that potential energy as a function of position — and you need to find Ψ. But at first we’re going to try it a different way. We’re going
to play Jeopardy and we’re going to start with the answer and find out what the question was. Okay? So suppose that Ψ is the sine of x; this is one particle in one dimension, the position of the
particle, and the function of Ψ is sine. If you know Ψ, what can you figure out? We’ve just been talking about it. What can you use Ψ to find?
Student: Kinetic energy.
Professor Michael McBride: Kinetic energy. How do you find it? So we can get the kinetic energy, which is minus a constant over the mass times the curvature of Ψ divided by Ψ at any given position.
Right? And once we know how the kinetic energy varies with position, then we know how the potential energy varies with position, because it’s just the opposite, in order that the sum be constant.
Right? So once we know the kinetic energy, then we know what the potential energy was, which was what the problem was at the beginning. Okay? So suppose our answer is sin(x). What is the curvature of
sin(x), the second derivative?
[Students speak over one another]
Professor Michael McBride: It’s -sin(x). Okay, so what is the kinetic energy?
Student: C/m.
Professor Michael McBride: C/m. Does it depend on where you are, on the value of x? No, it’s always C/m. So what was the potential energy? How did the potential energy vary with position?
Student: It doesn’t.
Professor Michael McBride: The potential energy doesn’t vary with position. So sin(x) is a solution for what? A particle that’s not being influenced by anything else; so its potential energy doesn’t
change with the position, it’s a particle in free space. Okay? So the potential energy is independent of x. Constant potential energy, it’s a particle in free space. Now, suppose we take a different
one, sin(ax). How does sin(ax) look different from sin(x)? Suppose it’s sin(2x). Here’s sin(x). How does sin(2x) look? Right? It’s shorter wavelength. Okay? Now so we need to figure out — so it’s a
shortened wave, if a > 1. Okay, now what’s the curvature? Russell?
Student: It’s -a^2 times sin(ax).
Professor Michael McBride: It’s -a^2 times sin(ax). Right? The a comes out, that constant, each time you take a derivative. So now what does the kinetic energy look like? It’s a^2 times the same
thing. Okay? So again, the potential energy is constant. Right? It doesn’t change with position. But what is different? It has higher kinetic energy if it’s a shorter wavelength. And notice that the
kinetic energy is proportional to one over the wavelength squared, right?; a^2; a shortens the wave, it’s proportional to a^2, one over the wavelength squared. Okay. Now let’s take another function,
exponential, so e^x. What’s the second derivative of e^x? Pardon me?
Student: e^x.
Professor Michael McBride: e^x. What’s the 18th derivative of e^x?
Students: e^x.
Professor Michael McBride: Okay, good. So it’s e^x. So what’s this situation, what’s the kinetic energy?
Student: -C/m.
Professor Michael McBride: -C/m. Negative kinetic energy. Your great-grandparents didn’t get that. You can have kinetic energy that’s less than zero. What does that mean? It means the total energy is
lower than the potential energy. Pause a minute just to let that sink in. The total energy is lower than the potential energy. The difference is negative. Okay? So the kinetic energy, if that’s the
difference, between potential and total, is negative. You never get that for ½m(v^2). Yes?
Student: Does this violate that Ψ has to remain finite?
Professor Michael McBride: Does it violate what?
Student: That Ψ has to remain finite?
Professor Michael McBride: No. You’ll see in a second. Okay, so anyhow the constant potential energy is greater than the total energy for that. Now, how about if it were minus exponential, e^-x? Now
what would it be? It would be the same deal again, it would still be -C/m, and again it would be a constant potential energy greater than the total energy. This is not just a mathematical curiosity,
it actually happens for every atom in you, or in me. Every atom has the electrons spend some of their time in regions where they have negative kinetic energy. It’s not just something weird that never
happens. And it happens at large distance from the nuclei where 1/r — that’s Coulomb’s Law — where it stops changing very much. When you get far enough, 1/r gets really tiny and it’s essentially
zero, it doesn’t change anymore. Right? Then you have this situation in any real atom. So let’s look at getting the potential energy from the shape of Ψ via the kinetic energy. Okay, so here’s a map
of Ψ, or a plot of Ψ, it could be positive, negative, zero — as a function of the one-dimension x, wherever the particle is. Okay? Now let’s suppose that that is our wave function, sometimes
positive, sometimes zero, sometimes negative. Okay? And let’s look at different positions and see what the kinetic energy is, and then we’ll be able to figure out, since the total will be constant,
what the potential energy is. Okay? So we’ll try to find out what was the potential energy that gave this as a solution? This is again the Jeopardy approach. Okay? Okay, so the curvature minus —
remember it’s a negative constant — minus the curvature over the amplitude could be positive — that’s going to be the kinetic energy; it could be positive, it could be zero, it could be negative, or
it could be that we can’t tell by looking at the graph. So let’s look at different positions on the graph and see what it says. First look at that position. What is the kinetic energy there?
Positive, negative, zero? Ryan, why don’t you help me out?
Student: No.
Professor Michael McBride: Well no, you can help me out. [Laughter] Look, so what do you need to know? You need to know — here’s the complicated thing you have to figure out. What is minus the
curvature divided by the amplitude at this point? Is it positive, negative or zero? So what’s the curvature at that point? Is it curving up or down at that point? No idea. Anybody got an idea? Keith?
Student: It looks like a saddle point so it’s probably zero.
Professor Michael McBride: It’s not a saddle point. What do you call it?
Students: Inflection points.
Professor Michael McBride: A saddle point’s for three dimensions. In this it’s what? Inflection point. It’s flat there. It’s curving one way on one side, the other way on the other side. So it’s got
zero curvature there; okay, zero curvature. Now Ryan, can you tell me anything about that, if the curvature is zero?
Student: Zero.
Professor Michael McBride: Ah ha.
Professor Michael McBride: Not bad. So that one we’ll color grey for zero. The kinetic energy at that point is zero, if that’s the wave function. Now let’s take another point. Who’s going to help me
with this one? How about the curvature at this point right here?
Student: Negative.
[Students speak over one another]
Professor Michael McBride: It’s actually — I choose a point that’s not curved.
Student: Ah.
Professor Michael McBride: It’s straight right there. I assure you that’s true. So I bet Ryan can help me again on that one. How about it?
Student: Zero.
Professor Michael McBride: Ah ha. So we’ll make that one grey too. Now I’ll go to someone else. How about there? What’s the curvature at that point do you think? Shai?
Student: It looks straight, zero curvature.
Professor Michael McBride: It looks straight, zero curvature. So does that mean that this value is zero?
Student: Not necessarily, because the amplitude —
Professor Michael McBride: Ah, the amplitude is zero there too. So really you can’t be sure. Right? So that one we’re going to have to leave questionable, that’s a question mark. How about out here?
Not curved. So what’s the kinetic energy? Josh?
Student: Questionable.
Professor Michael McBride: Questionable, right? Because the amplitude is zero again; zero in the numerator, also zero in the denominator; we really don’t know. Okay, how about here? Tyler, what do
you say? Is it curved there?
Student: Yes.
Professor Michael McBride: Curving up or down?
Student: Down.
Professor Michael McBride: So negative, the curvature is negative. The value of Ψ?
Student: Positive.
Professor Michael McBride: Positive. The energy, kinetic energy?
Student: Positive.
Professor Michael McBride: Positive. Okay, so we can make that one green. Okay, here’s another one. Who’s going to help me here? Kate?
Student: Yes.
Professor Michael McBride: Okay, so how about the curvature; curving up, curving down?
Student: It’s curving down, that’s negative.
Professor Michael McBride: Yes. Amplitude?
Student: Zero. So it should be green.
Professor Michael McBride: Ah, green again. Okay. How about here? Ah, now how about the curvature? Seth?
Student: I don’t know.
Professor Michael McBride: Which way is it curving at this point here?
Student: Curving up.
Professor Michael McBride: Curving up. So the curvature is —
Student: Positive.
Professor Michael McBride: Positive.
Student: The amplitude is negative, so it’s positive.
Professor Michael McBride: Yeah. So what color would we make it? Green again. Okay, so if you’re — you can have — be curving down or curving up and still be positive; curving down if you’re above the
baseline, curving up if you’re below the baseline. Right? So as long as you’re curving toward the baseline, towards Ψ=0, the kinetic energy is positive. How about here? Zack? Which way is it curving?
Curving up or curving down?
Student: It should be curving up.
Professor Michael McBride: Curving up, curvature is positive. The value?
Student: Positive.
Professor Michael McBride: Positive.
Student: I guess it’ll be negative.
Professor Michael McBride: So it’s negative kinetic energy there. Make that one whatever that pinkish color is. Okay? Here’s another one, how about there? Alex? Which way is it curving at the new
place? Here?
Student: Curving down.
Professor Michael McBride: Curving down, negative curvature.
Student: Negative amplitude.
Professor Michael McBride: Negative amplitude.
Student: Negative kinetic energy.
Professor Michael McBride: Negative kinetic energy; pink again. Is that enough? Oh, there’s one more, here, the one right here. Okay?
Student: Negative.
Professor Michael McBride: Pardon me?
Student: Negative.
Professor Michael McBride: Negative, because it’s — how did you do it so quick? We didn’t have to go through curvature.
Student: It goes away from the line.
Professor Michael McBride: Because it’s curving away from the baseline, negative. Okay, pink. Okay, curving away from Ψ=0 means that the kinetic energy is negative. So now we know at all these
positions whether the kinetic energy is positive, negative or zero, although there are a few that we aren’t certain about. Right? So here’s the potential energy that will do that. If you have this
line for the total energy, right? Then here and here you have zero. Right? Also, incidentally, here and here, you have zero kinetic energy. With me? Okay. So no curvature, right? At these green
places, the total energy is higher than the potential energy. So the kinetic energy is positive. Okay? At these places, the potential energy is higher than the total energy. So the kinetic energy is
negative and the thing is curving away from the baseline. Right? And now we know something about this point. If the potential energy is a continuous kind of thing, then, although we couldn’t tell by
looking at the wave function, it’s curving away from the baseline, but very slightly, right? It’s negative kinetic energy there, and also on the right here is negative kinetic energy. And here we
know, just by continuity, that at this point it must’ve been positive kinetic energy, even though we couldn’t tell it by looking at the curve. There must be an inflection point when you go through
zero, otherwise you’d get a discontinuity in the potential energy. Okay, so that one was green. Okay, now I have to stop.
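The Jeopardy examples from the lecture — sin(x), sin(2x), and e^x — can be replayed numerically with the same curvature-over-value recipe. This is a sketch only: C/m is arbitrarily taken as 1, and the curvature is estimated by a central finite difference:

```python
import math

def local_ke(psi, x0, h=1e-4):
    """-(C/m) * psi''(x0) / psi(x0), with C/m taken as 1 (arbitrary units);
    psi'' is estimated by a central finite difference."""
    curvature = (psi(x0 + h) - 2 * psi(x0) + psi(x0 - h)) / h**2
    return -curvature / psi(x0)

ke_sin = local_ke(math.sin, 0.7)                   # ~ 1: free particle, KE = C/m
ke_sin2x = local_ke(lambda x: math.sin(2 * x), 0.7)  # ~ 4: shorter wave, KE scales as a^2
ke_exp = local_ke(math.exp, 0.7)                   # ~ -1: psi curves away from the
                                                   # baseline, so the KE is negative
```

The exponential case is the surprising one: the ratio is the same negative constant at every x, the “negative kinetic energy” situation that classical ½mv^2 can never produce.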
[end of transcript]
Find \(b^{2}-4 a c\) and the number of real solutions to each equation. $$-3 x^{2}+7 x=0$$
Short Answer
Expert verified
The discriminant is 49. There are two distinct real solutions.
Step by step solution
Identify the equation format
Rewrite the given equation in the standard quadratic form \(a x^{2} + b x + c = 0\). The given equation is \(-3 x^{2} + 7 x = 0\), so \(a = -3\), \(b = 7\), and \(c = 0\).
Calculate the Discriminant
The discriminant of a quadratic equation is given by \(b^{2} - 4ac\). Substituting \(a = -3\), \(b = 7\), and \(c = 0\): \(7^{2} - 4(-3)(0) = 49 - 0 = 49\).
Analyze the Discriminant
Now, determine the number of real solutions based on the value of the discriminant: If the discriminant is greater than 0, then there are two distinct real solutions. If the discriminant is equal to
0, then there is exactly one real solution. If the discriminant is less than 0, there are no real solutions. Since 49 is greater than 0, there are two distinct real solutions.
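The three steps above can be collected into a small helper function. This is an illustrative sketch, not part of the textbook solution; the function name is made up:

```python
def discriminant_and_real_solutions(a, b, c):
    """Return (b^2 - 4ac, number of distinct real solutions)
    for the quadratic equation a*x^2 + b*x + c = 0, with a != 0."""
    disc = b ** 2 - 4 * a * c
    if disc > 0:
        count = 2  # two distinct real solutions
    elif disc == 0:
        count = 1  # exactly one real solution
    else:
        count = 0  # no real solutions
    return disc, count

# The exercise -3x^2 + 7x = 0, i.e. a = -3, b = 7, c = 0:
print(discriminant_and_real_solutions(-3, 7, 0))  # (49, 2)
```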
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
quadratic equations
A quadratic equation is an equation of the form: \[a x^{2} + b x + c = 0\]. Here, \(a\), \(b\), and \(c\) are constants with \(a \neq 0\). This form is known as the standard form of a quadratic
equation. Quadratic equations are very important in mathematics because they describe the parabolic relationship between variables.
You often find them in problems involving projectile motion, areas, and optimization. In the equation \(-3x^{2} + 7x = 0\), you can identify the coefficients: \(a = -3\), \(b = 7\), and \(c = 0\).
This step is crucial as these coefficients are used in further calculations, especially when finding the discriminant, which helps determine the number of real solutions.
real solutions
Real solutions to quadratic equations are the points where the parabola (graph of the equation) crosses the x-axis. These solutions are the values of \(x\) that make the quadratic equation true.
Depending on the discriminant, the quadratic equation can have:
• Two distinct real solutions
• One real solution
• No real solutions (when solutions are complex or imaginary)
When solving \(-3x^{2} + 7x = 0\), after identifying the coefficients and calculating the discriminant, you see that it determines how many real solutions the equation has. If the graph of the
equation touches or intersects the x-axis, those points are your real solutions.
discriminant calculation
The discriminant is a key part of solving quadratic equations and is given by the formula \(b^{2} - 4ac\). This value tells you the nature and number of solutions for the quadratic equation.
• If \(b^{2} - 4ac > 0\), there are two distinct real solutions.
• If \(b^{2} - 4ac = 0\), there is exactly one real solution.
• If \(b^{2} - 4ac < 0\), there are no real solutions (solutions are complex).
For the equation \(-3 x^{2} + 7 x = 0\):
Given \(a = -3\), \(b = 7\), and \(c = 0\).
The discriminant, \(b^{2} - 4ac\), is calculated as follows: \[7^{2} - 4(-3)(0) = 49 - 0 = 49\]. Since the discriminant is 49, which is greater than 0, this means that the quadratic equation has two
distinct real solutions. This analysis helps understand why and how many solutions exist, providing insight into the behavior of the parabolic graph. | {"url":"https://www.vaia.com/en-us/textbooks/math/algebra-for-college-students-5-edition/chapter-8/problem-48-find-b2-4-a-c-and-the-number-of-real-solutions-to/","timestamp":"2024-11-08T02:04:00Z","content_type":"text/html","content_length":"249367","record_id":"<urn:uuid:9d7efc45-61cb-4076-8c57-41ebba0dad0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00405.warc.gz"} |
Why is $S^1\times\mathbb{R}^{n-1}$ the topology of $AdS_n$?
Anti-de Sitter $AdS_n$ may be defined by the quadric
$$(x^0)^2+(x^1)^2-\vec{x}^2=\alpha^2 \tag{1}$$
embedded in ${\mathbb{R}^{2,n-1}}$, where I write ${\vec{x}^2}$ for the squared norm ${|\vec{x}|^2}$ of ${\vec{x}=(x^2,\ldots,x^n)}$. Now, I don't quite understand how it is justified that the
topology of this space is $S^1\times\mathbb{R}^{n-1}$. As I understand it informally, I could write $(1)$ as
$$(x^0)^2+(x^1)^2=\alpha^2+\vec{x}^2 \tag{2}$$
and then fix the ${(n-1)}$ terms in ${\vec{x}^2}$, each one on $\mathbb{R}$, such that ${(2)}$ defines a circle ${S^1}$.
This is actually a reasoning I came up with later, based on the case of ${dS_n}$ (in which one just fixes the time variable) and when I saw what the topology was meant to be, but actually I first wrote
${(1)}$ as
$$\vec{x}^2=(x^0)^2+(x^1)^2-\alpha^2 \tag{3}$$
which for fixed ${x^0,x^1}$, both in $\mathbb{R}$, defines a sphere ${S^{n-2}}$, so the topology would be something like ${S^{n-2}\times\mathbb{R}^2}$ (which is indeed similar to that of ${dS_n}$),
right? I even liked this one better, since I could relate it as the 2 temporal dimensions on ${\mathbb{R}^2}$ and the spatial ones on ${S^{n-2}}$.
I don't *really* know topology, so I would like to know what is going on even if it's pretty basic and how could I interpret topological differences physically.
**Update**: I originally used $\otimes$ instead of $\times$ in the question. My reference to do this is page 4 of [Ingemar Bengtsson's notes on Anti-de Sitter space][1]; so is that simply a *typo* in
the notes?
**Update 2**: I'm trying to understand this thing in simpler terms. If I write Minkowski 4-dimensional space in spherical coordinates, could I say that its topology is ${\mathbb{R}\times{S}^3}$? If
so, how come?
[1]: http://www.fysik.su.se/~ingemar/Kurs.pdf
This post imported from StackExchange Physics at 2014-05-04 11:13 (UCT), posted by SE-user Pedro Figueroa
The tensor product symbol $\otimes$ is sometimes used in physics when the product $\times$ should be. I don't know why. For instance, one sometimes encounters papers saying that the gauge group of
the Standard Model is SU(3)$\otimes$SU(2)$\otimes$U(1). I don't know what that means. I think people use the notation because it makes them feel more "mathy" somehow.
This post imported from StackExchange Physics at 2014-05-04 11:14 (UCT), posted by SE-user Matt Reece
@MattReece Yes, with the unfortunate side effects that it's just confusing and makes them look less mathy (imho).
This post imported from StackExchange Physics at 2014-05-04 11:14 (UCT), posted by SE-user joshphysics
The Cartesian product of two topological spaces \(M\) and \(N\) can be thought of as the set of all ordered pairs selected from \(M\) and \(N\). (There's more to the definition than this, but this is
what's pertinent for our purposes.) For example, the Cartesian product of two open/closed intervals is an open/closed rectangle. This means that if you're going to tell me that AdS[n] has the same
topology as \(\mathbb{R}^n \times S^m\), I should always be able to pick a point from the Euclidean space and a point from the m-sphere, and you can show me the point in AdS[n] that corresponds to
that ordered pair.
With that in mind: if you write your constraint as
\((x^0)^2 + (x^1)^2 = \alpha^2 + \vec{x}^2\),
then it's not too hard to see that this defines a circle in the \(x^0\text{-}x^1\) plane for any value of \(\vec{x}\), with radius \(\sqrt{\alpha^2 + \vec{x}^2}\). This means that if I pick a point
on the circle \(S^1\), and pick a point in the Euclidean space \(\mathbb{R}^{n-1}\), then there will always be a unique point in AdS[n] corresponding to that choice.
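The correspondence can be sanity-checked numerically. This is our own illustration (with \(\alpha\) fixed to 1, and the function name `ads_point` invented here): it builds a point of \(AdS_n\) from a circle angle \(\theta\) and a vector \(\vec{x}\), and verifies that the point satisfies the constraint \((x^0)^2 + (x^1)^2 = \alpha^2 + \vec{x}^2\):

```python
import math

def ads_point(theta, xvec, alpha=1.0):
    # Radius of the circle in the x0-x1 plane for this choice of xvec.
    r = math.sqrt(alpha**2 + sum(x * x for x in xvec))
    return (r * math.cos(theta), r * math.sin(theta), *xvec)

x0, x1, *rest = ads_point(0.7, (0.3, -1.2, 2.5))
residual = x0**2 + x1**2 - (1.0 + sum(x * x for x in rest))
print(abs(residual) < 1e-12)  # True: the point lies on the quadric
```

Every (angle, vector) pair lands on the quadric, and conversely every point of the quadric arises this way, which is exactly the ordered-pair correspondence described above.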
On the other hand, if you write
\(\vec{x}^2 = (x^0)^2 + (x^1)^2 - \alpha^2\),
then this does not define an (n-2)-sphere for all possible choices of \(x^0\) and \(x^1\), since the right-hand side can become negative. Thus, your proposed map from ordered pairs in \(\mathbb{R}^2
\) and \(S^{n-2}\) to AdS[n] doesn't work, since I can choose points from the base spaces that have no corresponding point in AdS[n] (namely, any point from \(\mathbb{R}^2\) that has \((x^0)^2 + (x^
1)^2 < \alpha^2\), and any point at all from \(S^{n-2}\).)
Of course, this flaw in your second argument doesn't prove on its own that AdS[n] isn't homeomorphic to \(\mathbb{R}^2 \times S^{n-2}\)—merely that this map isn't a homeomorphism. But since we
already found another argument that AdS[n] was homeomorphic to \(\mathbb{R}^{n-1} \times S^1\), and since these two product spaces aren't homeomorphic to each other (for \(n \geq 4\), \(S^1 \times \mathbb{R}^{n-1}\) has fundamental group \(\mathbb{Z}\) while \(\mathbb{R}^2 \times S^{n-2}\) is simply connected), we can conclude that AdS[n] isn't homeomorphic to \(\mathbb{R}^2 \times S^{n-2}\). (Unless, of course, \(n = 3\).)
I like these nice rather intuitive explanations, +1. Just recently I was reading up about AdS/CFT a bit, so this post came in quite handy :-)
Sketched proof: One may define a homotopy via the constraint
$$x_0^2+x_n^2~=~ \alpha^2 +\lambda \sum_{i=1}^{n-1}x_i^2,\quad \alpha >0,$$
where $\lambda\in[0,1]$ is the homotopy parameter. Then $\lambda=1$ corresponds to $AdS_n \subset \mathbb{R}^{n+1}$, while $\lambda=0$ corresponds to $S^1\times \mathbb{R}^{n-1} \subset \mathbb{R}^{n+1}$.
This post imported from StackExchange Physics at 2014-05-04 11:14 (UCT), posted by SE-user Qmechanic
Does this mean that the tensor product $\otimes$ should be replaced by the Cartesian product $\times$ in the original question?
This post imported from StackExchange Physics at 2014-05-04 11:14 (UCT), posted by SE-user Hunter
$\uparrow$ @Hunter: Yes, that's a typo in the notes. Btw, a tensor product $V\otimes W$ of vector spaces wouldn't make natural sense here since $V=S^1$ is not a vector space.
This post imported from StackExchange Physics at 2014-05-04 11:14 (UCT), posted by SE-user Qmechanic | {"url":"https://physicsoverflow.org/16730/why-is-%24s-1-times-mathbb-r-n-1-%24-the-topology-of-%24ads_n%24","timestamp":"2024-11-08T05:02:35Z","content_type":"text/html","content_length":"159961","record_id":"<urn:uuid:aea6c66a-aa6b-45d2-a168-068aad7b5d5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00876.warc.gz"} |
For F to be minimum, k must be maximum.
Question asked by Filo student
For \(F\) to be minimum, \(k\) must be maximum.
Question Text: For \(F\) to be minimum, \(k\) must be maximum.
Updated On: Dec 14, 2022
Topic: Mechanics
Subject: Physics
Class: Class 11
Answer Type: Video solution: 1
Upvotes: 63
Avg. Video Duration: 2 min
How do you evaluate \(2s^3+r-t\) given r=-3, s=4, t=-7?
Answer 1
Given the expression \(2s^3 + r - t\) with \(s = 4\), \(r = -3\), and \(t = -7\),
the expression becomes \(2(4)^3 - (-3) + (-7)\).
Answer 2
Evaluate the expression \(2s^3 + r - t\) with \(r = -3\), \(s = 4\), and \(t = -7\).
Substitute the values:
\(2(4)^3 + (-3) - (-7)\)
\(2 \times 64 - 3 + 7 = 128 - 3 + 7 = 132\)
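The substitution is a one-liner to verify:

```python
r, s, t = -3, 4, -7
print(2 * s**3 + r - t)  # 2*64 - 3 + 7 = 132
```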
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
2009 AIME II Problems/Problem 14
The sequence $(a_n)$ satisfies $a_0=0$ and $a_{n + 1} = \frac85a_n + \frac65\sqrt {4^n - a_n^2}$ for $n\geq 0$. Find the greatest integer less than or equal to $a_{10}$.
The "obvious" substitution
An obvious way how to get the $4^n$ from under the square root is to use the substitution $a_n = 2^n b_n$. Then the square root simplifies as follows: $\sqrt{4^n - a_n^2} = \sqrt{4^n - (2^n b_n)^2} =
\sqrt{4^n - 4^n b_n^2} = 2^n \sqrt{1 - b_n^2}$.
The new recurrence then becomes $b_0=0$ and $b_{n+1} = \frac45 b_n + \frac 35\sqrt{1 - b_n^2}$.
Solution 1
We can now simply start to compute the values $b_i$ by hand:
\begin{align*}
b_1 & = \frac 35 \\
b_2 & = \frac 45\cdot \frac 35 + \frac 35 \sqrt{1 - \left(\frac 35\right)^2} = \frac{24}{25} \\
b_3 & = \frac 45\cdot \frac {24}{25} + \frac 35 \sqrt{1 - \left(\frac {24}{25}\right)^2} = \frac{96}{125} + \frac 35\cdot\frac 7{25} = \frac{117}{125} \\
b_4 & = \frac 45\cdot \frac {117}{125} + \frac 35 \sqrt{1 - \left(\frac {117}{125}\right)^2} = \frac{468}{625} + \frac 35\cdot\frac {44}{125} = \frac{600}{625} = \frac{24}{25}
\end{align*}
We now discovered that $b_4=b_2$. And as each $b_{i+1}$ is uniquely determined by $b_i$, the sequence becomes periodic. In other words, we have $b_3=b_5=b_7=\cdots=\frac{117}{125}$, and $b_2=b_4=b_6=\cdots=\frac{24}{25}$.
Therefore the answer is
\begin{align*}
\lfloor a_{10} \rfloor & = \left\lfloor 2^{10} b_{10} \right\rfloor = \left\lfloor \dfrac{1024\cdot 24}{25} \right\rfloor = \left\lfloor \dfrac{1025\cdot 24}{25} - \dfrac{24}{25} \right\rfloor \\
& = \left\lfloor 41\cdot 24 - \dfrac{24}{25} \right\rfloor = 41\cdot 24 - 1 = \boxed{983}
\end{align*}
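The hand computation above is easy to check numerically. This sketch (our own check, not part of the original solution) iterates the recurrence for $b_n$ and recovers $a_{10}$ via $a_n = 2^n b_n$:

```python
import math

b = 0.0
seq = [b]
for _ in range(10):
    b = 0.8 * b + 0.6 * math.sqrt(1.0 - b * b)
    seq.append(b)

# The sequence settles into the 2-cycle 24/25, 117/125 found by hand.
a10 = 2**10 * seq[10]
print(math.floor(a10))  # 983
```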
Solution 2
After we do the substitution, we can notice the fact that $\left( \frac 35 \right)^2 + \left( \frac 45 \right)^2 = 1$, which may suggest that the formula may have something to do with the unit
circle. Also, the expression $\sqrt{1-x^2}$ often appears in trigonometry, for example in the relationship between the sine and the cosine. Both observations suggest that the formula may have a neat
geometric interpretation.
Consider the equation: \[y = \frac45 x + \frac 35\sqrt{1 - x^2}\]
Note that for $t=\sin^{-1} \frac 35$ we have $\sin t=\frac 35$ and $\cos t = \frac 45$. Now suppose that we have $x=\sin s$ for some $s$. Then our equation becomes:
\[y=\cos t \cdot \sin s + \sin t \cdot |\cos s|\]
Depending on the sign of $\cos s$, this is either the angle addition, or the angle subtraction formula for sine. In other words, if $\cos s \geq 0$, then $y=\sin(s+t)$, otherwise $y=\sin(s-t)$.
We have $b_0=0=\sin 0$. Therefore $b_1 = \sin(0+t) = \sin t$, $b_2 = \sin(t+t) = \sin (2t)$, and so on. (Remember that $t$ is the constant defined as $t=\sin^{-1} \frac 35$.)
This process stops at the first $b_k = \sin (kt)$, where $kt$ exceeds $\frac{\pi}2$. Then we'll have $b_{k+1} = \sin(kt - t) = \sin ((k-1)t) = b_{k-1}$ and the sequence will start to oscillate.
Note that $\sin \frac{\pi}6 = \frac 12 < \frac 35$, and $\sin \frac{\pi}4 = \frac{\sqrt 2}2 > \frac 35$, hence $t$ is strictly between $\frac{\pi}6$ and $\frac{\pi}4$. Then $2t\in\left(\frac{\pi}3,\frac{\pi}2 \right)$, and $3t\in\left( \frac{\pi}2, \frac{3\pi}4 \right)$. Therefore surely $2t < \frac{\pi}2 < 3t$.
Hence the process stops with $b_3 = \sin (3t)$; we then have $b_4 = \sin (2t) = b_2$. As in the previous solution, we conclude that $b_{10}=b_2$, and that the answer is $\lfloor a_{10} \rfloor = \left\lfloor 2^{10} b_{10} \right\rfloor = \boxed{983}$.
Video Solution 1 by SpreadTheMathLove
See Also
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions. | {"url":"https://artofproblemsolving.com/wiki/index.php?title=2009_AIME_II_Problems/Problem_14&oldid=208245","timestamp":"2024-11-08T10:53:38Z","content_type":"text/html","content_length":"53164","record_id":"<urn:uuid:39aa001c-01c2-4cec-b3e1-a260564afaf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00322.warc.gz"} |
A Quick Study Guide with Two Full-Length CLEP College Mathematics Practice Tests
Also included in: The Most Comprehensive CLEP College Math Preparation Bundle
Looking for a perfect prep book to help you brush up in math and prepare for the CLEP College Mathematics test? If so, then look no further. Preparing for the CLEP College Mathematics Test in 7 Days
helps you hone your math skills, overcome your exam anxiety, and do your best on the CLEP College Mathematics test.
This speedy study guide provides students with only the most critical and vital mathematics concepts and subjects that students must learn in order to succeed on the CLEP College Mathematics test.
This is your one-stop-shop for everything a CLEP College Mathematics test taker needs to ace the test. Mathematics concepts break down the topics, so the material can be quickly grabbed. Hundreds of
CLEP Math examples are worked step–by–step to help students learn exactly what to do.
Prepare for the CLEP College Mathematics Test in 7 Days offers easy–to–read important math summaries that highlight the key areas of the CLEP College Mathematics test. You only need to spend about
three to five hours daily (depending on your math background) in your 7–day period in order to get the result you are looking for. After reviewing this quick study guide, you will have a solid
foundation and adequate practice that is essential to do well on the CLEP College Mathematics test.
Written by top CLEP Mathematics experts and experienced instructors, Prepare for the CLEP College Mathematics Test in 7 Days is the only book you will need to prepare for the CLEP College Mathematics test.
Published By:
Effortless Math Education | {"url":"https://www.effortlessmath.com/product/prepare-for-the-clep-college-mathematics-test-in-7-days/","timestamp":"2024-11-03T22:22:28Z","content_type":"text/html","content_length":"67926","record_id":"<urn:uuid:22f78db1-44b3-4549-ac4c-b9d0319bbafb>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00073.warc.gz"} |
ABSTRACT : “We investigate the crossing of an energy barrier by a self-propelled particle described by a Rayleigh friction term. We show that a sharp transition between low and large amplitude of the
external force field occurs. It corresponds to a saddle point transition in the velocity flow phase space, and would therefore occur for any type of force field. We use this approach to describe the
results obtained by Eddi et al. [Phys. Rev. Lett. 102, 240401 (2009)] in 2009 who studied the interaction between a drop propelled by its own generated wave field and a submarine obstacle. It has
been shown that this wave particle entity can overcome barrier of potential, suggesting the existence of a ”macroscopic tunnel effect”. We show that the effect of self-propulsion is sufficiently
enough to generate crossing of high energy barrier. By assuming a random distribution of initial angles, we define a probability to cross the barrier of potential that matches with the data obtained
by Eddi et al.. This probability appears similar to the one encountered in statistical physics for Hamiltonian systems i.e. a Boltzmann exponential law.”
Hubert, M., Labousse, M., & Perrard, S. (2017). Self-propulsion with random initial conditions: how to cross an energy barrier?. arXiv preprint arXiv:1701.01937.
Abstract : The back-reaction of a radiated wave on the emitting source is a general problem. In the most general case, back-reaction on moving wave sources depends on their whole history. Here we
study a model system in which a pointlike source is piloted by its own memory-endowed wave field. Such a situation is implemented experimentally using a self-propelled droplet bouncing on a
vertically vibrated liquid bath and driven by the waves it generates along its trajectory. The droplet and its associated wave field form an entity having an intrinsic dual particle-wave character.
The wave field encodes in its interference structure the past trajectory of the droplet. In the present article we show that this object can self-organize into a spinning state in which the droplet
possesses an orbiting motion without any external interaction. The rotation is driven by the wave-mediated attractive interaction of the droplet with its own past. The resulting “memory force” is
investigated and characterized experimentally, numerically, and theoretically. Orbiting with a radius of curvature close to half a wavelength is shown to be a memory-induced dynamical attractor for
the droplet’s motion.
Labousse, M., Perrard, S., Couder, Y., & Fort, E. (2016). Self-attraction into spinning eigenstates of a mobile wave source by its emission back-reaction. Physical Review E, 94(4), 042224.
Available on ResearchGate (Free login required)
Labousse, M., Oza, A. U., Perrard, S., & Bush, J. W. (2016). Pilot-wave dynamics in a harmonic potential: Quantization and stability of circular orbits.Physical Review E, 93(3), 033122.
“We present the results of a theoretical investigation of the dynamics of a droplet walking on a vibrating fluid bath under the influence of a harmonic potential. The walking droplet’s horizontal
motion is described by an integro-differential trajectory equation, which is found to admit steady orbital solutions. Predictions for the dependence of the orbital radius and frequency on the
strength of the radial harmonic force field agree favorably with experimental data. The orbital quantization is rationalized through an analysis of the orbital solutions. The predicted dependence of
the orbital stability on system parameters is compared with experimental data and the limitations of the model are discussed.”
Bacot, V., Labousse, M., Eddi, A., Fink, M., & Fort, E. (2015). Revisiting time reversal and holography with spacetime transformations. arXiv preprint arXiv:1510.01277.
Wave control is usually performed by spatially engineering the properties of a medium. Because time and space play similar roles in wave propagation, manipulating time boundaries provides a
complementary approach. Here, we experimentally demonstrate the relevance of this concept by introducing instantaneous time mirrors. We show with water waves that a sudden change of the effective
gravity generates time-reversed waves that refocus at the source. We generalize this concept for all kinds of waves introducing a universal framework which explains the effect of any time disruption
on wave propagation. We show that sudden changes of the medium properties generate instant wave sources that emerge instantaneously from the entire space at the time disruption. The time-reversed
waves originate from these “Cauchy sources” which are the counterpart of Huygens virtual sources on a time boundary. It allows us to revisit the holographic method and introduce a new approach for
wave control.
Borghesi, C., Moukhtar, J., Labousse, M., Eddi, A., Fort, E., & Couder, Y. (2014). Interaction of two walkers: Wave-mediated energy and force. Physical Review E, 90(6), 063017.
A bouncing droplet, self-propelled by its interaction with the waves it generates, forms a classical wave-particle association called a “walker.” Previous works have demonstrated that the dynamics of
a single walker is driven by its global surface wave field that retains information on its past trajectory. Here, we investigate the energy stored in this wave field for two coupled walkers and how
it conveys an interaction between them. For this purpose, we characterize experimentally the “promenade modes” where two walkers are bound, and propagate together. Their possible binding distances
take discrete values, and the velocity of the pair depends on their mutual binding. The mean parallel motion can be either rectilinear or oscillating. The experimental results are
recovered analytically with a simple theoretical framework. A relation between the kinetic energy of the droplets and the total energy of the standing waves is established.
Labousse, M., Perrard, S., Couder, Y., & Fort, E. (2014). Build-up of macroscopic eigenstates in a memory-based constrained system. New Journal of Physics, 16(11), 113027.
A bouncing drop and its associated accompanying wave forms a walker. Based on previous works, we show in this article that it is possible to formulate a simple theoretical framework for the walker
dynamics. It relies on a time scale decomposition corresponding to the effects successively generated when the memory effects increase. While the short time scale effect is simply responsible for the
walkerʼs propulsion, the intermediate scale generates spontaneously pivotal structures endowed with angular momentum. At an even larger memory
scale, if the walker is spatially confined, the pivots become the building blocks of a self-organization into a global structure. This new theoretical framework is applied in the presence of an
external harmonic potential, and reveals the underlying mechanisms leading to the emergence of the macroscopic spatial organization reported by Perrard et al (2014 Nature Commun. 5 3219).
Perrard, S., Labousse, M., Fort, E., & Couder, Y. (2014). Chaos driven by interfering memory. Physical review letters, 113(10), 104101.
The transmission of information can couple two entities of very different nature, one of them serving as a memory for the other. Here we study the situation in which information is stored in a wave
field and serves as a memory that pilots the dynamics of a particle. Such a system can be implemented by a bouncing drop generating surface waves sustained by a parametric forcing. The motion of the
resulting “walker” when confined in a harmonic potential well is generally disordered. Here we show that these trajectories correspond to chaotic regimes characterized by intermittent transitions
between a discrete set of states. At any given time, the system is in one of these states characterized by a double quantization of size and angular momentum. A low dimensional intermittency
determines their respective probabilities. They thus form an eigenstate basis of decomposition for what would be observed as a superposition of states if all measurements were intrusive
“At the macroscopic scale, waves and particles are distinct objects. The discovery of objects called walkers, consisting of a droplet bouncing on a vertically vibrated liquid bath, showed that this
need not be so. The droplet is self-propelled, guided on the liquid surface by the wave it has itself created during its previous bounces. These objects exhibit an original dynamics dominated by the
concept of path memory. The structure of the wave field that guides the droplet depends on the positions of the past bounces laid down along the trajectory. The depth of this memory can, moreover, be
controlled experimentally by changing the acceleration of the bath. Numerous experiments have revealed the singular dynamical behaviors of these coupled droplet/wave systems. This thesis addresses
the need for a theoretical understanding of the temporally non-local effects introduced by path memory. To this end, we study the evolution of a numerical walker in a two-dimensional harmonic
potential. A relatively small set of stable trajectories is obtained. We find that these are quantized in mean extension and in mean angular momentum. We analyze how the different time scales of the
dynamics interlock, which makes it possible to separate the short-time propulsive terms from the emergence of coherent wave structures at long times. We show how expressing the non-local character of
a walker reveals its internal symmetries and ensures the convergence of the dynamical system toward a low-dimensional set of eigenstates.”
Labousse, M. (2014). Étude d’une dynamique à mémoire de chemin: une expérimentation théorique (Doctoral dissertation, Université Pierre et Marie Curie UPMC Paris VI).
Perrard, S., Labousse, M., Miskin, M., Fort, E., & Couder, Y. Effets de quantification d’une association onde-particule soumise à une force centrale. Résumés des exposés de la 16e Rencontre du
Non-Linéaire Paris 2013, 68.
Telephone Wiretapping • Nicholas Pilkington
Imagine you are looking to intercept a communication that can happen between two people over a telephone network. Let’s say that the two people in question are part of a larger group, who all
communicate with each other (sometimes via other people in the network if they don’t have their phone number). We can represent this as a graph where the vertices are people and edges connect two
people if they have each other in their phone books.
Here’s a network with 6 people, some of whom don’t directly communicate with each other but can do so through others. Each person can reach all the others though, so the graph is connected.
Let’s also assume that this group of people communicates efficiently, using the smallest number of calls possible while always distributing information to every person. If an unknown member of the
network wants to communicate some nefarious plans to all the other members they call some people who in turn spread the message through the network by making more calls while adhering to the rules
above. If we can tap a single link between two people what is the probability of intercepting one of these calls? Let’s work through an example of a network of 4 people who can each communicate with
two others. The graph looks like this:
If we tap the link connecting 0 and 1 there is only one way to communicate to all members without using this tapped link, whereas there are 3 that do use it, meaning that the probability of
intercepting the information is 0.75. The small images represent the ways in which the communication can happen and those that use the tapped link (will be intercepted) are highlighted.
There are a couple of important things to note as this point. Firstly the links chosen to communicate over form a spanning tree of the graph. This is an important property as a spanning tree has one
less edge than the number of nodes and doesn’t contain any cycles. Cycles would mean that the communication has not been efficient because we could remove an edge on the cycle and still have the
information reach all the people.
Let’s work through another example and compute the probability of intercepting the communication if we tap a specific link. Here is another graph. It represents 4 people but this time there are 6
links. Everyone can communicate with everyone else. Let’s tap the top link - highlighted in yellow.
Now let’s enumerate all the spanning trees of this graph manually. Notice that each spanning tree connects all the vertices in the original graph just using fewer edges. In particular 3 edges, which
is one less than the number of vertices. Adding another edge would create a cycle. There are 16 different spanning trees and 8 of them (highlighted in yellow) use the link we have tapped. This means
the probability of intercepting the transmission is 8.0 / 16.0 = 0.5.
Cool! So to solve this problem we need to count the number of spanning trees of a graph that uses a specified edge - call that value A. Then compute the number of spanning trees that the graph has -
call that value B. The probability of intercepting the communication on the tapped link is A/B.
The number of spanning trees that use a specific edge can be computed by collapsing the vertices at each end of that edge into one vertex and computing the number of spanning trees for the resulting
multigraph. For example, for the cross-box graph above, if we want to find the number of spanning trees that use the top edge, we collapse it and generate the graph on the right, which indeed has 8
spanning trees. Remember this could create multi-edges between vertices.
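Collapsing the two endpoints can be done directly on a (weighted) adjacency matrix. This sketch (function names are our own) merges vertex `v` into vertex `u`, accumulating parallel edges as entry weights and dropping the self-loop created by the contracted edge:

```python
import numpy as np

def contract(A, u, v):
    """Merge vertex v into u; parallel edges accumulate as entry weights."""
    B = A.astype(float)
    B[u, :] += B[v, :]
    B[:, u] += B[:, v]
    B[u, u] = 0.0  # drop the self-loop from the contracted edge
    keep = [i for i in range(A.shape[0]) if i != v]
    return B[np.ix_(keep, keep)]

# The cross-box graph above is K4; collapsing the tapped top edge (0-1)
# leaves a triangle in which two of the links are now double edges.
K4 = np.ones((4, 4)) - np.eye(4)
print(contract(K4, 0, 1))
```

Representing multi-edges as weights works because the matrix tree theorem counts weighted spanning trees, with each tree contributing the product of its edge weights.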
Enumerating all the spanning trees is not a feasible option as this number grows really quickly. In fact Cayley’s formula gives the number of spanning trees of a complete graph of size K as K ** (K - 2).
Instead we can use Kirchhoff’s matrix tree theorem, which tells us that for a connected graph represented by an adjacency matrix G, with N vertices, we can count the number of spanning trees as

t(G) = (1/N) * λ1 * λ2 * ... * λ(N-1)

where the lambdas are the non-zero eigenvalues of the associated Laplacian matrix of G. It’s actually easier and more numerically stable to compute the determinant of a cofactor of the Laplacian, which
gives the same result. The Laplacian matrix is used to compute lots of useful properties of graphs. It is equal to the degree matrix minus the adjacency matrix, L = D − A:
Computing the Laplacian from an adjacency matrix can be done with this code:
# compute the Laplacian of the adjacency matrix
def laplacian(A):
L = -A
for a in xrange(L.shape[0]):
for b in xrange(L.shape[1]):
if A[a][b]:
L[a][a] += A[a][b] # increase degree
return L
Using this we can compute the cofactor.
def cofactor(L, factor=1.0):
Q = L[1::, 1::] # bottom right minor
return np.linalg.det(Q / factor)
I also added a scaling parameter to the cofactor computation. The determinants can get really big when the network has thousands of vertices, so computing the numerator and denominator of
the probability separately can overflow. If we divide the Laplacian matrix by some factor before computing the determinant, we reduce the result by factor ** N, where N is the size of the
matrix. The scaled factors almost entirely cancel between numerator and denominator, because the two matrices differ in dimension by only 1, so we can handle large graphs.
def probability(G1, G2):
    # G1: adjacency matrix of the collapsed graph, G2: the original graph
    factor = 24.0
    # det(A) = f**n * det(A/f), so the two scaled cofactors are:
    #   f**(n-2) * det(Q1/f)    (collapsed graph: one fewer vertex)
    #   --------------------
    #   f**(n-1) * det(Q2/f)
    # the leftover factor of f in the denominator is divided out below
    L1 = laplacian(G1)
    L2 = laplacian(G2)
    Q1 = cofactor(L1, factor=factor)
    Q2 = cofactor(L2, factor=factor)
    return Q1 / Q2 / factor
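As a sanity check, here is a self-contained sketch (restating compact versions of the helpers above) applied to a 4-cycle: collapsing any edge of the cycle yields a triangle, so every edge should be tapped with probability 3/4:

```python
import numpy as np

def laplacian(A):
    L = -A.astype(float)
    for i in range(L.shape[0]):
        L[i][i] = A[i].sum()  # degree on the diagonal
    return L

def cofactor(L, factor=1.0):
    return np.linalg.det(L[1:, 1:] / factor)

def probability(G1, G2, factor=24.0):
    # G1 = collapsed graph, G2 = original graph
    return cofactor(laplacian(G1), factor) / cofactor(laplacian(G2), factor) / factor

# 4-cycle 0-1-2-3-0; collapsing edge (0,1) produces a triangle
C4 = np.array([[0, 1, 0, 1],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [1, 0, 1, 0]])
K3 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

p = probability(K3, C4)  # 3 spanning trees of K3 / 4 spanning trees of C4
```

The scaling factor cancels as described: the same value comes out for any nonzero `factor`.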
Using this we can go through each edge in the graph and compute the probability of interception if we tap that edge. The value depends on the graph; any bridge (an edge whose removal disconnects
the graph) is used by every spanning tree, so such edges have probability 1.0. | {"url":"https://nickp.svbtle.com/telephone-tapping","timestamp":"2024-11-13T16:25:33Z","content_type":"text/html","content_length":"18712","record_id":"<urn:uuid:4e265d31-8c0d-49f5-8880-1f59cd784090>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00655.warc.gz"} |
How do I apply Life Cycle Cost of Ownership in the case of motors?
This article is no longer actively maintained. While it remains accessible for reference, exercise caution as the information within may be outdated. Use it judiciously and consider verifying its
content in light of the latest developments.
The Life Cycle Cost (LCC) of ownership of a product is the sum of all the costs incurred during its lifetime:
LCC = Purchase cost + energy cost + maintenance cost + installation cost + Decommissioning cost
Academic approaches to the calculation of Life Cycle Cost can be very involved, but in practical terms it is the difference between competing options that matters. So items where there is little or
no discernible difference (for example the installation or de-commissioning cost) could reasonably be ignored, as can items which might simply be unknowns (for example the cost of condition monitoring).
For an industrial induction motor, the energy cost will typically be over 90% of this whole cost, and so this should be the focus of attention, with the calculations then focussing on the price
premium of a higher efficiency motor. The energy cost can be calculated using the formula:
Lifetime cost of energy = Cost of energy / kWh x load (kW) x lifetime
For a more accurate estimate, the time spent at different load factors, and the motor efficiency at each of these load points, can be used to give a more refined idea of the energy consumption.
(A short cut to estimate the new energy consumption is to simply multiply the existing estimated energy consumption by the ratio of the old:new efficiency. This gives a small under-estimate of the
energy saving, but even this diminishes as the efficiency difference becomes smaller. But for a quick estimate this should be fine.)
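As an illustration of the calculation (all numbers here are hypothetical, not from the article), the lifetime energy cost and the saving from a higher-efficiency motor can be estimated like this:

```python
def lifetime_energy_cost(price_per_kwh, shaft_load_kw, hours, efficiency):
    # electrical input power = mechanical shaft load / motor efficiency
    return price_per_kwh * (shaft_load_kw / efficiency) * hours

# hypothetical case: $0.10/kWh, 50 kW shaft load, 6000 h/year for 10 years
old_motor = lifetime_energy_cost(0.10, 50, 60000, 0.90)
new_motor = lifetime_energy_cost(0.10, 50, 60000, 0.93)
saving = old_motor - new_motor  # compare this against the price premium
```

Even the small 3-point efficiency difference produces a saving that dwarfs typical list-price differences.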
In the extreme case of a motor running long hours at high load, over 10 years the energy cost might be 100 times the purchase price. Life Cycle Costing of such examples shows that big
differences in the list price of a motor matter far less than an apparently minor difference in motor efficiency.
0 comments
Article is closed for comments. | {"url":"https://help.leonardo-energy.org/hc/en-us/articles/201941131-How-do-I-apply-Life-Cycle-Cost-of-Ownership-in-the-case-of-motors","timestamp":"2024-11-10T20:57:00Z","content_type":"text/html","content_length":"32170","record_id":"<urn:uuid:ccbdfd3f-de2c-410a-85b2-a87f6fea030a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00452.warc.gz"} |
What is the best test prep book for AP Calculus? - Gia sư IB
What is the best test prep book for AP Calculus?
16 September, 2021
giasuib.com – AP Calculus is a subject that is quite popular with international students. In this article, we will help you find the best test prep books for AP Calculus.
Related posts:
Is AP Calculus really difficult?
In fact, many students do not find the math concepts of AP Calculus (AB or BC) difficult at all, which is why good and excellent students pick up the subject quite easily. AP Calculus AB
covers less content at a slower pace than AP Calculus BC, so AB may be more suitable for students who are less confident in mathematics. In contrast, BC is more suitable for students who are
quick and nimble with numbers.
Other students don't find AP Calculus easy, because all math requires practice to absorb well. Conceptually, however, AP Calculus is like an extended chapter of algebra rather than a whole lot
of new material. Students often compare AP Calculus to music and sports, both of which involve non-stop practice: most people can learn to play an instrument or a sport through regular,
diligent effort and practice.
There is no denying that AP Calculus is a subject with many obstacles, so to overcome those obstacles, what is the best test prep book for AP Calculus?
We at giasuib have compiled a list of AP Calculus test prep textbooks (both AB and BC) for you to consult and choose from:
Price: $174
This book helps you to fully understand the concepts of Calculus. It covers all the core concepts of algebra, geometry, trigonometry, and basic functions to ensure that you are fully prepared for the
AP exam. The textbook teaches the basic concepts of Calculus using algebraic, visual, and verbal methods. Before you decide to buy any other book or review material, you should consider buying this
textbook first as it can help you get a 5 on the AP Calculus AB exam.
Price: $15
This is a comprehensive AP test prep handbook, updated to align with the new framework in effect for the 2020 AP Calculus AB and BC exams.
The material includes eight practice exams for AP Calculus AB and four practice exams for AP Calculus BC. It also includes helpful test questions including solutions and detailed review outlines on
topics for both exams. In addition, this document will also help you practice using the graphing calculator more effectively by providing in-depth advice.
Price: $18.22
This test prep material covers everything you need to know to get an AP score of 5 on the upcoming AP Calculus exam. It also includes comprehensive review content for all test topics, updated
information on the 2020 AP Calculus AB Exam. Furthermore, all topics are organized into manageable units. We also recommend this book because you’ll have access to AP Connect, an online portal for
helpful AP exam updates and college prep information.
Price: $9.99
This book is a comprehensive textbook and workbook with solutions to each problem. With it, you’ll have all the topics you need to practice math on, including trig functions, polynomials and more.
Furthermore, McMullen also explains more complex concepts such as series rules, second derivatives, and multiple integrals. The goal of this book is not to cover every tedious concept in the study of
calculus, but rather to provide the essential skills needed to apply calculus to any field, such as physics or engineering.
giasuib.com – A place to share experiences of learning international programs such as IB, AP, A-level, IGCSE, GED… If you have any questions, please contact us directly by email or hotline for free | {"url":"https://www.giasuib.com/en/what-is-the-best-test-prep-book-for-ap-calculus/","timestamp":"2024-11-04T04:33:52Z","content_type":"text/html","content_length":"48815","record_id":"<urn:uuid:c596be54-464f-4131-88af-aa63ff2cbdde>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00480.warc.gz"} |
Milliradians conversion
Worldwide use:
Milliradians, also known as mils, are a unit of measurement commonly used in various fields worldwide.
Milliradians are a versatile unit of measurement that find applications in military operations, surveying, and navigation highlights their importance in achieving accuracy and precision in
measurements and calculations.
A radian is the angle made by taking the radius of a circle and wrapping it along the circle's edge. Therefore 1 radian is equal to (180/π) degrees. A milliradian is 1/1000 of this value.
Milliradians, often abbreviated as mrad or mil, are a unit of angular measurement. The term "milliradian" is derived from the Latin word "mille," meaning thousand, and "radius," referring to the
radius of a circle. In simple terms, a milliradian is equal to one-thousandth of a radian.
The origin of the milliradian can be traced back to the early 20th century when it was first introduced by the Swedish engineer and inventor Carl Gustav Jungner. Jungner developed the concept of the
milliradian as a way to simplify and standardize angular measurements in military optics and artillery. Since then, the milliradian has become widely adopted by military organizations around the
world and is now a common unit of measurement in various fields, including ballistics, surveying, and navigation.
Common references:
One finger width at an arms length is approximately 30 mils wide
A fist at arms length is approx. 150 mils
A spread out hand is approx. 300 mils.
Usage context:
A milliradian is a useful unit of measurement because it allows for precise and consistent angular calculations. It is often used to measure small angles, especially in situations where accuracy is
crucial. For example, in the field of optics, milliradians are used to measure the field of view of a telescope or binoculars. In ballistics, milliradians are used to calculate bullet drop and
windage adjustments, ensuring accurate long-range shooting. In surveying, milliradians are used to measure horizontal and vertical angles, aiding in the precise mapping of land and construction projects.
The milliradian is a unit of measurement commonly used in military and firearms applications.
Conversion of milliradians:
To convert milliradians to other units of angular measurement, such as degrees or radians, a simple conversion factor is used. One milliradian is equal to approximately 0.0573 degrees or 0.001
radians. Conversely, to convert degrees or radians to milliradians, one can multiply the value by approximately 17.453 or 1000, respectively.
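These conversions can be written directly (a short sketch; the constants follow the factors quoted above, with the exact degree factor being 1000·π/180 ≈ 17.453):

```python
import math

MRAD_PER_DEGREE = 1000 * math.pi / 180  # ≈ 17.453 milliradians per degree

def degrees_to_mrad(deg):
    return deg * MRAD_PER_DEGREE

def mrad_to_degrees(mrad):
    return mrad / MRAD_PER_DEGREE  # 1 mrad ≈ 0.0573 degrees

def mrad_to_radians(mrad):
    return mrad / 1000.0  # by definition, 1000 mrad per radian
```

For example, `degrees_to_mrad(1)` gives roughly 17.453 and `mrad_to_degrees(1)` roughly 0.0573, matching the factors above.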
About NATO milliradians:
NATO milliradians (NATO mils) are a unit of angular measurement commonly used in military and artillery applications. They are derived from the radian, which is the standard unit for measuring angles
in the International System of Units (SI). A radian is defined as the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle.
NATO mils are a more practical and convenient unit for military purposes, as they allow for easier estimation and calculation of angles in the field. One NATO mil is equal to 1/6400th of a circle, or
exactly 0.05625 degrees. This means that a full circle is divided into 6400 NATO mils.
NATO mils are particularly useful in artillery and target acquisition, as they provide a simple and accurate way to measure angles and distances. They are often used to determine the direction and
elevation of artillery fire, as well as to calculate the range to a target. NATO mils are also employed in land navigation and map reading, allowing military personnel to quickly and accurately
determine their position and plan their movements. Overall, NATO mils provide a practical and efficient means of angular measurement in military operations.
About USSR milliradians:
The USSR milliradian, also known as the Soviet milliradian, is a unit of measurement used in the former Soviet Union for angular measurements. It is derived from the radian, which is the standard
unit for measuring angles in the International System of Units (SI). The milliradian is roughly equal to one thousandth of a radian, making it a smaller unit of measurement.
The USSR milliradian was widely used in various fields, including military and engineering applications. It provided a convenient way to measure small angles with high precision. In military
applications, the milliradian was used for artillery targeting and range estimation. It allowed for accurate calculations of bullet trajectory and helped improve the accuracy of artillery fire. In
engineering, the milliradian was used for surveying and mapping, providing a precise way to measure angles and distances.
Although the USSR milliradian is no longer in common use since the dissolution of the Soviet Union, it still holds historical significance. It serves as a reminder of the unique measurement systems
that were developed in different regions of the world. Today, the radian and its decimal multiples, such as the milliradian, are widely used in various fields, including mathematics, physics, and
engineering, providing a standardized way to measure angles and facilitate accurate calculations.
There are 6,000 USSR milliradians to a full circle.
About US WW2 Milliradians:
During World War II, milliradians (mils) and radians played a crucial role in various military operations. Milliradians are a unit of angular measurement commonly used in artillery and long-range
shooting. They are derived from the concept of a radian, which is the angle subtended at the center of a circle by an arc equal in length to the radius of the circle. A milliradian is equal to
one-thousandth of a radian, making it a more precise unit for measuring small angles.
In the context of World War II, milliradians were used extensively by artillery units to calculate the elevation and azimuth angles required to accurately hit targets at long distances. Artillery
gunners would use specialized instruments, such as the M2A2 aiming circle, to measure the angle between the target and the gun. By converting this angle into milliradians, gunners could then adjust
the elevation and direction of the gun to ensure accurate fire. This was particularly important in situations where targets were located far away or obscured by terrain, as milliradians allowed for
precise adjustments to be made, increasing the chances of hitting the target successfully.
There are 4,000 US WW2 milliradians in a full circle.
About UK Milliradians:
The milliradian (mrad) is a unit of measurement commonly used in the United Kingdom to express angles and distances. It is derived from the radian, which is the standard unit for measuring angles in
the International System of Units (SI). The milliradian is equal to one thousandth of a radian, making it a smaller and more precise unit of measurement.
In the UK, milliradians are often used in various fields such as surveying, engineering, and ballistics. They are particularly useful for measuring small angles and distances with high accuracy. For
example, in surveying, milliradians are used to measure the slope of the land or the inclination of a surface. In engineering, milliradians are used to calculate the angular displacement of
mechanical components or the field of view of optical instruments.
The advantage of using milliradians over degrees or other units is their ability to provide more precise measurements. Since a milliradian is a smaller unit, it allows for finer adjustments and more
accurate calculations. Additionally, milliradians are often used in conjunction with metric units, which makes them compatible with the SI system and facilitates conversions between different units
of measurement. Overall, the use of milliradians in the UK ensures greater precision and consistency in various applications that require accurate angular and distance measurements. | {"url":"http://metric-conversions.com/angle/milliradians-conversion.htm","timestamp":"2024-11-07T12:20:58Z","content_type":"text/html","content_length":"46352","record_id":"<urn:uuid:96956c3e-296f-4384-bafa-bfb72f2b8b38>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00382.warc.gz"} |
MathObject formatting PG 2.6/2.7
Hi all,
As a follow-up, I've isolated the problem to the rendering of the ijk format of the vector. A stripped down version of the problem that I'm using is appended. In this, both ijk formatted vectors
display as in the lower figure above (as an experiment, I also moved the definition of $vijk above the Context()->texStrings call, to no effect); the standard formatted vector is as expected; and the
components of the vector all display correctly.
Thoughts welcome. Thanks,
$showPartialCorrectAnswers = 1;
$c1 = non_zero_random(-5,5,1); $c2 = non_zero_random(3,7,1); $c3 = non_zero_random(-3,3,1);
$c1c2 = $c1*$c2; $twoc1 = 2*$c1; $c2m1 = $c2 - 1; $nc1c2 = -1*$c1c2;
@vel = ( Compute("($twoc1)*t+$c2"), Compute("-$c1*e^(-t)"), Compute("-($c1c2)*sin($c2*t)") );
$vel = Vector( @vel );
$acc = $vel->D;
$vijk = $vel->ijk;
The velocity of a particle is given by \( {\bf v}(t) = \{ $vel->ijk \} \). Find
the acceleration \( {\bf a}(t) \): \{ ans_rule(45) \}
\(v_1 = $vel[0] \)$BR \(v_2 = $vel[1] \)$BR
\(v_3 = $vel[2] \)$BR \(v = $vel \)$BR
\(v = $vijk\)
ANS( $acc->cmp() ); | {"url":"https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3109&parent=7683","timestamp":"2024-11-14T03:56:19Z","content_type":"text/html","content_length":"68509","record_id":"<urn:uuid:f1096bcf-8d12-432e-b3c8-4cf3473a7ce4>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00817.warc.gz"} |
6.9 The second law of thermodynamics for open systems
Entropy can be transferred to a system via two mechanisms: (1) heat transfer and (2) mass transfer. For open systems, the second law of thermodynamics is often written in the rate form; therefore, we
are interested in the time rate of entropy transfer due to heat transfer and mass transfer.
[latex]\dot{S}_{heat} =\dfrac{dS_{heat}}{dt} \cong \displaystyle\sum\dfrac{\dot{Q}_k}{T_k}[/latex]
[latex]\dot{S}_{mass} =\dfrac{dS_{mass}}{dt} = \displaystyle \sum \dot{m}_{k}s_{k}[/latex]
[latex]\dot{m}[/latex]: rate of mass transfer
[latex]\dot{Q}_k[/latex]: rate of heat transfer via the location [latex]k[/latex] of the system boundary, which is at a temperature of [latex]T_k[/latex] in Kelvin
[latex]\dot{S}_{heat}[/latex]: time rate of entropy transfer due to heat transfer
[latex]\dot{S}_{mass}[/latex]: time rate of entropy transfer that accompanies the mass transfer into or out of a control volume
[latex]s_k[/latex]: specific entropy of the fluid
Applying the entropy balance equation, [latex]\Delta \rm {entropy= + in - out + gen}[/latex], to a control volume, see Figure 6.9.1, we can write the following equations:
• General equation for both steady and transient flow devices
[latex]\dfrac{dS_{c.v.}}{dt}=\displaystyle\sum\dfrac{\dot{Q}_{c.v.}}{T}+\left(\displaystyle\sum\dot{m}_{i}s_{i}-\sum\dot{m}_{e}s_{e}\right)+\displaystyle\dot{S}_{gen}\ \ \ \ \ \ (\dot{S}_{gen} \ge 0)[/latex]
• For steady-state, steady-flow devices, [latex]\dfrac{{dS}_{c.v.}}{dt}=0[/latex]; therefore,
[latex]\displaystyle \sum\dot{m}_{e}s_{e}-\sum\dot{m}_{i}s_{i}=\displaystyle \sum\dfrac{\dot{Q}_{c.v.}}{T}+\dot{S}_{gen}\ \ \ \ \ \ (\dot{S}_{gen} \ge 0)[/latex]
• For steady and isentropic flow devices, [latex]\dot{Q}_{c.v.}=0[/latex] and [latex]\dot S_{gen}=0[/latex]; therefore,
[latex]\displaystyle \sum\dot{m}_{e}s_{e}=\sum\dot{m}_{i}s_{i}[/latex]
[latex]\dot{m}[/latex]: rate of mass transfer of the fluid entering or leaving the control volume via the inlet [latex]i[/latex] or exit [latex]e[/latex], in kg/s
[latex]\dot{Q}_{c.v.}[/latex]: rate of heat transfer into the control volume via the system boundary (at a constant [latex]T[/latex]), in kW
[latex]S_{c.v.}[/latex]: entropy in the control volume, in kJ/K
[latex]\dfrac{{dS}_{c.v.}}{dt}[/latex]: time rate of change of entropy in the control volume, in kW/K
[latex]\dot S_{gen}[/latex]: time rate of entropy generation in the process, in kW/K
[latex]s[/latex]: specific entropy of the fluid entering or leaving the control volume via the inlet [latex]i[/latex] or exit [latex]e[/latex], in kJ/kgK
[latex]T[/latex]: absolute temperature of the system boundary, in Kelvin
Figure 6.9.1 Flow through a control volume, showing the entropy transfers and entropy generation
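As a small numerical sketch of the steady-state balance above (the function name and all values here are illustrative, not from the text), the entropy generation rate for a single-inlet, single-outlet device can be computed directly:

```python
def entropy_generation_rate(m_dot, s_in, s_out, Q_dot, T_boundary):
    # steady-state, single-inlet/single-outlet entropy balance:
    #   S_gen = m_dot*(s_e - s_i) - Q_dot/T,  which must be >= 0
    return m_dot * (s_out - s_in) - Q_dot / T_boundary

# illustrative: 2 kg/s of fluid, specific entropy rises 6.5 -> 7.0 kJ/kg.K,
# while 200 kW of heat enters across a boundary held at 400 K
S_gen = entropy_generation_rate(2.0, 6.5, 7.0, 200.0, 400.0)  # kW/K
```

Here the mass-transfer term contributes 1.0 kW/K and the heat-transfer term 0.5 kW/K, leaving 0.5 kW/K of generation; a negative result would signal an impossible process.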
The diagrams in Figure 6.9.e1 show a reversible process in a steady-state, single flow of air. The letters i and e represent the initial and final states, respectively. Treat air as an ideal gas and
assume ΔKE=ΔPE=0. Are the change in specific enthalpy Δh = h_e − h_i, the specific work w, and the specific heat transfer q positive, zero, or negative? What is the relation between w and q?
Figure 6.9.e1 T-s and P-v diagrams of a reversible process for an ideal gas
The specific work can be evaluated mathematically and graphically.
(1) Mathematically,
[latex]\because v_{e} > v_{i}[/latex]
[latex]\therefore w = \displaystyle\int_{i}^{e}{Pd{v}\ } >\ 0[/latex]
(2) Graphically, the specific work is the area under the process curve in the [latex]P-v[/latex] diagram; therefore [latex]w[/latex] is positive, see Figure 6.9.e2.
In a similar fashion, the specific heat transfer can also be evaluated graphically and mathematically.
(1) Graphically,
[latex]\because ds=\left(\displaystyle\frac{\delta q}{T}\right)_{rev}[/latex]
[latex]\therefore q_{rev} = \displaystyle \int_{i}^{e}{Tds} = T(s_e-s_i)\ >\ 0[/latex]
For a reversible process, the area under the process curve in the [latex]T-s[/latex] diagram represents the specific heat transfer of the reversible process; therefore [latex]q=q_{rev}[/latex] is
positive, see Figure 6.9.e2.
(2) The same conclusion, [latex]q_{rev}>0[/latex], can also be derived from the second law of thermodynamics mathematically, as follows.
For a reversible process, [latex]\dot{S}_{gen}[/latex]= 0, and the fluid is assumed to be always in thermal equilibrium with the system boundary, or [latex]T = T_{surr}[/latex]; therefore,
[latex]q_{rev} = \dfrac{\dot{Q}}{\dot{m}} = T(s_e-s_i) > 0[/latex]
The change in specific enthalpy can then be evaluated. For an ideal gas,
[latex]\Delta h = h_e - h_i = C_p(T_e - T_i)[/latex]
[latex]\because T_e = T_i[/latex]
[latex]\therefore h_e = h_i[/latex] and [latex]\Delta h =0[/latex]
Now, we can determine the relation between [latex]w[/latex] and [latex]q_{rev}[/latex] from the first law of thermodynamics for control volumes.
[latex]\because \dot{m}( h_e - h_i ) = \dot{Q}_{rev} - \dot{W} = 0[/latex]
[latex]\therefore \dot{Q}_{rev} = \dot{W}[/latex]
[latex]\therefore q_{rev} = w[/latex]
In this reversible process, the specific heat transfer and specific work must be the same. Graphically, the two areas under the [latex]P-v[/latex] and [latex]T-s[/latex] diagrams must be the same.
Figure 6.9.e2 T-s and P-v diagrams, showing the solutions for a reversible process of an ideal gas
An isentropic process refers to a process that is reversible and adiabatic. The entropy remains constant in an isentropic process. | {"url":"https://pressbooks.bccampus.ca/thermo1/chapter/6-9-the-second-law-of-thermodynamics-for-open-systems/","timestamp":"2024-11-09T22:12:00Z","content_type":"text/html","content_length":"119488","record_id":"<urn:uuid:7faa40ea-5ca0-48f8-82fa-139af2c0fa07>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00258.warc.gz"} |
Number Logic Practice 2
The following activity is a practice exercise to help you measure your success under timed conditions.
If you are unsure of an answer, move on to the next question.
You should come back to questions you found difficult at the end if you have time.
Let's try one together as a reminder:
We need to work out how the numbers on the outside of the brackets create the numbers on the inside of the brackets and then apply the same rule to the third set of brackets.
10 (23) 3 5 (16) 6 2 ( _ ) 5
First, we need to find a way to make 23 using 10 and 3.
If we add 10 and 3 together we get 13.
Then, if we add another lot of 10, we get to 23.
So the rule could be: add the two outside numbers and then add the first number again.
Apply to the second set of numbers: 5 + 5 + 6 = 16.
We’ve found the rule.
Then we use the rule to work out the missing number in the final set: 2 + 2 + 5 = 9.
The missing number is 9.
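The worked rule can be checked mechanically (a tiny sketch; the function name is ours):

```python
def rule(a, b):
    # add the two outside numbers, then add the first number again
    return a + a + b

# the three bracket puzzles from the example
checks = [rule(10, 3), rule(5, 6), rule(2, 5)]  # expect 23, 16, 9
```
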
Now it's time to begin this practice exercise.
Good luck! | {"url":"https://www.edplace.com/worksheet_info/11+/keystage2/year5/topic/1411/8793/number-logic-practice-2","timestamp":"2024-11-03T19:02:26Z","content_type":"text/html","content_length":"80417","record_id":"<urn:uuid:0da409aa-dda3-4d86-a2fa-485f1683522a>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00113.warc.gz"} |
Suppose Babies Born After A Gestation Period Of 32 To 35 Weeks Have A Mean Weight Of 2500 Grams And A Standard Deviation Of 700 Grams
Z-score for 34-week baby = 0.643
Z-score for 41-week baby = 1.154
Relative to its gestation period, the 41-week baby weighs more than the 34-week baby.
Step-by-step explanation:
Given - Suppose babies born after a gestation period of 32 to 35 weeks have a mean weight of 2500 grams and a standard deviation of 700 grams while babies born after a gestation period of 40 weeks
have a mean weight of 3100 grams and a standard deviation of 390 grams. If a 34-week gestation period baby weighs 2950 grams and a 41-week gestation period baby weighs 3550 grams
To find - Find the corresponding z-scores. Which baby weighs more relative to the gestation period.
Proof -
Given that,
In between period of 32 to 35 weeks
Mean = 2500
Standard deviation = 700
In between after a period of 40 weeks
Mean = 3100
Standard deviation = 390
For a 34-week baby,
X = 2950
For a 41-week baby,
X = 3550
Z-score = (X - mean) / Standard deviation
For a 34-week baby,
Z - score = (2950 - 2500) / 700 = 0.643
For a 41-week baby,
Z-score = (3550 - 3100) / 390 = 1.154
∴ we get
Z-score for 34-week baby = 0.643
Z-score for 41-week baby = 1.154
Since 1.154 > 0.643, the 41-week baby weighs more relative to its gestation period than the 34-week baby.
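The z-score computation can be scripted (a small sketch mirroring the steps above):

```python
def z_score(x, mean, std):
    # how many standard deviations x lies above the mean
    return (x - mean) / std

z_34 = z_score(2950, 2500, 700)  # 34-week baby vs 32-35 week population
z_41 = z_score(3550, 3100, 390)  # 41-week baby vs 40+ week population
heavier_relative = "41-week" if z_41 > z_34 else "34-week"
```

Rounding to three decimals reproduces the values 0.643 and 1.154 found above.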
[tex]P = \boxed{\sf 4000}[/tex]
[tex]r=\boxed{\sf 0.03}[/tex]
[tex]n=\boxed{\sf 4}[/tex]
[tex]t=\boxed{\sf 5}[/tex]
Step-by-step explanation:
Compound Interest Formula:

[tex]A=P\left(1+\frac{r}{n}\right)^{nt}[/tex]

where A = final amount, P = principal amount, r = interest rate (in decimal form), n = number of times interest is applied per year, and t = time (in years).
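A direct evaluation of the formula (a minimal sketch using the problem's numbers):

```python
def compound_amount(P, r, n, t):
    # A = P * (1 + r/n) ** (n*t)
    return P * (1 + r / n) ** (n * t)

# $4,000 at 3% interest, compounded quarterly, for 5 years
A = compound_amount(4000, 0.03, 4, 5)  # ≈ 4644.74
```
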
If you invest $4,000 then the principal amount is $4,000. As P represents the principal amount:
[tex]\implies P = \boxed{\sf 4000}[/tex]
The interest is 3%. 3% = 3/100 = 0.03. Therefore:
[tex]\implies r=\boxed{\sf 0.03}[/tex]
"n" represents the number of times interest is applied per year.
Therefore, if the interest is applied quarterly then:
[tex]\implies n=\boxed{\sf 4}[/tex]
"t" represents the time in years. Therefore, if you plan to leave the money in the account for 5 years then:
[tex]\implies t=\boxed{\sf 5}[/tex]
To calculate the amount in the account after 5 years, substitute the values into the formula and solve for A:
[tex]\implies A=4000\left(1+\dfrac{0.03}{4}\right)^{4 \times 5}[/tex]
[tex]\implies A=4000\left(1+0.0075\right)^{20}[/tex]
[tex]\implies A=4000\left(1.0075\right)^{20}[/tex]
[tex]\implies A=4000(1.16118414...)[/tex]
[tex]\implies A=\$4644.74[/tex] | {"url":"https://www.cairokee.com/homework-solutions/suppose-babies-born-after-a-gestation-period-of-32-to-35-wee-ujbd","timestamp":"2024-11-07T11:06:49Z","content_type":"text/html","content_length":"91582","record_id":"<urn:uuid:50b32744-661b-47de-a1c8-ef693004721a>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00193.warc.gz"} |
weierstrass substitution proof
In integral calculus, the tangent half-angle substitution, $t = \tan(x/2)$, is a change of variables used for evaluating integrals: it converts a rational function of the trigonometric
functions of $x$ into an ordinary rational function of $t$. It is a variant of integration by substitution that can be applied to certain integrands built from trigonometric functions.

Writing $u = \tan(x/2)$, the standard identities are

$\sin x = \dfrac{2u}{1+u^2}, \qquad \cos x = \dfrac{1-u^2}{1+u^2}, \qquad \tan x = \dfrac{2u}{1-u^2}, \qquad dx = \dfrac{2\,du}{1+u^2}.$

Geometrically, the identities can be derived by drawing the unit circle and letting P be the point (1, 0). A point on (the right branch of) a hyperbola is given analogously by
$(\cosh\theta, \sinh\theta)$, which leads to the hyperbolic counterpart of the substitution.

H. Anton, though, warns the student that the substitution can lead to cumbersome partial fractions decompositions and consequently should be used only in the absence of a simpler method.
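As a concrete check (a self-contained numerical sketch, not taken from the original sources): applying $t = \tan(x/2)$ to $\int dx/(1+\cos x)$ gives $1+\cos x = 2/(1+t^2)$ and $dx = 2\,dt/(1+t^2)$, so the integral collapses to $\int dt = \tan(x/2) + C$. The code verifies this closed form against a midpoint Riemann sum:

```python
import math

def antiderivative(x):
    # closed form obtained via t = tan(x/2):
    # the integrand dx/(1 + cos x) becomes simply dt, so F(x) = tan(x/2)
    return math.tan(x / 2)

# midpoint-rule approximation of the integral of 1/(1+cos x) on [0, pi/2]
a, b, n = 0.0, math.pi / 2, 20000
h = (b - a) / n
numeric = h * sum(1.0 / (1.0 + math.cos(a + (k + 0.5) * h)) for k in range(n))

closed = antiderivative(b) - antiderivative(a)  # tan(pi/4) - tan(0) = 1
```

The two values agree to well within the midpoint rule's error, confirming the substitution's bookkeeping.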
substitution which transforms an integral of the form. t cot Hoelder functions. He is best known for the Casorati Weierstrass theorem in complex analysis. Generalized version of the Weierstrass
theorem. Our aim in the present paper is twofold. , Weierstrass Substitution - Page 2 eliminates the \(XY\) and \(Y\) terms. $\int \frac{dx}{a+b\cos x}=\int\frac{a-b\cos x}{(a+b\cos x)(a-b\cos x)}dx=
\int\frac{a-b\cos x}{a^2-b^2\cos^2 x}dx$. on the left hand side (and performing an appropriate variable substitution) How to make square root symbol on chromebook | Math Theorems 2 into one of the
following forms: (Im not sure if this is true for all characteristics.). After browsing some topics here, through one post, I discovered the "miraculous" Weierstrass substitutions. csc Integration by
substitution to find the arc length of an ellipse in polar form. Instead of Prohorov's theorem, we prove here a bare-hands substitute for the special case S = R. When doing so, it is convenient to
have the following notion of convergence of distribution functions. {\displaystyle \cos 2\alpha =\cos ^{2}\alpha -\sin ^{2}\alpha =1-2\sin ^{2}\alpha =2\cos ^{2}\alpha -1} [7] Michael Spivak called
it the "world's sneakiest substitution".[8]. Given a function f, finding a sequence which converges to f in the metric d is called uniform approximation.The most important result in this area is due
to the German mathematician Karl Weierstrass (1815 to 1897).. t u-substitution, integration by parts, trigonometric substitution, and partial fractions. Weierstrass Substitution 382-383), this is
undoubtably the world's sneakiest substitution. {\textstyle t=\tan {\tfrac {x}{2}}} Click on a date/time to view the file as it appeared at that time. \end{align} and a rational function of sin If
the \(\mathrm{char} K \ne 2\), then completing the square d The Bolzano-Weierstrass Property and Compactness. ) The Weierstrass substitution, named after German mathematician Karl Weierstrass
(18151897), is used for converting rational expressions of trigonometric functions into algebraic rational functions, which may be easier to integrate.. PDF The Weierstrass Function - University of
California, Berkeley csc By similarity of triangles. Weierstrass Substitution : r/calculus - reddit &=\int{(\frac{1}{u}-u)du} \\ cos File usage on Commons. Stewart provided no evidence for the
attribution to Weierstrass. (PDF) Transfinity | Wolfgang Mckenheim - Academia.edu 2 The German mathematician Karl Weierstrauss (18151897) noticed that the substitution t = tan(x/2) will convert any
rational function of sin x and cos x into an ordinary rational function. {\textstyle t=\tan {\tfrac {x}{2}},} Browse other questions tagged, Start here for a quick overview of the site, Detailed
answers to any questions you might have, Discuss the workings and policies of this site. 2 . 2 x To compute the integral, we complete the square in the denominator: . t Required fields are marked *,
\(\begin{array}{l}\sum_{k=0}^{n}f\left ( \frac{k}{n} \right )\begin{pmatrix}n \\k\end{pmatrix}x_{k}(1-x)_{n-k}\end{array} \), \(\begin{array}{l}\sum_{k=0}^{n}(f-f(\zeta))\left ( \frac{k}{n} \right )\
binom{n}{k} x^{k}(1-x)^{n-k}\end{array} \), \(\begin{array}{l}\sum_{k=0}^{n}\binom{n}{k}x^{k}(1-x)^{n-k} = (x+(1-x))^{n}=1\end{array} \), \(\begin{array}{l}\left|B_{n}(x, f)-f(\zeta) \right|=\left|B_
{n}(x,f-f(\zeta)) \right|\end{array} \), \(\begin{array}{l}\leq B_{n}\left ( x,2M\left ( \frac{x- \zeta}{\delta } \right )^{2}+ \frac{\epsilon}{2} \right ) \end{array} \), \(\begin{array}{l}= \frac
{2M}{\delta ^{2}} B_{n}(x,(x- \zeta )^{2})+ \frac{\epsilon}{2}\end{array} \), \(\begin{array}{l}B_{n}(x, (x- \zeta)^{2})= x^{2}+ \frac{1}{n}(x x^{2})-2 \zeta x + \zeta ^{2}\end{array} \), \(\begin
{array}{l}\left| (B_{n}(x,f)-f(\zeta))\right|\leq \frac{\epsilon}{2}+\frac{2M}{\delta ^{2}}(x- \zeta)^{2}+\frac{2M}{\delta^{2}}\frac{1}{n}(x- x ^{2})\end{array} \), \(\begin{array}{l}\left| (B_{n}
(x,f)-f(\zeta))\right|\leq \frac{\epsilon}{2}+\frac{2M}{\delta ^{2}}\frac{1}{n}(\zeta- \zeta ^{2})\end{array} \), \(\begin{array}{l}\left| (B_{n}(x,f)-f(\zeta))\right|\leq \frac{\epsilon}{2}+\frac{M}
{2\delta ^{2}n}\end{array} \), \(\begin{array}{l}\int_{0}^{1}f(x)x^{n}dx=0\end{array} \), \(\begin{array}{l}\int_{0}^{1}f(x)p(x)dx=0\end{array} \), \(\begin{array}{l}\int_{0}^{1}p_{n}f\rightarrow \
int _{0}^{1}f^{2}\end{array} \), \(\begin{array}{l}\int_{0}^{1}p_{n}f = 0\end{array} \), \(\begin{array}{l}\int _{0}^{1}f^{2}=0\end{array} \), \(\begin{array}{l}\int_{0}^{1}f(x)dx = 0\end{array} \).
can be expressed as the product of where gd() is the Gudermannian function. As with other properties shared between the trigonometric functions and the hyperbolic functions, it is possible to use
hyperbolic identities to construct a similar form of the substitution, By clicking Accept all cookies, you agree Stack Exchange can store cookies on your device and disclose information in accordance
with our Cookie Policy. Site design / logo 2023 Stack Exchange Inc; user contributions licensed under CC BY-SA. The best answers are voted up and rise to the top, Not the answer you're looking for?
An irreducibe cubic with a flex can be affinely Basically it takes a rational trigonometric integrand and converts it to a rational algebraic integrand via substitutions. The steps for a proof by
contradiction are: Step 1: Take the statement, and assume that the contrary is true (i.e. Geometrically, the construction goes like this: for any point (cos , sin ) on the unit circle, draw the line
passing through it and the point (1, 0). and the integral reads It uses the substitution of u= tan x 2 : (1) The full method are substitutions for the values of dx, sinx, cosx, tanx, cscx, secx, and
cotx. From Wikimedia Commons, the free media repository. Find $\int_0^{2\pi} \frac{1}{3 + \cos x} dx$. Why are physically impossible and logically impossible concepts considered separate in terms of
probability? Differentiation: Derivative of a real function. u t {\displaystyle t} t dx&=\frac{2du}{1+u^2} , (a point where the tangent intersects the curve with multiplicity three)
Weierstra-Substitution - Wikiwand \begin{align} Now he could get the area of the blue region because sector $CPQ^{\prime}$ of the circle centered at $C$, at $-ae$ on the $x$-axis and radius $a$ has
area $$\frac12a^2E$$ where $E$ is the eccentric anomaly and triangle $COQ^{\prime}$ has area $$\frac12ae\cdot\frac{a\sqrt{1-e^2}\sin\nu}{1+e\cos\nu}=\frac12a^2e\sin E$$ so the area of blue sector
$OPQ^{\prime}$ is $$\frac12a^2(E-e\sin E)$$ Theorems on differentiation, continuity of differentiable functions. This follows since we have assumed 1 0 xnf (x) dx = 0 . t x Then we have. Find the
integral. After setting. {\textstyle \csc x-\cot x=\tan {\tfrac {x}{2}}\colon }. In other words, if f is a continuous real-valued function on [a, b] and if any > 0 is given, then there exist a
polynomial P on [a, b] such that |f(x) P(x)| < , for every x in [a, b]. Following this path, we are able to obtain a system of differential equations that shows the amplitude and phase modulation of
the approximate solution. = csc The point. \theta = 2 \arctan\left(t\right) \implies $$. What can a lawyer do if the client wants him to be acquitted of everything despite serious evidence?
S2CID13891212. Weierstrass Approximation Theorem is extensively used in the numerical analysis as polynomial interpolation. Why do academics stay as adjuncts for years rather than move around? \( The
trigonometric functions determine a function from angles to points on the unit circle, and by combining these two functions we have a function from angles to slopes. My code is GPL licensed, can I
issue a license to have my code be distributed in a specific MIT licensed project? \\ NCERT Solutions Class 12 Business Studies, NCERT Solutions Class 12 Accountancy Part 1, NCERT Solutions Class 12
Accountancy Part 2, NCERT Solutions Class 11 Business Studies, NCERT Solutions for Class 10 Social Science, NCERT Solutions for Class 10 Maths Chapter 1, NCERT Solutions for Class 10 Maths Chapter 2,
NCERT Solutions for Class 10 Maths Chapter 3, NCERT Solutions for Class 10 Maths Chapter 4, NCERT Solutions for Class 10 Maths Chapter 5, NCERT Solutions for Class 10 Maths Chapter 6, NCERT Solutions
for Class 10 Maths Chapter 7, NCERT Solutions for Class 10 Maths Chapter 8, NCERT Solutions for Class 10 Maths Chapter 9, NCERT Solutions for Class 10 Maths Chapter 10, NCERT Solutions for Class 10
Maths Chapter 11, NCERT Solutions for Class 10 Maths Chapter 12, NCERT Solutions for Class 10 Maths Chapter 13, NCERT Solutions for Class 10 Maths Chapter 14, NCERT Solutions for Class 10 Maths
Chapter 15, NCERT Solutions for Class 10 Science Chapter 1, NCERT Solutions for Class 10 Science Chapter 2, NCERT Solutions for Class 10 Science Chapter 3, NCERT Solutions for Class 10 Science
Chapter 4, NCERT Solutions for Class 10 Science Chapter 5, NCERT Solutions for Class 10 Science Chapter 6, NCERT Solutions for Class 10 Science Chapter 7, NCERT Solutions for Class 10 Science Chapter
8, NCERT Solutions for Class 10 Science Chapter 9, NCERT Solutions for Class 10 Science Chapter 10, NCERT Solutions for Class 10 Science Chapter 11, NCERT Solutions for Class 10 Science Chapter 12,
NCERT Solutions for Class 10 Science Chapter 13, NCERT Solutions for Class 10 Science Chapter 14, NCERT Solutions for Class 10 Science Chapter 15, NCERT Solutions for Class 10 Science Chapter 16,
NCERT Solutions For Class 9 Social Science, NCERT Solutions For Class 9 Maths Chapter 1, NCERT Solutions For Class 9 Maths Chapter 2, NCERT Solutions For Class 9 Maths Chapter 3, NCERT Solutions For
Class 9 Maths Chapter 4, NCERT Solutions For Class 9 Maths Chapter 5, NCERT Solutions For Class 9 Maths Chapter 6, NCERT Solutions For Class 9 Maths Chapter 7, NCERT Solutions For Class 9 Maths
Chapter 8, NCERT Solutions For Class 9 Maths Chapter 9, NCERT Solutions For Class 9 Maths Chapter 10, NCERT Solutions For Class 9 Maths Chapter 11, NCERT Solutions For Class 9 Maths Chapter 12, NCERT
Solutions For Class 9 Maths Chapter 13, NCERT Solutions For Class 9 Maths Chapter 14, NCERT Solutions For Class 9 Maths Chapter 15, NCERT Solutions for Class 9 Science Chapter 1, NCERT Solutions for
Class 9 Science Chapter 2, NCERT Solutions for Class 9 Science Chapter 3, NCERT Solutions for Class 9 Science Chapter 4, NCERT Solutions for Class 9 Science Chapter 5, NCERT Solutions for Class 9
Science Chapter 6, NCERT Solutions for Class 9 Science Chapter 7, NCERT Solutions for Class 9 Science Chapter 8, NCERT Solutions for Class 9 Science Chapter 9, NCERT Solutions for Class 9 Science
Chapter 10, NCERT Solutions for Class 9 Science Chapter 11, NCERT Solutions for Class 9 Science Chapter 12, NCERT Solutions for Class 9 Science Chapter 13, NCERT Solutions for Class 9 Science Chapter
14, NCERT Solutions for Class 9 Science Chapter 15, NCERT Solutions for Class 8 Social Science, NCERT Solutions for Class 7 Social Science, NCERT Solutions For Class 6 Social Science, CBSE Previous
Year Question Papers Class 10, CBSE Previous Year Question Papers Class 12, Linear Equations In Two Variables Class 9 Notes, Important Questions Class 8 Maths Chapter 4 Practical Geometry, CBSE
Previous Year Question Papers Class 12 Maths, CBSE Previous Year Question Papers Class 10 Maths, ICSE Previous Year Question Papers Class 10, ISC Previous Year Question Papers Class 12 Maths, JEE
Main 2023 Question Papers with Answers, JEE Main 2022 Question Papers with Answers, JEE Advanced 2022 Question Paper with Answers. cos d An affine transformation takes it to its Weierstrass form: If
\(\mathrm{char} K \ne 2\) then we can further transform this to, \[Y^2 + a_1 XY + a_3 Y = X^3 + a_2 X^2 + a_4 X + a_6\]. \end{aligned} Or, if you could kindly suggest other sources. {\textstyle \int
d\psi \,H(\sin \psi ,\cos \psi ){\big /}{\sqrt {G(\sin \psi ,\cos \psi )}}} doi:10.1007/1-4020-2204-2_16. = {\textstyle \cos ^{2}{\tfrac {x}{2}},} Did this satellite streak past the Hubble Space
Telescope so close that it was out of focus? Generally, if K is a subfield of the complex numbers then tan /2 K implies that {sin , cos , tan , sec , csc , cot } K {}. sin x Other sources refer to
them merely as the half-angle formulas or half-angle formulae. Learn more about Stack Overflow the company, and our products. PDF Ects: 8 | {"url":"https://aiu.asso.fr/rud0bk0u/weierstrass-substitution-proof","timestamp":"2024-11-02T22:09:53Z","content_type":"text/html","content_length":"30351","record_id":"<urn:uuid:4638d5e0-70ca-4f5a-b62b-689a4fd31a7d>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00775.warc.gz"} |
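As an informal numerical check of the substitution (this sketch is mine, not part of any of the sources above), the following Python code compares a direct quadrature of $\int_0^{\pi} \frac{dx}{3+\cos x}$ with the integral obtained after $t = \tan(x/2)$, where the integrand becomes $\frac{1}{2+t^2}$ on $[0, \infty)$; both should approach the closed form $\frac{\pi}{2\sqrt{2}}$.

```python
import math

def simpson(f, a, b, n=20_000):
    """Composite Simpson's rule with an even number n of subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(a + k * h)
    return total * h / 3

# Direct quadrature in x over [0, pi].
direct = simpson(lambda x: 1 / (3 + math.cos(x)), 0.0, math.pi)

# After t = tan(x/2): the integrand is 1/(2 + t^2) on [0, inf);
# truncate the (slowly decaying) tail at a large cutoff.
substituted = simpson(lambda t: 1 / (2 + t * t), 0.0, 1_000.0)

closed_form = math.pi / (2 * math.sqrt(2))
print(direct, substituted, closed_form)
```

The truncation at $t = 1000$ costs roughly $10^{-3}$, so the substituted value agrees with the closed form only to that tolerance; the direct quadrature is far more accurate here.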
Python Circle properties - time2code
Written as the Greek letter for p, or π, pi is the ratio of the circumference of any circle to the diameter of that circle. Regardless of the circle's size, this ratio will always equal pi. The
importance of pi has been known for at least 4,000 years. By the start of the 20th century, about 500 digits of pi were known. With computation advances, thanks to computers, we now know more than
the first ten trillion digits of pi.
Write a program that outputs the area and circumference of a circle from the diameter.
Remember to add a comment before a subprogram or selection statement to explain its purpose.
Use these resources as a reference to help you meet the success criteria.
Run the unit tests below to check that your program has met the success criteria.
Check that you have used comments within the code to describe the purpose of subprograms.
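One possible solution sketch in Python (the function name and the sample diameter are my own choices, not part of the task specification):

```python
import math

def circle_properties(diameter):
    """Return the (area, circumference) of a circle given its diameter."""
    radius = diameter / 2
    area = math.pi * radius ** 2          # A = pi * r^2
    circumference = math.pi * diameter    # C = pi * d
    return area, circumference

# Example: a circle with a diameter of 10 units.
area, circumference = circle_properties(10)
print(f"Area: {area:.2f}")                    # Area: 78.54
print(f"Circumference: {circumference:.2f}")  # Circumference: 31.42
```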
C# Optimization Revisited Part 1: Algorithms and Bill Clinton
Thursday, March 19, 2009 – 3:58 PM
As part of my journey in technical computing I’ve been writing a gravitational N-body simulation. Turns out you learn a thing or two about optimization when building numerical simulations that need
to run as fast as possible.
First off I used Vance Morrison’s code timer library to run multiple passes and gather statistics. You can’t optimize unless you know how fast, or slow something already is and where the bottlenecks
are. Then I implemented two integration algorithms in my N-body code and compared them.
Two algorithms compared
If you’re in a hurry and/or remember some high school math or physics then you can just look at the equations and the code snippets and skip to the final section.
Most of the computational heavy lifting in an N-body model occurs in a series of tight loops where the position of each body in the model is updated: the Integrate method. Each Integrate method takes
a time interval and an array of bodies and updates each Body in the array based on the time interval.
The simplest algorithm is the Forward Euler. Forward Euler calculates the gravitational force on each body and uses this to calculate the acceleration. It then updates the position (based on the
current velocity) and velocity using the current computed acceleration:

r[n+1] = r[n] + v[n] · dt
v[n+1] = v[n] + a[n] · dt

Where r, v and a are vectors of the position, velocity and acceleration of a body at integration steps n and (n + 1). The time step, dt, is the duration of time between integration steps.
Just for completeness, the acceleration, a, due to the force, f[ij], between any two bodies i and j is calculated from Newton's law of gravitation:

f[ij] = G · m[i] · m[j] · r[ij] / |r[ij]|³
a[i] = f[ij] / m[i]

Where m is the mass, r[ij] is the vector between the two bodies and G is Newton's gravitational constant. In the implementation below the Body.Force method implements the first equation.
A Forward Euler implementation:
public void Integrate(double dT, Body[] bodies)
{
    // Reset the acceleration on every body.
    for (int i = 0; i < bodies.Length; i++)
        bodies[i].Acceleration = new Vector();

    for (int i = 0; i < bodies.Length; i++)
    {
        // Accumulate the pairwise gravitational accelerations.
        for (int j = (i + 1); j < bodies.Length; j++)
        {
            Vector f = Body.Force(bodies[i], bodies[j]);
            bodies[i].Acceleration += f / bodies[i].Mass;
            bodies[j].Acceleration -= f / bodies[j].Mass;
        }

        // Update position using the current velocity, then velocity.
        bodies[i].Position += bodies[i].Velocity * dT;
        bodies[i].Velocity += bodies[i].Acceleration * dT;
    }
}
Make way for the Leapfrog algorithm!
The Leapfrog algorithm makes use of the acceleration from the previous time step and factors this into the calculation of the new position. It also uses the acceleration values from both the
previous and the current time step when calculating the velocity:

v[n+1/2] = v[n] + a[n] · dt/2
r[n+1] = r[n] + v[n+1/2] · dt
v[n+1] = v[n+1/2] + a[n+1] · dt/2

A Leapfrog implementation:
public void Integrate(double dT, Body[] bodies)
{
    // Half-kick with the previous step's acceleration, then drift.
    for (int i = 0; i < bodies.Length; i++)
    {
        bodies[i].Velocity += bodies[i].Acceleration * (dT / 2.0);
        bodies[i].Position += bodies[i].Velocity * dT;
    }

    for (int i = 0; i < bodies.Length; i++)
        bodies[i].Acceleration = new Vector();

    for (int i = 0; i < bodies.Length; i++)
    {
        for (int j = (i + 1); j < bodies.Length; j++)
        {
            Vector f = Body.Force(bodies[i], bodies[j]);
            bodies[i].Acceleration += f / bodies[i].Mass;
            bodies[j].Acceleration -= f / bodies[j].Mass;
        }

        // Second half-kick with the newly computed acceleration.
        bodies[i].Velocity += bodies[i].Acceleration * (dT / 2.0);
    }
}
But which is better?
Which is faster? If you said the Forward Euler you’d be both right and wrong. The performance results (on the right) show that for the same number of integrations the Forward Euler is marginally
faster, 8.6 vs. 7.9 seconds. It also needs less data per Body as the acceleration need not be persisted between time steps.
Unfortunately fast isn’t good enough in this case. Sure, there’s less code so the algorithm executes faster, but the results are less accurate. The Leapfrog algorithm uses the acceleration at
the current and next time steps to calculate a better result for the velocity and position. So these raw performance numbers don’t compare the same things. They compare time per integration,
not time taken to simulate to the same accuracy over a set amount of time.
In order to obtain the same degree of accuracy the time step (dT) used in the Forward Euler implementation needs to be much smaller than that used in the Leapfrog (see graph to right). In other
words I have to call the Forward Euler code many more times – hundreds more times – with a smaller value of dT for every call to the Leapfrog implementation. This more than outweighs the slight raw
speed improvement of the Forward Euler implementation.
To paraphrase Bill Clinton, “It’s the algorithm, stupid”. Regardless of the complexity of the math in the above example, the lesson I learnt here is… after you’ve established how slow something is,
think about the underlying algorithm before trying to tweak the one you may already have.
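The accuracy difference is easy to reproduce on a toy problem. The sketch below (in Python rather than C#, purely for brevity, and using a simple harmonic oscillator x'' = -x instead of gravity) integrates with both schemes and tracks the total energy, which the exact solution keeps constant at 0.5:

```python
def forward_euler(x, v, dt, steps):
    """Forward Euler: position and velocity both use old-step values."""
    for _ in range(steps):
        a = -x
        x, v = x + v * dt, v + a * dt
    return x, v

def leapfrog(x, v, dt, steps):
    """Kick-drift-kick leapfrog, mirroring the blog's Integrate method."""
    a = -x
    for _ in range(steps):
        v += a * (dt / 2.0)   # half-kick with the previous acceleration
        x += v * dt           # drift
        a = -x                # recompute acceleration at the new position
        v += a * (dt / 2.0)   # second half-kick
    return x, v

def energy(x, v):
    return 0.5 * (v * v + x * x)

xe, ve = forward_euler(1.0, 0.0, 0.1, 1000)
xl, vl = leapfrog(1.0, 0.0, 0.1, 1000)
print(energy(xe, ve), energy(xl, vl))
```

For this problem Forward Euler multiplies the energy by (1 + dt²) every step, so it drifts far from 0.5, while leapfrog keeps it bounded within O(dt²) of the true value for the same step size.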
In part 2 I’ll cover another consideration, concurrency and manycore processors.
1. 8 Responses to “C# Optimization Revisited Part 1: Algorithms and Bill Clinton”
2. Hi.
An interesting blog entry. However I have a recommendation about the code. I think you should rewrite this for loop
for (int i = 0; i < bodies.Length; i++)
{
    bodies[i].Velocity += bodies[i].Acceleration * (dT / 2.0);
    bodies[i].Position += bodies[i].Velocity * dT;
}

to this:

for (int i = 0; i < bodies.Length; i++)
{
    var b = bodies[i];
    b.Velocity += b.Acceleration * (dT / 2.0);
    b.Position += b.Velocity * dT;
}
It will (probably) run faster.
By Petar Petrov on Mar 23, 2009
3. Petar,
Thanks for your suggestion. I tried this and ran some tests and the difference is negligible. In fact in most of my runs the version you’d expect to be faster is actually slightly slower but
taking into account the margin of error you would have to say they were pretty much equal at best.
Simple: 239.4 ms +- 1.8%
Optimized: 251.5 ms +- 1.7%
I’m still looking at the generated IL and assembler to see why this is.
Really this is an example of a micro-optimization. My opinion is that these are very rarely worth doing. At this point the programmer is second guessing the compiler and JITer.
In this specific scenario Leapfrog doesn’t represent the best algorithm. Firstly, there are forth order algorithms – like Hermite – that take longer per iteration but are much more accurate.
Secondly this code is only executing on one core of a several on the machine. I have a future post in the works that uses the Parallel Extensions to .NET 3.5 to give nearly a 4x improvement in
performance on an Intel i7 processor.
So time spent thinking about the algorithm, rather than micro-optimizing the existing code, is time better spent on overall optimization.
By Ade Miller on Mar 24, 2009
4. I totally agree with you. I’m also using Parallel Extensions. It’s a micro optimization but because you talk about optimization I think it’s worth mention it. You’ll start to see a difference
(between the two versions) when you make many calls to bodies[i] in the body of your loop.
By Petar Petrov on Mar 25, 2009
5. Petar,
That might be true but the numbers above are for an array of 2,000,000 Bodies on a dual core machine. I see similar numbers on an 4 (8) core i7 with 6Gb and 4,000,000 bodies – these are way
beyond what’s possible on existing hardware.
In all cases the optimization runs slightly slower within the margin of error of the test but it always comes out behind. If you look at the IL generated by the original version it appears to be
using intermediate variables anyhow. So I’m not convinced this is actually an improvement at all, although I do think it makes the code more readable.
By Ade Miller on Mar 25, 2009
6. Admin can you tell learnly about time series compress algorithm to c# and give sourscode and demo.
By dang thanh nghi on May 10, 2009
1. 3 Trackback(s)
2. Mar 21, 2009: Pages tagged "clinton"
3. Mar 25, 2009: C# Optimization Revisited Part 2: Concurrency | #2782 - Agile software development, patterns and practices for building Microsoft .NET applications.
Sorry, comments for this entry are closed at this time.
Energetic γ-rays from TeV scale dark matter annihilation resummed
The annihilation cross section of TeV scale dark matter particles χ^0 with electroweak charges into photons is affected by large quantum corrections due to Sudakov logarithms and the Sommerfeld
effect. We calculate the semi-inclusive photon energy spectrum in χ^0χ^0→γ+X in the vicinity of the maximal photon energy E[γ]=m[χ] with NLL' accuracy in an all-order summation of the electroweak
perturbative expansion adopting the pure wino model. This results in the most precise theoretical prediction of the annihilation rate for γ-ray telescopes with photon energy resolution of parametric
order m[W] ^2/m[χ] for photons with TeV energies.
Math Colloquia - Random walks in spaces of negative curvature
Given a group of isometries of a metric space, one can draw a random sequence of group elements, and look at its action on the space.
What are the asymptotic properties of such a random walk?
The answer depends on the geometry of the space.
Starting from Furstenberg, people considered random walks of this type, and in particular they focused on the case of spaces of negative curvature.
On the other hand, several groups of interest in geometry and topology act on spaces which are not quite negatively curved (e.g., Teichmüller space) or on spaces which are hyperbolic, but not proper
(such as the complex of curves).
We shall explore some results on the geometric properties of such random walks.
For instance, we shall see a multiplicative ergodic theorem for mapping classes (which proves a conjecture of Kaimanovich), as well as convergence and positive drift for random walks on general
Gromov hyperbolic spaces. This also yields the identification of the measure-theoretic boundary with the topological boundary.
Sixth Grade
Force is a push or pull upon an object resulting from the object's interaction with another object. It is a vector quantity, which means it has both magnitude and direction.
Types of Forces
• Gravity: The force that attracts objects towards each other. It gives weight to objects and is responsible for keeping planets in orbit around the sun.
• Friction: The force that opposes the motion of one object moving past another. It can be static, sliding, rolling, or fluid friction.
• Applied Force: A force that is applied to an object by a person or another object.
• Normal Force: The force exerted by a surface to support the weight of an object resting on it.
• Tension Force: The force that is transmitted through a string, rope, cable, or wire when it is pulled tight by forces acting from opposite ends.
• Spring Force: The force exerted by a compressed or stretched spring upon any object that is attached to it.
• Magnetic Force: The force exerted by magnets on each other.
• Electrostatic Force: The force between electrically charged objects like electrons and protons.
These laws describe the relationship between an object and the forces acting on it.
1. First Law (Law of Inertia): An object at rest will stay at rest, and an object in motion will stay in motion with the same speed and in the same direction unless acted upon by an unbalanced force.
2. Second Law (Law of Acceleration): The acceleration of an object is directly proportional to the net force acting on it and inversely proportional to its mass. F = ma
3. Third Law (Action-Reaction): For every action, there is an equal and opposite reaction.
Units of Force
The standard unit of force is the newton (N), named after Sir Isaac Newton. One newton is equal to the force required to accelerate a one kilogram mass by one meter per second squared.
Calculating Force
The formula to calculate force is F = ma, where F is the force, m is the mass of the object, and a is the acceleration.
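The formula translates directly into code; here is a tiny illustrative sketch in Python (the function name is my own):

```python
def force(mass_kg, acceleration_m_s2):
    """Newton's second law: F = m * a, returning the force in newtons (N)."""
    return mass_kg * acceleration_m_s2

# A 10 kg object accelerating at 2 m/s^2 experiences a 20 N force.
print(force(10, 2))  # 20
```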
Study Tips
1. Understand the different types of forces and their effects on objects.
2. Be familiar with Newton's laws of motion and how they apply to real-life situations.
3. Practice calculating force using the formula F = ma.
4. Use diagrams and real-world examples to visualize the concept of force.
5. Review and understand the units of force and how they are used in calculations.
I’m all about that bootstrap (’bout that bootstrap)
[This article was first published on On the lambda » R, and kindly contributed to R-bloggers.]
As some of my regular readers may know, I’m in the middle of writing a book on introductory data analysis with R. I’m at the point in the writing of the book now where I have to make some hard
choices about how I’m going to broach the topic of statistical inference and hypothesis testing.
Given the current climate against NHST (the journal Basic and Applied Social Psychology banned it) and my own personal preferences, I wasn’t sure just how much to focus on classical hypothesis testing.
I didn’t want to burden my readers with spending weeks trying to learn the intricacies of NHST just to have them being told to forget everything they know about it and not be able to use it without
people making fun of them.
So I posed a question to twitter: “Is it too outlandish to not include the topic of parametric HTs in an intro book about data analysis. Asking for a friend.. named Tony…. You know, in favor of
bootstrapped CIs, permutation tests, etc…”
To which my friend Zach Jones (@JonesZM) replied: “they could at least be better integrated with monte-carlo methods. i think they’d make it easier to understand”. I agreed, which is why I’m
proceeding with my original plan to introduce classical tests after and within the context of Monte Carlo bootstrapping (as opposed to exhaustive bootstrapping).
Even though I’m a huge fan of the bootstrap, I want to be careful not to further any misconceptions about it—chiefly, that bootstrapping is a cure-all for having a small sample size. To be able to
show how this isn’t the case, I wrote an R script to take 1,000 samples from a population, calculate 95% confidence intervals using various methods and record the proportion of times the population
mean was within the CIs.
The four ways I created the CIs were:
• the z interval method: which assumes that the sampling distribution is normal around the sample mean (1.96 * the standard error)
• the t interval method: which assumes that the population is normally distributed and the sampling distribution is normally distributed around the sample mean (t-distribution quantile at .975
[with appropriate degrees of freedom] * standard error)
• basic bootstrap CI estimation (with boot() and boot.ci() from the boot R package)
• adjusted percentile CI estimation (with boot() and boot.ci() from the boot R package)
I did this for various sample sizes and two different distributions, the normal and the very non-normal beta distribution (alpha=0.5, beta=0.5). Below is a plot depicting all of this information.
So, clearly the normal (basic) boot doesn’t make up for small sample sizes.
It’s no surprise that the t interval method blows everything else out of the water when sampling from a normal distribution. It even performs reasonably well with the beta distribution, although the
adjusted bootstrap wins out for most sample sizes.
In addition to recording the proportion of the times the population mean was within the confidence intervals, I also kept track of the range of these intervals. All things being equal, narrower
intervals are far preferable to wide ones. Check out this plot depicting the mean ranges of the estimated CIs:
The t interval method always produces huge ranges.
The adjusted bootstrap produces ranges that are more or less on par with the other three methods BUT it outperforms the t interval method for non-normal populations. This suggests that the
adjustments to the percentiles of the bootstrap distribution do a really good job at correcting for bias. It also shows that, if we are dealing with a non-normal population (common!), we should use adjusted
percentile bootstrapped CIs.
Some final thoughts:
• The bootstrap is not a panacea for small sample sizes
• The bootstrap is cool because it doesn’t assume anything about the population distribution, unlike the z and t interval methods
• Basic bootstrap intervals are whack. They’re pathologically narrow for small sample sizes.
• Adjusted percentile intervals are great! You should always use them instead. Thanks Bradley Efron!
Also, if you’re not using Windows, you can parallelize your bootstrap calculations really easily in R; below is the way I bootstrapped the mean for this project:
dasboot <- boot(a.sample, function(x, i) { mean(x[i]) }, R = 10000,
                parallel = "multicore", ncpus = 4)
which uses 4 cores to perform the bootstrap in almost one fourth the time.
In a later post, I plan to further demonstrate the value of the bootstrap by testing differences in means and show why permutation tests comparing means between two samples are always better than
longdiv – Long division arithmetic problems
Work out and print integer long division problems. Use: \longdiv{numerator}{denominator}. The numerator and denominator (dividend and divisor) must be integers, and the quotient is an integer too. \longdiv leaves the remainder from the division at the bottom of its diagram of the problem.
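A minimal plain TeX sketch of the usage described above (the macros are generic, so \input works in plain TeX; in LaTeX you would \input the file in the preamble):

```latex
\input longdiv.tex  % load the generic long-division macros
\longdiv{1234}{7}   % typesets 1234 divided by 7: quotient 176, remainder 2
\bye
```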
Sources /macros/generic/misc/longdiv.tex
Version 1
Licenses Public Domain Software
Maintainer Donald Arseneau
Contained in MiKTeX as longdiv
Topics Arithmetic
See also xlop
Cash and I are taking a break.
Sorry cash, the grass looks greener with money market funds.
Recently, I’ve been working on moving my family’s banking strategy from traditional checking and savings account to money market funds in a (Fidelity) cash management account. The migration has
started, but I’m pretty happy with what I think will be the result.
Please note: I am not a financial advisor. I am not a finance professional. I should not be in charge of your money. I don’t even want to be in charge of my own money, but I’m the most inexpensive
guy I know. This is not advice—just a record of a decision I have made and some background on why I did it.
A quick primer for neophytes
A bank account keeps track of your deposits. If you put $20 in, you can later ask for $20 back. The bank will invest most of that $20 (see: fractional reserve banking) and pay you interest for the
privilege. The bank decides how much they will pay you.
An investment account holds financial securities and assets, like stocks, bonds, and exchange traded funds (ETFs). A money market fund is a mutual fund, not an ETF, that invests not in stocks but in “highly liquid near-term instruments [including] cash, cash equivalent securities, and high-credit-rating, debt-based securities with a short-term maturity (such as U.S. Treasuries)” (from Investopedia). For example, SPAXX, the Fidelity Government Money Market Fund and the default position for new cash management accounts, invests in 48.40% U.S. government repurchase agreements (are these bonds?), 29.47% U.S. Treasury bills, 21.00% “Agency floating-rate securities” (whatever those are), 3.45% agency fixed-rate securities, and 1.32% U.S. Treasury coupons. Bills, bonds, and coupons issued by the U.S. government are considered to be nearly as risk-free as the dollars the government also issues. Funds are valued by “net asset value” (NAV), which is the classic assets minus liabilities. Money market funds target a NAV of $1.00 per share and pay dividends.
You may have heard of a money market account, which is a bank account, offered by a bank, that invests in money market-type securities for you. It is FDIC insured because it is a bank account that
holds deposits, not an investment account that holds securities.
A cash-management account
Brokerages like Fidelity and Vanguard offer investment accounts that can act like bank accounts. That means that they have an account number and routing number for ACH transactions like a normal
checking account. Fidelity’s offering goes the extra mile by offering a debit card, checks, and bill pay to make it nearly identical to a checking account. The difference is that, instead of taking dollar deposits and holding onto them, you purchase money market funds like SPAXX or VUSXX. Instead of paying a monthly interest payment, they return a dividend paid on a monthly basis. They’re effectively the same thing.
Pros of the all-money market fund strategy
• SPAXX has a current annual return of 5.10% and seven-day yield of 4.98%. In comparison, Ally has an APY of 0.10% on the checking account and 4.20% on the savings account.
• Instead of paying for things with my checking account except for large balances or worrying if there is too much or too little in the checking account, I can have all money flow in and out of a
single account.
• Currently, most payments come out of my checking account, except for big things like the credit card payment, which come directly out of the savings account. I try to keep enough money in the checking account for all expected payments while also minimizing the balance in order to use the savings account’s higher interest rate. This sometimes leads to overdrafts or excess transactions on the savings account, which has a monthly limit of 10 withdrawals. With a single cash management account, all those worries go away. No moving money between accounts to get higher interest rates or avoid overdrafts.
Cons of the all-money market fund strategy
• An investment account with money market funds is not insured by the FDIC.
□ However, I believe the risk of losing my money is only slightly higher than the risk of losing the funds in a regular bank account, so I am not worried about it. An article from The Finance Buff that is linked at the end goes into more detail, but the short version is that, even though it is not an FDIC-insured bank account, it is an SIPC-insured investment account, which ensures that I will still get my shares if Fidelity goes belly up, and that the risk of the fund breaking the buck and the NAV dropping below $1 is pretty minuscule. That risk is larger than the teeny risk of losing my dollar bills, larger than the itsy-bitsy risk of my bank doing something (what exactly, I don’t know) where I can’t get back my deposit, and larger than the not-impossible-but-as-close-to-impossible-as-I-can-imagine-zero risk of $1 not being worth $1. I can live with that.
• The yield of a money market fund may drop beneath that of a savings account. However, if that happens and the difference is drastic enough to make a material difference, I can just move back to
cash in a savings and checking accounts.
• You can’t deposit physical cash into a cash-management account. However, you can’t deposit cash into an account at Ally either, so I already live in that reality.
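To put the rate gap in concrete terms, here is a back-of-the-envelope comparison. The balance is hypothetical, the rates are the ones quoted earlier in this post, and real returns compound and change over time:

```python
def simple_annual_yield(balance, rate):
    # Interest or dividends earned over one year at a flat rate, no compounding.
    return round(balance * rate, 2)

balance = 10_000  # hypothetical balance
mmf = simple_annual_yield(balance, 0.0510)       # SPAXX annual return quoted above
checking = simple_annual_yield(balance, 0.0010)  # Ally checking APY quoted above
savings = simple_annual_yield(balance, 0.0420)   # Ally savings APY quoted above
print(mmf, checking, savings)  # 510.0 10.0 420.0
```

Even against the high-yield savings account, the money market fund picks up roughly an extra $90 a year on this balance.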
Future plans
Mr. The Finance Buff, in an article linked below, mentions that the equivalent of a savings account and certificates of deposit in this strategy is a ladder of short-term treasury bills and bonds. I
haven’t ever done CDs because I fear the illiquidity, but the future is about growth and getting better, right? It’s an opportunity to learn.
Further reading
On money market funds vs. classic bank accounts
On one way to implement this strategy
On other, fun things about money and banks
mathXplain - MATH IN PLAIN ENGLISH
We already helped
more than 200,000 students
to prepare for
Are you gonna be next?
You will love that the math stuff you always thought complicated and difficult suddenly becomes clear and simple. Most things are explained with colors and drawings, so you just look at them and BOOM
you understand them right away.
WITH US, MATH IS EASIER THAN YOU THINK
Each lesson is built from many small steps—tiny crumbs, so to speak. You progress from one step to the next at your own pace, only when you understand each one. Not sooner and not later. This is why
we can explain even the most complicated concepts in an incredibly simple way.
We not only make math simple and accessible, but we also transform the process of learning it into a fun adventure. Our explanations are short and straightforward yet amazingly detailed. Thanks to
their relaxed style, you will never again be terrified of math. A math course that cheers you up—isn't that a concept!
WE ARE ON YOUR SIDE
Do you feel that math is nothing but continuous failures and struggles? You are not alone. We invented a method that solves all your math-related problems. Instead of making you anxious, math becomes
a joy to learn.
SSA - POMS: DI 10520.030 - Determining When IRWE Are Deductible and How They Are Distributed
TN 21 (06-24)
DI 10520.030 Determining When IRWE Are Deductible and How They Are Distributed
Payments the person with a disability makes after November 30, 1980 for items needed in order to work are deductible whether the item is purchased before or after the person with a disability begins
working, if the person needs the item in order to work.
Payments the person with a disability makes after November 30, 1980 for services are deductible if the services are received while the person is working. Deductions for services may be made even
though a person must leave work temporarily to receive the services. The costs of any services received before the person begins working are not deductible.
The SGA decision in a case involving IRWE for items or services necessary for the person with a disability to work generally will be based upon the person's earnings. The exceptions to this would be:
• a situation in which a person with a disability is in a position to control or manipulate the amount of their earnings; e.g., the person is self-employed and the value of services rendered is clearly worth an amount greater than the earnings received; or
• the person’s earnings are being subsidized.
In these situations, it will be necessary to first determine the value of services being rendered and then deduct the IRWE. See the guidelines in DI 10505.020 concerning work by employees, and DI
10510.010 concerning work by self-employed persons and/or DI 10505.010 for policy on subsidy.
Other than the exceptions above, the deduction of IRWE does not alter any of the basic concepts for evaluating SGA, e.g., averaging earnings, establishing disability, or determining whether a
person's disability has ceased. However, after a person's disability has ceased because of SGA, and the only issue is whether they are entitled to benefit payments during any remaining months of the
extended period of eligibility, the matter of SGA must be determined on a month-to-month basis and the concept of averaging earnings is not applicable. (See DI 13010.210 concerning extension of
eligibility, for benefits based on disability.)
IRWE provisions do not apply for the purpose of determining a service month in the TWP. Do not make IRWE deductions during the TWP.
The amount of the deductions must be determined in a uniform manner in both the DI and SSI programs. Therefore, the amount of deductions must be the same for both SGA determinations and SSI payment
purposes. (For exception, see DI 10520.030A.7.)
The amount of IRWE to be deducted from earnings or from earned income is the total allowable amount (subject to reasonable limits) that the person with a disability pays for the item or service. The
amount to be deducted is not determined by assigning a certain portion of the expense to work activity and a certain portion to nonwork activity (e.g., 40 percent of the time at work and 60 percent
of the time at home).
Exceptions: attendant care services (See DI 10520.010D.) and vehicle operating expenses (See DI 10520.010J.). The amount deducted for these, however, is subject to reasonable limits (See DI 10520.010C.).
When determining countable income, IRWE are not deductible from earned income if the income used for the purchase of the impairment-related item or service is deducted as part of a plan to achieve
self-support (PASS) for the same period. However, any portion of the payment for an item or service paid with income that is not deducted as part of the PASS can be deducted as an IRWE if the expense
itself meets the requirements for an IRWE deduction. For purposes of determining SGA, the entire amount paid for the item or service is deductible.
A concurrently entitled beneficiary purchases an impairment-related item necessary to achieve their designated occupational goal at a cost of $600. They pay the bill with $500 designated for their
PASS and $100 of other income. For purposes of determining title XVI countable income, $100 is deductible. However, for title II purposes, the entire $600 is deductible as an IRWE. (If only $550 is
deductible because of the reasonable limits provision, for countable income purposes, $50 is deductible and for SGA purposes $550 is deductible.)
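The arithmetic in that example can be sketched as a small helper (the function name and signature are illustrative, not SSA terminology):

```python
def irwe_split(item_cost, pass_funded, reasonable_limit=None):
    # Title II (SGA) deducts the full allowable cost; title XVI (SSI countable
    # income) deducts only the portion not already excluded under the PASS.
    allowable = item_cost if reasonable_limit is None else min(item_cost, reasonable_limit)
    ssi_deduction = max(allowable - pass_funded, 0)
    return ssi_deduction, allowable

print(irwe_split(600, 500))       # (100, 600): $100 for SSI, $600 for SGA
print(irwe_split(600, 500, 550))  # (50, 550): with a $550 reasonable limit
```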
Some IRWE are paid on a recurring basis. In the case of durable equipment (respirator, wheelchair, etc.), the cost is ordinarily paid over a period of time under some type of installment purchase
plan. In addition to the cost of the purchased item, interest and other normal charges (e.g., sales tax) that a person with a disability pays on the purchase will be deductible. Generally, the amount
the person pays monthly will be the deductible amount. In the case of ongoing attendant care or medical services (e.g., physical therapy), the costs are generally paid and are deducted on a monthly
basis. Such costs are deductible only if the services are received while the person is working. If the entire cost of the purchased item cannot be deducted because of the reasonable limits provision,
the interest and other charges must be proportionately reduced.
A special rule applies in situations where a person with a disability pays recurring IRWEs less frequently than monthly, e.g., quarterly. These expenses either may be deducted entirely in the month
payment is made or allocated over the months in the payment period, whichever the person selects.
EXAMPLE 1:
A person with a disability starts work in October 2001; they earn and receive $800 a month. In the same month they purchase a medical device at a cost of $4,800 plus interest charges of $720. The
term of the installment contract is 48 months. No down payment is made, and they begin their monthly payments in October. The monthly allowable deduction for the item is $115 ($5,520 divided by 48)
for each month of work during the 48 months. (If, instead of $4,800, only $4,200 is deductible because of the reasonable limits provision, since $4,200 is seven-eighths of $4,800, only $630 of
interest charges, or seven-eighths of $720, is deductible.)
Part or all of a person's IRWE may not be recurring (e.g., the person with a disability makes a one-time payment in full for an item or service). Such nonrecurring expenses either may be deducted
entirely in 1 month, or may be prorated over a 12-consecutive month period, whichever the person chooses. They should consider which method will provide more benefits, including the amount of SSI
payment in SSI cases.
EXAMPLE 2:
A person with a disability starts work in October 2001; they earn and receive $950 a month. In the same month they purchase and pay for a deductible item at a cost of $250. In this situation a $250
deduction for October 2001 can be allowed, reducing the person's earnings below SGA for that month.
In the above example, if the person's earnings are just a few dollars above the SGA earnings amount, they would probably choose to have the $250 payment projected over the 1-year period, October 2001
to September 2002, thus providing an allowable deduction of $20.83 a month for each month during that period to reduce the earnings below the SGA level for 12 months.
A person with a disability may make a down payment on an impairment-related item, or possibly a service, to be followed by regular monthly payments. Such down payments either are deducted entirely in
1 month, or allocated over a 12-consecutive month period, whichever the person chooses. Explain that the down payment may be allocated in order to provide for uniform monthly deductions.
When the down payment is allocated over a 12-month period, make the following calculation:
• determine the total payment made over a 12-consecutive month period beginning with the month of down payment (i.e., the down payment plus the regular monthly payments that are made during that period), and
• divide the total equally over the 12 months.
Beginning with the 13th month, deduct the regular monthly payment amount. If the regular monthly payments extend for less than 12 months, allocate the total amount payable (down payment plus monthly
payments) over the shorter period.
EXAMPLE 3:
A person with a disability starts working in October 2001, at which time they purchase special equipment at a cost of $4,800, paying $1,200 down. The balance of $3,600, plus interest of $540, is to
be repaid in 36 installments of $115 a month beginning November 2001. The person earns and receives $800 a month. In this situation a $205.42 monthly deduction is allowed beginning October 2001 and
ending in September 2002. After September 2002, the deduction amount is the regular monthly payment of $115. Calculation for above example:
Payment Type                           Amount
Down payment in 10/01                  $1,200
Monthly payments 11/01 through 09/02  +$1,265
Total                                  $2,465
$2,465 / 12 = $205.42
EXAMPLE 4:
While working, a person with a disability purchases a deductible item in July 2001, paying $1,450 down. (The person earns and receives $500 a month.) However, the first monthly payment of $125 is not
due until September 2001. In this situation, a $225 monthly deduction is allowed beginning in July 2001 and ending in June 2002. After June 2002, the deduction amount is the regular monthly payment
of $125. Calculation for above example:
Payment Type                           Amount
Down payment in 07/01                  $1,450
Monthly payments 09/01 through 06/02  +$1,250
Total                                  $2,700
$2,700 / 12 = $225
When a person with a disability rents or leases an item while working, the allowable deductible amount is the actual monthly charge. Where the rental or lease payments are made other than monthly
(e.g., weekly), it is necessary to compute monthly payment amounts. As with other costs, rental or lease payments are subject to the reasonable limits provision. (See DI 10520.025D.) An amount that
does not exceed the standard or normal rental or lease charge for the same or similar item in the person's community is considered reasonable.
In most instances, a person with a disability is working in the month in which an IRWE is both incurred and paid. Therefore, the payment amount is directly deductible from earnings attributable to
the month of work activity. Occasionally, however, an IRWE payment is made before the first or after the last month of work activity. Specific limitations are applicable to the deduction of such
payments from earnings.
• Durable items are things that can be used repeatedly. These include, but are not limited to, medical devices (e.g., wheelchairs, braces), prostheses, work-related equipment (e.g., typing aids, electronic visual aids), residential modifications, nonmedical appliances (e.g., air cleaner), service animals, and vehicle modifications. Things that are not considered durable items (and not deductible) include, but are not limited to, services, drugs, oxygen, diagnostic procedures, medical supplies (e.g., catheters, incontinence pads), and vehicle operating costs.
The expenditure may be a monthly (recurring) payment, a one-time (nonrecurring) payment, or a down payment, but may not be made for a rented or leased item; and it must be made sometime in the 11
months preceding the month work starts. The person must be disabled when payment is made. Payments made prior to the established onset date of disability are not deductible. As with all expenses, the
person must pay the expense in cash (including checks or other forms of money) rather than in kind, and it must be within the reasonable limits guidelines. The item must be required in order for the
person to work (i.e., the person must use the item while working), and this need must be verified.
When an item is paid for in installments, determine the total amount of the installment payments (including a down payment) made for the particular item during the 11 months preceding the month work
starts. This total amount is considered paid in the month of the first payment (for that item) within this 11-month period.
Allocate the total of these payments (installment and down payment, if any) over a 12-month period beginning with the month of the first payment (but never earlier than 11 months before the month
work started). Deduct only that part of the total which is apportioned to the month work begins and the following months.
The deductible amount, as determined by this formula, is considered to be made in the first month of work, and is deductible in the same manner as a nonrecurring expense. That is, the total
deductible amount is deducted in 1 month or allocated over a 12-consecutive month period, whichever the person selects.
A person with a disability purchases an item in June 2002, 4 months before the month work begins in October 2002. They begin monthly payments of $240 at that time. They will have paid a total of $960
preceding the month work started. This amount is considered paid in the first month of payment (the fourth month before the month work begins). The total deductible amount is $640 ($960 divided by 12
months multiplied by 8 work months, 10/02 through 5/03).
This amount is deducted at one time or allocated over a 12-consecutive month period, whichever the person selects, beginning with the first month of work (for purposes of determining SGA), or the
first month income is received (for purposes of determining SSI countable earned income).
(The monthly payments of $240 that the person continues to make while working will also be deductible in accordance with the instructions for recurring expenses.)
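The pre-work allocation above reduces to a one-line formula; a sketch (the helper name is illustrative):

```python
def prework_installment_deductible(total_paid_before_work, work_months_in_window):
    # Payments made in the 11 months before work starts are allocated over a
    # 12-month period beginning with the first payment; only the portion
    # apportioned to the work months is deductible.
    return round(total_paid_before_work / 12 * work_months_in_window, 2)

# $240/month for the 4 months before work starts: $960 total,
# with 8 work months left in the 12-month window (10/02 through 5/03)
print(prework_installment_deductible(960, 8))  # 640.0
```

The same formula covers the one-time-payment cases ($300 / 12 × 5 work months = $125; $600 / 12 × 9 work months = $450).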
When an item is paid for with a one-time payment during the 11 months preceding the month work started, allocate the payment over a 12-month period beginning with the month of payment. Deduct only
that part of the payment which is apportioned to the month work began and the following months.
The deductible amount, as determined by this formula, is considered to have been made in the first month of work, and is deductible in the same manner as a nonrecurring expense. That is, the total
deductible amount is deducted in 1 month or allocated over a 12-consecutive month period, whichever the person selects.
An item is purchased in February 2002, 7 months before the month work begins in September 2002. It is paid for with a one-time payment of $300. The total deductible amount will be $125 ($300 divided
by 12 multiplied by 5 work months, 9/02 through 1/03). If an item is purchased 3 months before the month work begins and is paid for with a one-time payment of $600, the deductible amount will be
$450 ($600 divided by 12 multiplied by 9 work months).
This amount will be deducted at one time or allocated over a 12-consecutive month period, whichever the person selects, beginning with the first month of work (for purposes of determining SGA), or
the first month income is received (for purposes of determining SSI countable earned income).
If a person with a disability receiving SSI starts working and makes an IRWE payment in one month but does not receive earned income until the following month, deduct (or begin allocating) the
payment amount in the first month earned income is received.
A person with a disability begins working on August 24 and makes an IRWE payment on August 31, but does not receive their first paycheck until September 7; the IRWE is deducted from earned income
received in September.
In most instances a person is working and receives earned income in the month in which an IRWE is both incurred and paid. Therefore, the payment amount is directly deductible from earned income
received in the month of work activity. In unusual situations, however, the payment of an IRWE may not correspond to either a month of work activity, or a month in which earned income is received, or
both. Specific limitations apply to the deduction of such payments from earned income in SSI cases.
A person with a disability may require an item or service in the month they begin working, but is no longer working in the following month when payment for the item or service is made. Under the
regular rules, the payment is not deductible for SGA purposes because the payment is not made in a month of work. However, this is an exception to the regular rules. In order not to penalize the
person, the payment is deducted in the last month the person performs SGA.
If a person with a disability receiving SSI makes an IRWE payment in the month after they last worked and receives earned income, and the payment is for an impairment-related item or service used
while working, deduct the payment amount from the earned income received in the last month of work.
A person with a disability receives a service necessary to enable them to work on April 3; they stop working on April 14, receives their last paycheck on April 28, and pays for the service on May 2;
the IRWE is deducted from earned income received in April.
If a person with a disability receiving SSI is no longer working but receives earned income and makes an IRWE payment, deduct the payment amount from earned income in the month of nonwork only if:
• the income received is for work activity (e.g., not income received as a silent partner in a business), and
• the work activity is performed in a period when the person requires the impairment-related item or service.
A person with a disability uses an impairment-related item for work throughout January but stops working on January 26. On February 9 they receive their last paycheck for January employment and that
same day pays the bill for the item used in January. The IRWE amount is deducted from earned income received in February.
In many cases the attendant may perform services that are not allowable under SSA's definition of attendant care services (per DI 10520.010D). Therefore, the total amount paid to the attendant each
month is not deductible. Only deduct the amount which covers those services related to assisting the person to prepare for work, getting the person to and from work, helping the person on the job,
and assisting the person immediately upon returning home from work. In order to determine the amount that is deductible as an IRWE for attendant care services, prorate the attendant’s earnings as follows:
1. Determine the number of hours the attendant spends each day in providing the specified allowable services.
2. Divide the attendant’s monthly earnings by the total number of hours worked in a month (or divide the weekly earnings by the number of hours worked in a week) to determine the hourly wage.
3. Multiply the number of allowable attendant care hours by the hourly wage to arrive at daily attendant care expenses.
4. Multiply the amount of allowable daily attendant care expenses by the number of workdays in the month to determine the deductible expense for attendant care services for the month.
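The four proration steps can be sketched as (all figures are hypothetical):

```python
def attendant_care_deduction(monthly_pay, hours_per_month, allowable_hours_per_day, workdays_in_month):
    hourly_wage = monthly_pay / hours_per_month            # step 2
    daily_expense = hourly_wage * allowable_hours_per_day  # steps 1 and 3
    return round(daily_expense * workdays_in_month, 2)     # step 4

# e.g., $1,600/month for 160 total hours, 2 allowable hours per workday, 21 workdays
print(attendant_care_deduction(1600, 160, 2, 21))  # 420.0
```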
Where structural or operational modifications are made to a vehicle without which the person could not get to and from work, the actual cost of the modification (but not the cost of the vehicle) is
deductible if paid by the person. For example: a handbrake is specially installed on an automobile for a person whose impairment involves the legs, or an electric lift is added to a van for a person
who uses a wheelchair.
In addition to the cost of modification, the operating costs of a modified vehicle that are directly related to work (and for travel to and from place of employment) are also deductible. For the
purpose of IRWE, the determination of operating costs of a vehicle in the past was based upon the vehicle class and on a mileage rate corresponding to that class. Operating costs were based on data
compiled by the Federal Highway Administration in their publication, Cost of Owning and Operating Automobiles, Vans and Light Trucks. However, this data was last published in April 1992. Therefore, we are
phasing out the use of vehicle class mileage rates and replacing them with the standard mileage rate permitted by IRS for non-governmental business use.
Use the IRS standard mileage rate in determining the mileage expense for IRWE purposes. Be sure to use the mileage rate in effect for the time period that the vehicle was actually used for travel.
IRS STANDARD MILEAGE RATE
Year - Cents Per Mile
2024 — 67.0
2023 — 65.5
2022 — 58.5
2021 — 56.0 (2021 rate lower than 2020)
2020 — 57.5 (2020 rate lower than 2019)
2019 — 58.0
2018 — 54.5
2017 — 53.5 (2017 rate lower than 2016)
2016 — 54.0 (2016 rate lower than 2015)
2015 — 57.5
2014 — 56.0
2013 — 56.5
2012 — 55.5
2011 — 51.0
2010 — 50.0 (2010 rate lower than 2009)
2009 — 55.0
2008 — 50.5
2007 — 48.5
2006 — 44.5
09/01/2005 to 12/31/2005 — 48.5 (temporary increase)
01/01/2005 to 08/31/2005 — 40.5
2004 — 37.5
2003 — 36.0 (2003 rate lower than 2002)
2002 — 36.5
2001 — 34.5
2000 — 32.5
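Applying the table is a straight multiplication; a sketch using a subset of the rates above (the mileage figure is hypothetical):

```python
# Cents per mile, taken from the IRS standard mileage rate table above (subset)
RATES_CENTS = {2022: 58.5, 2023: 65.5, 2024: 67.0}

def mileage_expense(year, work_miles):
    # Deductible vehicle-operating IRWE for travel to and from work, at the
    # rate in effect for the year the travel occurred.
    return round(RATES_CENTS[year] / 100 * work_miles, 2)

print(mileage_expense(2024, 400))  # 268.0
```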
A person with a disability may have a deductible transportation expense when a physician or health care provider (or VR counselor, when appropriate) verifies that the person requires a special means
of travel to and from work because of their impairment(s). Evaluation of these transportation costs must be based on two factors: the availability of public transportation in the person's community,
and the person's capacity to drive a vehicle to work. Public transportation here means standard public forms of transit, e.g., bus, subway, or train, designed for use by the general public. To deal
with these issues, first identify if public transportation is available for the person’s use. Available means it is in reasonable proximity to the individual’s place of work and that it runs when the
person needs it.
EXAMPLE 1: A person works 9:00 p.m. to 3:00 a.m. A public bus runs until 11:00 p.m. and then stops until 5:00 a.m. the next morning. Although this person could take the bus to work, they would not be
able to take the bus home. In this situation, public transportation is not available for this person’s use.
EXAMPLE 2: A person lives in a neighborhood where there is continual bus service. However, their place of work is not within walking distance of a bus stop. Public transportation is not available for this person’s use.
If the person’s impairment(s) does not prevent them from getting to and from work or traveling on public transit, their transportation expenses may not be deducted as IRWE.
When a person cannot use public transportation because of a physical or mental limitation resulting from the impairment(s) the operating cost of driving themselves in an unmodified vehicle to and
from work can be deducted at a per mile rate when need and payment are verified.
• person uses wheelchair and public transportation is not equipped for wheelchair use;
• person cannot manage getting on and off public transportation (e.g., impairment prohibits travel from home to bus stop);
• person uses a service animal not permitted on public transit, or person is not mobility-trained in use of public transportation;
• the nature of impairment precludes travel on public transportation (e.g., person with respiratory illness requires special air-treated environment);
• person cannot negotiate public transportation (e.g., transfers, directions, and schedules) due to the nature of their impairment(s).
When a person cannot use public transportation because of a physical or mental limitation resulting from the impairment(s) and cannot drive themselves in an unmodified vehicle, the following travel
expenses may be deducted when need and payment are verified.
• the cost of a trip to and from work by taxicab or ride-sharing services; or
• the cost of paying another person to drive the person with a disability to and from work; or
• the cost of paying paratransit, a special bus, or other types of transportation.
NOTE: If the person with a disability is driven to and from work in their own vehicle, the vehicle operating costs (at a per mile rate) are deductible, in addition to any reasonable amount paid to
the driver. If the driver is a family member, development must also include the requirements in DI 10520.025C.3. before IRWE is approved.
If the person’s impairment does not prevent driving and they are able to drive an unmodified vehicle to work, expenses of driving the unmodified vehicle may not be deducted as IRWE.
If the person is unable to drive an unmodified vehicle to work due to the nature of their impairment(s) and not simply because they are not licensed to drive, the following travel expenses may be
deducted when need and payment are verified.
• the cost of a trip to and from work by taxicab or ride-sharing services; or
• the cost of paying another person to drive the person with a disability to and from work.
NOTE: If the person with a disability is driven to and from work in their own vehicle, the vehicle operating costs (at a per mile rate) are deductible, in addition to any reasonable amount paid to
the driver. If the driver is a family member, development must verify payment in cash or by check for the service rendered (see DI 10520.025C.3.).
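The branching above — public transit first, then driving an unmodified vehicle, then paid transportation — can be sketched as a decision function. This is a minimal, hypothetical sketch: the function and parameter names are illustrative inventions, not POMS terms, and real determinations also require verifying need and payment per DI 10520.025C.3.

```python
# Hypothetical sketch of the IRWE transportation branching described above.
# All names here are illustrative; the POMS text is the actual authority.

def deductible_transport_irwe(prevents_public_transit, prevents_driving_unmodified,
                              miles, per_mile_rate,
                              paid_driver_cost=0.0, driven_in_own_vehicle=False):
    """Return the transportation amount deductible as IRWE under the rules
    above, assuming need and payment are already verified."""
    if not prevents_public_transit:
        return 0.0                          # impairment does not prevent transit use
    if not prevents_driving_unmodified:
        return miles * per_mile_rate        # drives self in unmodified vehicle
    deduction = paid_driver_cost            # taxi, ride-share, or paid driver
    if driven_in_own_vehicle:
        deduction += miles * per_mile_rate  # own-vehicle operating costs add on
    return deduction

# e.g. 20 miles at an assumed $0.50/mile rate, driving an unmodified vehicle:
print(deductible_transport_irwe(True, False, 20, 0.50))  # → 10.0
```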
To Link to this section - Use this URL: DI 10520.030 - Determining When IRWE Are Deductible and How They Are Distributed - 06/14/2024
http://policy.ssa.gov/poms.nsf/lnx/0410520030 Batch run: 10/15/2024 | {"url":"https://secure.ssa.gov/poms.NSF/lnx/0410520030","timestamp":"2024-11-10T22:17:12Z","content_type":"text/html","content_length":"90200","record_id":"<urn:uuid:37ab8ae8-7506-48a1-b2f4-f6cdfdbcec57>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00336.warc.gz"} |
A group of Martians and a group of Venusians got together for an important talk. At the start of the meeting, each Martian shook hands with 6 different Venusians, and each Venusian shook hands with 8
different Martians. It is known that 24 Martians took part in the meeting. How large was the delegation for Venus? | {"url":"https://problems.org.uk/problems/442/?return=/problems/&category_id__in=105&from_difficulty=3.0&to_difficulty=3.0","timestamp":"2024-11-14T18:37:45Z","content_type":"text/html","content_length":"7975","record_id":"<urn:uuid:d2b0fc97-dc69-402d-815e-3ce26353f6ac>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00156.warc.gz"} |
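The puzzle above is a double-counting argument: each handshake pairs one Martian with one Venusian, so counting handshakes from either side must give the same total. A quick numerical check:

```python
# Count the handshakes from each side; the totals must agree.
martians = 24
shakes_per_martian = 6
shakes_per_venusian = 8

total_handshakes = martians * shakes_per_martian        # 24 * 6 = 144
venusians = total_handshakes // shakes_per_venusian     # 144 / 8 = 18

assert venusians * shakes_per_venusian == total_handshakes  # counts agree
print(venusians)  # → 18
```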
Math Worksheets Grade 7 Fractions
This is one of our more popular pages, most likely because learning fractions is incredibly important in a person's life, and it is a math topic that many approach with trepidation due to its bad rap over the years.
This is a comprehensive collection of free printable math worksheets for grade 7 and for pre-algebra, organized by topics such as expressions, integers, one-step equations, rational numbers, multi-step equations, inequalities, speed/time/distance, graphing, slope, ratios, proportions, percent, geometry, and pi.
Teachers can download and print the worksheets for their students, as class assignments or work to do from home. Get seventh graders more math practice by downloading all worksheets under this category. Each 7th grade math topic links to a page with PDF printable math worksheets covering subtopics under the main category.
If you're looking for a great tool for adding, subtracting, multiplying, or dividing mixed fractions, check out this online fraction calculator. Some of the worksheets for this concept are: adding or subtracting fractions with different denominators; fractions packet; adding and subtracting fractions; adding/subtracting fractions and mixed numbers; word problem practice workbook; and adding and subtracting fractions word problems. The seventh grade fractions and decimals worksheet PDF download is best quality.
These fractions worksheets are a great resource for children in kindergarten, 1st grade, 2nd grade, 3rd grade, 4th grade, and 5th grade. Topics also include algebra, quadratic equations, and algebra 2. The 7th grade math worksheets cover the math topics taught in grade 7.
Ease into key concepts with our printable 7th grade math worksheets, equipped with boundless learning to extend your understanding of ratios and proportions, order of operations, and rational numbers; to help you solve expressions and linear equations; and to describe geometrical figures, calculate area, volume, and surface area, and find pairs of angles.
Fraction worksheets (PDF) are available for preschool to 7th grade, with suitable printable math fractions worksheets for children in pre-K, kindergarten, and 1st through 7th grade. Topics covered include improper fractions and whole numbers, introduction to fractions, fractions illustrated with circles, and mixed numbers and whole numbers. Seventh graders get 12 math problems that need to be solved with a fraction.
This seventh grade math worksheet is a great way for kids to practice converting decimals into fractions. "Adding and subtracting fractions, 7th grade" displays the top 8 worksheets found for this concept. Here are the two versions of this free seventh grade math worksheet. Click here for a detailed description of all the fractions worksheets.
Free math worksheets for grade 7: seventh (7th) grade level math worksheets to master 7th grade mathematics topics.
| {"url":"https://kidsworksheetfun.com/math-worksheets-grade-7-fractions/","timestamp":"2024-11-15T01:07:49Z","content_type":"text/html","content_length":"137027","record_id":"<urn:uuid:a47252dc-abf7-4b59-920a-4b87cbf82b03>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00555.warc.gz"} |
DT <- data.table(a=rep(1:3, each=2), b=1:6)
DT2 <- transform(DT, c = a^2)  # transform() copies DT; DT2 gets column c, DT does not
DT[, c:=a^2]                   # := adds column c to DT itself, by reference (no copy)
DT2 <- within(DT, {
  b <- rev(b)
  c <- a*2
  rm(a)                   # within() works on a copy; DT itself is unchanged
})
DT[,`:=`(b = rev(b),      # the same multi-column update, but by reference
         c = a*2,
         a = NULL)]
DT$d = ave(DT$b, DT$c, FUN=max) # copies entire DT, even if it is 10GB in RAM
DT = DT[, transform(.SD, d=max(b)), by="c"] # same, but even worse as .SD is copied for each group
DT[, d:=max(b), by="c"] # same result, but much faster, shorter and scales
# Multiple update by group. Convenient, fast, scales and easy to read.
DT[, `:=`(minb = min(b),
meanb = mean(b),
bplusd = sum(b+d)), by=c%/%5] | {"url":"https://www.rdocumentation.org/packages/data.table/versions/1.14.6/topics/transform.data.table","timestamp":"2024-11-10T12:41:15Z","content_type":"text/html","content_length":"86874","record_id":"<urn:uuid:6f507cc3-26ff-4f98-82fa-224750d61de3>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00191.warc.gz"} |
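The grouped updates above compute an aggregate per group of `c` and assign it back to every row of that group. Outside data.table, the same per-group fill can be sketched in plain Python (a rough analogue of `DT[, d := max(b), by = "c"]`, not data.table's implementation):

```python
# Rough analogue of DT[, d := max(b), by = "c"]:
# for each group key c, assign max(b) within that group to every row.
rows = [
    {"b": 1, "c": 1}, {"b": 2, "c": 1},
    {"b": 3, "c": 2}, {"b": 4, "c": 2},
]

group_max = {}
for r in rows:                         # first pass: aggregate per group
    key = r["c"]
    group_max[key] = max(group_max.get(key, r["b"]), r["b"])

for r in rows:                         # second pass: broadcast back to rows
    r["d"] = group_max[r["c"]]

print([r["d"] for r in rows])  # → [2, 2, 4, 4]
```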
Planning the Best Route with Multiple Destinations Is Hard Even for Supercomputers – a New Approach Breaks a Barrier that's Stood for Nearly Half a Century
This is important for more than just planning routes.
Computers are good at answering questions. What’s the shortest route from my house to Area 51? Is 8,675,309 a prime number? How many teaspoons in a tablespoon? For questions like these, they’ve got
you covered.
There are certain innocent-sounding questions, though, that computer scientists believe computers will never be able to answer – at least not within our lifetimes. These problems are the subject of
the P versus NP question, which asks whether problems whose solutions can be checked quickly can also be solved quickly. P versus NP is such a fundamental question that either designing a fast
algorithm for one of these hard problems or proving you can’t would net you a cool million dollars in prize money.
My favorite hard problem is the traveling salesperson problem. Given a collection of cities, it asks: What is the most efficient route that visits all of them and returns to the starting city? To
come up with practical answers in the real world, computer scientists use approximation algorithms, methods that don’t solve these problems exactly but get close enough to be helpful. Until now, the
best of these algorithms, developed in 1976, guaranteed that its answers would be no worse than 50% off from the best answer.
I work on approximation algorithms as a computer scientist. My collaborators Anna Karlin and Shayan Oveis Gharan and I have found a way to beat that 50% mark, though just barely. We were able to
prove that a specific approximation algorithm puts a crack in this long-standing barrier, a finding that opens the way for more substantial improvements.
This is important for more than just planning routes. Any of these hard problems can be encoded in the traveling salesperson problem, and vice versa: Solve one and you’ve solved them all. You might
say that these hard problems are all the same computational gremlin wearing different hats.
The Best Route Is Hard to Find
The problem is usually stated as “find the shortest route.” However, the most efficient solution can be based on a variety of quantities in the real world, such as time and cost, as well as distance.
To get a sense of why this problem is difficult, imagine the following situation: Someone gives you a list of 100 cities and the cost of plane, train and bus tickets between each pair of them. Do you
think you could figure out the cheapest itinerary that visits them all?
Consider the sheer number of possible routes. If you have 100 cities you want to visit, the number of possible orders in which to visit them is 100 factorial, meaning 100 × 99 × 98 x … × 1. This is
larger than the number of atoms in the universe.
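The scale of that count is easy to confirm; the comparison assumes the common rough estimate of about 10^80 atoms in the observable universe:

```python
import math

routes = math.factorial(100)   # number of orderings of 100 cities
print(len(str(routes)))        # → 158, i.e. 100! ≈ 9.3 × 10^157

# Far beyond the rough ~10^80 estimate for atoms in the observable universe:
assert routes > 10**80
```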
[Image: William Cook et al., CC BY-ND]
Going with Good Enough
Unfortunately, the fact that these problems are difficult does not stop them from coming up in the real world. Besides finding routes for traveling salespeople (or, these days, delivery trucks), the
traveling salesperson problem has applications in many areas, from mapping genomes to designing circuit boards.
To solve real-world instances of this problem, practitioners do what humans have always done: Get solutions that might not be optimal but are good enough. It’s OK if a salesperson takes a route
that’s a few miles longer than it has to be. No one cares too much if a circuit board takes a fraction of a second longer to manufacture or an Uber takes a few minutes longer to carry its passengers.
Computer scientists have embraced “good enough” and for the past 50 years or so have been working on so-called approximation algorithms. These are procedures that run quickly and produce solutions
that might not be optimal but are provably close to the best possible solution.
[Image: William Cook et al., CC BY-ND]
The Long-Reigning Champ of Approximation
One of the first and most famous approximation algorithms is for the traveling salesperson problem and is known as the Christofides-Serdyukov algorithm. It was designed in the 1970s by Nicos
Christofides and, independently, by a Soviet mathematician named Anatoliy Serdyukov whose work was not widely known until recently.
The Christofides-Serdyukov algorithm is quite simple, at least as algorithms go. You can think of a traveling salesperson problem as a network in which each city is a node and each path between pairs
of cities is an edge. Each edge is assigned a cost, for example the traveling time between the two cities. The algorithm first selects the cheapest set of edges that connect all the cities.
This, it turns out, is easy to do: You just keep adding the cheapest edge that connects a new city. However, this is not a solution. After connecting all the cities, some might have an odd number of
edges coming out of them, which doesn’t make sense: Every time you enter a city with an edge, there should be a complementary edge you use to leave it. So the algorithm then adds the cheapest
collection of edges that makes every city have an even number of edges and then uses this to produce a tour of the cities.
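The first step — greedily adding the cheapest edge that reaches a new city — is Prim's minimum spanning tree algorithm. Here is a minimal sketch of just that step, using made-up illustrative costs between four cities (the matching step of Christofides-Serdyukov is not shown):

```python
# Prim's algorithm: repeatedly add the cheapest edge that connects a new city.
# 'cost' holds made-up illustrative pairwise costs between four cities.
cost = {
    ("A", "B"): 2, ("A", "C"): 5, ("A", "D"): 9,
    ("B", "C"): 3, ("B", "D"): 7, ("C", "D"): 4,
}

def edge_cost(u, v):
    return cost.get((u, v), cost.get((v, u)))

cities = {"A", "B", "C", "D"}
in_tree = {"A"}            # start the tree from an arbitrary city
tree_edges = []
while in_tree != cities:
    # cheapest edge from a city already in the tree to one outside it
    u, v = min(((u, v) for u in in_tree for v in cities - in_tree),
               key=lambda e: edge_cost(*e))
    tree_edges.append((u, v))
    in_tree.add(v)

print(tree_edges)  # → [('A', 'B'), ('B', 'C'), ('C', 'D')]
```

With these costs the tree is A-B, B-C, C-D; note that cities A and D end up with an odd number of tree edges, which is exactly the situation the matching step then repairs.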
This algorithm runs quickly and always produces a solution that’s at most 50% longer than the optimal one. So, if it produces a tour of 150 miles, it means that the best tour is no shorter than 100 miles.
Of course, there’s no way to know exactly how close to optimal an approximation algorithm gets for a particular example without actually knowing the optimal solution – and once you know the optimal
solution there’s no need for the approximation algorithm! But it’s possible to prove something about the worst-case scenario. For example, the Christofides-Serdyukov algorithm guarantees that it
produces a tour that is at most 1.5 times the length of the shortest collection of edges connecting all the cities - and, therefore, at most 1.5 times the length of the optimal tour.
A Really Small Improvement that’s a Big Deal
Since the discovery of this algorithm in 1976, computer scientists had been unable to improve upon it at all. However, last summer my collaborators and I proved that a particular algorithm will, on
average, produce a tour that is less than 49.99999% away from the optimal solution. I’m too ashamed to write out the true number of 9s (there are a lot), but this nevertheless breaks the longstanding
barrier of 50%.
The algorithm we analyzed is very similar to Christofides-Serdyukov. The only difference is that in the first step it picks a random collection of edges that connects all the cities and, on average,
looks like a traveling salesperson problem tour. We use this randomness to show that we don’t always get stuck where the previous algorithm did.
While our progress is small, we hope that other researchers will be inspired to take another look at this problem and make further progress. Often in our field, thresholds like 50% stand for a long
time, and after the first blow they fall more quickly. One of our big hopes is that the understanding we gained about the traveling salesperson problem while proving this result will help spur additional breakthroughs.
Getting Closer to Perfect
There is another reason to be optimistic that we will see more progress within the next few years: We think the algorithm we analyzed, which was devised in 2010, may be much better than we were able
to prove. Unlike Christofides’ algorithm, which can be shown to have a hard limit of 50%, we suspect this algorithm may be as good as 33%.
Indeed, experimental results that compare the approximation algorithm to known optimal solutions suggest that the algorithm is quite good in practice, often returning a tour within a few percent of the optimal solution.
The current theoretical limit is around 1% – meaning that (unless P=NP) there is no algorithm that will be able to get within 1% of optimal. The question that theoreticians hope to answer is: How
close can we get?
Nathan Klein is a PhD Student in computer science at the University of Washington.
This article is republished from The Conversation under a Creative Commons license. Read the original article. | {"url":"https://www.nextgov.com/ideas/2021/04/planning-best-route-multiple-destinations-hard-even-supercomputers-new-approach-breaks-barrier-s-stood-nearly-half-century/173293/?oref=ng-next-story","timestamp":"2024-11-12T16:04:46Z","content_type":"text/html","content_length":"139886","record_id":"<urn:uuid:0fc6982a-daa4-49ee-9b42-66a3fa78eeab>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00140.warc.gz"} |
What is the quotient and the remainder
The quotient and the remainder of 106 divided by 81 are:
• Quotient: 1
• Remainder: 25
The quotient remainder theorem
The quotient remainder theorem says:
Given any integer A, and a positive integer B, there exist unique integers Q and R such that
A = B * Q + R where 0 ≤ R < B
We can see that this comes directly from Quotient and Remainder. When we divide A by B in Quotient and Remainder, Q is the quotient and R is the remainder. If we can write a number in this form then
A mod B = R. | {"url":"https://clickcalculators.com/quotient-remainder-calculator/What-is-the-quotient-and-the-remainder-of_106_divided-by_81_%3F","timestamp":"2024-11-04T20:11:08Z","content_type":"text/html","content_length":"66926","record_id":"<urn:uuid:4780fd22-3a10-438c-baf5-41f702efd94e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00052.warc.gz"} |
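The theorem is exactly what integer division computes; in Python, divmod returns the pair (Q, R) directly:

```python
A, B = 106, 81
Q, R = divmod(A, B)     # quotient and remainder in one call
print(Q, R)             # → 1 25

assert A == B * Q + R   # A = B * Q + R
assert 0 <= R < B       # remainder constraint from the theorem
assert A % B == R       # "A mod B" is the remainder
```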
9836358712 Revolutions Per Minute to Yottahertz (rpm to YHz) | JustinTOOLs.com
Category: frequency. Conversion: Revolutions Per Minute to Yottahertz
The base unit for frequency is hertz (Non-SI/Derived Unit)
[Revolutions Per Minute] symbol/abbrevation: (rpm)
[Yottahertz] symbol/abbrevation: (YHz)
How to convert Revolutions Per Minute to Yottahertz (rpm to YHz)?
1 rpm = 1.666666666E-26 YHz.
9836358712 x 1.666666666E-26 YHz = 1.6393931180109E-16 YHz
Always check the results; rounding errors may occur.
In relation to the base unit of [frequency] => (hertz), 1 Revolutions Per Minute (rpm) is equal to 0.01666666666 hertz, while 1 Yottahertz (YHz) = 1.0E+24 hertz.
9836358712 Revolutions Per Minute to common frequency units
9836358712 rpm = 163939311.80109 hertz (Hz)
9836358712 rpm = 163939.31180109 kilohertz (kHz)
9836358712 rpm = 163.93931180109 megahertz (MHz)
9836358712 rpm = 0.16393931180109 gigahertz (GHz)
9836358712 rpm = 163939311.80109 1 per second (1/s)
9836358712 rpm = 1030061075.7725 radian per second (rad/s)
9836358712 rpm = 9836358712 revolutions per minute (rpm)
9836358712 rpm = 163939311.80109 frames per second (FPS)
9836358712 rpm = 3541111798019.1 degree per minute (°/min)
9836358712 rpm = 0.00016393931180109 fresnels (fresnel) | {"url":"https://www.justintools.com/unit-conversion/frequency.php?k1=revolutions-per-minute&k2=yottahertz&q=9836358712","timestamp":"2024-11-13T08:54:02Z","content_type":"text/html","content_length":"67138","record_id":"<urn:uuid:3f4513da-2946-4dc1-88d5-2010ea281b50>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00154.warc.gz"} |
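All of the table's entries follow from the base relation (rpm → Hz is division by 60) and the SI prefixes. A quick check — note the page uses the truncated constant 0.01666666666 rather than exactly 1/60, which explains the slightly low trailing digits in its results:

```python
rpm = 9_836_358_712

hz = rpm / 60      # revolutions per minute → hertz (exact 1/60, not truncated)
yhz = hz / 1e24    # hertz → yottahertz (1 YHz = 1e24 Hz)

print(hz)   # ≈ 163939311.87 Hz (page shows 163939311.80109 from its truncated constant)
print(yhz)  # ≈ 1.64e-16 YHz
```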
How do you solve \frac { v - 6} { v - 4} = \frac { v } { v + 1}? | HIX Tutor
How do you solve #\frac { v - 6} { v - 4} = \frac { v } { v + 1}#?
Answer 1
Cross-multiplying, #(v-6)(v+1)=v(v-4)#
or, #v^2-5v-6=v^2-4v#
Cancelling the #v^2# on both sides, we have: #-5v-6=-4v#
or, #-v=6# Thus, we have, #v=-6#
Plugging in the value of #v# in the equation, Left hand side is: #(v-6)/(v-4)=(-6-6)/(-6-4)=(-12)/-10=6/5#
Plugging in the value of #v# in the Right hand side: #v/(v+1)=(-6)/(-6+1)=(-6)/(-5)=6/5#
Thus, #LHS=RHS# in the equation.
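The check can also be done with exact rational arithmetic; a sketch using Python's fractions module:

```python
from fractions import Fraction

v = -6
lhs = Fraction(v - 6, v - 4)   # (v-6)/(v-4) = -12/-10 = 6/5
rhs = Fraction(v, v + 1)       # v/(v+1)     =  -6/-5  = 6/5

print(lhs, rhs)                # → 6/5 6/5
assert lhs == rhs              # v = -6 satisfies the equation exactly
```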
Answer 2
To solve the equation (\frac{v - 6}{v - 4} = \frac{v}{v + 1}), cross multiply to eliminate the fractions. Then, solve for (v).
[ (v - 6)(v + 1) = v(v - 4) ]
[ (v^2 - 5v - 6) = v^2 - 4v ]
[ -5v - 6 = -4v ]
[ -6 = v ]
[ v = -6 ]
| {"url":"https://tutor.hix.ai/question/how-do-you-solve-frac-v-6-v-4-frac-v-v-1-44068a3510","timestamp":"2024-11-05T19:17:36Z","content_type":"text/html","content_length":"570124","record_id":"<urn:uuid:93ba47d9-beac-4aa6-9aae-c109d715c265>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00048.warc.gz"} |
Solving Equations Online Course - Pricing - My Maths Guy - Online Math Courses & Course Bundles
Enrol in the Solving Equations Online Course to master the key algebra for solving equations, cover a range of equation types, and make all future math courses easier. Flexible plans let students
receive short- or longer-term help.
Select a plan to start your Solving Equations online course access.
Monthly Plan
If you need short-term help, like prepping for a test
$15 per month
Monthly subscription
Half-Yearly Plan
If you need on-going help. Save on the monthly plan
$68 for 6 months
One time payment
Save 25% over the monthly plan
By enrolling in a subscription you agree that you will be charged the advertised price until you cancel. You may cancel the subscription at any time. See the FAQ page for more details. | {"url":"https://www.mymathsguy.com/solving-equations-online-course-pricing/","timestamp":"2024-11-13T06:36:29Z","content_type":"text/html","content_length":"56632","record_id":"<urn:uuid:4c995c49-5d1f-448d-ac51-74571cc30cd8>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00533.warc.gz"} |
Mixed number percentage calculator
Yahoo users found us today by entering these keywords:
• graph equation matlab
• Middle school with pizzazz Book E answers pre algebra
• square root method
• compare UCSMP to Jacobs algebra
• Solving a rational equation that simplifies to a linear equation?
• mcdougal litell + algebra
• negative integer worksheets
• how to solve large numbers in aptitude
• solve equations that have a square root
• examples of math trivia
• first order differential equation solver
• SOME QUESTIONS ON ALGEBRA FOR 6TH
• solving non linear equations numerically by matlab
• how to make the radical symbol
• glencoe mcgraw-hill algebra 1 placement test
• Multiplication of Rational Algebraic Expressions
• graphing linear equations worksheets
• poem in mathematics
• apptitudes question with answer
• online algebra free tutor help
• myalgebra.com
• need answers to factoring tree questions
• how to convert a mixed number to a decimal
• ti calculator downloads
• define pre-algebra
• wats the quadratic formula of 4xsquared+100
• texas instruments calculator need to know which one works college algebra step by step out to final answer
• free online Algebra Addition Method
• simplifying rational expressions calculator
• glencoe algebra 2 answers
• Grade 5 Algebra Solving Equations
• quadratic factor calculator
• rules of adding and subtracting powers
• 6 grade math fraction cheat
• subtraction of fractions solving problems
• online algebra help simplifying radicals calculator
• HELP USING THE NTH TERM IN MATHS
• Cost accounting sample questions
• compare numbers in scientific notation worksheets
• download free aptitude question paper
• decimals into fractions calculators
• multiply or divide expressions involving exponents worksheets
• algebra simplify radicals calculator
• roots of exponentials
• download aptitude question answers
• year 7 sats maths 2004 for free
• trigo bearing problems with answers
• prentice hall pre-algebra practice workbook answer key
• finding the square root of a fraction
• combinations and permutations powerpoint
• glencoe algebra 1 online textbooks
• gcse factorise quadratic
• matlab solve order 3 equations
• perfect square root
• dividing games
• algebra 1 answers
• how to add, subtract, multiply and divide fractions
• Mathtype laplace Symbol
• Notes on Distributive Property
• x variable and unbalanced equations math
• how to find residuals on ti-84
• how to go from decimal to fractions
• simplifying expressions calculator
• mathamatics
• square worksheets
• working restrictions for an absolute value equation
• algebra 2 honors logic worksheet
• distributive law free worksheets
• algebra with pizzazz creative publications
• math trivia and exercises
• worksheets on solving one step addition and subtraction equations
• maths test age six free
• rewriting square roots using exponents
• practice workbook algebra 1 mcdougal littell
• algebra with pizzazz answer to worksheet
• power basics consumer math for sale
• free printable expanded notation with exponents, order of operations with exponents, and power of fractions worksheet
• pre-algebra math poems
• first grade mathmatics
• sixth grade solving two step equations multiple methods
• solving systems of equations with 3 variables worksheet
• solve for specific variable worksheets
• solving systems of equations algebraically or graphically for x and y
• change .55 to a fraction
• KS3 algebra powerpoints
• partial sums addition method
• factor cubed polynomials
• radical exponents
• simplify square root fraction
• math algebra age problems powerpoint
• free download basic accounting books
• factorise quadratic cubed
• college algebra calculator
• easy trick for finding gcf
• math equations for ninth graders
• McDougal Littel worksheet keys for 8th grade math
• 9th grade algebra online
• factoring non quadratic equations
• ti 83 equations
• resource book, algebra, structure and method, book 2 answers
• multiplying exponential expressions worksheet
• worksheet permuations and combinations
• answers for the prentice hall chemistry review book
• Management and Cost Accounting: Student's Manual download
• printable exponent pages
• algebra grade 9 worksheets
• three intercept formula
• second grade singapore math powerpoints
• linear equation solution two variable formula
• cpm algebra 2 solutions
• adding subtracting multiplying and dividing integers and fractions practice
• Download Free Educational Worksheets
• solving hyperbolas
• 8th grade algebraic expressions
• help with 7th grade algebra
• Deriving the Quadratic Regression Equation Using Algebra
• use newton-raphson method in matlab to solve the nonlinear equations
• square root web games
• graphing linear inequalities on a coordinate plane
• powell's dog leg fortran
• applications of trignometry in our daily life
• answer algebra questions
• free printable pre algebra worksheets
• free video on factoring oplynomials with exponents
• a level maths combinations and permutations
• Word Problems Using Integers
• how to get a polynomial resulting from a subtraction equation
• free physical science worksheets with answers
• calculator log base 2 texas ti 84
• adding and subtracting fractions worksheets
• linear equations trivias
• simplifying complex rational expression
• convert mixed number to decimal
• whats the greatest number of right angles a triangle can contain
• prentice hall algebra 2 workbook answer key
• algebra and trigonometry structure and method book 2 mcdougal littell answeres
• algebraic equasion
• my algebra calculator
• free printable algebra 1 problems
• Units of slope of an hyperbola
• .36 decimal as a fraction in lowest terms
• An online maths test 11+
• algebra answers
• how to solve area of trapezoid using c++ programming
• class 6 sample paper
• steps in making an investigatory project in math
• printable stem and leaf worksheet
• simplfying square route problems
• square meters to lineal meters
• how to solve simultaneous linear equations in two variables
• converting a mixed number to decimal
• "intermediate 2 maths revision"
• "Conceptual Physics WORKBOOK" solutions
• pre algebra with pizzazz answers
• button on calculator to convert number to fraction
• sample mixture problem
• ti 89 laplace transform
• mcdougal littell online textbook
• permutation and combination problems for gre
• Holt, Rinehart, and Winston Modern Biology Vocabulary Teacher answers, vectors algebra beginners, accelerated reader tests cheat download
• Lesson 4.3 6th grade
• multiplying cube roots
• simplifying polynomials calculator
• tutorial mathematica
• 8th free worksheets
• Give Me Answers to My Math Homework
• multiplying and dividing Rational Numbers worksheet
• McDougal Littell World History Chapter summaries
• percent worksheet
• college algebra program
• how far do you go when trying to find the greatest common factor
• second order differential equation runge kutta solution matlab
• solving equations, worksheets, ppt, activities
• beginner algebra worksheets
• how to do beginning algebra free lessons
• real life linear equations and graph
• permutations and combinations worksheet
• answer key to glencoe accounting
• polynomial factoring calculator
• exponents and radicals
• aptitude test online 6TH GRADE
• exercises on indian math
• quadratic rational equations solve
• multiplication of polynomials square worksheet
• algebraic fraction solver free
• math trivia hardest trivia
• walter rudin solution
• binomial factoring calculator
• equilibrium equations
• TI-83 plus factoring polynomials
• algebra answers for substitution
• simplifying ratios algebra
• Add, subtract, multiply and divide fraction worksheet
• permutation and combination tutorial
• radical expressions
• hardest maths equations
• adding and subtracting money with decimals
• simplifying square root equations
• a nonlinear equation
• physics tests 6th grade
• quadratic simultaneous equation solver
• ellipse, simple explanation for grade 9
• online factoring solvers
• what website lets you practice y-intercept math problems
• how to simplify radical expressions
• Free Printable Spelling Worksheets for college algebra
• sample algebra test houghton mifflin test 38
• algebra 1 selected all answers
• simultaneous exponential equations
• math writing algebraic expressions, equations powerpoint
• algebraic equations year 6
• positive and negative number worksheets
• Help On scale factor
• Free Printable Books - thinking for a change
• word problem in first degree equation involving equalities and inequalities
• maple simplify inequalities
• how to solve equations with variables and square roots
• mathematical trivia
• KS2 maths equation example
• sample gre combination and permutation problems
• math course textbook artin algebra
• Riemann sums on the TI-83
• 9th grade math worksheets
• algebrator.com
• best algebra book
• worksheets with algebra and variables
• easy algebra solutions
• examplesof problems in algebra high school
• euler's method for graphing calculator ti 84 plus
• mathematics poems
• solve algebra problems
• how to solve addition and subtraction of radicals
• how to factor fractions using difference of two cubes
• cube root calculator
• answers to algebra 1
• probability solved problems for note
• first lesson of simplifying binomial expressions
• LONG DIVISION OF POLYNOMIALS solver
• chart or radical square roots
• root mean square online calculator
• brandon hall ppt math module II review
• algebra 1 review for slope
• Free Math Cheats
• learn basic algebra
• free math textbook answers hbj
• ti 84 log graph
• how to do log base 5 on ti 83
• pizzazz worksheets
• parabola inequality signs math games
• partial sums algorithm worksheet
• using the trace feature on a TI-83 calculator
• java sum of integers
• free ratio proportion sums for 6th graders
• printable kumon worksheets
• solve polynomial java
• sample word problems involved in application of factoring
• algebraic formula questions
• cost accounting free pdf books
• algebra with pizzazz answers pg 225
• work out a decimal AS A FRACTION AT ITS LOWEST TERM
• free printable coordinate grids worksheets
• ALGEBRA FORMULA ax+by
• Integers multiply and Divide rules and worksheets
• solving simple equations worksheets
• quadratic simultaneous equation solver on excel
• how does scientific notation work
• holt mathematics lesson 4-2 worksheet fractions 6th
• prentice hall conceptual physics answers
• high level integral solutions in permutation and combinations
• steps on how to simplify complicated rational expressions
• free software for writing algebra tests
• system of equation calculator quadratic equation
• calculate greatest common divisor euclid
• advanced algebra trigonometry/ glencoe
• integers distributive property problems
• mcdougal littell nc edition geometry pictures
• "massimiliano celaschi" OR "celaschi massimiliano"
• simplifying square root e^3
• standard form form calculator maths
• multiple step algebraic worksheets generators
• hard algebra 2 problems
• introducing algebra
• algebra textbook india
• hard math equation
• holt algebra 1 homeschool
• expansion of algebraic expressions in real life
• multiple choice exams in abstract algebra
• table patterns free worksheet
• free cost accounting material
• solving cubed radicals
• simplifying cube square root variable problems
• writing quadratic equations with sum and product of roots powerpoint lesson
• how to balance algebraic equations
• bearing problems in trigonometry
• online conic graphing calculator
• solving multiple variable equations
• ks2 year 4 english textbook passwords on websites
• solve quotient rule calculator
• math trivia with answers and solution
• how do ellipses effect my everyday life
• how to combinations ti-84
• how to do algebra
• how to order fractions from least to greatest
• decimal to radical
• holt algebra 1
• online polynomial calculator with solutions
• cost accounting 1 exams
• maple source code for newton's method of nonlinear systems of equations
• free 6th grade lessons
• algebraic
• free factoring polynomials calculator
• radical solver
• printable 5th grade placement test
• GCE sample maths exam papers of 2004
• cubed root of a large fraction
• type in your algebra problem and we solve
• free worksheets simplifying algebraic expressions
• absolute value inequalities null set
• algebra 2 factoring done for you online
• holt algebra textbook answers
• simultaneous 3 unknowns
• 9th grade science worksheets
• easy explanation on how to solve logarithmic equations
• mcdougal littell algebra 2 answers
• algebraic problem comparison
• download project pair of linear equations in two variables
• cost accounting ebook
• Preparing for the North Carolina Algebra 2 End-of-Course (EOC) Test Practice and Sample Test Workbook
• algebra and trigonometry structure and method book 2 mcdougal littell answer sheet
• What is the algebraic formula for finding percentages of a number
• java iq test statistic graph
• circumference story problems for 6th graders
• How to calculate the square root of a number in child form
• solving simultaneous equations algebraically
• free printable factoring worksheets
• "conceptual physics" workbook
• formula of solving ratio in mathematics
• mcdougal littell math course 3 resource book
• simplify equations calculator
• Kaseberg introductory algebra 3rd edition
• math trivia with question and answer
• download algebra point
• math equation finding a percentage
• trigonometric ratios powerpoint lessons
• free online papers for SATS revision for 6th Grade
• best basic geometry textbook
• free polynomial solver
• algebra 2 answer
• basic algibra
• balancing multiplication math equations
• McDougall Littell Vocabulary books
• how do you find the vertex in an equation?
• practicing scale factor
• PRE Algebra Online Calculator
• worksheets of LCM
• hbj algebra 2
• creative publications worksheets
• maths lesson factors and multiples grade five
• free latest SAT English sample papers for practice downloadable
• meijer function fortran
• Holt Algebra 1 Chapter 5
• solving fractions equations addition and subtraction
• answers to the year 10 mathematics triangle test
• adding square root calculator
• texas instruments ti-84 slope field program
• laplace on TI-89
• cube root fraction math
• practice sheets for slopes in algebra
• teach me algebra 2
• florida Prentice Hall Algebra 1
• Solving Basic Equations Using Fractions
• simplified radical numbers
• "kumon answer"
• free algebra solver
• polynomial root finder c#
• McDougal Littell Textbooks
• Accuplacer practice test for construction shift
• converting 3 quadratic equations
• order fractions
• college math software
• surd solver
• free trig identity solver
• math power 8 text book
• how to solve algebra equations
• how to calculate a lineal metre
• modern chemistry worksheet answers
• trig calculator
• elementary algebra worksheets
• "simple interest" and "lesson plan" and "grade 11"
• aleks cheats
• foiling calculator
• online inequality calculator
• McDougal Littell workbook answers
• how to add formula ti 89
• cpt practice-algebra
• math trivia about polynomial function
• How to do determinant solution of linear systems using TI-84 plus
• prime factorization of the denominator
• simplified negative radical form
• solved questions on permutation and combination
• worksheets on slope
• answers to prentice hall review physics
• programs for greatest common factor of algebraic numbers
• percents to decimal tool
• algebra worksheets for third graders
• elements of calculus and analytic geometry-answer key
• how to find cube root on scientific calculator
• hyperbola equations
• algebra sums
• matrix 3 equations and 3 unknowns applications
• convert mixed number to decimal converter
• ti 83 plus rom image
• adding subtracting multiplying and dividing integers and fractions practice
• Excel "simplify a fraction"
• mixed number to decimal calculator
• algebra free downloads for practise ks2
• add subtracting radicals exponent
• mathematics trivia
• solving logarithms algebraically
• prentice hall algebra 1 anwsers
• dummit foote pdf
• free kumon worksheets
• conversion sats maths questions
• solving subtraction equations
• south-western algebra 2 : an integrated approach worksheet answers
• Penny Doubled Excel formula
• finding greatest common denominators
• algebra 1 tutorial software
• year 11 algebra free maths questions
• equation of form ax+by=c
• simplify radicals on ti-89 calculator
• free gcse math tests and answers
• free absolute value equation solver
• scale factors made easy
• Free download math aptitude test
• mixing trig and complex functions
• online Algebra Addition Method
• mcdougal littell world history book worksheets
• algebra lessons for 9th in pdf format
• free lesson plans & mean median mode and range & 4th grade & worksheets
• Year nine free online maths test
• algebra for dummy
• square root property calculator
• 2nd grade nyc quiz
• 8th grade graphing two variable equations worksheet
• system equation calculator
• sleeping parabola
• ninth grade math answers to math riddles
• how to solve a powers problem on my TI-83 calculator
• powerpoints and pre-algebra review of semester 1
• multiplying square roots calculator
• FLORIDA ALGEBRA 1 WORKBOOK ANSWERS
• free fraction worksheets for six grader
• math problem, trick, fun, ppt
• fractions solving
• download accounting book
• algebra with pizzazz
• Printable GED Worksheets
• square of difference
• fractional exponent with fractions
• gcse maths help-finding the area of a circle
• radical quadratic equation solver
• store pdf on ti 89
• scale factor 8th grade worksheets
• combine like terms worksheet
• middle school electricity equation project
• Trig ratioword problems and answer keys
• algebra grade 10
• graphing linear equations powerpoint
• analysis of variance for RANKING THE INPUT VARIABLE
• simplified radical form for 4th root
• learning percentages worksheets
• printable worksheet of word problems involving quadratic equation
• GED For Dummies download
• formula for sq root excel
• heaviside ti 89
• parts of square root in algebra
• maths factorise expressions worksheet
• adding subtracting multiplying dividing decimals worksheet
• scale factor of reduction word problems
• free worksheet for age 13
• imaginary numbers free worksheet
• sat 10;5th grade;reading comprehension skills;cheat sheet
• mathematical trivia question and answer
• 11 plus problem solving questions
• free powerpoint on singapore maths fraction word problems
• were to find answer to glencoe workbook
• composite function to convert "kelvin to fahrenheit"
• glencoe algebra1
• simplifying radicals with squares and subtractions
• pearson mathematic "algebra problems"
• Addition and subtraction of integers worksheets
• pre algebra 7th grade advanced book
• math problem solver
• pizzazz prime time worksheet
• solve applied problems
• saxon math answer sheets
• solve cubed integers
• solving absolute value equations + 8th grade algebra
• how to use substitution method
• answers to math homework
• examples of math trivia
• lcd fractions calculator
• solving algebra problems with saxon math
• how to solve algebra word problems by converting to mathematical forms
• graphing linear equations, free worksheets
• free trigonometric calculator
• free trig problem solver
• glencoe physics answers
• convert decimals to mixed fractions
• common factor of 26 and 39
• self check probability pre-algebra quizzes
• "free" "algebra" "helps"
• cube root conversion
• practice paper of math of 11
• factoring equations calculator
• math poems about combination
• permutations and combinations activities
• f(x) online graphing calc table
• calculate factor equations
• online algebra help simplifying radicals
• how do you solve fraction decimal percent
• calculating with ti-89
• easy trivia questions for 6th graders
• KS3 algebra graphs
• least common multiple of 33
• write a quadratic equation with the given roots calculator
• solving multivariable equation with MATLAB
• interactive step by step how to solve equations free sites
• factoring polynomials cubes
• aptitude maths questions
• how to solve an algebra problem with two variables
• online cheat homework algebra
• north carolina eog test practice glencoe mathematics
• solving nonlinear partial differential equations
• FREE 8TH GRADE WORKSHEETS
• grade 9 math papers Algebra
• poems in math
• expanded notation with exponents, order of operations with exponents, and power of fractions worksheet
• Cube Root Calculator
• variables worksheets 6th grade
• chi square calculator
• free dividing multiplying adding subtracting integers worksheet
• application of linear equation in two variables
• how to teach solving equations
• math poems points
• algebra 1 sample questions
• adding and subtracting positive and negative worksheets
• solving for exponents with x calculator
• learn to add subtract fractions
• how to change xres on ti-89
• physical activity to teach graph for GCSE maths
• steps to balancing equation
• quadratic equation by factoring calculator
• algebra 1 rate word problems tutorial software
• free worksheets for slopes in algebra
• Completing the square of a cubic equation of two variables
• ti-89 system of equations
• online practice clep test for statistic
• holt physics test answers
• prentice hall algebra 1 ca online textbook
• least common denominator formula
• free elementary algebra tutorial
• answers to chemistry prentice hall workbook
• 3rd order polynomial solver
• root solver
• simplifying trigonometric equations
• algebrator online
• twistor solution nonlinear differential equation
• summation/practice problems with answers
• simplify polynomial calculator
• maths+exercises on permutations
• vertex in algebra
• multiplying standard form
• free textbook answers trig
• aptitude questions and answers+pdf
• massimiliano celaschi
• trigonometry test papers for class tenth
• using addition to solve subtraction worksheet
• GCSE ALGEBRA WORKSHEET
• lcm calculator
• sums of maths 9th std
• free trig function solver
• algebra 2 answers
• free polynomial solutions online
• Multiplying algebraic expressions with distributive property
• radical on calculator
• prealgebra
• what is a fraction expression
• surface integral math + free book
• print of math revision sheets ks3
• 6th grade division problems
• how to solve algebra 1 slope equations
• free online IQ test for 6th standard
• simplifying square roots 2
• convert mixed number in decimal form
• mathematical depreciation formula
• sample test for algebraic symbols
• factor two variable
• graphing a function with a restricted domain texas instruments
• homework helper for 11th grade algebra 2
• prentice hall chemistry topic 2 formulas and equations answers
• mathematics trivias (geometry and trigonometry)
• how to use probability models on the graphing calculator
• factoring, worksheet
• algebra calculator simplify
• solving equations worksheet
• factor quadratic equations on calculator
• solution exam abstract algebra
• java bigdecimal convert base 10 to time
• roots with exponents
• sums of radicals
• algebra 2 by McDougal Littell solution manual download
• online practise free ks2 fractions
• factor quadratic equations calculator
• elementary mathematical balanced equations worksheets
• 7th grade math multiply and divide fractions
• grade 5 math exam papers and answers
• example of math trivia question about equalities and inequalities
• Advanced Algebra Help
• "ordered pairs games"
• fractions to decimals calculator
• easy ways to learn math facts
• exponents variables letters
• algebraic calculator
• linear equation
• solve exponential subtraction
• algebraic expressions
• Answers for my Homework on Real Numbers
• free online learning beginning algebra
• florida prentice hall mathematics pre-algebra
• sample project of calculating prime numbers in visual basic using loops
• McDougal Littell geometry workbook answers Lesson 6.4 practice B
• focus and vertex of a parabolic function
• simplifying variable exponents
• elementary math trivia
• first order semilinear pde system
• finding the nth term of a logarithmic sequence equations
• glencoe math problems
• free year 6 science test papers
• ti-83 calc online calculator
• aptitude questions with answer free download
• worksheet answers
• How to determine an exponential equation using a TI-83 calculator
• percent formulas
• balancing algebraic equations
• math poem
• solving equations with rational coefficients calculator
• slope of the line with polynomial phrase
• can you do the substitution method on. graphing calculator
• algebra worksheets elementary
• how to do cube root on calculator
• percent to fraction converter
• common techniques in addition and subtraction
• variable chapter 4 prentice hall florida math
• pre algebra 7th grade advanced book worksheets
• roots of equation & algebra
• slope + graphing calculator
• ti-83 calculator cube root
• Prentice Hall Algebra 1 Answer Keys
• compare and contrast the various methods for solving quadratic equations.
• free Algebra help +type in problem +solutions
• polynomial division solver
• free algebra quizzes with answers
• ti-83 scientific calculator online
• ti 84 cheat sheet
• how to find the gcf of variable expressions
• latest math trivia
• inequality worksheets free
• Answers to McDougal Littell Worksheets
• algebraic expressions with variable practice online
• elementary and intermediate algebra mark phoenix
• prentice hall chemistry worksheets
• free maths for dummies
• interesting math poems
• trig identities homework solver
• free learn boolean algebra software
• "Chapter 6 Test A" "McDougal Littell"
• FREE ONLINE FOR MATHS WORKSHEET FOR 8TH STANDARD
• Coordinate Plane Free Worksheets
• algebra 2 least common denominators
• free intercept graphing calculator online
• complex number free worksheet
• area worksheet
• radical calculator scientific
• aptitude test pdf files download
• Learning simple Algebra online
• negative numbers worksheets free
• high marks regents chemistry made easy answers
• matlab formulas
• solving higher order polynomials
• Greatest common factor calculator
• fraction equation
• how to solve Birkhoff Interpolation problems with maple
• multiple variables nonlinear equation system with matlab
• ti-83 calculator programming codes
• evaluate variable with exponents
• graphing linear equations ppt
• rules for exponent worksheets
• how to calculate parabola stretch
• factoring polynomials slope roots
• permutation exercises gmat
• scott foresman addison wesley Algebra lesson masters
• algebra trivia
• saxon math algebra 2 test answer
• grade 9 pass papers Math
• algebra math trivia's
• factorize cube root polynomial
• how to solve fractions as decimals
• math grade 10 worksheet calculate slope
• solving quadratic equations with two graphical points
• printable algebra games
• sum and difference of rational algebraic expressions
• glencoe algebra chapter 9
• TI-83 and Statistics cheat sheet
• java convert int to time
• examples of math trivias?
• online converting fractions to mixed fractions calculator
• nonhomogeneous first order differential equation
• Free Printable Algebra Problems
• beginning algebra worksheets
• prentice hall biology workbook answers
• 9th grade math worksheets multiplying and dividing fractions
• geometry words + poems
• long division w/ decimals no remainders whole numbers
• holt algebra 1 answers
• where can i get an answer to an algebra term by expanding
• pre algebra with pizzazz answers for free
• solve each formula for the specified variable worksheet
• prealgebra+ratios and equations
• 11+ printable sheets for grammar school
• Easy math basics for Compass test
• how to factor a polynomial with a greates monomial factor of 1
• www.NJ Middle School Math With Pizzazz! Book E answers
• changing from fraction to a radical form
• solving complex trinomials
• algebra power formula
• factor equation program
• factor calculator
• quadratic equation factoring calcualtor
• percent equations
• free highschool math books download
• laplace transform calculator
• solving software for 0x80072745
• simultaneous equation solver
• how to calculate powers in visual basic
• free radical simplifier solver
• algebraic substitution worksheet
• convert 6 x 7 meters to square meters
• prentice hall chemistry answers
• solving addition and subtraction equations
• ONLINE FACTORIZATION
• equality and inequality worksheets
• Accounting Book for free
• simplifying radical calculator
• algebra problems
• solve for regression quadratic line algebraically
• inequality word problems
• multiple square root radical sign
• scale factor math
• Root Squaring Method
• free intermediate algebra tutors
• best accountant sheets downloads
• changing difference type
• free combining like terms worksheets
• precalculus with limits a graphing approach third edition answer key
• vector linear equation and plane problem
• limits pre calc solver
• How to find a scale math
• algebra and trigonometry textbook munem
• poems about algebra
• given vertices write the equation of the hyperbola
• algebra exercises 2nd grade
• question and answer math trivia
• prentice hall fraction equations
• absolute value inequalities product and quotient
• substitution method adding and subtracting multiplying and dividing
• ti-89 solve remove variable
• ti84 emulator
• rule of quadratic equation
• common denominator calculator
• how to get scale factor in seventh grade pre algebra
• sum maths key GCSE worksheets
• learning basics for 8 years math games
• Advanced Algebra Worksheets
• Radicals that can be simplified chart
• coordinate plane power points
• convert decimals to fractions worksheet
• prentice hall workbook answer key
• graphing pictures on a calculator
• Glencoe algebra 2 Chapter 3 test form 2c
• accounting pdf printout
• online free algebra prep test
• college algebra solution solver
• most complex mathematics in the world
• Simplifying Algebraic Expressions
• Algebra Addition Method Tutor online
• topic 7-b test of genius in pizzazz
• division of polynomials with multiple variable
• worksheet answers for algebra 1 mcdougal littell
• mathematical induction for dummies
• pictures showing the application of rational algebraic expressions
• free printable resources ks3 maths
• "Fluid Mechanics" PPT
• algebra lessons the properties of roots
• quadratic inequality solver
• free online algebra quiz
• addition equations worksheet
• simplify square roots with exponents
• free help with algebra grade 9 finding area and perimeter
• mathproblems.com
• Star Math font download
• Mathtype laplace
• aptitude books downloads
• Free Algebra Problem Solver Online
• Algebra with Pizzazz Answer Key
• simplify radical calculator online
• complex quadratic equation solver
• 9th grade math printable worksheets
• algebra with pizzazz worksheets
• what is percent and proportion
• list of algebra formulas
• writing a quadratic equation in vertex form
• algebra riddle answers
• 11+ mathematics paper
• simple math trivias
• trigonometry in daily life pictures
• Solve Radical Expressions
• how do i do square root on excel
• how to solve a quadratic formula while missing the constant
• how to multiply large radicals together calculator
• basic calculus
• solving inequalities in matlab
• comparisons between fraction, percents and decimals worksheet
• solve problems using patterns worksheets
• worded chemical equation
• site that calculates fractions from least to greatest
• middle school math with pizzazz book e answers
• how to solve algebra on my TI-83 calculator
• elementary math trivia with answers
• Multiplying and Dividing Square Roots with variables
• free accounting learning book
• system of equation as graph
• convert radical 8
• quadratic equation factoring imaginary roots
• ONLINE MATHS MCQ SOLVING
• math, translating expressions into algebraic along using phrase less than and subtracted from
• mcdougal littell world of chemistry book answers
• free help on how to solve rational equations
• holt mcdougal history worksheet
• how to solve algebraic equations with square roots
• answers to saxon math algebra 2
• simplifying rational expressions
• modern chemistry chapter 8 review answers
• why is it important to simplify radical expressions before adding or subtracting
• formula solving
• square root property formula
• free math e-books
• solve algebra problems show steps
• multiplying and dividing Rational Numbers variables worksheet
• y =mx+b to standard form worksheet and answer sheet
• algebra 2 facts
• how to solve quadratic equations k
• cancellation of square roots in equations
• word problem involving rational algebraic expreesion
• adding mix numbers Fractions with
• algebraic symbol manipulation calculator
• math for dummies
• how to solve fractions
• Investigatory - mathematics
• cost accounting MCQ's
• Simplify the square root of 125
• square root algebra
• Free online ti 83 calculator download
• free ninth grade algebra course online
• free worksheets for first grade with answer key
• Integer Addition and Subtraction Equations
• solving the equation by extracting square roots
• how to understand basic algebra
• online solve for two functions
• how to convert mixed numbers into decimals
• algebra dividing calc
• Convert a Fraction to a Decimal Point
• linear combination method algebra 1
• matrices and determinants for beginners
• multiply & divide scientific notation; worksheet
• Basic biology trivia questions and answers
• radical calculator
• turn decimals to radicals
• special products problems
• variable in the exponent
• system of equations table of values
• basic formulas of permutation and combination
• simplify by combining like terms
• angle notation/TI-83
• basketball dealing with algebra
• order least common multiple
• 8th grade rational and irrational activities/worksheets
• prentice hall algebra 1 california edition
• printable algebra worksheets
• least common factor
• linear equation problems, 4th grade
• Sample Math Trivia
• interactive square cube
• mcdougal littel world history worksheet answers
• int_alg_tut29_specfact.htm
• nc 9th grade algebra practice
• SOLVE FOR A VARIABLE worksheet
• secant method+matlab
• mcgraw-hill indiana science 8th grade student worksheet answers
• how to solve binomial equations?
• Solving Equations
• downloadable ti 89 calculator
• trigonometry solver online
• McDougal Littell Algebra 2 Standardized Test Practice Workbook answers
• how to calculate gcd
• "compound interest" and "lesson plan" and "grade 11"
• free algebra ii worksheets
• arithmetic sequence, point slope form on a graph
• math investigatory project
• download aptitude paper
• printable mat sheets for a 7th grader
• free download chemistry question papers for revision with answers for grade 13
• simplifying a cube root
• download applications for ti84-plus
• TI-84 PLUS CALCULATOR INSTRUCTIONS FOR SQUARE ROOT IN GRAPHING
• accounting book, smaple book
• free practice sat questions ks3 science
• online maths test paper
• linear equation solver 3 unknown solver
• largest common dominator
• answers to rational expressions
• prentice hall algebra 1 online textbook
• simplify equations worksheet Combining Like Terms and Solving
• algebra online question o level
• different method to solve polinomial degree 3
• quadratic equation 3 points solve online
• teach me about hyperbolas
• radical program for TI-84 plus
• saxon math 7 and 8 cheat sheets
• how to solve for x on a graphing calculator
• nonlinear differential equations
• mcdougal littell workbook answers
• mathematics presentation linear inequalities ppt
• simplest radical form calculator ti-84
• 3rd math sheets
• erb sample test
• completing the square worksheet
• free pre-algebra projects
• java aptitude question
• intermediate algebra worksheet
• third grade algebraic expressions
• Algebra and Trigonometry Structure and Method Book 2 Mcdougal Littell Mid-Year Test
• equilateral hyperbola logarithm
• Software hp "Solve Equations"
• greatest common factor of 12,18,36
• chapter 7 algebra 1 test answers mcdougal littell
• synthetic division solver
• integers, worksheets, free
• INTERMEDIATE ACCOUNTING pdf free download
• definition of quadratic inequalities
• algebra fractional exponents
• history of linear equation in two variables
• dividing integers worksheet
• how do i solve lcm with ti-83 plus
• Translate mathematical equations cheat
• math worksheets 10th grade russian
• adding mixed numbers solver
• 8th grade math test
• standard window settings for ti-89 in diff equations mode
• maths yr 8 test
• algebra expression calculator
• simplifying "variable expressions" games
• need answers to math problems factoring trees
• add subtract integers worksheet
• Algebra Dummies Free
• artin algebra solution
• algebra addition graphing
• add subtract divide multiply fractions
• calculating hyperbola equations
• Simultaneous Equations online calculator
• 11+ practice papers maths and english online test paper
• algebraic math for idiots
• factoring method equation
• how to simplify algebraic fractions
• permutation and combination solved
• Free Answers to Geometry Homework
• trivia in math
• solve by graphing 2cos(x+2)-(X/3)=0
• High school basic algebra workbooks
• application to trigonometry in daily life
• learn computer language for beginners
• quadratic equations, fraction form
• cubic root calculator
• how to change a decimal to a mixed number
• Convert mixed numbers to decimals
• Free Printable Homework Sheets
• equation solver PPC
• worded sats maths questions
• maths worksheets for KS2
• Pre-algebra: Tools For A Changing World india
• algebra radical on ti 89
• 7th grade math question with answer key
• learn pre algebra for free
• variable fractions
• permutation & combination of c program
• algebra program
• how to solve table of values equations
• nth terms when the sequence is decreasing
• algebra with pizzazz page 191
• how to convert mixed number to decimal
• algebra solver free trial
• how to draw pictures using coordinates with TI-84+ calculators
• Free 5th grade math worksheets ploting on coordinate grid
• ns model question papers download+8th class
• PRE-ALGEBRA
• 9th grade algebra help
• techniques in solving addition
• algebra equations powerpoint
Yahoo users came to this page yesterday by typing in these keywords:
• adding subtracting multiplying and dividing fractions worksheets
• calculate greatest common divisor
• graphing hyperbolas applet
• free lesson plans for simultaneous linear equation
• rational equations Calculator
• ti-83 rom download
• find the least common multiple of 24 and 55
• how to find a solution set radical
• 8th grade algebra 1 midterm, study guide
• rational expressions (multiply and divide)
• Least Common Denominator Calculator
• equations with variables on both sides calculator
• how do i change a mixed number to a decimal?
• beginners graphing in algebra
• trivias and games about quadratic inequalities
• ti 84 guide simplify radicals
• calculator program graph curve ti
• worksheets + add/subtract fractions with unlike denominators
• Teach yourself college algebra
• free gce advance level accounting tutorials
• algebra II textbook for TI-89
• help in 9th grade algebra
• algebra calc program emulator
• sample 2nd grade math test
• 8th grade rational and irrational free worksheets
• how to simplify square roots
• Glencoe Mathematics Florida Edition Geometry Answers
• elementary algebra mark dugopolski 4th edition answers
• how do i find the square root of a fraction
• Answers to Trigonometry Problems
• how to find the 4th root of a variable
• formula used to convert time from hundred to regular
• basic logarithmic
• free polynomial solve
• solve systems 3 variable worksheet
• roots of equation solver
• quadratic equations perfect squares
• trivia questions in math
• free download engineering aptitude questions
• prentice hall math worksheets
• math poems for seventh grade
• percentage of a number formula
• ti calculator roms
• differences between linear and quadratic equations
• convert fractions into simple form
• difference between trigonometry edition 8 and edition 9 lial
• expanding expressions activities
• writing a function in vertex form
• how do i measure distance on a right triangle in pre algebra
• ks2 numeracy homework test questions test sheet 8
• algebra 1 worksheets EOC
• solving problems with two variables
• first order differential nonlinear equation nonhomogeneous
• how to fully simplify the square root of 16
• find an equation for linear function f
• radical expressions solving
• sample papers of maths for 7th class
• math b regents answers shown work
• ti-83 plus systems solver
• simultaneous equation calculator 3 unknowns
• slope and y intercept worksheet
• florida algebra 1 exam
• slope worksheet
• Radicals that can be simplified
• games download for ti 84 plus
• absolute one variable equations
• what is a radical form
• algebraic proportion worksheet
• aptitude solving answers
• product and quotient calculator online
• How to solve third order equations
• sqm to lineal metres
• apti questions free download
• Middle School math with Pizzazz! Book B Test of Genius
• glencoe algebra 2 textbook answers
• quadratic standard formula calculator
• pre-algebra with pizzazz dd-11
• worksheet change the subject of the formula
• transformation math worksheets
• equation solver square root
│examples of math +trivias? │how to simplify fractions with square roots │
│cubic root calculator equation │help solving elementary algebra problems │
│quadratic equations, and permutations │ti 84 cheats │
│5th grade math word problems │10th maths guide software │
│Online TI-84 │algebra hungerford solutions │
│free fraction tree worksheets │self teach algebra 2 program │
│free │ │
│ │dividing monomials free worksheets │
│elementary algebra │ │
│6th grade math notes │mcdougal littell math answers │
│ordering fractions least to greatest │free practice sat questions ks3 │
│trivias about math │algebric equations │
│mixed fraction to decimal │how to write an equation in vertex form │
│mcdougal littell algebra 2 resource book answers │math problem solving worksheet for fractions │
│texas Alg 2 prentice hall ANSWERS │substitution homework worksheet KS3 │
│adding variable exponents │simplifying radicals cheater │
│high school science test worksheet │saxon algebra II parabola problems │
│8th grade pre algebra worksheets │subtract two integers what is 3- (-5) │
│decimal to fraction worksheet │free mcdougal littell algebra 2 worksheet answers │
│factoring equations machine │beginner algebra games │
│prentice hall pre algebra answers │college algebra and trigonometry sixth edition answers │
│algebra software │implicit differentiation calculator │
│worksheets for evaluating math sets and subsets │trigonometric trivia math │
│sample papers class 8 │"Mathematics: Structure and Method Course 1" practice │
│glencoe math algebra 2 EOC workbook answer │matlab triangles tutorial │
│3rd grade free practice │answers to the North Carolina Test of Algebra 1 │
│elementary algebra linear equations with one variables study│the first and second terms of geometric sequence have a sum of 15, while the second and third terms have a sum of 60. Use an agebraic │
│guide │method to find the three terms │
│year 11 maths work sheet │TENTHS TO FRACTIONAL CONVERSION │
│conceptual physics practice exam │graphing 3 variable equation on ti83 │
│adding square root rule │pre algebra square roots worksheet │
│multiplying expression calculator │prentice hall physics answer key │
│year 7 simplifying equations worksheet │math trivia with charts │
│mcdougal littell practice workbook factor the sum or │dividing mix numbers │
│difference of cubes │ │
│pre algebra pizzazz answers │percentage equations │
│how to become good in algebra │diamond method solving quadratic equations │
│decimals multiplication student solution errors on sats │write standard form equations with graph │
│papers │ │
│free printable math sheets for a 7th grader │online free solutions for :"contemporary abstract algebra" │
│combine like terms activity │algebra 1 concepts and skills whole book online │
│conceptual physics 9th edition teachers edition │solved aptitude questions │
│math trivia with answers mathematics math word problems │ged cordinate geometry worksheets │
│algebra │ │
│how to simplify ratios with mixed fractions over mixed │math formulas percentages │
│fractions │ │
│write each decimal as a fraction or mixed number in it's │prentice hall pre-algebra practice workbook answers │
│simplest form │ │
│free test papers for year 6 │quadratic equation relationship between roots and coefficient │
│math poems │math homework answers │
│find the vertex │glencoe algebra 2 cheat sheet │
│examples of solving quadratic equation by extracting the │t-83 calculator online │
│roots │ │
│2nd grade printouts free │how to pass algebra │
│biography of mathecians │Precalculus for beginners │
│delta dirac in ti 89 titanium │what is the formula for modulo n │
│solving systems of linear equations worksheet │radical to fraction │
│radicals absolute values │basic concept in math algebra │
Helpwithalgebra, trig calculator, algebra facts, free accountant book, sat math "formula sheet", exercises math to practise.
Algebra factoring calculator, algebre dictionary, Algebra and trigonometry answer, "grade 4 long division", algebra program, algebra sums beginners, online work on algebra for y7.
Excel graph template for algebra, simplify math equations, percentage problems gcse, solving multiple variable equations.
"Algebrator", ti-89 simple graphing picture programs, galois theory +homework solutions, statistical equations and cheat sheet.
Free algebra solver for rational expression, coordinate graphing PRINTABLES!! for 7th graders, how doyou solve radical equations with exponents, quadratic equation that has the solution set of
(7,-3), a.java program to calculate compound interest\.
Free algebra solver software, free on line tutorial mathematic, factoring polynomial great common factor, mathimatical converstions, third graders printouts, mcdougal +littel algebra structure and
method answer key, parabolas used in life.
FREE ks3 science papers, year 8 online maths test, LCM formula.
TI-84 Conics Application Regents, maple solve nonlinear system of equations, LEAST COMMON MULTIPLE OF 90 AND 20, +Boolean +algebra +exercises.
Worksheet on solving equatuions, Algebra, nonlinear equations maple, ti-85 user guide modulo.
Maths worksheet ks3 (bearing), log ti-83 statistics, convert squares to square footage, difficult algebra problems, calculus "trigonometry cheat sheet".
Free mathematics book, algrebra 2 quizes, help learn algebra, "convert decimal to a fraction", gcse basic statistics.
Help with Complex Trigonomic Identities, inequalities graphing calculator online, mcdougal littell answers course 2, intermedia algebra 7th edition book, glencoe algebra 1.
Proportions with distributive property, combining like term worksheets, simplifying complex radicals, Worlds hardest maths, free exponential function solver, Algebra Problem Solvers for Free, Adding
and Subtracting Positive and Negative numbers worksheet.
TI-85 for dummies, application of equations and inequalities, basic algebra Ti-89, elementary algebra trivia games.
Algebra, worksheets, Expressing fractions in lowest terms KS2 worksheet, programming code for permutation and combination, c#, Holt physics textbook answers, algebra for ged.
"numerical analysis" ti 89-basic, Free Algebra Answers, linear examples involving cookies, solutions of chapter 4 of rudin, holt physics worksheets, a picture of T1-81 calculator.
Algebra solving software, hrw advanced algebra-parabolas, Tussy/Gustafson Elementary and Intermediate Algebra, Final Exam, pre sats question., what is FOIL in maths.
"maths,factorization", advance level maths[question], Dividing with decimals printable worksheet, algebra with Pizzazz answers, online polynom solver, ppt on algebra 1.
TI-92 tutorial pdf, factoring with the ti-83 plus calculator, mathematical peopms about algebra, games for multiplying integers, Least Common Multiple Calculator, Free Elementary Algebra Help,
college algebra formulas.
Online advanced calculator TI-89 emulator, problems solving and applyiig quadratic equation by quadratic formula, math worksheets for 8 yr olds.
Gcd von polynomen ti89, "math type 5.0 equation""download free", free printable algebra pre-test, Trigonomic formula.
Basic maths .ppt, online t1-83 calculator, whats a perfect square trinomal?, Trigonomic Formulas, algebra multiple parenthesis worksheet.
Applied exponential function calculator, merrill algebra 2 with trigonometry applications and connections helper, algebra calulator, factoring quadratic calculator, graphing linear equations and
inequalities word problem pactice, ti calculator rom code, mathcad free download.
Calculas, glencoe algebra 2 test chapter 7, MATHS TEST YR 9, linear regression ti 83 r2, worksheets on combining like terms, y intercept program for ti-84 plus, Online equasion solver.
Ks3 math test, online accounting books, pre-algebra lessons, solve equations with free complex roots calculator.
Free algebra solver on line, permutation combination exam questions, what is decimels, addition method.
Free question papers with answers+java, algebra worksheets 8th grade, how to quadratic equation on TI-85, free gre physics previous years papers, chemical application of group theory.ppt.
"sample word problem" involving bar graphs, How to calculate slope of a triangle, math-associative property, pregrade maths worksheets.
Graphing worksheets for 6th grade, permutation combination tutorial, combinations permutations how to homework help, aptitude Question, parabola and hyperbola graph differences, quadratic equations/
completing the square, examples of the latest mathematical trivia.
Online algebraic calculator, mix numbers, addition rational expressions without factoring denominator worksheet, free download trigonometry book, solving 3rd order polynomial.
Chinese math dallas tx, Hard algebra sums, free worksheets on graphing sin functions, ks2 maths practice paper, algebra 2 an integrated approach, grammer basics worksheets, ineed more free print
Visual basic Trigonometry, algebra helper software, polynom excel, worksheet for math transformations hs, simplifying expression calculator, percentage formulas.
Math answer keys algebra 1 CA, solving inequalities combining like terms, test revision maths yr 9, fraction codes java, Mathematics fundamentals laplace simple tutorials pdf, "calculating
percentages" worksheet, mcdougal littell Algebra 2 chapter test b.
Quadratic formula ti-83 plus program writing, answers to chemistry concepts and applications study guide by Glencoe, university of alberta/ algebra cheat sheet, free printable college algebra math
worksheets, math sats papers.
Free GCSE english practise papers, pre algebra worksheets, algebra equations answers.
Solving linear equation using matrix, FREE printable pre-algebra worksheets, answer exercises java "how to program".
Www.math promblems.com, worksheet on the sum and product of the root, polynomial long division calculator.
How to use the ti-83 plus calculator to do factoring, free linear programing homework problems, calulator for dividing fractions, diamond factoring quadratic function, use an algebra 1 caculator
online, "way to find least common denominator".
What is the difference between evaluation and simplification of an expression?, multiplying decimal practice, new version algebra book ucsmp, interpolation casio 9850, aptitude solved questions,
McDougal Littell/Houghton Mifflin Co. Pre Algebra worksheet, pre calc cheat cheats.
Doing fractions on a calulator, system of quadratic differential equations, rational expressions solver, algebra 2 McDougal Littell answer key to check work, Merrill Geometry "answers".
Bank aptitude question, McDougal Algebra 1 vocabulary, math decimels and percentages, matlab solve nonlinear equations.
Domy algebra homework for me free, solve simultaneous non linear equations, polynomial trinomial calculator, aptitude question downloads, lattice multiplication template, maths+gcse+past papers,
graphing system of liner linear equations.
3rd grade algebra worksheets, free english ks2 past sat papers, "synopsis " "O Level maths".
Dividing polynomials calculator, How to find the cube root on a TI-83, free download of a book which has the chapter of parabola.
Fraction lowest common denominator free worksheet, solving equations with exponents, formula for percentage, clep test pass rate.
Free printable first grade math tests, free ged pretests, 7th grade prealgebra fractions help drill, What is the formula to find the area of an elipse, 6th grade mathematics exercise, advance
mathematic problem solution, simplifying rational algebraic expressions.
Polynomial root finder for ti83, 6 Simultaneous Equations, free Grade 7 study sheets helpers.
Algebra semester review ppt, slop calculator, Free printable math First Grade, GCSE MATH TEST, roots pre-algebra worksheets, holt introductory algebra 1, how to do "step functions" on TI-83.
Calculating eigenvalues with ti 83, Glencoe/McGraw-Hill worksheets 7th grade, factor pairs+free worksheets, solve equation, Factoring trinomials+calculator program, rational exponents, equation
5th and 6th grade, fractions to decimals practice, aptitude question, is there a website that can give me the answer to any intermediate algebra problems i have?, algebra worksheets expanding
KS3 History Exams Download, advanced algebra answers, ti-84 calculator games, teach me algebra.
Hardest algebra equations, grade one downloadable homework sheets, 4 laws of exponets.
Free downloadable sats papers, how do you solve radical exponents, worksheet on the five exponent, linear equation in two variables, saxon algebra teachers addition.
Calculate gcd, how to convert scientific notation to decimal in java source, Arithmatic word Problem, homework answer guide online for saxon math, adding,subtracting,multiplying, dividing mixed
numbers worksheet.
Math trivia, mcdougal littell algebra structure and method quiz, a square with 6 straight lines with 22 open spaces solving poblems, y - intercept math downloadable practice worksheets, entropy and
gibbs free energy powerpoint, symbolic method.
Elementary linear algebra larson solutions download, gcse math formulas in algebra, algebra 2 helpers, adding, subtracting, multiplying and dividing fractions, algebra calculators, solving
polynomials java, mathematics trivia.
"solve system of nonlinear equations"+excel, Prentice Hall +Pre-algebra +PPT, solving for x squared.
Free tricks to solve square, dividing and simplifying in algebra grade 10, advanced algebra, secondry level mathematics, homework help saxon math algebra 2, a system of linear aquations defined,
+calculator +radical.
Graphing systems of equations graphically, least common denominator powerpoint, answers to math algebra questions, math tutorial bittinger algebra software, worksheets algebra two mcdougal littell,
"conversion formula", "square meter to square feet", algebra 2 calculator.
Calculator phoenix game, highest common factor for 100 and 150, online math test cheat calculator, kids Mathamatics to learn, solquiz grade4.
Quadratic equation slope intercept, decimal 3rd grade worksheet, algebra quiz tutor, covert square feet to square meter, test generator algebra, simpified radical math.
Mastering Algebra Hamilton, "online radical calculator", "TI83 ONLINE CALCULATOR", "vector mechanics for engineers" chapter 3 solution, polynomial inequality online calculators.
Kids percentile calculater, solving non linear simultanous equations in matlab, graphing constraint inequalities, eguation grapher, McDougal Littell Integrated 1 Mathematics answer sheet, algebra 2
Algebra homework, free printable measurement conversion table, algebra polynomial calculator.
Elipse in algebra, examples of slope using in equation, pictures of real-life hyperbolas, number problems trivia, 6th grade Math trivia.
Teach yourself mathmatics, nonlinear equations solver applets, EXAM QUESTION SOLVED PAPERS ON COST ACCOUNTING.
"principles of mathematical analysis" rudin solution, permutation and combination, hard math equation, binomial pdf on ti 84, exponent printables elementary.
Free online aptitude online practice question papers, "java +permutation", SUBRACTING PERCENTAGES, algebra software package, graphing inequalities 8th worksheets.
Laplace transform help, I need the answers to New York State Testing Program Mathematics Book 2 Sample Test, rational expressions online calculator, sample physics problems using trig and vectors,
solving for the least square lines, exponents math problems easy, free samples of aptitute test.
Definition of contemporary abstract algebra gallian, elementary algebra prob, Elementary calculas formulas cheat sheet, ONLINE TEST BASIC ALGEBRA.
Two step equation solver, "pre-algebra midterm", free GMAT Papers, cube root, print extra maths homework ks3, algebra+formulaes.
"free kumon exercises", subtracting and adding negative fractions, ti 84 free download games.
KS2 FREE education english programs, McDougal Littell Integrated 1 mathematics answers, ti rom code, math tutors in detroit michigan, simplify radical numbers pythagorean theorem, 6TH GRADE WORKSHEET
IN FINDING PERIMETERS AND AREAS, where are the answers to a quadratic formula and why do you use it?.
Quadradic equations for kids, solvind simple square root equations examples, calculator rom code, t183 graphing calculator online.
Math problem sets printouts 8th grade, aptitude questions.pdf, free intermediate algebra downloads, algebra puzzels, multiplication of fractons worksheet, College Math Problems Examples Answers,
common denominator calculation.
Doing logarithms in a TI 89, applet with polinomial functions in java, Tawnee Stone Hard, sixth class mathematics practical.
Equations in excel, ti 83 plus+quadratic equation, maths exercises for age 11 / india, "angle chart","degrees","radians", www.algebra 1B. com, second order differential equation, online square root
Free college algebra problem solver, 9th grade algebra worksheets, rational exponents worksheet, examples of college algebra beginning, free algebra solver, algebra for dummies slope, radical
expression calculator cubed roots.
Factor calculator, algebra worksheets for children, integer adding worksheet.
Rational expressions calculator, Integer Worksheets, formulas and literal equations worksheet, mathematical analysis+walter rudin+pdf+solution problems, pre algebra definitions, HOW TO SOLVE
Trigonometry word problem, maths exercises for dummies, math trivia with answer, "formula" + "changing" + "celsius" + "farenheit", download free ebook of aptitude test.
Sample problems in algebra, ti-89 rom image, fluid mechanics.ppt, (example of trigonometric games), free past maths papers for keystage 3.
Dividing polynomial solver, integers algebraic expressions worksheet, free SATS revision ebooks.
Calculating percentages math fractions, kumon answer key, Clock problems + Aptitude.
Solving cuberoot equations, grade ten trigonometry help, ti-83 + tricks, contemporary abstract algebra answers, AJweb, Cost accounting books, PASS ALGEBRA ON CALCULATOR.
"completing the square" and "questions", coordinate plane-pictures, how is online exam cheats, phoenix calculator game, holt algebra 1 tests.
Free primary 5 and 6 mathematics worksheet, graphing equations worksheets, solving equations with a denominator, precalculus problem solvers, algebra worksheets polynominals.
Decimal to fraction calculator, draw ellipse "geometers sketchpad", algebra equation solve java shows work, clepping college algebra, basic triginometry, "math lessons/worksheets" changing percents
decimals, math solver quadratic equations factoring method.
Simplfy problems online, hard algebra worksheets, free radical substitution.ppt, lcm gcf game, Graphing "Plotting points" pictures.
Parabola math test, calculator formula conversion casio "equation solving", free maths sats paper, adding and subtracting multiply and divide integers, Algebra soultions Using Boxes.
Free practice worksheets for year 9, kumon answers, logarithm calculator, intermidiate college algebra, ecology unit 10th grade biology vocabulary terms, algebra II problems.
Online second degree polynom solver, Trigonometry questions w answers, math sol test alegebra, nonlinear differential equations matlab, simplifying radicals calculator, prealgebra problems.
Answers for mcdougal geometry: an integrated approach, powerpoint presentation in solving parabola, pre-calculus poems, median and mode 6th grade worksheet, polynom division, Free Algebraic
Expression Calculators. | {"url":"https://softmath.com/math-com-calculator/function-range/mixed-number-percentage.html","timestamp":"2024-11-04T21:12:07Z","content_type":"text/html","content_length":"153035","record_id":"<urn:uuid:c96cd8d1-2de4-4291-b6f7-2bb5e351ecd6>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00266.warc.gz"} |
75.54 hectometers per square second to millimeters per square second
75.54 Hectometers per square second = 7,554,000 Millimeters per square second
Acceleration Converter - Hectometers per square second to millimeters per square second - 75.54 millimeters per square second to hectometers per square second
This conversion of 75.54 hectometers per square second to millimeters per square second has been calculated by multiplying 75.54 hectometers per square second by 100,000 and the result is 7,554,000
millimeters per square second. | {"url":"https://unitconverter.io/hectometers-per-square-second/millimiters-per-square-second/75.54","timestamp":"2024-11-05T13:04:20Z","content_type":"text/html","content_length":"27194","record_id":"<urn:uuid:1818110e-ef35-43f8-8148-b6cd9fc65ee1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00618.warc.gz"} |
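The multiply-by-100,000 rule stated above (1 hm = 100 m = 100,000 mm, while the s² denominator is unchanged) can be sketched as follows; the function name is illustrative, not taken from the page:

```python
def hm_s2_to_mm_s2(value_hm_s2: float) -> float:
    """Convert an acceleration from hectometers per square second
    to millimeters per square second.

    1 hm = 100 m = 100,000 mm, and the time unit (s^2) is the same
    on both sides, so the conversion is a single multiplication.
    """
    return value_hm_s2 * 100_000

# The example from the page: 75.54 hm/s^2
print(round(hm_s2_to_mm_s2(75.54)))  # 7554000
```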
AP Inter 1st Year Maths 1A Formulas PDF Download
Inter 1st Year Maths 1A Formulas PDF: Here we have created a list of Telangana & Andhra Pradesh BIEAP TS AP Intermediate Inter 1st Year Maths 1A Formulas PDF Download just for you. To solve
mathematical problems easily, students should learn and remember the basic formulas based on certain fundamentals such as algebra, arithmetic, and geometry.
Each formula comes with an explanation of how the equation arises, which is a better way of memorizing and applying the Intermediate 1st Year Maths 1A Formulas PDF than rote learning. Math formulas are expressions, refined over decades of study, that help to solve questions quickly.
Students can also go through Intermediate 1st Year Maths 1A Textbook Solutions and Inter 1st Year Maths 1A Important Questions for exam preparation.
AP Intermediate 1st Year Maths 1A Formulas PDF Download
We present you with Inter 1st Year Maths 1A Formulas PDF for your reference to solve all important mathematical operations and questions. Also, each formula here is given with solved examples.
Here students will find Intermediate 1st Year Maths 1A Formulas for each and every topic and also get an idea of how that equation was developed. Thus, you will not have to memorize formulas, as you
understand the concept behind them. Use these Inter 1st Year Maths 1A Formulas to solve problems creatively and you will automatically see an improvement in your mathematical skills.
Leave a Comment | {"url":"https://apboardsolutions.com/inter-1st-year-maths-1a-formulas/","timestamp":"2024-11-05T03:17:31Z","content_type":"text/html","content_length":"68037","record_id":"<urn:uuid:d96c573e-8b85-4dc6-a6e3-28c13de42eb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00074.warc.gz"} |
CORNELL ECON 3120 - Counterfactual Averages
Type Lecture Note
Econ 3120, 1st Edition - Lecture 24

Outline of Last Lecture
I. Dependent Variable Errors

Outline of Current Lecture
II. Counterfactual Averages

Current Lecture

2. We can also define what we don't observe: counterfactual averages.
E[Y0i | T = 1]: average outcome for individuals facing the policy, in the state where they didn't face the policy.
E[Y1i | T = 0]: average outcome for individuals not facing the policy, in the state where they faced the policy.

Now let's return to the regression Yi = β0 + β1·Ti + ui. I showed above that this estimates β1 = E[Yi | T = 1] − E[Yi | T = 0], or in our new notation E[Y1i | T = 1] − E[Y0i | T = 0]. (Note that if I omit the state of the world in the subscript for Yi, you can assume it is the observed state of the world.) But the treatment effect, i.e. the causal effect of the policy on the treated group, is actually E[Y1i | T = 1] − E[Y0i | T = 1]. Thus we have to assume that E[Y0i | T = 1] = E[Y0i | T = 0]. Sometimes I call this the critical assumption of causal inference. It turns out that this is just a more detailed way of expressing the zero conditional mean assumption E[ui | Ti] = 0, i.e. E[ui | T = 0] = E[ui | T = 1] = 0. Why? Because ui represents the actual outcome for the control group and the counterfactual outcome for the treatment group. The easiest way to see this is a situation where β0 = 0, so Yi = β1·Ti + ui. For the control group, ui = Yi: this is the outcome for the control group in the absence of the treatment. For the treatment group, ui = Yi − β1·Ti. But think about the hypothetical situation where the treatment group did not get treated (the counterfactual): using the model, we take the treatment group and set T = 0, yielding ui = Y0i. So the zero conditional mean assumption E[ui | T = 0] = E[ui | T = 1] is the same thing as E[Y0i | T = 0] = E[Y0i | T = 1]. One way to think about the critical assumption is that no unobserved characteristics, in this case the counterfactual, differ between the two groups.

3. When can we assume that E[Y0i | T = 1] = E[Y0i | T = 0]? Let's take a simple example. Suppose we are interested in whether a college scholarship program increases attendance. An organization (e.g., the Gates Foundation) gives out scholarships to high school students, and we measure subsequent college attendance. Does the critical assumption hold? It depends on how the scholarships are targeted. Consider a situation where scholarships are given to the most qualified applicants to the program. In that case, the counterfactual outcome for students who receive the scholarships might be higher than the actual outcome for students who do not receive them. Thus E[Y0i | T = 1] > E[Y0i | T = 0], and hence our estimate of E[Y1i | T = 1] − E[Y0i | T = 0] will be greater than the true effect E[Y1i | T = 1] − E[Y0i | T = 1], and we will have upwardly biased estimates. Exercise: what if scholarships are instead targeted towards disadvantaged applicants?

Finally, let's consider random assignment. Suppose that within a population, scholarships are given to a randomly selected group of students. This would occur, for example, if the program had more qualified applicants than it had scholarships and the scholarships were allocated by lottery. If assignment is random, then for a large enough sample E[Y0i | T = 1] = E[Y0i | T = 0], because on average all characteristics, observable and unobservable, will be the same for both groups.

(These notes represent a detailed interpretation of the professor's lecture. GradeBuddy is best used as a supplement to your own notes, not as a substitute.)
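The scholarship discussion can be illustrated with a small simulation (a sketch; the numbers, seed, and function names are illustrative, not from the notes). Under random assignment the difference in means recovers the true effect; assigning scholarships to the applicants with the highest no-treatment outcome Y0 makes E[Y0 | T = 1] > E[Y0 | T = 0] and biases the estimate upward:

```python
import random

random.seed(0)
N = 100_000
TRUE_EFFECT = 5.0

# Potential outcomes: y0 is the no-treatment outcome, y1 = y0 + effect.
y0 = [random.gauss(50, 10) for _ in range(N)]
y1 = [y + TRUE_EFFECT for y in y0]

def diff_in_means(treated):
    """The regression estimate beta_1 = E[Y1|T=1] - E[Y0|T=0],
    computed from the observed outcomes of each group."""
    t = [y1[i] for i in range(N) if treated[i]]
    c = [y0[i] for i in range(N) if not treated[i]]
    return sum(t) / len(t) - sum(c) / len(c)

# Random assignment: E[Y0|T=1] = E[Y0|T=0], so the estimate is unbiased.
random_t = [random.random() < 0.5 for _ in range(N)]

# Selection on "qualifications": treat the half with the highest Y0,
# so E[Y0|T=1] > E[Y0|T=0] and the estimate is biased upward.
cutoff = sorted(y0)[N // 2]
selected_t = [y >= cutoff for y in y0]

print(round(diff_in_means(random_t), 1))    # close to 5.0
print(round(diff_in_means(selected_t), 1))  # well above 5.0
```

In this simulation the selected-on-Y0 "estimate" comes out several times the true effect of 5, which is exactly the upward bias described above.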
View Full Document | {"url":"https://gradebuddy.com/doc/3104047/counterfactual-averages/","timestamp":"2024-11-10T00:01:05Z","content_type":"application/xhtml+xml","content_length":"84396","record_id":"<urn:uuid:cc75f86b-0be4-4c11-91e8-ab646a04584b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00794.warc.gz"} |
Alessio Savini (Università di Ginevra)
Explicit measurable cocycles for actions at infinity of some semisimple Lie groups
Abstract: Since the formulation of Dupont's conjecture, the importance of understanding the boundedness of characteristic classes appearing in the cohomology ring of a semisimple Lie group has been evident. This problem is deeply related to Monod's conjecture, which relates the continuous bounded cohomology of a semisimple Lie group with its continuous variant. An important step towards a possible proof of those conjectures was the isometric realization of the continuous bounded cohomology of a semisimple Lie group G as the cohomology of the complex of essentially bounded functions on the Furstenberg-Poisson boundary (and more generally for any regular amenable G-space). Surprisingly, Monod has recently proved that the complex of measurable unbounded functions on the same boundary does not compute the continuous cohomology of G unless the rank of the group is not one, but an additional term appears. Nevertheless, there is a way to characterize the defect explicitly in terms of the invariant cohomology of a maximal split torus. In this seminar we will exhibit two main examples of this phenomenon: the product of isometry groups of real hyperbolic spaces, and the group SL3. The first part of the seminar will be devoted to an overview of the state of the art. Then we will move to examples and give a characterization of Monod's kernel in low degree. Finally we will
Bloch-Monod spectral sequence. If time allows, we will show how these computations can be implemented in software such as SageMath. | {"url":"https://site.unibo.it/seminar-algebra-geometry/it/elenco-seminari/savini","timestamp":"2024-11-10T03:10:51Z","content_type":"application/xhtml+xml","content_length":"29342","record_id":"<urn:uuid:e7f9a0e9-0b39-4cec-8d89-b18e7b77df3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00678.warc.gz"}
• One or more equations did not get rendered due to their size.
Instances For
Return syntax Prod.mk elems[0] (Prod.mk elems[1] ... (Prod.mk elems[elems.size - 2] elems[elems.size - 1])))
Equations
Instances For
Return syntax PProd.mk elems[0] (PProd.mk elems[1] ... (PProd.mk elems[elems.size - 2] elems[elems.size - 1])))
Equations
Instances For
Return syntax MProd.mk elems[0] (MProd.mk elems[1] ... (MProd.mk elems[elems.size - 2] elems[elems.size - 1])))
Equations
Instances For
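The three elaborators above all build the same right-nested shape. As a language-agnostic sketch (Python rather than Lean, since only the nesting shape is at issue; `nest_pairs` is an illustrative name, not a mathlib function):

```python
from functools import reduce

def nest_pairs(elems):
    """Right-nest a list into pairs:
    [a, b, c, d] -> (a, (b, (c, d))),
    mirroring Prod.mk elems[0] (Prod.mk elems[1] (... elems[-1]))."""
    if not elems:
        raise ValueError("need at least one element")
    # Fold from the right: the last element is the innermost value.
    return reduce(lambda acc, x: (x, acc), reversed(elems[:-1]), elems[-1])

print(nest_pairs([1, 2, 3, 4]))  # (1, (2, (3, 4)))
```

A single element is returned as-is, matching the base case where no `Prod.mk` is needed.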
Return some if succeeded expanding · notation occurring in the given syntax. Otherwise, return none. Examples:
• · + 1 => fun x => x + 1
• f · · b => fun x1 x2 => f x1 x2 b
Auxiliary function for expanding the · notation. The extra state Array Syntax contains the new binder names. If stx is a ·, we create a fresh identifier, store it in the extra state, and return it.
Otherwise, we just return stx.
Helper method for elaborating terms such as (.+.) where a constant name is expected. This method is usually used to implement tactics that take function names as arguments (e.g., simp).
Elaborator for by_elab.
• One or more equations did not get rendered due to their size.
Instances For | {"url":"https://leanprover-community.github.io/mathlib4_docs/Lean/Elab/BuiltinNotation.html","timestamp":"2024-11-07T03:27:52Z","content_type":"text/html","content_length":"58787","record_id":"<urn:uuid:8d254202-e51d-4e0b-b1ca-fb2efbbe95c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00358.warc.gz"} |
Time and Work Methods shortcut tricks - Math Shortcut Tricks
Time and Work Methods shortcut tricks
Some important facts are needed for Time and Work methods. A piece of work done by one person alone requires more time, since one person's capacity is limited; when two or more people do the same piece of work together, it requires less time and the effort from each is minimum. This type of problem is given in Quantitative Aptitude, which is a very essential paper in banking exams.
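The standard shortcut behind such problems is to add work rates: if A alone finishes in a days and B alone in b days, together they finish the job in ab/(a+b) days. A quick sketch (the numbers are just an illustration):

```python
from fractions import Fraction

def days_together(*days_alone):
    """If each worker alone finishes the job in the given number of days,
    return how many days they need working together.
    Rates add: combined rate = sum(1/d), time = 1 / combined rate."""
    combined_rate = sum(Fraction(1, d) for d in days_alone)
    return 1 / combined_rate

# A finishes in 12 days, B in 24 days -> 1/12 + 1/24 = 1/8 -> 8 days together.
print(days_together(12, 24))  # 8
```

Using `Fraction` keeps the arithmetic exact, which matters because these problems are usually built from unit fractions.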
29 comments
1. midhunvarghese says:
2. K.Vedhanayagi says:
very useful excersice and very thank u
3. diwakar singh patel says:
How can we solve problem with the help.of alligation method. Pleadr provide me tricks
4. sankarshan satapathy says:
give note
5. Huda says:
Thanks for this its really helpful to me
6. mona says:
this website is very different..its amazing and it looks mesmerize me…whoever has made this website ..i really wanna say that-“excellent work”
7. Priya says:
Hey!Ur website was reallly unique!! I like it !
8. Priya says:
Ur website is really unique!!great thinking!
9. bharti says:
10. Savita says:
11. Savita says:
12. Meenu says:
13. Raj Kumar Roy says:
Awesome !!!!! I liked the black board background also !!!!!!!!
14. Balaram Tudu says:
This is very useful tricks for us.
15. Balaram Tudu says:
This is very useful tricks for us.
16. Balaram Tudu says:
very useful tricks
17. malay mondal says:
I want short cut methods for every competitive exam math and reasoning and also books name
18. G Rangaswamy says:
very nice tricks and very well
19. SANTOSH MULE says:
sir very nice tricks and very well thans
20. mounika says:
21. Aabha Singh says:
I have a question: if 2 women and 10 children complete a work in 8 days, then in how many days would 10 children alone complete the work, given that 8 women take 6 days to complete it?
Sir, if you can tell me the trick to solve this…
□ debraj chowdhury says:
8 women finish the work in 6 days, so the whole job takes 8 × 6 = 48 woman-days.
2 women working for 8 days contribute 16 woman-days, i.e. 16/48 = 1/3 of the work.
So the 10 children did the remaining 2/3 of the work in those 8 days.
At that rate, 10 children alone would need 8 × (3/2) = 12 days for the whole job.
□ Vinod Utlapalli says:
very good
□ Abhi says:
Please tell me how to find the answer to this:
A and B can do a work in 8 days. Both worked for 6 days and then A left the work; the remaining work was completed by B in 6 days. How many days will A and B each take to complete the work individually?
☆ Mukesh badtya says:
A and B's 1-day work is 1/8.
Their work in 6 days is 6/8 = 3/4.
Remaining work is 1 − 3/4 = 1/4.
B completed this 1/4 of the work in 6 days,
so B alone completes the work in 24 days.
A's rate is 1/8 − 1/24 = 1/12, so A alone completes the work in 12 days.
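The fractions in this solution can be checked mechanically; this small Python sketch reproduces the 24-day and 12-day answers:

```python
from fractions import Fraction

combined = Fraction(1, 8)       # A and B together do 1/8 of the work per day
done_in_6 = 6 * combined        # 3/4 of the work after 6 days
remaining = 1 - done_in_6       # 1/4 left for B
b_rate = remaining / 6          # B does 1/4 in 6 days -> 1/24 per day
a_rate = combined - b_rate      # 1/8 - 1/24 = 1/12 per day

assert 1 / b_rate == 24         # B alone: 24 days
assert 1 / a_rate == 12         # A alone: 12 days
```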
○ Ponnuswamy says:
Very useful!
○ Sunil baba says:
24 men can do a work in 16 days, and 32 women can do the same work in 24 days. 16 men and 16 women work together for 12 days; how many men are needed to finish the remaining work in 2 days?
○ Hariprasad Roy says:
○ GURSIMRAN SINGH says:
VERY NICE
Leave a Reply
If you have any questions or suggestions then please feel free to ask us. You can either comment in the section above or mail us at our e-mail id. One of our expert team members will
respond to your question within 24 hours. You can also like our Facebook page to get updates on new topics and participate in online quizzes.
All content on this website is fully owned by Math-Shortcut-Tricks.com. Republishing its content by any means is strictly prohibited. We at Math-Shortcut-Tricks.com always try to
publish accurate data, but a few pages of the site may contain some incorrect data. We do not assume any liability or responsibility for any errors or mistakes on those pages.
Visitors are requested to check the correctness of a page on their own.
© 2024 Math-Shortcut-Tricks.com | All Rights Reserved. | {"url":"https://www.math-shortcut-tricks.com/time-and-work-methods/","timestamp":"2024-11-06T17:52:24Z","content_type":"text/html","content_length":"247120","record_id":"<urn:uuid:58c85093-4c9f-4f9d-a064-446ea0f6d81c>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00818.warc.gz"} |
Diego Corro
Dr. Diego Corro
Project leader
43Singular Riemannian foliations and collapse
Publications within SPP2026
We prove that the group of isometries preserving a metric foliation on a closed Alexandrov space \(X\), or a singular Riemannian foliation on a manifold \(M\) is a closed subgroup of the isometry
group of \(X\) in the case of a metric foliation, or of the isometry group of \(M\) for the case of a singular Riemannian foliation. We obtain a sharp upper bound for the dimension of these subgroups
and show that, when equality holds, the foliations that realize this upper bound are induced by fiber bundles whose fibers are round spheres or projective spaces. Moreover, singular Riemannian
foliations that realize the upper bound are induced by smooth fiber bundles whose fibers are round spheres or projective spaces.
Related project(s):
43Singular Riemannian foliations and collapse
We study \(\mathsf{RCD}\)-spaces \((X,d,\mathfrak{m})\) with group actions by isometries preserving the reference measure \(\mathfrak{m}\) and whose orbit space has dimension one, i.e. cohomogeneity
one actions. To this end we prove a Slice Theorem asserting that each slice at a point is homeomorphic to a non-negatively curved \(\mathsf{RCD}\)-space. Under the assumption that \(X\) is
non-collapsed we further show that the slices are homeomorphic to metric cones over homogeneous spaces with \(\mathrm{Ric}\geq 0\). As a consequence we obtain complete topological structural results
and a principal orbit representation theorem. Conversely, we show how to construct new \(\mathsf{RCD}\)-spaces from a cohomogeneity one group diagram, giving a complete description of \(\mathsf{RCD}\)-spaces of cohomogeneity one. As an application of these results we obtain the classification of cohomogeneity one, non-collapsed \(\mathsf{RCD}\)-spaces of essential dimension at most 4.
Related project(s):
43Singular Riemannian foliations and collapse
We present how to collapse a manifold equipped with a closed flat regular Riemannian foliation with leaves of positive dimension on a compact manifold, while keeping the sectional curvature uniformly
bounded from above and below. From this deformation, we show that a closed flat regular Riemannian foliation with leaves of positive dimension on a compact simply-connected manifold is given by torus
actions. This gives a geometric characterization of aspherical regular Riemannian foliations given by torus actions.
Related project(s):
43Singular Riemannian foliations and collapse
A singular foliation \(\mathcal{F}\) on a complete Riemannian manifold \(M\) is called a singular Riemannian foliation (SRF for short) if its leaves are locally equidistant, e.g., the partition of \(M\) into orbits of an isometric action. In this paper, we investigate variational problems on compact Riemannian manifolds equipped with SRFs with special properties, e.g. isoparametric foliations, SRFs on fiber bundles with the Sasaki metric, and orbit-like foliations. More precisely, we prove two results analogous to Palais' Principle of Symmetric Criticality: one is a general principle for \(\mathcal{F}\)-symmetric operators on the Hilbert space \(W^{1,2}(M)\), the other one is for \(\mathcal{F}\)-symmetric integral operators on the Banach spaces \(W^{1,p}(M)\). These results, together with an \(\mathcal{F}\) version of the Rellich-Kondrachov-Hebey-Vaugon Embedding Theorem, allow us to circumvent difficulties with Sobolev's critical exponents when considering applications of Calculus of Variations to find solutions to PDEs. To exemplify this we prove the existence of weak solutions to a class of variational problems which includes \(p\)-Kirchhoff problems.
Related project(s):
43Singular Riemannian foliations and collapse
We show that a singular Riemannian foliation of codimension two on a compact simply-connected Riemannian (n+2)-manifold, with regular leaves homeomorphic to the n-torus, is given by a smooth
effective n-torus action. This solves in the negative, for the codimension 2 case, a question about the existence of foliations by exotic tori on simply-connected manifolds.
Journal Mathematische Zeitschrift
Publisher Springer
Volume 304
Link to preprint version
Link to published version
Related project(s):
43Singular Riemannian foliations and collapse
Using variational methods together with symmetries given by singular Riemannian foliations with positive dimensional leaves, we prove the existence of an infinite number of sign-changing solutions to
Yamabe type problems, which are constant along the leaves of the foliation, and one positive solution of minimal energy among any other solution with these symmetries. In particular, we find
sign-changing solutions to the Yamabe problem on the round sphere with new qualitative behavior when compared to previous results, that is, these solutions are constant along the leaves of a singular
Riemannian foliation which is not induced neither by a group action nor by an isoparametric function. To prove the existence of these solutions, we prove a Sobolev embedding theorem for general
singular Riemannian foliations, and a Principle of Symmetric Criticality for the associated energy functional to a Yamabe type problem.
Journal Calculus of Variations and Partial Differential Equations
Link to preprint version
Link to published version
Related project(s):
43Singular Riemannian foliations and collapse
We expand upon the notion of a pre-section for a singular Riemannian foliation \((M,\mathcal{F})\), i.e. a proper submanifold \(N\subset M\) retaining all the transverse geometry of the foliation.
This generalization of a polar foliation provides a similar reduction, allowing one to recognize certain geometric or topological properties of \((M,\mathcal{F})\) and the leaf space \(M/\mathcal{F}\). In particular, we show that if a foliated manifold \(M\) has positive sectional curvature and contains a non-trivial pre-section, then the leaf space \(M/\mathcal{F}\) has nonempty boundary. We recover as corollaries the known result for the special case of polar foliations as well as the well-known analogue for isometric group actions.
Journal Ann. Global Anal. Geom.
Link to preprint version
Link to published version
Related project(s):
43Singular Riemannian foliations and collapse | {"url":"https://www.spp2026.de/members-guests/43-member-pages/diego-corro-tapia","timestamp":"2024-11-07T09:28:35Z","content_type":"text/html","content_length":"37954","record_id":"<urn:uuid:3ef02740-8617-49ee-ad1e-7ef747f6877e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00114.warc.gz"} |
[Solved] Sexton Corporation has projected the foll | SolutionInn
Sexton Corporation has projected the following sales for the coming year:
Sales in the year following this one are projected to be 20 percent greater in each quarter.
Calculate payments to suppliers assuming that the company places orders during each quarter equal to 20 percent of projected sales for the next quarter. Assume that the company pays immediately.
a. What is the payables period in this case?
Note: Do not round intermediate calculations and round your answer to the nearest whole number, e.g., 32.
Payables period
What are the payments to suppliers each quarter?
Note: Do not round intermediate calculations and round your answers to 2 decimal places, e.g., 32.16.
b. Calculate payments to suppliers assuming that the company places orders during each quarter equal to 20 percent of projected sales for the next quarter. Assume a 90-day payables period.
Note: Do not round intermediate calculations and round your answers to 2 decimal places, e.g., 32.16.
c. Calculate payments to suppliers assuming that the company places orders during each quarter equal to 20 percent of projected sales for the next quarter. Assume a 60-day payables period.
Note: Do not round intermediate calculations and round your answers to 2 decimal places, e.g., 32.16.
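The quarterly sales figures from the original table were not preserved in this extract, so the sketch below uses illustrative numbers. It shows the ordering rule (orders each quarter equal 20 percent of the next quarter's projected sales) and how the payment schedule shifts with the payables period; with immediate payment the payables period is 0 days:

```python
# Illustrative quarterly sales (the question's actual figures are not shown here).
sales = [800, 860, 940, 1000]           # Q1..Q4, in thousands
next_year_growth = 0.20                  # each quarter 20% higher next year
q1_next_year = sales[0] * (1 + next_year_growth)
projected = sales + [q1_next_year]       # Q1..Q4 plus next year's Q1

# Orders each quarter are 20% of the NEXT quarter's projected sales.
orders = [0.20 * projected[q + 1] for q in range(4)]

# Immediate payment (payables period = 0 days): pay in the ordering quarter.
payments_immediate = orders

# 90-day payables period with 90-day quarters: pay one quarter after ordering.
# Q1's payment would come from last year's Q4 order, which is unknown here.
payments_90 = [None] + orders[:-1]

assert payments_immediate[0] == 0.20 * 860   # Q1 pays for Q1's order at once
assert payments_90[1] == 0.20 * 860          # Q1's order is paid in Q2
```

The 60-day case works the same way, except a fraction of each quarter's order is paid in the ordering quarter and the rest in the following one.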
Get Started | {"url":"https://www.solutioninn.com/study-help/questions/sexton-corporation-has-projected-the-following-sales-for-the-coming-9140742","timestamp":"2024-11-11T00:43:38Z","content_type":"text/html","content_length":"101168","record_id":"<urn:uuid:5f9bb79f-45db-4c52-8b71-2dcea301b0a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00842.warc.gz"} |
$38.46 an Hour Is How Much a Year? How to Look at an Hourly Wage in Yearly Form
If you earn $38.46 per hour, you will naturally want to know how much that is per year. We will break it down in plain language so you know what $38.46 an hour comes to annually.
Hourly Rate to Annual Salary: If You Make $38.46 an Hour, How Much Is That a Year?
Let's look at what you would make in one year at $38.46 an hour. Assuming a typical workweek of 40 hours and working all 52 weeks of the year, the calculation is as follows:
Annual Salary = Hourly wage × Hours per week × Weeks per year
Now, calculate step by step:
1. Weekly income: $38.46 × 40 hours = $1,538.40
2. Yearly income: $1,538.40 × 52 weeks = $79,996.80
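The two steps above can be wrapped in a small Python helper (a sketch of the article's formula, not production payroll code):

```python
def annual_salary(hourly_wage, hours_per_week=40, weeks_per_year=52):
    """Gross annual pay before taxes and deductions."""
    return hourly_wage * hours_per_week * weeks_per_year

weekly = 38.46 * 40                       # $1,538.40 per week
assert round(weekly, 2) == 1538.40
assert round(annual_salary(38.46), 2) == 79996.80
```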
Determination of Annual Income from an Hourly Wage
So if you make $38.46 an hour, your yearly salary before tax and deductions would be about $79,996.80. Gross income is your total earnings before any taxes or deductions are removed.
Hourly Wages to Annual Salary
Keep in mind that the gross income figure is before taxes (federal, state, and local if applicable) and other deductions such as healthcare premiums and retirement contributions are taken from your earnings, so your take-home pay will be less than this amount.
Being able to convert $38.46 an hour into a yearly figure is highly useful when you need that information. Whether you are trying to budget, plan for the future, or are simply curious about how much money is going into your bank account, this calculation can help.
So, $38.46 an hour is how much a year? The next time you wonder how an hourly rate translates to an annual figure, use this simple formula to get a good ballpark estimate of the annual salary corresponding to that hourly rate. Educating yourself about your finances lets you make strategic decisions about your life and money.
How Does $38.46 an Hour Translate Per Year?
At $38.46 an hour, with a 40-hour workweek and 52 working weeks in a year, that comes to an annual wage of roughly $79,996.80.
What Is the Annual Salary of $38.46 Per Hour?
To get your total annual salary when paid $38.46 per hour, multiply it by 40 hours a week and then by 52 weeks in a year.
Does the Annual Figure for $38.46 an Hour Include Overtime Pay?
No. The calculation is based on a standard 40-hour week with no overtime. Overtime pay would have to be added to annual wages separately.
Does the $79,996.80-Per-Year Estimate for $38.46 an Hour Include Taxes and Deductions?
No. $79,996.80 is gross income, not take-home pay. After adjusting for taxes, healthcare premiums, retirement contributions, and other deductions, the net take-home income is considerably lower.
How Can Annual Salaries Differ for the Same $38.46 Hourly Wage?
Hourly wages can also vary by industry, region, experience level, and employer policies. This can result from a number of factors that will influence the annual salary calculation based on $38.46 per | {"url":"https://zoominks.com/38-46-an-hour-is-how-much-a-year/","timestamp":"2024-11-08T12:11:38Z","content_type":"text/html","content_length":"69086","record_id":"<urn:uuid:988f4a2d-3d28-4268-9dbc-ef2f69a7f41b>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00194.warc.gz"} |
Fermi Liquid Properties of Dirac Materials
Gochan, Matthew. “Fermi Liquid Properties of Dirac Materials”, Boston College, 2020. http://hdl.handle.net/2345/bc-ir:108726.
One of the many achievements of renowned physicist L.D. Landau was the formulation of Fermi Liquid Theory (FLT). Originally debuted in the 1950s, FLT has seen abundant success in understanding
degenerate Fermi systems and is still used today when trying to understand the physics of a new interacting Fermi system. Of its many advantages, FLT excels in explaining why interacting Fermi
systems behave like their non-interacting counterparts, and understanding transport phenomena without cumbersome and confusing mathematics. In this work, FLT is applied to systems whose low energy
excitations obey the massless Dirac equation; i.e., the energy dispersion is linear in momentum, ε ∝ p, as opposed to the normal quadratic dispersion, ε ∝ p². Such behavior is seen in numerous, seemingly unrelated, materials including graphene, high-$T_c$ superconductors, Weyl semimetals, etc. While each of these materials possesses its own unique properties, it is their low energy behavior
that provides the justification for their grouping into one family of materials called Dirac materials (DM). As will be shown, the linear spectrum and massless behavior leads to profound differences
from the normal Fermi liquid behavior in both equilibrium and transport phenomena. For example, with mass having no meaning, we see the usual effective mass relation from FLT being replaced by an
effective velocity ratio. Additionally, as FLT in d=2 has been poorly studied in the past, and since the most famous DM in graphene is a d=2 system, a thorough analysis of FLT in d=2 is presented.
This reduced dimensionality leads to substantial differences including undamped collective modes and altered quasiparticle lifetime. In chapter 3, we apply the Virial theorem to DM and obtain an
expression for the total average ground state energy $E=\frac{B}{r_s}$ where $B$ is a constant independent of density and $r_s$ is a dimensionless parameter related to the density of the system: the
interparticle spacing $r$ is related to $r_s$ through $r=ar_s$ where $a$ is a characteristic length of the system (for example, in graphene, $a=1.42$ \AA). The expression derived for $E$ is unusual in
that it's typically impossible to obtain a closed form for the energy with all interactions included. Additionally, the result allows for easy calculation of various thermodynamic quantities such as
the compressibility and chemical potential. From there, we use the Fermi liquid results from the previous chapter and obtain an expression for $B$ in terms of constants and Fermi liquid parameters
$F_0^s$ and $F_1^s$. When combined with experimental results for the compressibility, we find that the Fermi liquid parameters are density independent implying a unitary like behavior for DM. In
chapter 4, we discuss the alleged universal KSS lower bound in DM. The bound, $\frac{\eta}{s}\geq\frac{\hbar}{4\pi k_B}$, was derived from high energy/string theory considerations and was conjectured
to be obeyed by all quantum liquids regardless of density. The bound provides information on the interactions in the quantum liquid being studied and equality indicates a nearly perfect quantum
fluid. Since its birth, the bound has been highly studied in various systems, mathematically broken, and poorly experimented on due to the difficult nature of measuring viscosity. First, we provide
the first physical example of violation by showing $\frac{\eta}{s}\rightarrow 0$ as $T\rightarrow T_c$ in a unitary Fermi gas. Next, we determine the bound in DM in d=2,3 and show unusual behavior
that isn't seen when the bound is calculated for normal Fermi systems. Finally we conclude in chapter 5 and discuss the outlook and other avenues to explore in DM. Specifically, it must be pointed
out that the physics of what happens near charge neutrality in DM is still poorly understood. Our work in understanding the Fermi liquid state in DM is necessary in understanding DM as a whole. Such
a task is crucial when we consider the potential in DM, experimentally, technologically, and purely for our understanding.
My Bookmarks | {"url":"https://dlib.bc.edu/islandora/object/bc-ir:108726","timestamp":"2024-11-02T21:36:18Z","content_type":"text/html","content_length":"24334","record_id":"<urn:uuid:2e1d50c3-21c6-4a90-9ce7-4049ddc4a67f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00292.warc.gz"} |
Knowing Linear Regression better!! | Towards AI
Last Updated on July 21, 2023 by Editorial Team
Originally published on Towards AI.
OLS and Gradient Descent
Regression Introduction
Regression is an algorithm of the supervised learning family. When the output (dependent) feature is continuous and labeled, we apply a regression algorithm. Regression is used to find the relation, or equation, between the independent variables and the output variable. E.g., below we have variables x₁, x₂, …, xₙ which contribute towards the output variable y. We have to find a relation between the x variables and the dependent variable y. So the equation is defined as shown below, e.g.:
y = F(x₁, x₂, …, xₙ)
y = 5x₁ + 8x₂ + … + 12xₙ
To see more clearly how the relation between independent and dependent features is found, we will reduce the problem to a single independent feature here.
Linear Regression
Image by Author
Assume that we want to calculate salary based on the input variable experience. So experience becomes our independent feature x, and salary becomes our dependent feature y.
Consider fig(1) in the above image, where the value of y increases linearly with an increase in the value of x. It can be concluded that the dependency is linearly proportional. So the graph obtained is a straight line, and the relation between x and y is given by y = 2x + 4.
Consider fig(2) in the above image, where the value of y is not proportional to the increasing values of x. It is visible from fig(2) that the value of y is irregular with respect to x. This type of regression is known as non-linear regression.
With the above discussion, we must understand that life is not always going to be so smooth. Consider fig(1) in the below image, where the points are spread across the plane. Is it possible to find a curve, and its equation, that passes through all the points in the plane? Yes, practically it's possible, as visible from fig(2) in the below image. But is it desirable? No: it's clearly visible from fig(2) that we are overfitting the data, and this will compromise the model's performance in terms of accuracy.
Image by Author
How do we avoid overfitting while still finding a line that represents all the points? We select the best fit line such that the total distance of all the given points from the line is minimum, and we then find the relation between the output and input features by minimizing this error.
Image by Author
We plot all the points (x₁, y₁), (x₂, y₂), …, (xₙ, yₙ) and calculate the error for the various candidate lines considered for the best fit line. E.g., in the below image, we have a point (x₁, y₁) whose squared distance from line (1) is given by (Y₁ − Ŷ)². Similarly, we calculate the distance of the second point (x₂, y₂) from line (1) using the formula (Y₂ − Ŷ)². In this way, we obtain the error between the actual and predicted result for each point from line (1) and then sum up the errors. Similarly, we calculate the sum of the errors of all points from line (2), line (3), and so on. As our main goal is to minimize the error, we choose the best fit line as the one for which the points lie at minimum total distance.
Once the line is obtained, we have the equation of that line, which is denoted as Y = θ₀ + θ₁·X
Image by Author
As we can observe from the above image, the main goal is to minimize the sum of squared errors between the actual and predicted values of the dependent variable. This equation is known as the loss function. Minimizing it, in turn, optimizes the values of θ₀ and θ₁, which results in the best fit line having minimum total distance from all the given points in space.
OLS Method
The OLS (Ordinary Least Squares) method is applied to find the values of the parameters of the linear regression model. We derived the above equation Y = θ₀ + θ₁·X.
Image by Author
Revision of Maxima and Minima
Before we move ahead, let's understand the concept of maxima and minima from engineering mathematics. Points 1, 2, and 3 indicate where the slope changes its direction, either upward or downward. If you observe the below figure, a tangent drawn at points 1, 2, and 3 appears parallel to the X-axis, so the slope at these points is 0. Points at which the slope becomes zero are known as stationary points. Also, point 1 in the below figure, where the slope's value starts increasing, is a minimum (here the global minimum), and vice-versa: point 2 is a maximum (here the global maximum).
Image by Author
From the above concept, we can conclude that we can find the stationary points of any given equation by equating its derivative to zero, i.e., making the slope at those points equal to zero. By solving the first derivative, we find the stationary points of the equation. But we still need to determine whether a stationary point is a maximum or a minimum.
Taking the second derivative, if its value is greater than zero, the point is a minimum, and if its value is less than zero, the point is a maximum. In this way, we figure out the maxima and minima.
We take the below example to understand the above rules about maxima and minima.
Image by Author
From all the above discussion, we can observe that the minimum is achieved when the graph is a convex curve. Hence we can conclude that our goal must be to attain convexity of the loss function.
Optimization of θ₀ and θ₁
So we have the problem of optimizing the values of θ₀ and θ₁ such that the loss function attains its minimum. As we learned from the maxima and minima concept, we focus on obtaining the stationary points of the given loss function, as shown below. Using that, we get two equations, as shown in the below image. Looking at both equations, we know that Σxᵢ, Σyᵢ, Σxᵢyᵢ, and Σxᵢ² can be obtained from the given data, which consists of the independent variable experience (x) and the dependent variable salary (y).
Image by Author
Image by Author
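Setting both partial derivatives to zero yields a pair of linear equations in θ₀ and θ₁ (the normal equations), which can be solved directly. A minimal NumPy sketch with made-up data (an illustration, not code from the article):

```python
import numpy as np

# Toy data lying exactly on y = 4 + 2x, so the fit should recover (4, 2).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 4 + 2 * x

n = len(x)
# Normal equations from dJ/dtheta0 = 0 and dJ/dtheta1 = 0:
#   sum(y)   = n * theta0      + theta1 * sum(x)
#   sum(x*y) = theta0 * sum(x) + theta1 * sum(x**2)
A = np.array([[n, x.sum()], [x.sum(), (x ** 2).sum()]])
b = np.array([y.sum(), (x * y).sum()])
theta0, theta1 = np.linalg.solve(A, b)

assert abs(theta0 - 4) < 1e-9 and abs(theta1 - 2) < 1e-9
```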
To take it a bit further, we can take the second derivative with respect to θ₀ and θ₁ and form the Hessian matrix given in the below images.
Image by Author
Image by Author
Image by Author
Method to find out the convexity using the matrix
Although we found the values of θ₀ and θ₁ at the stationary point, we now need to confirm whether the stationary point corresponds to a convex or concave function. To confirm the convexity of a matrix, we have the method given below:
Leading Principal Minor
If the matrix that corresponds to a principal minor is a determinant of the quadratic upper-left part of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then
the principal minor is called a leading principal minor (of order k).
The kth order leading principal minor is the determinant of the kth order principal submatrix formed by deleting the last n − k rows and columns.
Image by Author
The leading principal minors of A are the submatrices formed from the first r rows and r columns of A for r = 1, 2, …, n.
These submatrices are
Image by Author
A Symmetric matrix A is positive definite if and only if every leading principal minor's determinant is positive.
Example to understand the Principal Minor
Given below Matrix A and we form the Principal minors out of it. Once that is done, we take the value of the determinant.
If the value of Determinant of Principal Minors is greater than zero for all, then it's called positive definite (e.g., 2,3,1).
If the value of the determinant of the principal minors is greater than or equal to zero for all, then it's called positive semidefinite (e.g., 2, 0, 1).
If the value of the determinant of the principal minors is less than zero for all, then it's called negative definite (e.g., -2, -3, -1).
If the value of the determinant of the principal minors is less than or equal to zero for all, then it's called negative semidefinite (e.g., -2, 0, -1).
Image by Author
As is visible from the above image, the principal minor determinant values are greater than zero, so the matrix is positive definite.
Convexity of function
For a given matrix A,
A is convex ⟺ A is positive semidefinite.
A is concave ⟺ A is negative semidefinite.
A is strictly convex ⟺ A is positive definite.
A is strictly concave ⟺ A is negative definite.
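The positive-definite case of Sylvester's criterion (every leading principal minor has positive determinant) is easy to check numerically. A small sketch with illustrative matrices (not from the article):

```python
import numpy as np

def is_positive_definite(A):
    """Sylvester's criterion: every leading principal minor has positive determinant."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # leading minors: 2 and 3 -> positive definite
B = np.array([[1.0, 2.0], [2.0, 1.0]])     # leading minors: 1 and -3 -> not
assert is_positive_definite(A)
assert not is_positive_definite(B)
```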
Limitations of OLS
• If we observe the OLS method, there is a chance of obtaining a misleading stationary point, i.e., there might be two minima for the same equation: one of them a local minimum and the other the global minimum.
• Also, there isn't any fine-tuning parameter available to control the performance of the model, so we don't have control over its accuracy. As we are aiming to build an optimized system, we must be able to control the model's efficiency.
Gradient Descent Method
To overcome the limitations of the OLS method, we introduce another method for optimizing the values of θ₀ and θ₁, known as the gradient descent method. Here we initialize θ₀ and θ₁ to some random values and calculate the total error of the model output. If the error is not within the permissible limits, we update θ₀ and θ₁ using the learning parameter α.
Image by Author
If we get a higher error in the model output in the present iteration compared to the previous iteration, then we might have already achieved convergence in the previous iteration. Otherwise, we have the option of updating the value of α and validating the output error again. Here we take the derivatives of the error function J(θ₀, θ₁) with respect to θ₀ and θ₁ and use those values to obtain the new values of θ₀ and θ₁ in the next iteration.
Image by Author
Image by Author
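The update loop described above can be sketched in a few lines of Python (illustrative data and learning rate, not from the article):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.05, iters=5000):
    theta0, theta1 = 0.0, 0.0
    n = len(x)
    for _ in range(iters):
        pred = theta0 + theta1 * x
        # Partial derivatives of the mean squared error J(theta0, theta1)
        d0 = (2 / n) * np.sum(pred - y)
        d1 = (2 / n) * np.sum((pred - y) * x)
        theta0 -= alpha * d0
        theta1 -= alpha * d1
    return theta0, theta1

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 4 + 2 * x                      # data on the line y = 4 + 2x
t0, t1 = gradient_descent(x, y)
assert abs(t0 - 4) < 1e-3 and abs(t1 - 2) < 1e-3
```

If α is too large the updates diverge, and if it is too small convergence is slow; this is the fine-tuning control that OLS lacks.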
This was all about the mathematical intuition behind linear regression, which helps us understand the concept better. The methods used to perform linear regression are readily available in the Python packages.
Thanks for reading out the article!!
I hope this helps you with a better understanding of the topic.
Published via Towards AI | {"url":"https://towardsai.net/p/l/knowing-linear-regression-better","timestamp":"2024-11-12T18:42:47Z","content_type":"text/html","content_length":"311530","record_id":"<urn:uuid:0b844fda-4790-4b46-aac0-4ef3ebb7958c>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00448.warc.gz"} |