- IEEE Trans. Commun., 1997. Cited by 89 (8 self).
For wireless communication systems, iterative power control algorithms have been proposed to minimize transmitter powers while maintaining reliable communication between mobiles and base stations. To
derive deterministic convergence results, these algorithms require perfect measurements of one or more of the following parameters: (i) the mobile's signal to interference ratio (SIR) at the
receiver, (ii) the interference experienced by the mobile, and (iii) the bit error rate. However, these quantities are often difficult to measure and deterministic convergence results neglect the
effect of stochastic measurements. In this work, we develop distributed iterative power control algorithms that use readily available measurements. Two classes of power control algorithms are
proposed. Since the measurements are random, the proposed algorithms evolve stochastically and we define the convergence in terms of the mean squared error (MSE) of the power vector from the optimal
power vector that is t...
- Wireless Networks, 1998. Cited by 37 (4 self).
Power control algorithms assume that the receiver structure is fixed and iteratively update the transmit powers of the users to provide acceptable quality of service while minimizing the total
transmitter power. Multiuser detection, on the other hand, optimizes the receiver structure with the assumption that the users have fixed transmitter powers. In this study, we combine the two
approaches and propose an iterative and distributed power control algorithm which iteratively updates the transmitter powers and receiver filter coefficients of the users. We show that the algorithm
converges to a minimum power solution for the powers, and an MMSE multiuser detector for the filter coefficients. 1.
- 2008. Cited by 15 (0 self).
Distributed power control is an important issue in wireless networks. Recently, noncooperative game theory has been applied to investigate interesting solutions to this problem. The majority of these
studies assumes that the transmitter power level can take values in a continuous domain. However, recent trends such as the GSM standard and Qualcomm’s proposal to the IS-95 standard use a finite
number of discretized power levels. This motivates the need to investigate solutions for distributed discrete power control which is the primary objective of this paper. We first note that, by simply
discretizing, the previously proposed continuous power adaptation techniques will not suffice. This is because a simple discretization does not guarantee convergence and uniqueness. We propose two
probabilistic power adaptation algorithms and analyze their theoretical properties along with the numerical behavior. The distributed discrete power control problem is formulated as an n-person, nonzero-sum game. In this game, each user evaluates a power strategy by computing a utility value. This evaluation is performed using a stochastic iterative procedure. We approximate the discrete
power control iterations by an equivalent ordinary differential equation to prove that the proposed stochastic learning power control algorithm converges to a stable Nash equilibrium. Conditions when
more than one stable Nash equilibrium or even only mixed equilibrium may exist are also studied. Experimental results are presented for several cases and compared with the continuous power level
adaptation solutions.
- In Proc. WiOpt'03, INRIA Sophia Antipolis, 2003. Cited by 2 (1 self).
This paper proposes a new scheme that couples power control with a minimum outage probability multiuser detector. The resultant iterative algorithm is conceptually simple and finds the minimum sum
power of all users with a set of outage probability constraints. Bounds on the outage probability expression are found that extend a previous result that did not include receiver noise. These bounds
are used to create a sub-optimal scheme coupling power control and a MMSE multiuser detector. This new problem becomes a variant of an existing problem where outage probability constraints are first
mapped to average SIR threshold constraints. Using a recent result that transforms complex SIR expressions into a compact and decoupled form, a non-iterative and computationally inexpensive power
control algorithm is developed for large systems of users. Simulation results are presented showing the closeness of the optimal and mapped schemes, speed of convergence and performance comparisons
with other standard receivers.
Predict Resonances Of Shielded PCBs
The equations presented here make it possible to predict and analyze the resonant behavior of microwave circuits enclosed in rectangular shields.
Microwave circuits are generally enclosed in rectangular shields before integration into a larger system. Unfortunately, when the shield cover goes on, it can cause unexpected results, such as the
oscillation of "unconditionally stable" amplifiers, an increase in transmission-line losses, and unwanted coupling. Essentially, the presence of the shielded enclosure can throw off all those
advanced computer-aided-engineering (CAE) predictions. And, because it is late in the design cycle, the only recourse may be the addition of RF absorbers and gasket material to the enclosure. But the
effects of a shielded enclosure in high-frequency printed-circuit boards (PCBs) can be minimized by properly predicting the frequency, location, and nature of these enclosure-induced resonant modes.
A rectangular shield can be considered a rectangular waveguide with two of its open sides enclosed by a conducting wall. To better understand the behavior of the resonant modes in a rectangular
cavity, it might make sense to review some of the fundamental relationships of rectangular waveguide theory.
A rectangular waveguide can be considered a hollow rectangular tube that supports the propagation of electromagnetic (EM) waves. Figure 1 shows a rectangular waveguide with dimensions a, b, and l.
Note that a > b. The two types of EM waves supported in a rectangular waveguide are the transverse-electric (TE) waves and transverse-magnetic (TM) waves. TE waves do not contain an electric-field (E-field) component in the direction of propagation, while TM waves do not contain a magnetic-field (H-field) component in the direction of propagation.
A simple way to understand how an EM wave can propagate in a rectangular waveguide can be deduced starting with the transmission line model of Fig. 2. It shows a two-wire transmission line with
quarter-wave shorted stubs attached across it. The shorted stubs have no effect on the propagation of a signal on the two-wire line (at the quarter-wave frequency). If quarter-wave shorted stubs were
added with infinitesimally small spacing between them, the structure would assume the behavior of a rectangular waveguide transmission line (Fig. 3).
In Fig. 3, the larger cross-sectional dimension is one-half wavelength while the shorter dimension is the spacing of the original two-wire line. This configuration is the smallest cross-section that
can be used to efficiently propagate a signal of a given wavelength.
If the wavelength of the signal is larger in comparison with the cross-sectional dimensions of the line, the signal will be significantly attenuated as it propagates down the waveguide. If the
wavelength of the signal is shorter in comparison with the cross-sectional dimensions of the line, then other modes of propagation may occur.^3 These conditions can be modeled as the superposition of
two plane waves reflecting and re-reflecting down the line. The plane waves set up different mode patterns and propagation characteristics, which have been reproduced in equation form (See ref. 1)
below for both TE and TM waves.^1
For TM waves, the E- and H-fields as a function of position along the waveguide are given by Eqs. 1-5, where a and b are the waveguide dimensions as oriented in Fig. 1 and the propagation constant is given by Eq. 6.
Both m and n are integers starting at zero, and define a possible transverse mode commonly referred to as a TM[mn ]mode. The first subscript denotes the number of half-cycle variations of the fields
in the x-direction, and the second subscript denotes the number of half-cycle variations of the fields in the y-direction. It is evident that there are infinite modes that can exist based on the
dimensions of the waveguide.
Similar expressions for TE waves are given by Eqs. 8 through 14.
Both m and n are integers starting at zero, and define a possible transverse mode commonly referred to as a TE[mn ]mode. The first subscript denotes the number of half-cycle variations of the fields
in the x-direction, and the second subscript denotes the number of half-cycle variations of the fields in the y-direction.
Of practical importance is the lowest order mode, known as the dominant mode. For TM modes in a rectangular waveguide, neither m nor n can be zero due to the sine dependency of both the E- and
H-fields; therefore, the dominant mode will be the TM[11 ]mode.
For TE modes, either m or n can be zero, but both cannot be zero otherwise the E- and H-fields become zero as can be seen from Eqs. 9 to 12. Therefore, the lowest TE mode is the TE[10 ]mode for the
case b < a shown in Fig. 1. It should be noted that if a < b, then the dominant mode would be the TE[01 ]mode.
As deduced from the two-wire transmission line model in Fig. 2, a rectangular waveguide has a highpass frequency response determined by its dimensions. When the propagation constant is real, this
corresponds to a propagating wave. When the propagation constant is imaginary, this corresponds to an exponentially decaying wave. Based on this, the cutoff frequency for a particular mode is defined
when the propagation constant in Eq. 6 or 13 is 0, which means that the expression of Eq. 15 must hold true.
Solving this expression for frequency yields Eq. 16.
Note that the expression for the cutoff frequency is the same for TE or TM modes.
If ω^2µε < (mπ/a)^2 + (nπ/b)^2, then the propagation constant is imaginary, which translates to exponentially decaying fields away from the source of excitation.
As an example, the derivations will be used to calculate the dominant-mode cutoff frequency for a rectangular waveguide with dimensions a = 2.286 cm and b = 1.016 cm. Since the dominant mode is the TE[10 ]mode, m = 1 and n = 0. The value of the permeability, µ, in free space is 4π × 10^–7 H/m while the value of the permittivity, ε, in free space is (1/36π) × 10^–9 F/m. The value of a is 0.02286 m and the value of b is 0.01016 m. Substituting all of these values into Eq. 16 yields:
f[c,10 ]= {1/(2[(4π × 10^–7)((1/36π) × 10^–9)]^0.5)}[(1/0.02286)^2 + (0/0.01016)^2]^0.5
which is:
f[c,10 ]= 6.562 GHz
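Eq. 16 is easy to check numerically. The following short program (not from the article; a minimal sketch in Julia, with the function name chosen here for illustration) evaluates the cutoff frequency of an air-filled guide, using the fact that 1/(µε)^0.5 is the speed of light:
c0 = 299792458.0                       # speed of light in vacuum, m/s; equals 1/sqrt(µ0*ε0)
cutoff(m, n, a, b) = (c0 / 2) * sqrt((m / a)^2 + (n / b)^2)   # Eq. 16, dimensions in meters
cutoff(1, 0, 0.02286, 0.01016)         # TE10 cutoff: ≈ 6.557e9 Hz
This reproduces the result above; the small difference from 6.562 GHz comes from the article's use of c ≈ 3 × 10^8 m/s.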
In the simulated S[21 ]response of the waveguide structure, signals below the cutoff frequency are attenuated, whereas signals at and above the cutoff frequency propagate with relatively low loss (
Fig. 3). If a rectangular waveguide is closed on its two open sides by a conducting wall, a rectangular cavity is formed as shown in Fig. 4. Applying the new boundary conditions, the expressions for
the E- and H-fields within the cavity for TM waves are given by Eqs. 17 to 21.
Similar expressions for the E- and H-fields for the TE modes are shown in Eqs. 22 to 26.
The resonant frequencies can be calculated in a similar fashion to the cutoff frequency for a rectangular waveguide, with the result shown in Eq. 27.
Parameters m, n, and p denote the number of half-wave cycles in the x, y, and z directions, respectively. Using this fact, the location of the E- (or H- ) field maximums can be determined within the
rectangular cavity.
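As a sketch of how this resonance formula can be applied (again in Julia; the function name and the assumption of an air-filled cavity are mine, not the article's):
cavity_freq(m, n, p, a, b, d) = (c0 / 2) * sqrt((m / a)^2 + (n / b)^2 + (p / d)^2)   # Eq. 27 for air
with c0 as defined in the earlier sketch and a, b, d the cavity dimensions in meters.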
The lowest resonant frequency for a TM wave is the TM[110 ]mode and for a TE wave is the TE[101 ]mode. This can be seen by substituting the mode values, m, n, and p into Eqs. 17 through 26. For
example, for the TM case, if either m or n is zero the expressions for the E- and H-fields collapse to zero.
For a cavity with dimensions shown in Fig. 4, the E-field and H-field hotspots for the TM[110 ]mode can be located by applying the appropriate expressions. As a second example, using Eqs. 17 to 21,
for m = n = 1 and p = 0, there are three nonzero field components as shown in Eq. 28.
Only one maximum exists for this function, and it occurs at (a/2, b/2, z), as shown in Eq. 29.
For the component in Eq. 30, the maximum occurs at (a/2, 0, z) and (a/2, b, z), and for the remaining component the maximum occurs at (0, b/2, z) and (a, b/2, z).
Since a > b, the maximums calculated by Hx are actually the true maximums and are located at (a/2,0,z) and (a/2,b,z).
By using the definition of modes, one can easily determine the maximum locations of the E- and H-fields within
a cavity. For the TM[110 ]mode, there exists one half-wave cycle in the x-axis, and one half-wave cycle in the y-axis, and no variation in the z-axis of the fields (Fig. 5). Note that the maximum
occurs where x = a/2 and y = b/2 intersect, which is at the center of the cavity.
The location of an E-field maximum is the location of a H-field minimum and vice-versa. Using this fact, the corresponding locations for the maximum H-field can be found.
To account for the PCB effects, the rectangular cavity can be modeled as a rectangular waveguide with a dielectric slab perpendicular to the E-field with two ends closed in by conducting walls (Fig.
4). An approximate solution for the resonant frequency using this method is given in Eq. 31 (see ref. 2), where:
h = the thickness of the printed-circuit board;
d = the height of the shield plus the thickness of the printed circuit board; and
ε[r ]= the dielectric constant of the PCB.
As a third example, consider a PCB with shield dimensions of 1.675 × 2.375 × 0.25 in., board thickness of 20 mils, and relative dielectric constant of 4.5 (ε[r ]= 4.5). The equations can be used to
find the five lowest resonant frequencies and the locations of the maximum electric fields within the shield. To perform the calculations, the following parameters are known: h = 20 mils and d = 270
mils. Using Eq. 31, the TM[110], TM[120],
TM[310], TM[210], and TM[220 ]modes correspond to the five lowest resonant frequencies within the shield. The lowest
TE mode is the TE[101 ]mode with a corresponding resonant frequency of over 21 GHz, which is higher than any of the TM modes listed above. The table lists the five lowest frequencies, as well as the
corresponding locations of the maximum electric fields.
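These numbers can be sanity-checked with the earlier sketches. Eq. 31 itself is not reproduced in the extracted text, so the sketch below substitutes a simple series-capacitance effective-permittivity model for the dielectric-loaded cavity; that model is an assumption made here, not the article's formula. It reuses c0 and cavity_freq from above:
in2m = 0.0254                          # inches to meters
a = 2.375 * in2m                       # long shield dimension
b = 1.675 * in2m                       # short shield dimension
d = 0.270 * in2m                       # shield height plus board thickness
h = 0.020 * in2m                       # board thickness
er = 4.5                               # board dielectric constant
cavity_freq(1, 0, 1, a, b, d)          # TE[101 ]air-filled: ≈ 2.2e10 Hz, matching "over 21 GHz"
eeff = d / (h / er + (d - h))          # assumed effective permittivity (capacitors in series)
cavity_freq(1, 1, 0, a, b, d) / sqrt(eeff)   # TM[110 ]: ≈ 4.2e9 Hz
The TM[110 ]estimate lands close to the roughly 4.1 GHz quoted later, which suggests the simple slab approximation captures the dominant effect of the board.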
As a check of these calculations, the shield and PCB were simulated using the High-Frequency Structure Simulator (HFSS) from Ansoft Corp. (www.ansoft.com). The corresponding resonant frequencies are
also provided in the table for comparison. Plots of both the E- and H-fields within the structure are also simulated with HFSS and shown in Figs. 5, 6, and 7.
For the TM[110 ]mode, it is expected that there will be a one-half-wave variation in both the x- and y-dimensions and no variation in the z-direction. This results in a maximum E-field hot spot
directly in the center of the cavity (Fig. 6B). Note that where an E-field maximum occurs, there is a corresponding H-field minimum (Fig. 6A).
For a TM[210 ]mode, it is expected that there would be two one-half-wave cycles in the x-direction, one one-half cycle in the y-direction, and no variation in the z-direction, which is shown in Fig.
7A (the H-field variation) and in Fig. 7B (the E-field variation).
For a TM[120 ]mode, it is expected that there would be a single one-half-cycle variation in the x-direction and two one half-cycle variations in the y-direction, and no variation in the z-direction
which is shown in Fig. 7A (the H-field variation) and in Fig. 7B (the E-field variation).
For a TM[310 ]mode, it is expected that there would be three one-half-cycle variations in the x-direction and one one-half-cycle variation in the y-direction, and no variation in the z-direction
which is shown in Fig. 7A (the H-field variation) and in Fig. 7B (the E-field variation).
For a TM[220 ]mode, it is expected that there would be two one-half-cycle variations in the x-direction and two one-half-cycle variations in the y-direction, and no variation in the z-direction which
is shown in Fig. 7A (the H-field variation) and in Fig. 7B (the E-field variation).
Note that these results correspond to what is predicted in the table based on the definition of modes. It should also be noted that Eq. 28 is valid as long as there aren't large or high-profile
devices attached to the PCB. Large obstacles complicate this simple computation. To account for obstacles, a full-featured three-dimensional (3D) electromagnetic (EM) solver is recommended for more
complex modeling.
To excite a resonant mode, a probe with a signal at or near the resonant frequency inserted at the location of maximum electric fields will set up an E-field of considerable intensity. Similarly, a
magnetic loop, inserted where the maximum H-fields are located, will set up an H-field of considerable intensity. Multiple probes (or loops) located in areas of maximum E-field (or H-field)
intensity, can excite higher-order resonant modes.
Knowledge of the location of the maximum E- and H-fields and the corresponding resonant frequency enables a designer to avoid placement and routing of circuits that could efficiently excite these
resonant modes. Note that careful placement and routing of circuits does not get rid of resonant modes, but it can reduce their effects, which may just be the difference between a working and a non-working design.
As an example, two filters were placed in the shield described in the earlier example. Filter A was located in the center of the can as shown in Fig. 8. The TM[110], TM[210], and TM[310 ]modes all have
E-field hot spots along this path. So, it is expected that there should be field excitations at 4.1, 7.2, and 8.3 GHz. A simulation of the frequency response using Ansoft's HFSS is shown in Fig. 9.
See the August 2007 issue for Part 2 of this article.
Julia functions for the Rmath library
1.1 Signatures of the d-p-q-r functions in libRmath
Users of R are familiar with the functions for evaluating properties of and for sampling from probability distributions, the so-called d-p-q-r functions. The designation d-p-q-r reflects the fact
that, for each distribution, there are up to 4 such functions, each beginning with one of these letters and followed by an abbreviated name of the distribution. The prefixes indicate:
d: the density of a continuous random variable or the probability mass function of a discrete random variable
p: the cumulative probability function, also called the cumulative distribution function or c.d.f.
q: the quantile function, which is the inverse of the c.d.f. defined on the interval (0, 1)
r: random sampling from the distribution
Members of R-core, notably Martin Maechler, have devoted considerable effort to ensuring that these functions are both reliable and general. Because they are part of an Open Source system, they can
be used in other Open Source projects. In fact, there is the capability to collect these functions in a stand-alone library. One of the R packages for Debian Linux and distributions that are derived
from it, such as Ubuntu, is r-mathlib which contains this stand-alone library. The header file for this library is $RHOME/include/Rmath.h on most systems (/usr/share/R/include/Rmath.h on Debian and
derived distributions because of the rules for Debian packaging).
The names and arguments of these functions follow well-defined patterns. For example, those for the χ^2 distribution have signatures
double dchisq(double, double, int);
double pchisq(double, double, int, int);
double qchisq(double, double, int, int);
double rchisq(double);
The first argument of dchisq is x, the abscissa, of pchisq is q, the quantile, and of qchisq is p, the probability. The next argument of the d-p-q functions, and the only argument of rchisq, is the
distribution parameter, df, which is the degrees of freedom of the χ^2. The last argument in the d-p-q functions controls whether the probability is on the logarithm scale. It is named give_log for
the density function and log_p for the probability and quantile functions. (At first glance this argument may seem unnecessary as the probability density, for example, could be calculated on the
probability scale then transformed to the logarithm scale. However, operations that are mathematically equivalent do not always produce the same result in floating point arithmetic. There are good
reasons for providing these cases separately.)
The second last argument in the p and q functions determines whether probability is accumulated from the left, the lower tail, or from the right, the upper tail. The default is the lower tail. Again,
there is an apparent redundancy because one can always subtract the probability in the lower tail from 1 to get the probability in the upper tail. However when x is extremely small but positive, 1 -
x rounds to 1 and small upper-tail probabilities truncate too quickly. (Remember Kernighan and Plauger's aphorism that "10 times 0.1 is hardly ever 1" - that's what you live with in floating point arithmetic.)
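A quick illustration of the rounding problem (not from the original post; any Julia session will do):
julia> 1.0 - 1e-17 == 1.0    # a tiny upper-tail probability is lost entirely
true
julia> 0.1 + 0.1 + 0.1 == 0.3    # the aphorism in action
false
Asking the p and q functions for the upper tail or the log scale directly preserves precision that a naive 1 - p or log(p) would destroy.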
1.2 Calling these functions from Julia
It is surprisingly easy from within Julia to call C functions like these having simple signatures. First you call the Julia function dlopen to access the shared object file and save the handle
julia> _jl_libRmath = dlopen("libRmath")
Ptr{Void} @0x0000000003461050
I use an awkward, patterned name for this variable to avoid potential name conflicts. At present the Julia namespace is flat (but that is likely to change as namespaces are already being discussed).
Then a call to, say, pchisq needs only
julia> ccall(dlsym(_jl_libRmath,:pchisq),Float64,(Float64,Float64,Int32,Int32), 1.96^2, 1, true, false)
The first argument to the ccall function is the symbol extracted with dlsym, followed by the specific type of the function value, the argument signatures and the actual arguments. The actual
arguments are converted to the signature shown before being passed to the C function. Thus the true is converted to 1 and the false to 0.
The value of this function call, 0.95000, is what we would expect, because a χ^2 on 1 degree of freedom is the square of a standard normal distribution and the interval [-1.960, 1.960] contains
approximately 95% of the probability of the standard normal.
1.3 Creating a Julia function for scalar arguments
Obviously, calling such a C function from the Rmath library is simpler if we wrap the call in a Julia function. We can define such a function as a one-liner (well, two lines but only because it would
be a very long single line)
pchisq(q::Number, df::Number, lower_tail::Bool, log_p::Bool) =
ccall(dlsym(_jl_libRmath,:pchisq),Float64,(Float64,Float64,Int32,Int32), q, df, lower_tail, log_p)
This illustrates one of the methods of defining a function in Julia,
fname(arg1, arg2, ...) = expression
This form, which reads much like the mathematical description f(x) = value, is useful when the function body is a single expression.
This code defines a particular method, with a reasonably general signature, for pchisq. The C function pchisq requires double precision values (called Float64 in Julia) for the first two arguments
and integers for the last two. However, these integers are treated as Boolean values, so we require the arguments passed to the Julia function to be Boolean. The Number type in Julia is an abstract
type incorporating all the integer and real number types (including complex and rational numbers, as it turns out). The Julia function, pchisq, can have methods of many different signatures. In a
sense a function in Julia is just a name used to index into a method table.
We could have allowed unrestricted arguments, as in
pchisq(q, df, lower_tail, log_p) = <something>
but then we would need to define checks on the argument types, to ensure that they are appropriate and to check for vector versus scalar arguments, in the function body. It is better to use the
method signature to check for general forms of arguments and to distinguish the scalar and vector cases. (Vector methods are defined in the next section).
This is one of the organizing principles of Julia; it is built around functions but actually all functions are methods chosen via multiple dispatch. So thinking in terms of signatures right off the
bat is a good idea.
1.4 Default argument values
In R we can match arguments by names in a function call and we can define default values for arguments that are not given a value. At present neither of these capabilities is available in Julia.
However, we can provide defaults for trailing arguments by creating methods with reduced signatures
pchisq(q::Number, df::Number, lower_tail::Bool) = pchisq(q, df, lower_tail, false)
pchisq(q::Number, df::Number) = pchisq(q, df, true, false)
The fact that the signature is recognized by position and not by using named actual arguments means that whenever log_p is true we must specify lower_tail, even if its value is the default value, true.
In the case of pchisq the distribution parameter, df, does not have a default value. For many other distributions there are default values of distribution parameters. For example, the distribution
parameters for the logistic distribution are location and scale with defaults 0 and 1. Having defined the 5-argument scalar version of plogis
plogis(q::Number, l::Number, s::Number, lo::Bool, lg::Bool) =
ccall(dlsym(_jl_libRmath,:plogis),Float64,(Float64,Float64,Float64,Int32,Int32), q, l, s, lo, lg)
(for brevity we have used the names l, s, lo and lg for the last four arguments) we can define the methods with default values as
plogis(q::Number, l::Number, lo::Bool, lg::Bool) = plogis(q, l, 1, lo, lg)
plogis(q::Number, lo::Bool, lg::Bool) = plogis(q, 0, 1, lo, lg)
plogis(q::Number, l::Number, s::Number, lo::Bool) = plogis(q, l, s, lo, false)
plogis(q::Number, l::Number, lo::Bool) = plogis(q, l, 1, lo, false)
plogis(q::Number, lo::Bool) = plogis(q, 0, 1, lo, false)
plogis(q::Number, l::Number, s::Number) = plogis(q, l, s, true, false)
plogis(q::Number, l::Number) = plogis(q, l, 1, true, false)
plogis(q::Number) = plogis(q, 0, 1, true, false)
Because a Bool is not a Number, a signature such as (Number, Bool) can be distinguished from (Number, Number).
These method definitions allow for some combinations of default values but not all combinations. Having named arguments, possibly with default values, is still a more flexible system but these method
signatures do handle the most important cases.
Defining all those methods can get tedious (not to mention providing the possibility of many transcription errors) so it is worthwhile scripting the process using the macro language for Julia. See
the file extras/Rmath.jl in the Julia distribution for the macros that generate both these methods and the vectorized methods described in the next section.
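A minimal sketch of how such scripting can look (this is not the actual macro from extras/Rmath.jl, just an illustration using @eval in a loop; the signature/target pairs shown are abbreviated):
for (sig, target) in [
        (:(plogis(q::Number, l::Number, s::Number)), :(plogis(q, l, s, true, false))),
        (:(plogis(q::Number, l::Number)),            :(plogis(q, l, 1, true, false))),
        (:(plogis(q::Number)),                       :(plogis(q, 0, 1, true, false))) ]
    @eval $sig = $target    # stamp out one default-argument method per pair
end
Each pass through the loop defines one method, so the full table of defaults can be generated from a small list instead of being written out by hand.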
1.5 Vectorizing the function
In R there are no scalars, only vectors of length 1, so, in that sense, all functions are defined for vector arguments. In addition, many functions, including the d-p-q functions, are vectorized in
the sense that they create a result of the appropriate form from vector arguments whose length is greater than 1. Thus providing a vector of quantiles to pchisq will return a vector of probabilities.
We would want the same to be true for the Julia functions.
In Julia scalars are distinct from vectors and we use the method dispatch to distinguish scalar and vector cases. I think a general programming paradigm for Julia is first to define a function for
scalar arguments and then build the vectorized version from that. However, I don't yet have enough experience to say if this really is a general paradigm.
We can write methods for a vector q as
pchisq(q::Vector{Number}, df::Number, lower_tail::Bool, log_p::Bool) =
[ pchisq(q[i], df, lower_tail, log_p) | i=1:length(q) ]
pchisq(q::Vector{Number}, df::Number, lower_tail::Bool) =
[ pchisq(q[i], df, lower_tail, false) | i=1:length(q) ]
pchisq(q::Vector{Number}, df::Number) =
[ pchisq(q[i], df, true, false) | i=1:length(q) ]
These methods use a "comprehension" to specify the loop over the individual elements of the array producing another array. They could, of course, be written in a more conventional looping notation
but the comprehension notation is powerful and compact, similar to the "apply" family of functions in R, so we use it here. Like the one-line function definition, the comprehension reads much like a
mathematical expression to create an array of values of a certain form for i = 1,…,n.
By the way, experienced R programmers, who know not to write 1:length(q) because of unexpected results when length(q) is zero, need not worry. The sequence notation, a:b doesn't count down in Julia
when a is greater than b so 1:0 has length zero. That is, 1:length(q) in Julia always produces the same answer as seq_along(q) does in R.
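For instance:
julia> length(1:0)
0
so a comprehension over 1:length(q) simply produces an empty result when q is empty.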
We could also vectorize the df argument which leads us to consider the case of vector df and scalar q and the case of vector df and vector q, etc. At this point we realize that we are seeing many
variations on a theme and decide that it is best to script this vectorization. Fortunately the file base/operations.jl has macros _jl_vectorize_1arg and _jl_vectorize_2arg which we can adapt for this purpose.
1.6 The result
As of this writing, loading the file extras/Rmath.jl defines the following:
julia> load("../extras/Rmath.jl")
julia> whos()
R_pow Function
_jl_libRmath Ptr{None}
dbeta Function
dbinom Function
dcauchy Function
dchisq Function
dexp Function
df Function
dgamma Function
dgeom Function
dlnorm Function
dlogis Function
dnbinom Function
dnchisq Function
dnorm Function
dpois Function
dsignrank Function
dt Function
dunif Function
dweibull Function
dwilcox Function
pbeta Function
pbinom Function
pcauchy Function
pchisq Function
pexp Function
pf Function
pgamma Function
pgeom Function
plnorm Function
plogis Function
pnbinom Function
pnchisq Function
pnorm Function
ppois Function
psignrank Function
pt Function
punif Function
pweibull Function
pwilcox Function
qbeta Function
qbinom Function
qcauchy Function
qchisq Function
qexp Function
qf Function
qgamma Function
qgeom Function
qlnorm Function
qlogis Function
qnbinom Function
qnchisq Function
qnorm Function
qpois Function
qsignrank Function
qt Function
qunif Function
qweibull Function
qwilcox Function
rchisq Function
rgeom Function
rpois Function
rsignrank Function
rt Function
set_seed Function
and a function like plogis provides many method signatures
julia> plogis
Methods for generic function plogis
plogis(Number,Number,Number,Bool,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:275
plogis(Number,Number,Bool,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:281
plogis(Number,Number,Number,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:277
plogis(Number,Bool,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:287
plogis(Number,Number,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:283
plogis(Number,Number,Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:279
plogis(Number,Bool) at /home/bates/build/julia/extras/../extras/Rmath.jl:289
plogis(Number,Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:285
plogis(Number,) at /home/bates/build/julia/extras/../extras/Rmath.jl:291
plogis{T1<:Number,T2<:Number}(T1<:Number,AbstractArray{T2<:Number,N}) at operators.jl:162
plogis{T1<:Number,T2<:Number}(AbstractArray{T1<:Number,N},T2<:Number) at operators.jl:165
plogis{T1<:Number,T2<:Number}(AbstractArray{T1<:Number,N},AbstractArray{T2<:Number,N}) at operators.jl:169
plogis{T1<:Number,T2<:Number,T3<:Number}(AbstractArray{T1<:Number,N},T2<:Number,T3<:Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:7
plogis{T1<:Number,T2<:Number,T3<:Number}(T1<:Number,AbstractArray{T2<:Number,N},T3<:Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:10
plogis{T1<:Number,T2<:Number,T3<:Number}(T1<:Number,T2<:Number,AbstractArray{T3<:Number,N}) at /home/bates/build/julia/extras/../extras/Rmath.jl:13
plogis{T1<:Number,T2<:Number,T3<:Number}(AbstractArray{T1<:Number,N},AbstractArray{T2<:Number,N},T3<:Number) at /home/bates/build/julia/extras/../extras/Rmath.jl:16
plogis{T1<:Number,T2<:Number,T3<:Number}(T1<:Number,AbstractArray{T2<:Number,N},AbstractArray{T3<:Number,N}) at /home/bates/build/julia/extras/../extras/Rmath.jl:20
plogis{T1<:Number,T2<:Number,T3<:Number}(AbstractArray{T1<:Number,N},T2<:Number,AbstractArray{T3<:Number,N}) at /home/bates/build/julia/extras/../extras/Rmath.jl:24
You will see that the vectorized methods have slightly more general signatures involving AbstractArrays of Numbers, which also handles cases such as matrix arguments.
The methods allow for calling plogis in a way that is natural to R programmers
julia> plogis(-1:0.2:1, 0)
[0.268941, 0.310026, 0.354344, 0.401312, 0.450166, 0.5, 0.549834, 0.598688, 0.645656, 0.689974, 0.731059]
julia> plogis(-1:0.2:1, 0, 2)
[0.377541, 0.401312, 0.425557, 0.450166, 0.475021, 0.5, 0.524979, 0.549834, 0.574443, 0.598688, 0.622459]
(In running that example I realized that I had omitted some cases from my macros. That will be repaired "in the fullness of time" as Bill Venables is wont to say.)
Find a South Kearny, NJ Trigonometry Tutor
...I have also recently passed the Praxis II Exam in Math Content Knowledge while earning Recognition of Excellence for scoring in the top 15 percent over the last 5 years. My most valuable
quality, however, is my ability to relate to students of all ages, and make even the most difficult subjects ...
22 Subjects: including trigonometry, reading, English, chemistry
...I'm now a PhD student at Columbia in Astronomy (have completed two Masters by now) and will be done in a year. I have a lot of experience tutoring physics and math at all levels. I have been
tutoring since high school so I have more than 10 years of experience, having tutored students of all ages, starting from elementary school all the way to college-level.
11 Subjects: including trigonometry, Spanish, calculus, physics
...I am a photography-based artist and a graduate of the International Center of Photography General Studies Program (2012). My work was featured at the Rita K. Hillman gallery at ICP.
21 Subjects: including trigonometry, Spanish, reading, calculus
...I have tutored introductory Statistics as well as Calculus-based courses, working from texts by Sheldon Ross and Robert Hogg. I am able to tutor such topics as standard deviation, Central Limit Theorem, marginal probabilities, order statistics, and maximum likelihood estimators. I am not able to tutor topics such as Measure Theory and sigma-algebras.
32 Subjects: including trigonometry, calculus, physics, geometry
...where you can do anything with the ball except touch it with your hands. You play with 11 people on a soccer field, consisting of a goalkeeper, defenders, midfielders, and forwards. All must cooperate in order to win and score goals.
27 Subjects: including trigonometry, reading, chemistry, Spanish
How can one do calculus with (nilpotent) infinitesimals?: An Introduction to Smooth Infinitesimal Analysis
Many mathematicians, from Archimedes to Leibniz to Euler and beyond, made use of infinitesimals in their arguments. These were later replaced rigorously with limits, but many people still find it
useful to think and derive with infinitesimals.
Unfortunately, in most informal setups the existence of infinitesimals is technically contradictory, so it can be difficult to grasp the means by which one fruitfully manipulates them. It would be
useful to have an axiomatic framework with the following properties:
1. It is consistent.
2. The system acts as a good “intuition pump” for the real world. In particular, this entails that if you prove something in the system, then while it won’t necessarily be true in the real world,
there should be a high probability that it’s morally true in the real world, i.e., with some extra assumptions it becomes true. It should also ideally entail that many of the proofs of Archimedes, et
al., involving infinitesimals can be formulated as is (or close to “as is”).
“Smooth infinitesimal analysis” is one attempt to satisfy these conditions.
(This is a blogified version of the first part of an article I wrote here.)
Axioms and Logic
Consider the following axioms:
Axiom 1. $R$ is a set, 0 and 1 are elements of $R$ and $+$ and $\cdot$ are binary operations on $R$. The structure $\langle R, +, \cdot, 0, 1\rangle$ is a commutative ring with unit.
Furthermore, we have that $\forall x\, ((x \neq 0) \implies (\exists y\, xy = 1))$, but I don't want to call $R$ a field for a reason I'll discuss in a moment.
Axiom 2. There is a transitive irreflexive relation $<$ on $R$. It satisfies $0 < 1$, and for all $x$, $y$, and $z$, we have $x < y \implies x + z < y + z$ and ($x < y$ and $z > 0)\implies xz < yz$.
It also satisfies $\forall x, y\, ((x \neq y) \implies (x > y \vee x < y))$, but I don't want to call $<$ total, for a reason I'll discuss in a moment.
Axiom 3. For all $x > 0$ there is a unique $y > 0$ such that $y^2 = x$.
Axiom 4 [Kock-Lawvere Axiom]. Let $D = \{d \in R\mid d^2 = 0\}$. Then for all functions $f$ from $D$ to $R$, and all $d\in D$, there is a unique $a\in R$ such that $f(d) = f(0) + d\cdot a$.
After reading the Kock-Lawvere Axiom you are probably quite puzzled. In the first place, we can easily prove that $D = \{0\}$: Let $d\in D$. For a proof by contradiction, assume that $d \neq 0$; then there is a $d^{-1}$, and since $d^2 = 0$ we would have $d = d^2 d^{-1} = 0$, a contradiction.
For an alternate proof that $D = \{0\}$: Again assume that $d \neq 0$ for a contradiction. Then $d > 0$ or $d < 0$. In the first case, $d^2 > 0$, so $d^2 \neq 0$ (since $<$ is irreflexive). In the second case, we have $0 < -d$ by adding $-d$ to both sides, and again $d^2 > 0$.
Now, if $D = \{0\}$, then for any $a\in R$, and any function $f$ from $D$ to $R$, we have $f(d) = f(0) + d\cdot a$ for all $d\in D$. This contradicts the uniqueness of $a$. Therefore, the axioms
presented so far are contradictory.
However, we have the following surprising fact.
Fact. There is a form of set theory (called a local set theory, or topos logic) which has its underlying logic restricted (to a logic called intuitionistic logic) under which Axioms 1 through 4 (and
also the axioms to be presented later in this paper) taken together are consistent.
Definition. Smooth Infinitesimal Analysis (SIA) is the system whose axioms are those sentences marked as Axioms in this paper and whose logic is that alluded to in the above fact.
References for this theorem are [Moerdijk-Reyes] and [Kock]. References for topos logic specifically are [Bell1] and [MacLane-Moerdijk].
Essentially, intuitionistic logic disallows proof by contradiction (which was used in both proofs that $D = \{0\}$ above) and its equivalent brother, the law of the excluded middle, which says that
for any proposition $P$, $P \vee \neg P$ holds.
I won’t formally define intuitionistic logic or topos logic here as it would take too much space and there’s no real way to understand it except by seeing examples anyway. If you avoid proofs by
contradiction and proofs using the law of the excluded middle (which usually come up in ways like: “Let $x\in R$. Then either $x = 0$ or $x \neq 0$.”), you will be okay.
But before we go further we might ask, “what does this logic have to do with the real world anyway?” Possibly nothing, but recall that our goals above do not require that we work with “real” objects;
just that we have a consistent system which will act as a good “intuition pump” about the real world. We are guaranteed that the system is consistent by a theorem; for the second condition each
person will have to judge for themselves.
To conclude this section, it should now be clear why I didn’t want to call $R$ a field and $<$ a total order:
Even though we have $\forall x\, ((x \neq 0)\implies x$ invertible), we can't conclude from that that $\forall x\, ((x = 0) \vee (x$ invertible)), because the proof of the latter from the former uses the
law of the excluded middle. Calling $R$ a field would unduly give the impression that the latter is true.
For the rest of this blog entry I will generally work within SIA (except, obviously, when I announce new axioms or make remarks about SIA).
Single-Variable Calculus
An Important Lemma
This lemma is easy to prove, but because it is used over and over again, I’ll isolate it here:
Lemma [Microcancellation] Let $a, b\in R$. If for all $d\in D$ we have $ad = bd$, then $a = b$.
Let $f\in R^D$ be given by $f(d) = ad = bd$. Then by the uniqueness condition of the Kock-Lawvere axiom, we have that $a = b$.
Basic Rules
Let $f$ be a function from $R$ to $R$, and let $x\in R$. We may define a function $g$ from $D$ to $R$ as follows: for all $d\in D$, let $g(d) = f(x + d)$. Then the Kock-Lawvere axiom tells us that
there is a unique $a$ so that $g(d) = g(0) + ad$ for all $d\in D$. Thus, we have that for all functions $f$ from $R$ to $R$ and all $x\in R$, there is a unique $a$ so that $f(x + d) = f(x) + ad$ for
all $d$. We define $f'(x)$ to be this $a$.
We thus have the following fundamental fact:
Proposition [Fundamental Fact about Derivatives]
For all $f\in R^R$, all $x\in R$, and all $d\in D$,
$f(x + d) = f(x) + f'(x)d$
and furthermore, $f'(x)$ is the unique real number with that property.
Proposition Let $f$, $g\in R^R$, $c\in R$. Then:
1. $(f + g)' = f' + g'$
2. $(cf)' = cf'$
3. $(fg)' = f'g + fg'$
4. If for all $x$, $g(x) \neq 0$, then $(f/g)' = (gf' - fg')/g^2$.
5. $(f\circ g)' = (f'\circ g)\cdot g'$.
I’ll prove 3 and 5 and leave the rest as exercises.
To prove 3: Let $x\in R$ and $d\in D$. Let $h(x) = f(x)g(x)$. Then
$h(x + d) = f(x + d)g(x + d) = (f(x) + f'(x)d)(g(x) + g'(x)d)$
which, multiplying out and using $d^2 = 0$, is equal to
$f(x)g(x) + d(f'(x)g(x) + f(x)g'(x)) = h(x) +d(f'(x)g(x) + f(x)g'(x)).$
On the other hand, we know that $h(x + d) = h(x) + h'(x)d$, so
$h'(x)d = d(f'(x)g(x) + f(x)g'(x)).$
Since $d$ was an arbitrary element of $D$, we may use microcancellation, and we obtain $h'(x) = f'(x)g(x) + f(x)g'(x)$.
To prove 5: Let $x\in R$ and $d\in D$. Then
$f(g(x + d)) = f(g(x) + g'(x)d).$
Now, $g'(x)d$ is in $D$ (since $(g'(x)d)^2 = d^2(g'(x))^2 = 0$), so
$f(g(x) + g'(x)d) = f(g(x)) + g'(x)f'(g(x))d.$
As before, this gives us that $g'(x)f'(g(x))$ is the derivative of $f(g(x))$.
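As a quick worked example of the fundamental fact (not in the original text): take $f(x) = x^2$. For any $d \in D$,
$f(x + d) = (x + d)^2 = x^2 + 2xd + d^2 = f(x) + (2x)d$
since $d^2 = 0$, so by the uniqueness clause $f'(x) = 2x$, with no limits required.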
In order to do integration, let’s add the following axiom:
Axiom 5. For all $f\in R^R$ there is a unique $g\in R^R$ such that $g' = f$ and $g(0) = 0$. We write $g(x)$ as $\int_0^x f(t)\,dt$.
We can now derive the rules of integration in the usual way by inverting the rules of differentiation.
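For instance (an added example, not in the original): since $(x^2)' = 2x$ (as computed above) and $x^2$ vanishes at $0$, the uniqueness in Axiom 5 immediately gives $\int_0^x 2t\,dt = x^2$.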
Deriving formulas for Arclength, etc.
I’d now like to derive the formula for the arclength of the graph of a function $y = f(x)$ (say, from $x = 0$ to $x = 1$). Because “arclength” isn’t formally defined, the strategy I’ll take is to
make some reasonable assumptions that any notion of arclength should satisfy and work with them.
For this problem, and other problems which use geometric reasoning, it’s important to note that the Kock-Lawvere axiom can be stated in the following form:
Proposition [Microstraightness] If $f\colon R\to R^n$ is any curve, $x\in R$, and
$d\in D$, then the portion of the curve from $f(x)$ to $f(x+d)$ is straight.
Let $f\in R^R$ be any function, and let $s(x)$ be the arclength of the graph of $y = f(x)$ from 0 to $x$. (That is, $s$ is the function which we would like to determine.)
Let $x_0 \in R$ and $d\in D$ be arbitrary and consider $s(x_0 + d) - s(x_0)$. It should be the length of the segment of $y = f(x)$ from $x_0$ to $x_0 + d$, as in the following figure.
Because of microstraightness, we know that the part of the graph of $y=f(x)$ from $P$ to $Q$ is a straight line. Furthermore, it is the hypotenuse of a right triangle with legs $PR$ and $RQ$. The
length of $PR$ is $d$.
To determine the length of $RQ$: Note that the height of $P$ is $f(x)$, so the height of $R$ is $f(x)$. On the other hand, the height of $Q$ is $f(x + d) = f(x) + f'(x)d$, so the length of $RQ$ is $f'(x)d$.
The hypotenuse of a right triangle with legs of length 1 and $f'(x)$ is $\sqrt{1 + f'(x)^2}$. By scaling down, we see that the length of $PQ$ is $d\sqrt{1+f'(x)^2}$.
So, we know that $s(x + d) - s(x)$ should be $d\sqrt{1 + f'(x)^2}$. On the other hand, $s(x + d) - s(x) = ds'(x)$. By microcancellation, we have that $s'(x) = \sqrt{1 + f'(x)^2}$. Since $s(0) = 0$,
we have
$s(x) = \int_0^x \sqrt{1 + f'(t)^2}\,dt$
Several other formulas can be derived using precisely the same technique. For example, suppose we want to know the surface area of revolution of $y = f(x)$. Furthermore, suppose we know that the
surface area of a frustum of a cone with radii $r_1$ and $r_2$ and slant height $h$ as in the figure below is $\pi(r_1 + r_2)h$. (See below to eliminate this assumption.)
Then, let $A(x_0)$ be the surface area of revolution of $y = f(x)$ from $x = 0$ to $x = x_0$ about the $x$-axis. As before, consider $A(x_0 + d) - A(x_0)$ where $d$ is arbitrary. This should be the surface area of the frustum obtained by rotating $PQ$ about the $x$-axis. The slant height is the length of $PQ$, which we determined earlier was $(\sqrt{1 + f'(x_0)^2})d$. The two radii are $f(x_0)$ and $f(x_0 + d) = f(x_0) + f'(x_0)d$. Therefore,
$A(x_0 + d) - A(x_0) = \pi(f(x_0) + f(x_0) + f'(x_0)d)(\sqrt{1 + f'(x_0)^2})d$
which, multiplying out (and using $d^2 = 0$), becomes $2\pi f(x_0)\sqrt{1 + f'(x_0)^2}\,d$. As before, $A(x_0 + d) - A(x_0)$ is also equal to $A'(x_0)d$, so
$A(x) = 2\pi\int_0^x f(t) \sqrt{1 + f'(t)^2}\,dt$
In a precisely analogous way, one may derive the formula for the volume of the solid of revolution of $y = f(x)$ about the $x$-axis, the formula for the arclength of a curve $r = f(\theta)$ given in
polar form, and show that the (signed) area under the curve $y = f(x)$ from $x = a$ to $x = b$ is $\int_a^b f(x)\,dx$.
Above we assumed that we knew the surface area of a frustum of a cone. Finally, as an exercise, eliminate this assumption by deriving the formula for the surface area of a cone (from which the
formula for the surface area of a frustum follows by an argument with similar triangles) as follows:
Fix a cone $C$ of slant height $h$ and radius $r$. The cone $C$ can be considered to be the graph of a function $y = mx$ from $x = 0$ to $x = r/m$ revolved a full $2\pi$ radians around the $x$-axis.
Let $A(\theta)$ be the area of the surface formed by revolving the graph of $y = mx$ from $x = 0$ to $x = r/m$ only $\theta$ radians around the $x$-axis.
Using a method similar to that above, determine that $A(\theta) = (1/2)\theta rh$. This gives the surface area as $A(2\pi) = \pi rh$.
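(One way this can go, as a sketch rather than the article's worked solution: for $d \in D$, microstraightness makes the extra surface swept between $\theta$ and $\theta + d$ a flat triangle with one side the slant of length $h$ and base the straightened rim arc of length $rd$, so $A(\theta + d) - A(\theta) = (1/2)hrd$. Microcancellation then gives $A'(\theta) = (1/2)rh$, and since $A(0) = 0$, $A(\theta) = (1/2)rh\theta$.)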
The Equation of a Catenary
In the above section, essentially the same method was used again and again to solve different problems. As an example of a different way to apply SIA in single-variable calculus, in this section I’ll
outline how the equation of a catenary may be derived in it. The full derivation is in [Bell2].)
To do this, we’ll need the existence of functions $\sin$, $\cos$, $\exp$ in $R^R$ satisfying $\sin(0) = 0$, $\cos(0) = \exp(0) =1$, $\sin' = \cos$, $\cos' = -\sin$ and $\exp' = \exp$. We get this
from the following set of axioms.
Axiom (Scheme) 6. For every $C^\infty$ function $f\colon \mathbb{R}^n \to \mathbb{R}^m$ (in the real world), we assume we have a function $f\colon R^n \to R^m$ (in SIA). Furthermore, for any true
identity constructed out of such functions, composition, and partial differentiation operators, we may take the corresponding statement in SIA to be an axiom. (“True” means true for the corresponding
functions between cartesian products of $\mathbb{R}$ in the real world.)
(We can actually go further. For every $C^\infty$ manifold $\mathbb{M}$ in the real world, we may assume that there is a set $M$ in SIA, and for every $C^\infty$ function $f\colon \mathbb{M}\to\mathbb{N}$ we may assume that there is a function $f\colon M\to N$ in SIA, and we may assume that these functions satisfy all identities true of them in the real world. But I will not use these extra
axioms in this article.)
Suppose that we have a flexible rope of constant weight $w$ per unit length suspended from two points $A$ and $B$ (see the figure below). We would like to find the function $f$ such that the graph of
$y = f(x)$ is the curve that the rope makes. (We will actually disregard the points $A$ and $B$ and consider $f$ to be defined on all of $R$.)
Let $T(x)$ be the tension in the rope at the point $(x,f(x))$. (Recall that the tension at a point in a rope in equilibrium is defined as follows: That point in the rope is being pulled by both sides
of the rope with some force. Since the rope is in equilibrium, the magnitude of the two forces must be equal. The tension is that common magnitude.)
Let $\phi(x)$ be the angle that the tangent to $f(x)$ makes with the positive $x$-axis. (That is, $\phi(x)$ is defined so that $\sin\phi(x) = f'(x)\cos\phi(x)$). We suppose that we have chosen the
origin so that $\phi(0) = 0$.
Let $s(x)$ be the arclength of $f(x)$ from 0 to $x$.
Let $x_0\in R$ and $d\in D$ be arbitrary. Consider the segment of the rope from $P = (x_0,f(x_0))$ to $Q = (x_0 + d,f(x_0 + d))$. This segment is in equilibrium under three forces:
1. A force of magnitude $T(x_0)$ with direction $\phi(x_0) + \pi$.
2. A force of magnitude $T(x_0 + d)$ with direction $\phi(x_0 + d)$.
3. A force of magnitude $w(s(x_0 + d) - s(x_0)) = ws'(x_0)d$ with direction $-\pi/2$.
By resolving these forces horizontally and using microcancellation, one can show that the horizontal component of the tension (that is, $T(x)\cos\phi(x)$) is constant. Call the constant tension $T_0$.
By resolving these forces vertically and using microcancellation and the fact that $\phi(0) = 0$, one can show that the vertical component of the tension (that is $T(x)\sin\phi(x)$) is $ws(x)$.
Finally, by combining the results of the previous two paragraphs and using the fact that $\sin\phi(x) = \cos\phi(x) f'(x)$ and $s'(x) = \sqrt{1 + f'(x)^2}$, one can show that $f$ satisfies the
differential equation $1 + (u')^2 = a^2(u'')^2$, where $a = T_0/w$.
Solving differential equations symbolically is the same in SIA as it is classically, since no infinitesimals or limits are involved. In this case, the answer turns out to be
$f(x) = a\cosh\left(\frac{x}{a}\right) = \frac{a(e^{x/a} + e^{-x/a})}{2},$
if we add the initial condition $f(0) = a$ to our previously assumed initial condition $f'(0) = 0$.
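As a quick check (not in the original): with $f(x) = a\cosh(x/a)$ we have $f'(x) = \sinh(x/a)$ and $f''(x) = (1/a)\cosh(x/a)$, so
$a^2(f'')^2 = \cosh^2(x/a) = 1 + \sinh^2(x/a) = 1 + (f')^2,$
as required, and indeed $f(0) = a$ and $f'(0) = 0$.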
I’ll include a post on multivariable calculus later.
8 responses to “How can one do calculus with (nilpotent) infinitesimals?: An Introduction to Smooth Infinitesimal Analysis”
1. I don’t understand a word/character/symbol but it looks great!
2. I recently became very interested in smooth infinitesimal analysis, and found your article a useful guide when trying to understand some of the earlier works (Kock, Reyes-Moerdijk, etc.).
However, there is something that bothers me about this setup, and I wonder if you have any ideas on it. Specifically, the problem is Axiom 5. This axiomatizes the fundamental theorem of calculus!
Is there no way to define an area function using infinitesimals, then show that its derivative is the original function?
I think the idea of taking as an axiom “every curve is made up of infinitesimally small line segments” a beautiful concept, but it seems to me to be giving up too much to also have to axiomatize
the existence of an anti-derivative.
Do you know of any way to not have to take this as an axiom?
3. Hi Geoff,
I’m glad you liked the article and find the subject interesting!
Axiom 5 doesn’t really axiomatize the fundamental theorem of calculus, since all Axiom 5 says is that antiderivatives of functions exist. The fundamental theorem of calculus says something more:
it says that not only do antiderivatives of (suitable) functions $f(x)$ exist, but that furthermore the function $F(x)$ giving the (signed) area under the graph of $f$ from ${0}$ to $x$ is a
specific example of an antiderivative. (Also, that all other antiderivatives are equal to $F$ plus some constant.)
The notation is confusing though, because in Axiom 5, I said that antiderivatives of functions exist, and furthermore I denoted the antiderivative of $f$ which takes the value $0$ on input $0$ by $\int_0^x f(t)\,dt$. This could be confusing, because normally $\int_0^x f(t)\,dt$ is defined as a Riemann sum, which is the classical formalization of the “area” concept, and so with that
notation, the statement that the derivative of $\int_0^x f(t)\,dt$ is $f(x)$ is indeed the fundamental theorem of calculus. But with the notation I used in this article, it’s just a definition.
As an aside, you can prove a version of the fundamental theorem of calculus in smooth infinitesimal analysis, in much the same way that you can find the area of a cone.
If you want to not take the existence of antiderivatives as an axiom, you might want to take a look at Bell’s book “A Primer on Smooth Infinitesimal Analysis,” which does a lot of stuff without
that axiom, but in its place he assumes that if $f' = g'$ and $f(0) = g(0)$ then $f = g$.
4. Ah, of course you’re right. I think I was confused by the notation. Thanks for the quick reply.
I guess my next question would be: how does one define the “area under a curve” function? I assume we would like to do it without limits.
Now that I look over the above again, I notice that there’s something similar with the arc length function: you mention that it’s not formally defined, so you instead work with what reasonable properties it should have. But there has to be some nice way to formally define these notions; perhaps I just need to get Bell’s book.
As a side note, I’m amazed by how much more elegant some of the proofs with smooth infinitesimal analysis are than with regular calculus. In particular, the proofs of the product and chain rules
are so much nicer than their usual counterparts.
5. I definitely agree that the proofs in smooth infinitesimal analysis are much nicer than in classical calculus; that’s the main reason why I like it.
You define the “area under a curve” function in a similar way to arclength, as follows: You show that, given $f$, there is a unique function $g(a,b)$ (to be interpreted as the area under the curve $y = f(x)$ from $x = a$ to $x = b$) such that: for all $a,b,c$, $g(a,b) + g(b,c) = g(a,c)$, and, if $f$ is linear on $\lbrack a,b\rbrack$, then $g(a,b)$ is the appropriate value (i.e., the formula for the area of a trapezoid: $((f(a) + f(b))/2)\cdot (b-a)$). You can then prove that the unique function $g(a,b)$ with these properties is $\int_a^b f(t)\,dt$.
The way area is dealt with in smooth infinitesimal analysis (i.e., isolate some conditions that an area function must have, then prove there is a unique function satisfying those conditions) is
not so different from what is done in the classical case. The problem is that it’s done over again for each problem type: that is, you use it once to determine the area of a cone, then once again
to prove the fundamental theorem of calculus, etc.
If I understand your question right, you’re wondering if you can do it once and for all. That is, can you prove that there is a unique function assigning to subsets of, say, $R^2$ an element of $R$ satisfying the appropriate conditions? I don’t know, but I would guess that the answer is “no.” I would have to think some more about that (and probably learn some more first!). If you have any thoughts, let me know.
6. Can these notions be extended to the calculus of variations?
7. Robert, I wouldn’t call myself an expert, but yes: the fact that smooth infinitesimal analysis is modeled in a topos means “smooth spaces” include function spaces, where the calculus of
variations naturally lives. For instance, one has a smooth space of smooth paths between two points of a manifold, and it is an easy proposition that tangent vectors in that smooth space are
equivalent to vector fields along a chosen path. One can go on to perform analysis on smooth functionals on such function spaces within this setting in a very intuitive fashion.
Have you seen Keisler's textbook
Elementary Calculus: An Infinitesimal Approach
(1976, 1986), available online for free at
Filed under Intuitionistic Logic, Smooth Infinitesimal Analysis, Toposes | {"url":"http://xorshammer.com/2008/08/11/smooth-infinitesimal-analysis/","timestamp":"2014-04-20T18:23:10Z","content_type":null,"content_length":"138365","record_id":"<urn:uuid:bf5e86d0-b3de-4b54-8b28-fef56da31e0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
hi ninjaman,
You've got in a muddle about where to put the pair that gives 12 and the pair that gives 6.
Look at my diagram below.
There are four ?numbers to find. The final factorisation will be like this
So ?1 times ?2 must come to 6t^2
and ?3 times ?4 must come to 12.
There are several ways of achieving this, but find one where the two 't' terms come to -18t together.
ps. In this example, it is also possible to factorise out a 6 before you start on the quadratic. This will make it easier to find the ?s.
And a factorisation is not complete until the numbers are factorised out too.
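For instance, if the quadratic in question is 6t^2 - 18t + 12 (a guess consistent with the numbers above, since the original problem isn't quoted in this reply), the working goes:
6t^2 - 18t + 12 = (6t - 6)(t - 2)    [the two 't' terms: -12t - 6t = -18t]
= 6(t - 1)(t - 2)
Or, factorising the 6 out first: 6t^2 - 18t + 12 = 6(t^2 - 3t + 2) = 6(t - 1)(t - 2).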
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=300653","timestamp":"2014-04-20T06:17:47Z","content_type":null,"content_length":"12607","record_id":"<urn:uuid:86a8d667-ff48-4c15-8120-186aba55f077>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
MrBayes (Huelsenbeck and Ronquist)
MrBayes is a program that estimates phylogeny using as input an alignment. The program uses Markov chain Monte Carlo to approximate the posterior probability distribution of trees.
Structurama (Huelsenbeck, Huelsenbeck, and Andolfatto)
Structurama infers population structure using as input genetic data for a set of individuals. It uses a Dirichlet process prior, which allows the number of populations to be a random variable.
MDIV is a program that will simultaneously estimate divergence times and migration rates between two populations under the infinite sites model and under a finite sites model (HKY). The program can be used to test if there is evidence for migration between two populations or evidence for shared recent common ancestry. In addition, you get maximum likelihood estimates of the demographic parameters. The program assumes that there is no recombination. The outputs of the program are integrated likelihood surfaces for the three parameters: q (two times the effective population size times the mutation rate), M (two times the migration rate) and T (the divergence time divided by the effective population size). For more information regarding the program, please see:
Nielsen, R. and J. W. Wakeley. 2001. Distinguishing Migration from Isolation: an MCMC Approach. Genetics 158: 885-896.
This version of the program is only applicable to a single locus and assumes equal population sizes in all populations. A program with enhanced features and better documentation is available from
Jody Hey's web site.
At the moment I only distribute a Windows executable version of the program. Please send enquiries regarding source code or executables for other platforms to me.
• Windows Executable (coming soon)
• Documentation (Readme file) (coming soon)
• Example infile (coming soon)
MISAT is a program for estimating the likelihood surface for q (4 times the effective population size times the mutation rate) for microsatellite data. Two models are implemented: a stepwise mutation model and a mutation model allowing multi-step mutations, i.e., mutational jumps larger than one repeat unit in size. There are several other programs available for doing this type of analysis, and to my knowledge, this is probably the slowest program publicly distributed. It is, by now, somewhat outdated, although the multi-step mutational model probably is not implemented in any other programs.
For more information, please see:
Nielsen, R. 1997. A Maximum Likelihood Approach to Population Samples of Microsatellite alleles. Genetics. 146: 711-716.
Nielsen R. 1997. A likelihood approach to populations samples of microsatellite alleles (vol 146, pg 711, 1997). Genetics. 147: 349-349.
Nielsen, R. and P. J. Palsbøll. 1999. Tests of Microsatellite Evolution: Multi-Step Mutations and Constraints on Allele Size. Mol. Phyl. Evol. 11: 477-484.
At the moment I only distribute a Windows executable version of the program. Please send enquiries regarding source code or executables for other platforms to Rasmus Nielsen.
• Windows Executable (coming soon)
• Documentation (Readme file in Word format) (coming soon)
• Example infile (coming soon)
SweepFinder is a program implementing the method described in:
Nielsen et al. 2005. Genomic scans for selective sweeps using SNP data. Genome Research 15: 1566-1575.
It can be used to detect the location of a selective sweep based on SNP data. It will also estimate the frequency spectrum of observed SNP data in the presence of missing data.
trueFS is a program used for finding the ascertainment corrected frequency spectrum based on ascertained SNP data. It can perform the corrections under multiple different models including double-hit
ascertainment models and ascertainment with or without overlap between the original ascertainment sample and the final genotyped sample. It uses a bootstrap method to quantify statistical uncertainty
in the estimates. For more information regarding the method, please see:
Nielsen, R., M. J. Hubisz and A. G. Clark. 2004. Reconstituting the frequency spectrum of ascertained SNP data. Genetics 168: 2373-2382.
• Source Code and Instructions (coming soon)
• Instructions (coming soon)
This program will allow the user to estimate selection coefficients relating to optimal codon usage. The basic methodology is similar to the codon based models implemented in the popular program PAML
by Ziheng Yang. However, it explicitly models selection for optimal codon usage on different lineages of a phylogeny. The program may be a bit hard to compile because it requires special libraries
(see documentation). For any analysis which can also be done in PAML, we recommend using PAML as PAML is much superior to our program in a number of ways including ease of use, computational speed,
how well it has been tested, etc.
• Source Code and Instructions (coming soon)
• Instructions (coming soon)
CodonRecSim is an old program written by R. Nielsen for simulating samples in a codon based models under the coalescent with recombination. This program was used in:
Anisimova, M., R. Nielsen and Z. Yang. 2003 Effect of recombination on the accuracy of the likelihood method for detecting positive selection at amino acid sites. Genetics 164: 1229-1236.
It is not very well-supported, but if you are used to simulating samples using Evolver (distributed in the PAML package by Ziheng Yang) you may be able to figure out how this program works. Most of the interface is modelled on evolver, and in many analyses it should work just as evolver does, but with an extra parameter: R, the population scaled recombination rate. However, there are a number of evolver options that are not implemented in the program. Notice that some code in this program is copyrighted to Ziheng Yang.
• Windows Executable and Source Code (coming soon)
PATRI (PaTeRnity Inference) is a program for paternity analysis of genetic data. The program requires genotypic, diploid data from one or more loci from mother-offspring pairs and from potential
fathers. Typical data might include microsatellite markers, Restriction Fragment Length Polymorphisms (RFLPs) or Single Nucleotide Polymorphisms (SNPs). Given such genotypic data, PATRI can calculate
posterior probabilities of paternity for all sampled offspring. When behavioral or ecological information can be used to divide the sampled males into different groups, PATRI can perform maximum
likelihood analyses of hypotheses regarding the relative reproductive success of those groups. The underlying statistical methodology was described in:
Nielsen, R., Mattila, D.K., Clapham, P.J. and Palsbøll, P.J. 2001. Statistical Approaches to Paternity Analysis in Natural Populations and Applications to the North Atlantic Humpback Whale. Genetics
For all genotypes, PATRI can estimate the posterior probability that a particular male has sired a particular offspring, assuming a uniform prior among all males in the population. The male
population size (N) can either be specified by the user as a fixed value, or uncertainty regarding N can be modeled using a uniform or Gaussian prior. Using a uniform prior corresponds to assuming no
prior information regarding the male population size, except that an upper bound can be specified. PATRI can also produce a maximum likelihood estimate of N based solely on the parent-offspring
genotypic data. The estimation of N assumes equal fecundity and unbiased sampling of males.
If sampled males can be divided into groups based on behavioral or ecological information, PATRI can be used to evaluate hypotheses regarding the relative reproductive success of these groups. For k groups the user starts with a full model containing k-1 parameters, α[2], α[3], ..., α[k], where α[i] is defined as the reproductive success of group i relative to group 1. The user can then enter restrictions on these parameters. For example, the hypothesis that males from groups i and j have equal reproductive success corresponds to the restriction α[i] = α[j]. Given a set of restrictions, PATRI can 1) maximize the likelihood and 2) plot a profile likelihood surface for any particular α[i]. The profile likelihood surface for α[i] is constructed by optimizing over all α[j], j ≠ i. The maximum likelihood values are stored in a table, allowing the user to perform likelihood ratio tests of various hypotheses regarding reproductive success, as illustrated below. This analysis can be done using a fixed value of N or by assuming N is uniformly or Gaussian distributed.
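To illustrate the kind of likelihood ratio test described above, here is a generic sketch (this is not PATRI's code, and the log-likelihood values are invented placeholders):

```python
# Generic likelihood ratio test between nested models, of the sort one would
# run on the table of maximized likelihoods PATRI reports. Numbers are made up.
from scipy.stats import chi2

logL_full = -1042.7        # maximized log-likelihood, full model
logL_restricted = -1046.9  # maximized log-likelihood under, e.g., alpha_i = alpha_j
df = 1                     # one restriction imposed

lrt = 2.0 * (logL_full - logL_restricted)  # likelihood ratio statistic
p_value = chi2.sf(lrt, df)                 # asymptotic chi-square p-value
print(f"LRT = {lrt:.2f}, p = {p_value:.4f}")
```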
• Windows Executable (coming soon)
• Executable for Linux on Sun processor (coming soon)
• Executable for Linux on Intel processor (coming soon)
• Documentation (Readme file) (coming soon)
• Example infile (coming soon)
MIMAR (MCMC estimation of the Isolation-Migration model Allowing for Recombination) is a Markov chain Monte Carlo method to estimate parameters of an isolation-migration model. It uses summaries of
polymorphism data at multiple loci surveyed in a pair of diverging populations or closely related species and in contrast to previous methods, allows for intralocus recombination. Note that you need
to know the ancestral allele at each polymorphic site in order to calculate the summary statistics. The method is described in Becquet and Przeworski (2007) Genome Research.
iMCMC (Huelsenbeck)
This program was inspired by Paul Lewis's fantastic windows program MCROBOT. iMCMC is a Macintosh application that illustrates Markov chain Monte Carlo (MCMC) for a simple landscape. Have fun!
DPPDiv (Heath, Holder, and Huelsenbeck)
DPPDiv is a program for estimating species divergence times and lineage-specific substitution rates on a fixed topology. The prior on branch rates is a Dirichlet process prior which clusters branches
into distinct rate classes. Alternative priors including the global molecular clock and the independent rates model are also available. The priors on node ages include the birth-death (and Yule)
model and the uniform distribution.
nSL (Ferrer-Admetlla et al)
nSL is a program for efficiently computing the nSL statistic described in Ferrer-Admetlla et al. 2014. On Detecting Incomplete Soft or Hard Selective Sweeps Using Haplotype Structure. MBE. Source
code, executable, instructions, and example files are included in the .zip. | {"url":"http://cteg.berkeley.edu/software.html","timestamp":"2014-04-16T07:14:00Z","content_type":null,"content_length":"16609","record_id":"<urn:uuid:5c84abc8-0bc6-4b93-9e41-7037c1e03f73>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Whirlwind Tour through Computer Architecture
Honors Computer Systems Organization (Prof. Grishman)
Goal: highest performance within
• technical constraints
• financial constraints
• (sometimes) compatibility constraints
How has this goal affected the choices of computer architecture (primarily, the choice of instruction sets)?
To understand the trade-offs, we must first understand something about the components from which a processor is built.
Gates
• perform the basic Boolean operations (and, or, not, ...) on single bits
• built from a few switches; speed limited by switching speed
• as technology improves, switches (transistors) get smaller; as a result, individual switches get faster and we can put more switches on a single chip (we now have a few million transistors on a single chip)
Combinatorial circuits
• are built out of gates
• can compute any function which can be stated as a Boolean formula, including arithmetic operations (add, subtract, ...)
• speed depends on number of gates a signal must go through
• multiplexer: select one of N inputs
• adder: add two binary numbers; fast adders take time on the order of log_2 of the number of bits
Registers
• hold data
• may be grouped together into register files
Memories
• act like large register files
• bigger memories are slower (main memory significantly slower than CPU)
Simple CPU Design
• combine several combinatorial circuits (adder, subtractor, logical operations) into one big combinatorial circuit which performs all the operations needed for the CPU: 'Arithmetic-Logic Unit'
• connect inputs and outputs of ALU to register file
• this simple design supports register op register --> register operations
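As a toy illustration of that last point (my sketch, not part of the original notes): the datapath is one shared ALU function selected by an opcode, wired between two register-file reads and one register write.

```python
# Toy register-op-register step: read two registers, run the shared ALU
# (one big "combinatorial circuit" selected by an opcode), write the result back.
ALU_OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
}

def execute(regs, op, rd, rs1, rs2):
    """regs[rd] <- regs[rs1] op regs[rs2]"""
    regs[rd] = ALU_OPS[op](regs[rs1], regs[rs2])

regs = [0, 7, 5, 0]           # a tiny register file
execute(regs, "add", 3, 1, 2)
print(regs[3])                # 12
```
| {"url":"http://cs.nyu.edu/courses/fall03/V22.0201-001/architecture1.html","timestamp":"2014-04-20T00:40:30Z","content_type":null,"content_length":"2652","record_id":"<urn:uuid:5519cfd3-38e5-4865-86fd-e2e7507c839c>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00603-ip-10-147-4-33.ec2.internal.warc.gz"}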
Thermo VLE problem with mathcad
The problem is not the actual solution of the problem - it is getting Mathcad to produce the answers (all homework is required to be done in Mathcad for this class). See the mathcad file here.
So, I need to find 3 things - bubble pressure, y1 and y2. I have 3 equations - modified Raoult's law (one for each species) and y1 + y2 = 1. I put these in a solve block, tell Mathcad to find the answers and... it turns red. The solve block is on the second page.
Can anyone point out what I'm doing wrong here? I run into trouble all the time with these Find functions. So far in this class I'm spending much more time trying to figure out Mathcad than I am
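For comparison, here is the same three-equation system set up in Python with SciPy (my sketch, not the poster's worksheet; the compositions, activity coefficients and vapor pressures are placeholder numbers). In Mathcad the analogous pitfall is the guess values assigned before the Given/Find block: a solve block that turns red is often just starting from a poor guess or mixing units.

```python
# Bubble-point via modified Raoult's law: y_i * P = x_i * gamma_i * Psat_i,
# together with y1 + y2 = 1. All numeric inputs below are made-up placeholders.
from scipy.optimize import fsolve

x = [0.4, 0.6]         # liquid mole fractions (assumed)
gamma = [1.25, 1.10]   # activity coefficients (assumed)
Psat = [120.0, 80.0]   # pure-component vapor pressures, kPa (assumed)

def residuals(v):
    P, y1, y2 = v
    return [y1 * P - x[0] * gamma[0] * Psat[0],   # modified Raoult, species 1
            y2 * P - x[1] * gamma[1] * Psat[1],   # modified Raoult, species 2
            y1 + y2 - 1.0]                        # vapor fractions sum to 1

P, y1, y2 = fsolve(residuals, [100.0, 0.5, 0.5])  # guess values matter here too
print(P, y1, y2)
```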
| {"url":"http://mathhelpforum.com/math-software/85129-thermo-vle-problem-mathcad.html","timestamp":"2014-04-21T03:45:58Z","content_type":null,"content_length":"28428","record_id":"<urn:uuid:2a6bed94-cda2-46ca-8315-8887230a4eaa>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Playing With Graphs: People in Albany Don't Own Kindles
A few days back, Matthew Beckler added the Kindle edition to his sales rank tracker for How to Teach Physics to Your Dog. Given my well-known love for playing with graphs of data, it was inevitable
that I would plot both of these in a variety of ways.
So, what do we learn from this? Well, we learn that people in the Albany, NY area don’t own Kindles:
OK, maybe that’s not obvious to everybody…
When you look at that graph, the blue line is the Amazon sales rank of the physical book edition, while the red line is the Amazon sales rank of the Kindle edition. The two track each other pretty
well for a while, but diverge dramatically after about 48 hours, with the physical book sales rank shooting into the triple digits, while the Kindle sales rank stayed around 3,500. So, what happened
at 48 hours?
The only significant development that I’m aware of that took place around then (which was about 1pm ET Sunday) is that both the Albany Times Union and the Schenectady Gazette ran articles about the
book (the Times Union had a piece about the book written for them by a freelance writer, the Gazette went with the AP review, plus a notice of this weekend’s signing). Nothing else happened around
that time that I know of.
So, this tells us that notices in the local papers were enough to drive up the sales rank of the physical book, but not the Kindle edition. So, people in the Albany area don’t own Kindles. Or, to be
more precise, people in the Albany area who read print newspapers (neither the Gazette nor the Times Union put the book on their web sites) don’t own Kindles.
Amazing what you can learn from looking at graphs.
The other obvious thing that you can do with these data is to look at what relationship, if any, exists between the book sales rank and the Kindle sales rank. The easiest way to get at this is to
plot one on the vertical axis and the other on the horizontal axis:
I’ve divided the data into two sets for this graph. The purple points are the first 48 hours of the data set, and the green are the last 48 hours. For the first 48 hours, they track each other pretty well– a straight line drawn through the purple points would come close to most of them, and has a slope close to 1 (1.27, to be precise). It’s not perfect, but it’s plenty good enough for social science.
The green points are way off that line, for the most part, but there’s a big clump of them over in the upper left, that would fit reasonably well to a line with a slope of a bit less than 3 (making a
rough cut of that group gives a slope of 2.93). Those points are the stretch from Sunday afternoon through Monday night, when the physical book rank was at its highest point.
There’s also a sort of a tail connecting the Sunday-Monday group, as the book rank drifts back up to more or less where it was before the dramatic spike. At the time of this writing, the book rank is
back up to 2500, which is about where it’s been since the AP review ran.
So that’s this week’s thrilling installment of Playing With Graphs…
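For anyone who wants to redo this sort of fit rather than eyeball it, the slope estimate is a couple of lines of NumPy (the numbers below are invented stand-ins, since the tracker data itself isn't reproduced in this post):

```python
# Least-squares slope of Kindle rank vs. book rank over one stretch of data.
import numpy as np

book_rank   = np.array([3200.0, 2900.0, 2500.0, 2100.0, 1800.0])  # placeholder series
kindle_rank = np.array([4100.0, 3700.0, 3300.0, 2700.0, 2400.0])  # placeholder series

slope, intercept = np.polyfit(book_rank, kindle_rank, 1)
print(f"slope = {slope:.2f}")  # compare with the ~1.27 quoted for the first 48 hours
```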
1. #1 Jérôme January 26, 2010
Sales rank is probably not the good data to plot here; sales volume would be more fitting (you might recover it by assuming that it follows a Pareto law).
2. #2 asad January 26, 2010
It’s not perfect, but it’s plenty good enough for social science.
I LOL’ed.
3. #3 Danny January 26, 2010
What an irresponsible analysis.
You’re supposed to be in a position of scientific authority here, yet you post a couple charts then unapologetically, without qualification, jump to some obscure conclusion because it makes for a
sensationalist headline? This is a terrible example to be setting for your readers.
You want to conclude from these two charts that “people in the Albany area who read print newspapers don’t own Kindles.”?
There are plenty of other explanations. Let me try a few:
1. Something else you don’t know about (god forbid) caused a few more people to buy print versions.
2. A larger chunk of the population (which includes disproportionately fewer Kindle owners) buys books on Sundays.
3. People who own Kindles are less likely than non-Kindle owners to be convinced by a stupid article they read in the local newspaper to buy the book.
4. People who read the newspaper and _and_ own a Kindle still prefer to buy some books in print.
5. Random variation.
You’re not being a scientist. You’re being a sensationalist. Then we wonder why science is so misrepresented in the media…
4. #4 idlemind January 26, 2010
Well I thought it was funny…
5. #5 tde January 26, 2010
6. Scientists should never, ever be funny.
6. #6 Danny January 26, 2010
Ok, then you’re not a very good blogger for failing to recognize that sarcasm doesn’t work on the internet. Clearly the first commenter thought this was a serious analysis.
7. #7 Skribb January 26, 2010
If I were a social scientist, I might examine the comments of this post and come to the unequivocally true conclusion that readers of science themed blogs are incapable of recognizing hyperbole and sarcasm, even when the blogger is laying it on awful thick.
@1 and especially @3: Oh come on! He was drawing absolute conclusions from a scatter plot. It is blatantly obvious that his conclusions are meant to be comical and an attempt to poke at the soft sciences.
8. #8 Sven DIMIlo January 27, 2010
For variables like these, without any obvious cause-effect relationship, you should actually be calculating the slope for a reduced-major-axis regression, rather than simple Least Squares. The
RMA slope is always higher.
But the slopes are irrelevant here anyway; it’s the correlation coefficients we want.
Yes, I’m having fun.
9. #9 BdN January 27, 2010
Ah! I’ll repeat what idlemind wrote : I thought it was funny… Though maybe a little unfair to social scientists. They would at least run a covariance matrix…
Ok, then you’re not a very good blogger for failing to recognize that sarcasm doesn’t work on the internet.
Or maybe you’re not a good enough blog reader ? Or not a regular reader of this particular blog ? Ever heard of inside jokes ?
And you left out one of the more plausible explanations. “Or, to be more precise, people in the Albany area who read print newspapers [...] don’t own Kindles.” : there may be a link between usage
of technology for reading newspapers and usage of technology for reading in general. Or maybe those who buy for immediate reading have different habits or buy at different hours of the day or
maybe it has to do with part of the population of the country waking up at another time or people seeing the print edition of the newspaper liked better the black and white picture of the book
rather than the color one,etc.
Or maybe I’m reading it backwards since sale rank is inverse number of copies sold : the more copies, the higher rank. 1000 copies = 3000th rank 3000 copies = 1000th rank. According to this, Mr.
Orzel is misreading the “data”, if we can call it so, since the spike in rank at 48 hours means the exact opposite : it “fell” to the 3 digits area. And the Kindle version “rose”.
So everybody in Albany who reads print papers DO OWN a Kindle.
And the second graph is also interesting : it clearly shows that the book (both versions) now sell more than in the beginning. And that, when more books are bought, well, more books are bought.
That’s a good lesson.
10. #10 BdN January 27, 2010
Or maybe I am the one misreading the graph…
*runs hiding under his bed*
11. #11 Mark January 27, 2010
Ok, then you’re not a very good blogger for failing to recognize that sarcasm doesn’t work on the internet.
Sarcasm doesn’t work on the internet? Oh really. Fantastic. Thanks so much for telling me.
Seems to be working just fine. Great post, btw.
| {"url":"http://scienceblogs.com/principles/2010/01/26/playing-with-graphs-people-in/","timestamp":"2014-04-17T22:35:15Z","content_type":null,"content_length":"89071","record_id":"<urn:uuid:ab36552e-d2c1-421d-9eb0-c0fd13694b67>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Measures of Central Tendency
A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data. As such, measures of central tendency are
sometimes called measures of central location. They are also classed as summary statistics. The mean (often called the average) is most likely the measure of central tendency that you are most
familiar with, but there are others, such as the median and the mode.
The mean, median and mode are all valid measures of central tendency, but under different conditions, some measures of central tendency become more appropriate to use than others. In the following
sections, we will look at the mean, mode and median, and learn how to calculate them and under what conditions they are most appropriate to be used.
Mean (Arithmetic)
The mean (or average) is the most popular and well known measure of central tendency. It can be used with both discrete and continuous data, although its use is most often with continuous data (see
our Types of Variable guide for data types). The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. So, if we have n values in a data set and
they have values x[1], x[2], ..., x[n], the sample mean, usually denoted by $\bar{x}$ (pronounced "x bar"), is:
$\bar{x} = \dfrac{x_1 + x_2 + \cdots + x_n}{n}$
This formula is usually written in a slightly different manner using the Greek capital letter $\Sigma$, pronounced "sigma", which means "sum of...":
$\bar{x} = \dfrac{\sum x}{n}$
You may have noticed that the above formula refers to the sample mean. So, why have we called it a sample mean? This is because, in statistics, samples and populations have very different meanings and these differences are very important, even if, in the case of the mean, they are calculated in the same way. To acknowledge that we are calculating the population mean and not the sample mean, we use the Greek lower case letter "mu", denoted as µ:
$\mu = \dfrac{\sum x}{n}$
The mean is essentially a model of your data set. It is the value that is most common. You will notice, however, that the mean is not often one of the actual values that you have observed in your
data set. However, one of its important properties is that it minimises error in the prediction of any one value in your data set. That is, it is the value that produces the lowest amount of error
from all other values in the data set.
An important property of the mean is that it includes every value in your data set as part of the calculation. In addition, the mean is the only measure of central tendency where the sum of the
deviations of each value from the mean is always zero.
When not to use the mean
The mean has one main disadvantage: it is particularly susceptible to the influence of outliers. These are values that are unusual compared to the rest of the data set by being especially small or
large in numerical value. For example, consider the wages of staff at a factory below:
│Staff │1  │2  │3  │4  │5  │6  │7  │8  │9  │10 │
│Salary│15k│18k│16k│14k│15k│15k│12k│17k│90k│95k│
The mean salary for these ten staff is $30.7k. However, inspecting the raw data suggests that this mean value might not be the best way to accurately reflect the typical salary of a worker, as most
workers have salaries in the $12k to 18k range. The mean is being skewed by the two large salaries. Therefore, in this situation, we would like to have a better measure of central tendency. As we
will find out later, taking the median would be a better measure of central tendency in this situation.
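To make the salary example concrete (this snippet is an addition, not part of the original guide):

```python
# Mean vs. median on the factory salary data above (values in $k).
import statistics

salaries = [15, 18, 16, 14, 15, 15, 12, 17, 90, 95]

print(statistics.mean(salaries))    # 30.7 -- dragged upward by the two outliers
print(statistics.median(salaries))  # 15.5 -- much closer to a typical salary
```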
Another time when we usually prefer the median over the mean (or mode) is when our data is skewed (i.e., the frequency distribution for our data is skewed). If we consider the normal distribution -
as this is the most frequently assessed in statistics - when the data is perfectly normal, the mean, median and mode are identical. Moreover, they all represent the most typical value in the data
set. However, as the data becomes skewed the mean loses its ability to provide the best central location for the data because the skewed data is dragging it away from the typical value. However, the
median best retains this position and is not as strongly influenced by the skewed values. This is explained in more detail in the skewed distribution section later in this guide.
The median is the middle score for a set of data that has been arranged in order of magnitude. The median is less affected by outliers and skewed data. In order to calculate the median, suppose we have the data below:
65 55 89 56 35 14 56 55 87 45 92
We first need to rearrange that data into order of magnitude (smallest first):
14 35 45 55 55 56 56 65 87 89 92
Our median mark is the middle mark - in this case, 56 (highlighted in bold). It is the middle mark because there are 5 scores before it and 5 scores after it. This works fine when you have an odd
number of scores, but what happens when you have an even number of scores? What if you had only 10 scores? Well, you simply have to take the middle two scores and average the result. So, if we look
at the example below:
65 55 89 56 35 14 56 55 87 45
We again rearrange that data into order of magnitude (smallest first):
14 35 45 55 55 56 56 65 87 89
Only now we have to take the 5th and 6th score in our data set and average them to get a median of 55.5.
The mode is the most frequent score in our data set. On a histogram it represents the highest bar in a bar chart or histogram. You can, therefore, sometimes consider the mode as being the most
popular option. An example of a mode is presented below:
Normally, the mode is used for categorical data where we wish to know which is the most common category, as illustrated below:
We can see above that the most common form of transport, in this particular data set, is the bus. However, one of the problems with the mode is that it is not unique, so it leaves us with problems
when we have two or more values that share the highest frequency, such as below:
We are now stuck as to which mode best describes the central tendency of the data. This is particularly problematic when we have continuous data because we are more likely not to have any one value
that is more frequent than the other. For example, consider measuring 30 people's weights (to the nearest 0.1 kg). How likely is it that we will find two or more people with exactly the same weight (e.g., 67.4 kg)? The answer is probably very unlikely - many people might be close, but with such a small sample (30 people) and a large range of possible weights, you are unlikely to find two
people with exactly the same weight; that is, to the nearest 0.1 kg. This is why the mode is very rarely used with continuous data.
Another problem with the mode is that it will not provide us with a very good measure of central tendency when the most common mark is far away from the rest of the data in the data set, as depicted
in the diagram below:
In the above diagram the mode has a value of 2. We can clearly see, however, that the mode is not representative of the data, which is mostly concentrated around the 20 to 30 value range. To use the
mode to describe the central tendency of this data set would be misleading.
Skewed Distributions and the Mean and Median
We often test whether our data is normally distributed because this is a common assumption underlying many statistical tests. An example of a normally distributed set of data is presented below:
When you have a normally distributed sample you can legitimately use both the mean or the median as your measure of central tendency. In fact, in any symmetrical distribution the mean, median and
mode are equal. However, in this situation, the mean is widely preferred as the best measure of central tendency because it is the measure that includes all the values in the data set for its
calculation, and any change in any of the scores will affect the value of the mean. This is not the case with the median or mode.
However, when our data is skewed, for example, as with the right-skewed data set below:
we find that the mean is being dragged in the direction of the skew. In these situations, the median is generally considered to be the best representative of the central location of the data. The more
skewed the distribution, the greater the difference between the median and mean, and the greater emphasis should be placed on using the median as opposed to the mean. A classic example of the above
right-skewed distribution is income (salary), where higher-earners provide a false representation of the typical income if expressed as a mean and not a median.
If dealing with a normal distribution, and tests of normality show that the data is non-normal, it is customary to use the median instead of the mean. However, this is more a rule of thumb than a
strict guideline. Sometimes, researchers wish to report the mean of a skewed distribution if the median and mean are not appreciably different (a subjective assessment), and if it allows easier
comparisons to previous research to be made.
Summary of when to use the mean, median and mode
Please use the following summary table to know what the best measure of central tendency is with respect to the different types of variable.
│Type of Variable │Best measure of central tendency │
│Nominal │Mode │
│Ordinal │Median │
│Interval/Ratio (not skewed)│Mean │
│Interval/Ratio (skewed) │Median │
For answers to frequently asked questions about measures of central tendency, please go to the next page. | {"url":"https://statistics.laerd.com/statistical-guides/measures-central-tendency-mean-mode-median.php","timestamp":"2014-04-18T23:15:44Z","content_type":null,"content_length":"18909","record_id":"<urn:uuid:377fcceb-67ea-4539-9916-f5f9b4c78db9>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics[Check] - check the occurrence of more than one noncommutative or anticommutative object in commutative products and the correctness of the free and repeated spacetime indices in a
mathematical expression
Calling Sequence
Check(expression, indices = truefalse, kind_of_indices, products = truefalse, quiet)
expression - any mathematical expression or relation between expressions
indices = truefalse - (optional) boolean value indicating whether to check the free and repeated indices in expression for correctness (default value is true)
kind_of_indices - (optional) one of the keywords free, repeated, or all; return a list or set with the specified kind of indices found
products = truefalse - (optional) boolean value indicating whether to check commutative products regarding incorrect presence of anti or noncommutative objects (default value is true)
quiet - (optional) indicates that the check should proceed without displaying messages on the screen
• The Check command receives a mathematical expression or a relation between expressions (such as an equation) and checks for incorrect free or repeated spacetime indices or ill-formed commutative
products involving noncommutative or anticommutative objects.
• When checking the repeated and free indices of tensor objects (see Physics,Define), Check takes into account the sum rule for repeated indices. That is, it gives an error when the same index
appears more than once in a product, or when the free indices of the summands of a given expression, or of the sides of a relation (for example, of an equation), are not the same.
• If only the expression to be checked is given, Check makes the check of the indices mentioned and either returns NULL, displaying the message The repeated and free indices in the given expression
check ok., or interrupts with an error pointing to the problem detected.
• To avoid this checking of indices, use the optional argument indices = false.
• When any of the optional keywords free, repeated, or all are given, instead of NULL, Check returns all, the free, or the repeated indices per term, respectively, as explained in the examples
• When the Physics package is loaded, a more general `*` product operator is also loaded. Unlike the default Maple product operator (herein called the commutative product operator), the operator
loaded with Physics does not assume commutativity of its operands, and instead operates like a commutative, anticommutative, or noncommutative product operator, depending on the operands. However,
after loading Physics, the original commutative product operator is still available as :-`*`, and can be invoked as a function.
• Regarding commutative products constructed by :-`*`, Check scans the received expression for incorrect occurrences with more than one noncommutative or anticommutative operand.
• If the expression checks okay, a related message is displayed. Otherwise, an error message is triggered, and the commutative product involving not commutative objects is pointed out.
• To avoid this checking of products, use the optional argument products = false.
Check free and repeated indices found in tensor expressions
To check the free and repeated indices found in tensor expressions for correctness, first Define the tensor objects of your problem.
Consider now an expression where the (spacetime) tensors defined above appear in products and sums.
By inspecting this expression, you can see that $\mu$ is a repeated index, so the sum rule over $\mu$ ranging from 1 to the dimension of the spacetime is assumed, and $\nu$ and $\rho$ are free indices, the same in both terms of the sum. So everything is correct, and the Check command returns NULL (that is, it just displays the message).
To avoid displaying the message, use the optional keyword quiet, as in Check(expr, quiet). To actually return the free and repeated indices, use the optional parameter kind_of_indices.
When the free or repeated indices are incorrect, an error interruption points to the problem found.
Error, (in Physics:-Check) wrong use of the summation rule for repeated indices: `mu repeated 3 times`, in A[mu, mu]*B[mu, rho]
Error, (in Physics:-Check) found different free indices in different operands of a sum; in operand 1: [nu, rho], in operand 2: [mu, rho]
Check noncommutative products
Note that when loading Physics at the beginning of this Examples section we have loaded the `*` operator that comes with the package, designed to handle anticommutative and noncommutative operands
in products. For the purpose of illustrating Check with noncommutative variables, unload that command, and set a macro to refer to it instead:
Now set prefixes for identifying anticommutative and noncommutative variables.
So in what follows, variables prefixed by Q or Z, followed by a positive integer, are considered anticommutative and noncommutative, respectively.
Consider now the following product constructed by the generalized product operator of the Physics package (in what follows, `&*` represents this operator, due to the macro used), and also using
the global product operator `*`
Although perhaps it is not evident on inspection, this product is ill-formed.
Error, (in Physics:-Check) found more than 1 anticommutative object in the commutative product: Q5*Q7
Analogously, products can be ill-formed regarding other types of problems.
Error, (in Physics:-Check) found more than 1 noncommutative object in the commutative product: Z5*Z7
Error, (in Physics:-Check) found anticommutative and noncommutative objects in the commutative product Q5*Z7
When the products check okay, a message is displayed.
To avoid displaying the message, use the optional keyword quiet, as in Check(%, quiet).
See Also
Define, Physics, Physics conventions, Physics examples, Physics/`*`, Setup
| {"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=Physics/Check","timestamp":"2014-04-18T21:22:29Z","content_type":null,"content_length":"206313","record_id":"<urn:uuid:d8bc8e83-fbcb-46d9-872a-22f8ae0c513a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of modular forms outside Number Theory?
Are there applications of modular forms to areas other than Number Theory (and Galois Theory) such as Combinatorics, Algebraic Topology, Algebraic Geometry, Theoretical Physics,...?
12 Yup. – S. Carnahan♦ Oct 7 '10 at 4:02
3 A related question has already been asked, with answers that (somewhat adequately) cover your question too: mathoverflow.net/questions/24604/… – Peter Humphries Oct 7 '10 at 4:06
Googling "applications of modular forms" yields 329,000 hits. – David Hansen Oct 7 '10 at 4:09
That would be an ecumenical matter – Yemon Choi Oct 7 '10 at 5:18
2 You might consult Sarnak's Some Applications of Modular Forms: books.google.co.uk/… – Robin Chapman Oct 7 '10 at 6:31
closed as not a real question by HJRW, Andy Putman, Ryan Budney, Pete L. Clark, Yemon Choi Oct 7 '10 at 5:19
1 Answer
I am by no means an expert, in fact I don't really know what a modular form is! But yes.
Wikipedia, http://en.wikipedia.org/wiki/Category:Modular_forms
or if you want more,
http://tinyurl.com/25unfrb
You may want to try asking a more specific question if there's something you've been wondering about. Otherwise it seems a bit too vague to really address here.
| {"url":"http://mathoverflow.net/questions/41350/applications-of-modular-forms-outside-number-theory","timestamp":"2014-04-17T04:29:59Z","content_type":null,"content_length":"48393","record_id":"<urn:uuid:c9738f27-b4ee-4dcb-8efb-4e82b830707a>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Banach-Mazur Distance of Central Sections of a Centrally Symmetric Convex Body
Marek Lassak
Institute of Mathematics and Physics, University of Technology
Kaliskiego 7, 85-796 Bydgoszcz, Poland, e-mail: lassak@utp.edu.pl
Abstract: We prove that the Banach-Mazur distance between any two central sections of co-dimension $c$ of any centrally symmetric convex body in $E^n$ is at most $(2c+1)^2$.
Keywords: convex body, section, Banach-Mazur distance
Classification (MSC2000): 52A21, 46B20
Full text of the article:
Electronic version published on: 26 Feb 2008. This page was last modified: 28 Jan 2013.
© 2008 Heldermann Verlag
© 2008–2013 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/BAG/vol.49/no.1/18.html","timestamp":"2014-04-17T18:32:56Z","content_type":null,"content_length":"3684","record_id":"<urn:uuid:f49b197c-045e-4d6b-8d3a-64143f0c728b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00635-ip-10-147-4-33.ec2.internal.warc.gz"} |
explain why segments connecting any pair of corresponding vertices of congruent pentagons are congruent. Make a sketch to support your answer.
| {"url":"http://openstudy.com/updates/50a6457be4b044c2b5fb03c1","timestamp":"2014-04-19T22:41:52Z","content_type":null,"content_length":"60702","record_id":"<urn:uuid:bd4d1c0f-7a61-4815-9785-d3fdce65132d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
How To Convert Decimal Numbers To Fractions
How To Change Decimals To Fractions
Bluetutors more than delivers on changing decimals into fractions. Beyond the basics, this short film will also show how to convert a recurring decimal into a fraction.
Hi, I'm Peter Edwards from Bluetutors. We teach children of all ages, right from primary school to degree level and we find the highest quality tutors. And today I'm going to teach you some maths.
We're now going to look at how to change a decimal into a fraction. And so we're going to start off with a fairly simple example and hopefully expand that into something a bit more complicated. So if
we look at the first example we have up here, 0.12. So what you have to think about when you're converting this to a fraction is how many decimal places do we have. In this case, we have two.
The first decimal place represents tenths and the second represents hundredths. So, this in fact is equal to 12/100. And we can simplify that by dividing top and bottom by four to get 3/25.
So this decimal here, 0.12, is equal to 3/25. So let's do a similar thing with this decimal now.
In fact, you can see we have four decimal places. So we have tenths, hundredths, thousandths, and ten thousandths. So this is going to be equal to 7524/10000.
And so again we can simplify that. This might take me a while. So we'll divide top and bottom by two first to give us 3762/5000 which is equal to 1881/2500.
And so that is that fraction simplified. Now, we're going to look at a more complicated situation where we have a decimal but these two dots here mean that that line of digits is repeated over and
over again. This is a recurring decimal which never ends and we're going to try and work out what fraction this is.
Now, the way to do that is to take this and in fact, multiply it so that this decimal point comes after this seven here. So we need the point to jump six times. So we're going to multiply this by one million.
So we're going to call this number here 'x' and we're going to say 1,000,000 times x is equal to 142857.142857. And again this recurs over and over again.
Now we have this, what we can do is take our other x and what we're going to do now is take this bottom line away from this top line. That leaves us with 999999x equal to 142857. And so x is equal to 142857/999999.
And if you put that into your calculator and work it out, you will see that that is equal to 1/7. And so that is how to work out a fraction when you have a recurring decimal. And as long as it is a recurring decimal and not an irrational number, then you'll always be able to do that.
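If you want to check these conversions by machine (an addition, not part of the video), Python's exact rational arithmetic does both cases:

```python
# Checking the video's conversions with Python's fractions module.
from fractions import Fraction

print(Fraction("0.12"))          # 3/25
print(Fraction("0.7524"))        # 1881/2500
print(Fraction(142857, 999999))  # 1/7 -- the recurring-decimal case
```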
And that is how to convert decimals into fractions. | {"url":"http://www.videojug.com/film/how-to-change-decimals-to-fractions","timestamp":"2014-04-19T14:29:44Z","content_type":null,"content_length":"40796","record_id":"<urn:uuid:1e5fa946-c9fe-4449-9371-e7450445596e>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the characteristic EQ of the following matrix.
@TuringTest pic to follow.
[7*lambda-8+(lambda-5)*(lambda(lambda)-1)] seems to be the answer, according to my book; in the first question it gave the "characteristic polynomial" as the answer... So I wanted to do the same
which problem are you doing?
all of them in exercise 6?
I assume you know to subtract lambda from the diagonals, then find the determinant of that matrix?
I'm doing 6E
MY book just shows for a 2x2 matrix that it's lambda - # on the main diagonal, and everything else is negated.
so I took the determinant for the 3x3, but my book wants the characteristic EQ, which is what I posted above, but the answer in the book showed the characteristic polynomial, which was lambda^3 etc etc and then sifted out....
\[A=\left[\begin{matrix}5&0&1\\1&1&0\\-7&1&0\end{matrix}\right]\]\[\det(\lambda I-A)=\left|\begin{matrix}\lambda-5&0&-1\\-1&\lambda-1&0\\7&-1&\lambda\end{matrix}\right|=0\]take the determinant and what do you get?
I got [7*lambda-8+(lambda-5)*(lambda(lambda)-1)]
as shown in my pic
I don't think you simplified correctly...
Idk check my pic, that's what maple gave me anyways....
I have always done det(A - lambda*I)=0
That's what the book shows us.
It comes from \[ Ax =\lambda x\] or \[ Ax -\lambda x=0\] \[ (A -\lambda I) x=0\]
\[(\lambda-5)(\lambda-1)(\lambda)+(-1)^3-[7(\lambda-1)(-1)]=0\]what @phi said is true, you can do it either way
Yeah the book says that, then goes into normal determinants.
Why is it only [7(λ−1)(−1)]?
the other parts have 0's so we can ignore them
and why do you only do it twice? I'm so confused. I thought the diagonal rule is 3 diags - 3 diags.
oic yeah.
deerrp :P
I don't like Maple's answer >( Wolfram doesn't either :P.
I use co-factors: (λ-5) * det(lower right 2x2); ignore the 0; then -1 * det(lower left 2x2)
I hate co factors... From TT's example I get [-7*lambda(-1)+1+(lambda(lambda-1))(lambda)] = 0
[((lambda-5)(lambda-1))(lambda)+(-1)^3]+[-7*(lambda-1)(-1)] = 0; [-7 lambda(-1) + 1 + lambda(lambda - 1)(lambda)] = 0
I just took what you did and entered it into Maple :P.
oh I think I know what I did :P.
> [((lambda-5)(lambda-1))(lambda)+(-1)^3]+[-7*(lambda-1)(-1)] = 0; [-7 lambda(-1) + 1 + lambda(lambda - 1)(lambda)] = 0
I forgot to add some extra brackets :). Does this shizzle look better >(
Same thing it looks like >(
\[(\lambda-5)(\lambda-1)(\lambda)+(-1)^3-[7(\lambda-1)(-1)]=0\]\[\lambda^3-6\lambda^2+5\lambda-1+7\lambda-7=0\]\[\lambda^3-6\lambda^2+12\lambda-8=0\]I could have messed up, but I did it by eye...
TT never messes up :)
hehe, that attitude has cost me plenty of credits, so I strongly suggest you double-check me, and everyone else for that matter, on everything. I appreciate the confidence though :)
Normally maple gives me the good answer, but nope it's stupid :P.
yay :D always trust your brain first is the moral, I'd say. Ok, dinner time, see ya!
NOOOOOOOOO!! 1 more question plz.
\[\lambda^3-6\lambda^2+12\lambda-8=(\lambda-2)^3=0\]triple eigenvalue, you can manage that I think ;) I really have to go, food getting cold; read the link I gave you and good luck!!!!!!!!!!!!
Thanks, diff Q on the eigenvalues though :)
COME BACK SOON SO I CAN ASK MOAR Q'S!
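(Added as an independent check, not part of the original thread: SymPy confirms the final answer.)

```python
# Verify the characteristic polynomial of A and its triple eigenvalue.
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[5, 0, 1], [1, 1, 0], [-7, 1, 0]])
p = (lam * sp.eye(3) - A).det()  # det(lambda*I - A)
print(sp.expand(p))              # lambda**3 - 6*lambda**2 + 12*lambda - 8
print(sp.factor(p))              # (lambda - 2)**3, i.e. a triple eigenvalue at 2
```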
| {"url":"http://openstudy.com/updates/50a9772ce4b0129a3c905787","timestamp":"2014-04-20T15:59:46Z","content_type":null,"content_length":"134290","record_id":"<urn:uuid:e3f2c550-6107-4b3a-9539-9169dcf9e187>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Non-quasi separated morphisms
What are some examples of morphisms of schemes which are not quasi separated?
A morphism is separated if its diagonal is a closed immersion. A morphism is quasi-separated if its diagonal is quasi-compact. In particular, a quasi-separated scheme over a field has the property
that the intersection of two affine open subsets is quasi-compact.
Anton Geraschenko Sep 29 '09 at 19:54
Is there a finite dimensional example?
David Zureick-Brown♦ Sep 30 '09 at 12:26
Suppose $U\hookrightarrow X$ is a non-quasi-compact open immersion. Then you can glue two copies of $X$ together along $U$ (effectively doubling up the complement of $U$) to get a
non-quasi-separated scheme $Y$.
By the assumption that $U\hookrightarrow X$ is not quasi-compact, there is some open affine $W$ of $X$ such that the intersection of $W$ and $U$ is not quasi-compact. So there are two copies of $W$
sitting inside of $Y$ (one for each copy of $X$). The intersection of these two is exactly $U\cap W$, which is not quasi-compact. So we found two open affines in $Y$ whose intersection is not
quasi-compact, which shows that $Y$ is not quasi-separated.
Now we just have to find some non-quasi-compact open immersions. The complement of the origin in $\mathbb A^\infty$ is one (so $\mathbb A^\infty$ with a doubled origin is a non-quasi-separated scheme).
Edit: Here's another one that I don't completely understand, but gives a finite-dimensional example (zero-dimensional in fact). Consider $X=Spec(\overline{\mathbb Q}\otimes_{\mathbb Q}\overline{\mathbb Q})$. Topologically, $X=Gal(\overline{\mathbb Q}/\mathbb Q)$, with the profinite topology (perhaps somebody could explain how to see this in a comment). In particular, any point is closed, but the complement $U$ of a point is not quasi-compact, so we get another example of a non-quasi-compact open immersion, so $X$ with a doubled point is non-quasi-separated.
I was just in the middle of trying the example you just edited in! Profinite topology... very nice :) I'd also like to see an explanation of this...
Andrew Critch Nov 6 '09 at 18:13
+1: nice answer.
Paul Balmer May 9 '10 at 0:53
A quick explanation of the Galois example: $Spec\,\overline{\mathbb Q} \to Spec\,\mathbb Q$ is a pro-torsor, in fact a pro-Galois one, with Galois group $Gal(\overline{\mathbb Q}/\mathbb Q)$. If we take the fiber product of the torsor against itself, we get the product of $Spec\,\overline{\mathbb Q}$ and the Galois group. This is set-theoretically (not scheme-theoretically) a disjoint union of copies of $Spec\,\overline{\mathbb Q}$, indexed by the Galois group.
shenghao Oct 10 '10 at 15:13
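(Editorial aside: a sketch unpacking shenghao's comment, using only the standard fact about tensor products of Galois extensions.) For a finite Galois extension $K/\mathbb Q$ with group $G$,
\[K \otimes_{\mathbb Q} K \;\cong\; \prod_{g \in G} K, \qquad a \otimes b \mapsto (a\,g(b))_{g\in G},\]
so $\operatorname{Spec}(K\otimes_{\mathbb Q}K)$ is the finite discrete set $G$. Since $\overline{\mathbb Q}\otimes_{\mathbb Q}\overline{\mathbb Q}$ is the colimit of these $K\otimes_{\mathbb Q}K$ over the finite Galois subextensions $K$, applying Spec turns the colimit into an inverse limit of finite discrete sets:
\[\operatorname{Spec}(\overline{\mathbb Q}\otimes_{\mathbb Q}\overline{\mathbb Q}) \;=\; \varprojlim_K \operatorname{Gal}(K/\mathbb Q) \;=\; \operatorname{Gal}(\overline{\mathbb Q}/\mathbb Q),\]
with the inverse-limit topology, which is exactly the profinite topology.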
Here is some intuitive propaganda for Anton's answer...
We know that a qsep (quasi-separated) scheme (over $\mathbb{Z}$) is precisely one where the intersection $U\cap V$ of any two open affines, $U=Spec(A)$ and $V=Spec(B)$, is quasi-compact. Looking at complements gives a different perspective: their differences $U\setminus V$ and $V\setminus U$ are cut out by finitely many elements in $A$, $B$ respectively, meaning that these differences are "easy to see".
I'd say this justifies the following credo:
□ A quasi-separated scheme is one where any two open affines are "easy to distinguish".
□ A non-qsep scheme is one containing some "subtle distinction" between open affines.
The two copies of $\mathbb{A}^\infty$ in Anton's answer differ only by the origin, which is "hard to see" in that it cannot be cut out by finitely many ring elements, and I'd say using
infinitely many variables to cut out one point is about the most natural way to achieve this. Thus, I like to characterize non-qsep schemes as containing "(infinitely) subtle distinctions"
such as this one.
Further tinkering yields a similar way to think about a qsep morphism $f:X\to Y$. I'd say the corresponding credo is that:
□ A quasi-separated morphism is one which preserves the existence of "subtle distinctions".
□ A non-qsep morphism is one which destroys some "subtle distinctions".
This helps intuitivize theorems like:
(1) "Any map from a qsep scheme is qsep", because it has no subtle distinction that can be destroyed.
(2) "If $Y$ is qsep, then $f:X\to Y$ is qsep iff $X$ is qsep", since $f$ destroys subtle distinctions iff $X$ has them.
(3) "If $g\circ f$ is qsep, then $f$ is qsep", since if $f$ destroyed some subtle distinction, then $g$ could not recover it.
Here is a coarse and a fine justification for this credo in each direction...
Coarse version: By 1971 EGA I 6.1.11, for any cover of $Y$ by qsep opens $V_{i}$, $f$ is qsep iff each preimage $f^{-1}(V_i)$ is qsep. Thus, $f$ is non-qsep iff there is some qsep open $V\subseteq Y$ such that $f^{-1}(V)$ is non-qsep, meaning it contains some subtle distinction which is lost after application of $f$.
Fine version: Suppose $f$ is qsep. By 1971 EGA I 6.1.9, fibre products and compositions of qsep morphisms are qsep, and any universal injection is qsep (for example any immersion). Now suppose $S\hookrightarrow X$ and $T\hookrightarrow Y$ are any universal injections such that $f|_S$ factors through $T$, for example if $T$ is the scheme-theoretic image of $S$. Then $T$ qsep $\Rightarrow$ $S$ qsep, hence $S$ non-qsep $\Rightarrow$ $T$ non-qsep, meaning $f$ preserves the existence of subtle distinctions in passing from any such $S$ to $T$.
| {"url":"http://mathoverflow.net/questions/37/non-quasi-separated-morphisms?sort=oldest","timestamp":"2014-04-20T06:08:07Z","content_type":null,"content_length":"64348","record_id":"<urn:uuid:3475cb2b-16a3-4bdb-9e3b-9b650ab33f81>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
How can I draw this surface using Matlab?
March 14th 2012, 08:46 PM #1
I want to draw this surface on $[0,1]^3$:
$z=x$ for $x\geq y$ and $z=y$ for $y>x$
I know "meshgrid" coupled with "surface" command allows you to draw $z=x$ and $z=y$, respectively. But I have no idea how to account for the inequalities.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/math-software/195982-how-can-i-draw-surface-using-matlab.html","timestamp":"2014-04-21T00:32:51Z","content_type":null,"content_length":"29936","record_id":"<urn:uuid:7f2758bd-56fb-4744-a778-a1354a914e09>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00055-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hmmm ...
you know that when x=1, the value of y=0, also the slope (y') is flat, and the change of slope (y'') is 0.
The only value you don't know at x=1 is y''', and that can be solved:
3y'''+ 5y''-y' +7y=0 ==> 3y'''+ 0 - 0 + 0=0 ==> y''' must equal 0
So, all given values are 0 and all flat at x=1 ...
Now what happens as you move away from x=1? If y were to change then the slope (y') will change, and you know that the slope is constrained by the equation 3y'''+ 5y''-y' +7y=0.
So let us say that as x increases, y increases. So the slope of y goes from 0 to positive, and so the rate of change in slope (y'') must also increase, and hence y''' also! So they must all increase.
What would that do to: 3y''' + 5y'' - y' + 7y = 0 ?
I suspect (but haven't got as far as proving) that it is not possible to have y increase, because the various rates of change would make 3y'''+ 5y''-y' +7y ≠ 0.
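(Editorial note: the hunch can be closed out with the uniqueness theorem for linear ODEs: with y(1) = y'(1) = y''(1) = 0, the only solution of 3y''' + 5y'' - y' + 7y = 0 is y = 0, so y cannot move away from 0 at all. A minimal numerical sanity check, assuming scipy is available:)

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, u):                 # u = [y, y', y'']
    y, yp, ypp = u
    return [yp, ypp, (yp - 5*ypp - 7*y) / 3]   # from 3y''' + 5y'' - y' + 7y = 0

sol = solve_ivp(rhs, (1.0, 10.0), [0.0, 0.0, 0.0])
print(np.abs(sol.y).max())     # 0.0 -- the zero solution never leaves zero
```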
Sorry, but that is as far as I have got. | {"url":"http://www.mathisfunforum.com/post.php?tid=1821&qid=16968","timestamp":"2014-04-21T07:22:44Z","content_type":null,"content_length":"16542","record_id":"<urn:uuid:74ab8e75-96eb-40df-8cdf-d916053e8852>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prediction of scattering cross sections using averaged models.
ASA 129th Meeting - Washington, DC - 1995 May 30 .. Jun 06
5pSA8. Prediction of scattering cross sections using averaged models.
Douglas M. Photiadis
David Hughes
Naval Res. Lab., Washington, DC 20375-5350
The presence of internal structure can greatly alter the acoustic behavior of elastic structures. Even in relatively simple systems, it is necessary to employ approximate models, effectively
averaging out unwanted detail. Some phenomenological aspects can be easily obtained in this way provided the ``base'' structure is not too complicated. For example, the locus of peaks in the
scattering cross section as a function of frequency and angle can often be predicted in a deterministic manner, but estimating the actual scattering levels requires more sophisticated modeling. One
approach to this modeling problem is to employ a ``fuzzy'' structures paradigm in which a Neumann series involving the random aspects of the internal structure is averaged. A more sophisticated
approach involves directly constructing and approximating equations for the desired averages; employing the Dyson and Bethe-Salpeter equations of random media theory. These techniques have been
applied to predict the scattering cross section from an irregular ribbed structure. The results will be discussed and compared. | {"url":"http://www.auditory.org/asamtgs/asa95wsh/5pSA/5pSA8.html","timestamp":"2014-04-17T07:53:14Z","content_type":null,"content_length":"1751","record_id":"<urn:uuid:fec6a49a-78bc-45c3-a94d-73b6d70bc02b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00277-ip-10-147-4-33.ec2.internal.warc.gz"} |
Earliest Known Uses of Some of the Words of Mathematics (P)
Last revision: March 28, 2014
p-ADIC INTEGER was coined by Kurt Hensel (1861-1941) (Katz, page 824).
The p* FORMULA, also called Barndorff-Nielsen’s formula, was introduced by Ole Barndorff-Nielsen in his article “On a Formula for the Distribution of the Maximum Likelihood Estimator,” Biometrika, 70
, (1983), 343-365.
P-VALUE and prob-value. David (1995) discusses the difficulties in dating P-value, the idea of which goes back to Laplace--at least--before opting for a reference from 1960! Subsequently David (1998)
chose W. E. Deming's Statistical Adjustment of Data of 1943. When Deming wrote the phrase "value of P" was current. It was used in Karl Pearson's (1900) "On the Criterion that a Given System of
Deviations from the Probable in the Case of Correlated System of Variables is such that it can be Reasonably Supposed to have Arisen from Random Sampling" (Philosophical Magazine, 50, 157-175) and
used very heavily in R. A. Fisher's Statistical Methods for Research Workers (1925). The use of P-values (or prob-values) is often set against the use of fixed significance levels, especially 5%. It
is ironical then that the "value of P" should feature so strongly in Fisher's book when that work also did so much to popularise the use of the 5% level. [John Aldrich]
See also SIGNIFICANCE and HYPOTHESIS AND HYPOTHESIS TESTING.
PAIRWISE. A JSTOR search found the term in P. Jordan; J. v. Neumann; E. Wigner “On an Algebraic Generalization of the Quantum Mechanical Formalism,” Annals of Mathematics, 35, (1934), 29-64. The
phrase “a set of pairwise orthogonal unit elements” appears on p. 35.
PANGEOMETRY is the term Nicholas Lobachevsky (1796-1856) gave to his non-Euclidean geometry (Schwartzman, p. 157).
PARABOLA was probably coined by Apollonius, who, according to Pappus, had terms for all three conic sections. Michael N. Fried says there are two known occasions where Archimedes used the terms
“parabola” and “ellipse,” but that “these are, most likely, later interpolations rather than Archimedes own terminology.”
However, G. J. Toomer believes the names parabola and hyperbola are older than Apollonius, based on an Arabic translation of Diocles' On burning mirrors.
PARABOLIC GEOMETRY. See hyperbolic geometry.
PARACOMPACT. The term and the concept are due to J. Dieudonné (1906-1992), who introduced them in Une généralisation des espaces compacts, J. Math. Pures Appl., 23 (1944) pp. 65-76. A topological
space X is paracompact if (i) X is a Hausdorff space, and (ii) every open cover of X has an open refinement that covers X and which is locally finite. The usefulness of the concept comes almost
entirely from condition (ii), while the role of condition (i) has been somewhat controversial. Thus, in his book General Topology (1955), John Kelley (p. 156) replaces (i) by the condition that X be
regular (and his definition of regularity does not include the Hausdorff separation axiom), while some other authors do not even mention (i) in defining paracompactness. In any case, however, it is
possible to state this important fact (conjectured by Dieudonné in the paper above): every metric space is paracompact. This was proved by A. H. Stone in Paracompactness and product spaces, Bull.
Amer. Math. Soc., 54 (1948) 977-982. [This entry was contributed by Carlos César de Araújo.]
PARACONSISTENT LOGIC. The first formal calculus of inconsistency-tolerant logic was constructed by the Polish logician Stanislaw Jaskowski, who published his paper "Propositional calculus for
contradictory deductive systems" (in Polish) in Studia Societatis Scientiarum Torunensis, 55--77 in 1948. It was reprinted in English in Studia Logica 24, 143--157 (1969).
Newton Carneiro Affonso da Costa, one of the most prominent researchers in paraconsistent logic, referred to it as inconsistent formal systems in his 1964 thesis, which used that term as its title.
[See the introduction of the work "Sistemas Formais Inconsistentes", Newton C. A. da Costa, Editora da UFPr, Curitiba, 1993, p. viii. This work is a reprint of Prof. Newton's original 1964 thesis, the initial landmark of all studies in the matter.]
The term paraconsistent logic was coined in 1976 by the Peruvian philosopher Francisco Miró Quesada, during the Terceiro Congresso Latino Americano.
[Manoel de Campos Almeida, Max Urchs]
PARADOX is a label fixed to many arguments in Mathematics. See e.g. BANACH-TARSKI PARADOX, HAUSDORFF PARADOX, RUSSELL'S PARADOX, ST. PETERSBURG PARADOX, SIMPSON’S PARADOX, ZENO’S PARADOXES.
Paradox (from the Greek for contrary to received opinion) is an exasperatingly ambiguous word and it is not unusual to read statements like, "This is not a paradox at all, the only reason that it is
given this name is that it is counter-intuitive." W. V. Quine Ways of Paradox (1966) classifies paradoxes, distinguishing 3 types:
● A veridical paradox produces a result that appears absurd but is demonstrated to be true nevertheless. E.g. SIMPSON’S PARADOX.
● A falsidical paradox produces a result that not only appears false but actually is false; there is a fallacy in the argument. E.g. ZENO’S PARADOXES.
● An antinomy (from the Greek for against law) produces a self-contradiction by accepted ways of reasoning. E.g. RUSSELL'S PARADOX.
The examples are taken from these pages. The placing of a paradox can be a matter of dispute. Quine notes, "One man’s antinomy can be another man’s veridical paradox, and one man’s veridical paradox
can be another man’s platitude." For further discussion see BANACH-TARSKI PARADOX and HAUSDORFF PARADOX.
Paradoxes have been discussed since antiquity; see the LIAR PARADOX. In the Middle Ages variants of the Liar paradox were studied under the heading insolubilia (W. & M. Kneale The Development of
Logic (1962) pp. 227-8). The early 20^th century with its disputes on set theory and logic was the great age for paradoxes. Amongst the antinomies discovered then were those of BURALI-FORTI, RUSSELL
and RICHARD.
Logical and semantic paradoxes. F. P. Ramsey pointed out that the "contradictions fall into two fundamentally distinct groups." ("Foundations of Mathematics" in Foundations of Mathematics and Other
Essays (1931, pp. 20-1)) Ramsey's Type A are now called the logical paradoxes and his Type B the semantic paradoxes: RUSSELL'S PARADOX is an example of the first and RICHARD'S PARADOX an example of
the second. A JSTOR search found the logical/semantic terminology in use in W. V. Quine’s review of The New Logic by Karl Menger et al. in the Journal of Symbolic Logic, 3, (1938), p. 48.
This entry was contributed by John Aldrich.
PARALLEL appears in English in 1549 in Complaynt of Scotlande, vi. 47: "Cosmaghraphie ... sal delcair the eleuatione of the polis, and the lynis parallelis, and the meridian circlis" (OED2).
PARALLELEPIPED. According to Smith (vol. 2, page 292), "Although it is a word that would naturally be used by Greek writers, it is not found before the time of Euclid. It appears in the Elements (XI,
25) without definition, in the form of 'parallelepipedal solid,' the meaning being left to be inferred from that of the word 'parallelogrammic' as given in Book I."
Parallelipipedon appears in English in 1570 in Sir Henry Billingsley's translation of Euclid's Elements.
In the 1644 edition of his Cursus mathematicus (in Latin), Pierre Herigone used the spelling parallelepipedum.
The first citation in the OED2 with the shortened spelling parallelepiped is Walter Charleton (1619-1707), Chorea gigantum, or, The most famous antiquity of Great-Britain, vulgarly called Stone-heng
: standing on Salisbury Plain, restored to the Danes, London : Printed for Henry Herringman, 1663.
Charles Hutton's Dictionary (1795) shows parallelopiped and parallelopipedon.
In Noah Webster's A compendious dictionary of the English language (1806) the word is spelled parallelopiped.
Mathematical Dictionary and Cyclopedia of Mathematical Science (1857) has parallelopipedon.
U. S. dictionaries show the pronunciation with the stress on the penult, but some also show a second pronunciation with the stress on the antepenult.
PARALLELOGRAM appears in English in 1570 in Sir Henry Billingsley's translation of Euclid's Elements (OED2).
In 1832 Elements of Geometry and Trigonometry by David Brewster, which is a translation of Legendre, has:
The word parallelogram, according to its etymology, signifies parallel lines; it no more suits the figure of four sides than it does that of six, of eight, &c. which have their opposite sides
parallel. In like manner, the word parallelopipedon signifies parallel planes; it no more designates the solid with six faces, than the solid with eight, ten, &c. of which the opposite faces are
parallel. The names parallelogram and parallelelopipedon*, have the additional inconvenience of being very long. Perhaps, therefore, it would be advantageous to banish them altogether from
geometry; and to substitute in their stead, the names rhombus and rhomboid, retaining the term lozenge, for quadrilaterals whose sides are all equal.
*The word is misspelled this way in Brewster.
PARAMETER. Claude Mydorge used the word parameter with the meaning of "latus rectum" on page 3 of “Prodromi catoptricorum et dioptricorum sive conicorum operis libri primus et secundus,” Paris 1631.
[Alessio Martini, Siegmund Probst]
A reference to Mydorge’s term is in Frans van Schooten, “In geometriam Renati Descartes commentarii,” 1659, printed in: Geometria, a Renato Des Cartes anno 1637 gallice edita, postea autem una cum
notis Florimondi de Beaune (...) in latinam linguam versa et commentariis illustrata opera atque studio Francisci a Schooten (...) Nunc demum ab eodem diligenter recognita, locupletorioribus
commentariis instructa, multisque egregiis accessionibus (...) exornata. 2 vols. Amsterdam 1659-1661, vol. I, p. 208.
According to Kline (page 340), parameter was introduced by Gottfried Wilhelm Leibniz (1646-1716). He used the term in 1692 in Acta Eruditorum 11 (Struik, page 272). Kline used the term in its modern
sense. According to Siegmund Probst, Frans van Schooten is probably the source that Leibniz used (numerous references since 1673); starting in 1673 Leibniz used parameter for constants in formulas
with variables, e.g. equations of curves or series. See Leibniz volumes VII,3 VII,4 VII,5.
PARAMETER (in statistics) Although parameter had been used by earlier writers--David (2001) cites J. C. Kapteyn Skew Frequency Curves in Biology and Statistics (1903)--it was established as the
standard term by R. A. Fisher. He introduced it, along with many other terms, in "On the Mathematical Foundations of Theoretical Statistics", Philosophical Transactions of the Royal Society of
London, Ser. A. 222, (1922) 309-368.
Parameter arrived with statistic for Fisher saw the need for two terms (p. 311):
it has happened that in statistics a purely verbal confusion has hindered the distinct formulation of statistical problems; for it is customary to apply the same name, mean, standard deviation,
correlation coefficient, etc. both to the true value which we should like to know but can only estimate, and to the particular value at which we arrive by our method of estimation ...
With the new terms Fisher could write, "Problems of Estimation ... involve the choice of methods of calculating from a sample ... statistics, which are designed to estimate the values of the
parameters of the hypothetical population." (p. 313) He would recall, "I was quite deliberate in choosing unlike words for these ideas which it was important to distinguish as clearly as possible."
(letter (p. 81) in J. H. Bennett Statistical Inference and Analysis: Selected Correspondence of R. A. Fisher (1990)).
Parameter did not replace any existing standard term. Fisher had used "arbitrary element" (1912) and "population-character" (1921), while Karl Pearson's "frequency-constant" could mean either a
parameter or a statistic.
While Fisher’s use of statistic was criticised (See entry on Statistic), parameter fell in with an established usage, which the OED2 traces to the mid-nineteenth century, viz. "a quantity which is
constant (as distinct from the ordinary variables) in a particular case considered, but which varies in different cases."
The use of the terms parametric and nonparametric for different kinds of hypotheses dates from the 1940s. See the entry NONPARAMETRIC.
See also STATISTIC and NUISANCE PARAMETER AND PARAMETER OF INTEREST.
[This entry was contributed by John Aldrich, using Hald (1998).]
PARAMETRIC EQUATION is found in 1894 in "On the Singularities of the Modular Equations and Curves" by John Stephen Smith in the Proceedings of the London Mathematical Society [University of Michigan
Historical Math Collection].
PARETO DISTRIBUTION, PARETO'S LAW. In the 1890s the economist Vilfredo Pareto (1843-1925) found a pattern in the way incomes were distributed within countries. "La courbe des revenus" is given by
log N = log A - α log x
where N is the number of individuals with incomes higher than x, and A and α are constants. In his Cours d'économie politique (1897) Pareto described the curve and provided evidence of its wide
applicability. Economists were soon referring to Pareto’s "law of the distribution of income" though they did not necessarily agree that it constituted a law; see e.g. Henry L. Moore "The Statistical
Complement of Pure Economics," Quarterly Journal of Economics, 23, (1908), 1-33. The formula can be rewritten as a frequency distribution and statisticians found it natural to refer to "Pareto’s
distribution;" see e.g. J. O. Irwin "Recent Advances in Mathematical Statistics (1934)," Journal of the Royal Statistical Society, 99, (1936), 714-769. [John Aldrich]
PARTIAL DERIVATIVE and PARTIAL DIFFERENTIAL. Partial derivatives appear in the writings of Newton and Leibniz.
Partial differential equation was used in 1770 by Antoine-Nicolas Caritat, Marquis de Condorcet (1743-1794) in the title "Memoire sur les Equations aux différence partielles," which was published in
Histoire de L'Academie Royale des Sciences, pp. 151-178, Annee M. DCCLXXIII (1773).
Partial differential equation appears in English in 1809 in a letter from “Mr. Thomas Knight, Of Papcastle, near Cockermouth” in The Mathematical Repository, New Series, Volume III (1809). The same
issue of The Mathematical Repository contains the expression partial fluxion. [James A. Landau]
An early use of the term partial derivative in English is in an 1834 paper by Sir William Rowan Hamilton [James A. Landau].
Partial differential equation is found in English in 1820 in A Collection of Examples of the Applications of the Differential and Integral Calculus by George Peacock: "Given a solution of a partial
differential equation, to find whether it is included in the general solution or not." [Google print search]
See the Earliest Uses of Symbols of Calculus page.
PARTIAL FRACTION. Fraction partielle occurs in Legendre’s 1792 paper Mémoire Sur Les Transcendantes Elliptiques: “que chaque fraction partielle soit de la forme N / (1 + n sin^2 φ)^k, n et N étant
des coéfficiens constans reels ou imaginaires.”
In English partial fraction appears in the 1809 translation of Legendre’s 1792 paper, which translates the above French text as “that every partial fraction shall be of the form N / (1 + n sin^2 φ)^
k, n and N being constant coefficients, real or imaginary.”
[James A. Landau]
PARTIAL PRODUCT is found in English in an 1822 translation of Elements of Algebra by Euler. [Google print search]
PARTICULAR SOLUTION is found in 1735-6 Phil. Trans. 39, 325:
In the Author's second Problem, or the Relation of the Fluxions being given to determine the Relation of the Fluents,..he [sc. Newton] begins with a particular Solution of it. He calls this
Solution particular, because it extends only to such Cases, wherein the given Fluxional Equation either has been, or might have been, derived from some previous finite Algebraical Equation.
The above is a report of a hitherto unpublished work by Newton in Latin [Alan Hughes].
The term particular case of the general integral is due to Lagrange (Kline, page 532).
Particular integral is found in English in 1814 in New Mathematical and Philosophical Dictionary by P. Barlow:
Particular Integral, in the Integral Calculus, is that which arises in the integration of any differential equation, by giving a particular value to the arbitrary quantity or quantities that
enter into the general integral (OED2).
The name PASCAL'S TRIANGLE is a tribute to Blaise Pascal's Traité du triangle arithmétique (Cambridge University) of 1654. Behind the Traité were many related investigations spanning many centuries
and many countries and the triangle has had several names, e.g. in Italy it is called after Nicolo Tartaglia (1499-1547). The story of the triangle(s) is told in A. W. F. Edwards's Pascal's
Arithmetical Triangle. Because the Traité "brought together all the different aspects of the numbers" Edwards concludes, "that the Arithmetical Triangle should bear Pascal's name cannot be disputed."
Montmort (Essay d'analysis sur les jeux de hazard, 1708) was the first to attach Pascal's name to the triangle, "Table de M. Pascal pour les combinaisons," while De Moivre (Miscellanea analytica,
1730) used the expression "Triangulum Arithmeticum PASCALIANUM." (From Edwards op. cit.) Montmort and De Moivre both wrote on probability and the application of the triangle to this field was an
important new element in the Traité. Indeed the Traité was one of the founding works of probability.
Arithmetical triangle of Pascal is found in Ed. Lucas, "Note sur le triangle arithmétique de Pascal et sur la série de Lamé," N. C. M. (1876). Pascal's triangle appears in 1886 in Algebra by George
Chrystal (1851-1911).
While triangle research in Western Europe only gained momentum in the Renaissance, there was much earlier work in India, Persia and China. In India there was a tradition beginning with Pingala (ca
200BC). His rule for finding the number of combinations, known as the Meru Prastara (Staircase of Mount Meru), was put into triangular form by the 10^th century AD. (Roger Cooke and Edwards op. cit.)
In Persia and China the binomial theorem seems to have been discovered around 1100. In China the triangle is called Yang Hui's triangle. The 'Pascal' triangle as depicted in A.D. 1303 tabulates the
binomial coefficients up to the eighth power. See Materials for the History of Statistics for the full reference.
(Based on Edwards's Pascal's Arithmetical Triangle.)
See the entries on COMBINATION and PROBABILITY.
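The triangle's addition rule is easy to reproduce (a sketch of mine in Python; each entry is the sum of the two entries above it):

```python
# Print the first seven rows of the arithmetical triangle
row = [1]
for _ in range(7):
    print(row)
    row = [a + b for a, b in zip([0] + row, row + [0])]
```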
PASCAL'S WAGER, an argument for acting as if one believed in God, has been claimed as "the first well-understood contribution to decision theory" (Hacking (1975, ch. 8)). The argument was published
in the Pensées, a work assembled from Pascal's notes after his death; see Infinite--nothing (§233). The argument was discussed by Leibniz, Locke, Voltaire and Diderot as well as by religious writers.
The argument never became part of the probability literature but it received renewed attention in the 20^th century with the rise of decision theory. See Pascal’s wager in the Stanford Encyclopedia.
A Google print search finds Pascal's wager in 1896 in The New World: A Quarterly Review of Religion, Ethics, and Theology, volume 5: “In Pascal's Thoughts there is a celebrated passage known in
literature as Pascal's wager.”
PATH ANALYSIS is the modern term for what Sewall Wright called the method of path coefficients in his paper "Correlation and Causation," Journal of Agricultural Research, 20, (1921), 557-585. Wright
introduced the method as follows, "In the biological sciences, especially, one often has to deal with a group of characteristics or conditions which are correlated because of a complex of
interacting, uncontrollable, and often obscure causes.... The present paper is an attempt to present a method of measuring the direct influence along each separate path in such a system and thus of
finding the degree to which variation of a given effect is determined by each particular cause." (p. 557) The term path analysis seems to have become common only around 1960. (JSTOR search.)
PATHOLOGICAL as “satisfying the conditions of a theory or theorem but contrary to intuition as to the general nature of the objects concerned, and therefore regarded as bizarre or defective.”
(Borowski & Borwein)
The word has been in English as a medical term since the 17^th century but its use as a mathematical term of art appears to date from the 1930s. A JSTOR search found Murray and von Neumann writing in
“On Rings of Operators,” Annals of Mathematics, 37, (1936), p. 227 of another “pathological” possibility. The authors put quotes around the word but they quickly became superfluous. The OED’s
earliest citation is from I. S. Sokolnikoff Advanced Calculus (1939): “Such pathological behavior of continuous functions led to a careful inquiry into the meaning of such geometrical concepts as the
area under a curve." [John Aldrich]
See the entry WELL-BEHAVED.
PAULI MATRICES are named after the physicist Wolfgang Pauli, who used them in his “Zur Quantenmechanik des magnetischen Elektrons,” Zeitschrift für Physik, 43, (1927), p. 60. However they had
appeared long before in Cayley’s “A Memoir on the Theory of Matrices” (1858) Coll Math Papers, II, 475-96. In paragraph 45 (p. 491) Cayley writes that these matrices satisfy a system of relations
“precisely similar to that in the theory of quaternions.” See the Encyclopedia of Mathematics entry Pauli matrices.
See MATRIX, MATRIX MECHANICS and QUATERNIONS.
PEANO’S AXIOMS for arithmetic were presented by Giuseppe Peano in Arithmetices Principia (1889) translated from the Latin in Heijenoort (1967, pp. 83-107). Grattan-Guinness (2000, p. 228) notes that,
while Peano refers to Dedekind's booklet Was sind und Was sollen die Zahlen? (1887) Works, 3, 335-391, he later said he had found his axiom system independently. B. Russell Principles of Mathematics
(1903) refers to "Peano’s primitive propositions."
The PEANO CURVE was presented by Peano in "Sur une courbe, qui remplit une aire plane." Math. Ann. 36, 157-160, 1890. Alas there are no diagrams!
The term PEANO-GOSPER CURVE was coined by Mandelbrot in 1977 in Fractals: Form, chance, and dimension. See Mathworld.
PEARLS OF SLUZE. Blaise Pascal (1623-1662) named the family of curves to honor Baron René François de Sluze, who studied the curves (Encyclopaedia Britannica article: "Geometry").
The PEARSON system of CURVES (describing probability distributions) was introduced by Karl Pearson in his "Contributions to the Mathematical Theory of Evolution. II. Skew Variation in Homogeneous
Material," Philosophical Transactions of the Royal Society A, 186, (1895), 343-414. The curves were originally classified into Types I to IV but over the years the number of types and their
definitions changed. References to "Professor Pearson's Type III" can found in G. U. Yule "Notes on the History of Pauperism in England and Wales from 1850 ... " Journal of the Royal Statistical
Society, 59, (1896), p. 324 or in Student (1908, p. 4). R. A. Fisher (1915, p. 520) refers to the "Pearson curves." [John Aldrich]
See also BETA DISTRIBUTION, CAUCHY DISTRIBUTION, and GAMMA DISTRIBUTION.
The term PEDAL CURVES is due to Olry Terquem (1782-1862) (Cajori 1919, page 228).
PELL'S EQUATION was so named by Leonhard Euler (1707-1783) in a paper of 1732-1733, even though Pell had only copied the equation from Fermat's letters (Burton, page 504) of 1657 and 1658.
The following is taken from Sir Thomas L. Heath, Diophantus of Alexandria: A Study in the History of Greek Algebra, page 285-286:
Fermat rediscovered the problem and was the first to assert that the equation x^2 - Ay^2 = 1, where A is any integer not a square, always has an unlimited number of solutions in integers. His
statement was made in a letter to Frénicle of February, 1657 (cf. Oeuvres de Fermat, II, pp. 333-4). Fermat asks Frénicle for a general rule for finding, when any number not a square is given,
squares which, when they are respectively multiplied by the given number and unity is added to the product, give squares. If, says Fermat, Frénicle cannot give a general rule, will he give the
smallest value of y which will satisfy the equations 61y^2 + 1 = x^2 and 109y^2 + 1 = x^2 ? ... The challenge was taken up in England by William, Viscount Brouncker, first President of the Royal
Society, and Wallis. At first, owing apparently to some misunderstanding, they thought that only rational, and not necessarily integral solutions were wanted, and found of course no difficulty in
solving this easy problem. Fermat was, naturally, not satisfied with this solution, and Brouncker, attacking the problem again, finally succeeded in solving it. The method is set out in letters
of Wallis of 17th December, 1657, and 30th January, 1658, and in chapter XCVIII of Wallis' Algebra; Euler also explains it fully in his Algebra (Footnote 3: Part II, chap. VII), wrongly
attributing it to Pell (Footnote 4: This was the origin of the erroneous description of our equation as the "Pellian" equation. Hankel (in Zur Geschichte der Math. im Alterthum und Mittelalter,
p. 203) supposed that the equation was so called because the solution was reproduced by Pell in an English translation (1668) by Thomas Brancker of Rahn's Algebra; but this is a misapprehension,
as the so-called "Pellian" equation is not so much as mentioned in Pell's additions (Wertheim in Bibliotheca Mathematica, III, 1902, pp. 124-6); Konen, pp. 33-4 note). The attribution of the
solution to Pell was a pure mistake of Euler's, probably due to a cursory reading by him of the second volume of Wallis' Opera where the solution of the equation ax^2 + 1 = y^2 is given as well as
information as to Pell's work in indeterminate analysis. But Pell is not mentioned in connexion with the equation at all (Eneström in Bibliotheca Mathematica, III, 1902, p. 206).
The following is taken from Harold M. Edwards, Fermat's Last Theorem: A Genetic Introduction to Algebraic Number Theory, page 33:
This problem of Fermat is now known as "Pell's equation" as a result of a mistake on the part of Euler. In some way, perhaps from a confused recollection of Wallis's Algebra, Euler gained the
mistaken impression that Wallis attributed the method of solving the problem not to Brouncker but to Pell, a contemporary of Wallis who is frequently mentioned in Wallis's works but who appears
to have had nothing to do with the solution of Fermat's problem. Euler mentions this mistaken impression as early as 1730, when he was only 23 years old, and it is included in his definitive
Introduction to Algebra written around 1770. Euler was the most widely read mathematical writer of his time, and the method from that time on has been associated with the name of Pell and the
problem that it solved --- that of finding all integer solutions of y^2 - Ax^2 = 1 when A is a given number not a square --- has been known ever since as "Pell's equation", despite the fact that
it was Fermat who first indicated the importance of the problem and despite the fact that Pell had nothing whatever to do with it.
These quotations were provided by Raul Nunes to a mathematics history mailing list.
The 1910 Encyclopaedia Britannica has: "Although Pell had nothing to do with the solution, posterity has termed the equation Pell's Equation" (OED2).
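To make Fermat's challenge concrete, here is a short sketch (Python; an illustration of mine using the standard continued-fraction recurrence, not a transcription of any of the historical methods above) that finds the smallest solution of x^2 - Ay^2 = 1 for a nonsquare A:

```python
from math import isqrt

def pell(A):
    """Smallest (x, y) with x^2 - A*y^2 = 1, via continued fractions."""
    a0 = isqrt(A)
    assert a0 * a0 != A, "A must not be a perfect square"
    m, d, a = 0, 1, a0
    x_prev, x = 1, a0           # convergent numerators
    y_prev, y = 0, 1            # convergent denominators
    while x * x - A * y * y != 1:
        m = d * a - m
        d = (A - m * m) // d
        a = (a0 + m) // d
        x_prev, x = x, a * x + x_prev
        y_prev, y = y, a * y + y_prev
    return x, y

print(pell(61))    # Fermat's challenge: (1766319049, 226153980)
print(pell(109))   # the other challenge value
```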
PENCIL OF LINES. Desargues coined the term ordonnance de lignes, which is translated an order of lines or a pencil of lines [James A. Landau].
PENTAGON. Pentagons are discussed in Book IV of Euclid’s Elements. The word “pentagon” appears in English in 1570 in Sir Henry Billingsley’s translation of the Elements. (OED) Earlier in 1551 in
Pathway to Knowledge Robert Recorde introduced the word cinqueangle: “Defin., Figures of .v. sydes, other v. corners, which we may call cinkangles, whose sydes partlye are all equall as in A, and
those are counted ruled cinkeangles” (OED). See EQUILATERAL and HEXAGON. [John Aldrich]
PENTAGRAM is found in English in 1825 in a translation of Faust. [Google print search]
The term PENTOMINO was coined by Solomon W. Golomb, who used the term in a 1953 talk to the Harvard Math Club. According to an Internet web page, the term was trademarked in 1975. (The first known
pentomino problem is found in Canterbury Puzzles in 1907.)
PERCENTILE appears in 1885 in Francis Galton, 'Some results of the Anthropometric Laboratory.' Journal of the Anthropological Institute, 14, 275-287: "The value which 50 per cent. exceeded, and 50
per cent. fell short of, is the Median Value, or the 50th per-centile, and this is practically the same as the Mean Value; its amount is 85 lbs." (p. 276) (OED2).
According to Hald (p. 604), Galton introduced the term.
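Galton's identification of the median with the 50th percentile is easy to illustrate (a sketch with numpy; the data are invented, loosely echoing Galton's 85 lbs):

```python
import numpy as np

weights = np.random.default_rng(1885).normal(85, 10, size=1000)  # fake data
print(np.percentile(weights, 50), np.median(weights))            # identical
```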
PERFECT NUMBER. According to Smith (vol. 2, page 21), the Pythagoreans used this term in another sense, because apparently 10 was considered by them to be a perfect number.
Proposition 36 of Book IX of Euclid's Elements is: "If as many numbers as we please beginning from a unit be set out continuously in double proportion, until the sum of all becomes a prime, and if
the sum multiplied into the last make some number, the product will be perfect."
The Greek poet and grammarian Euphorion (born c. 275 BC?) used the phrase ". . . equal to his [or their] limbs, with the result that they are called perfect." This is an apparent reference to perfect
numbers, according to J. L. Lightfoot, "An early reference to perfect numbers? Some notes on Euphorion, SH 417," Classical quarterly 48 (1998), 187-194.
The term was used by Nicomachus around A. D. 100 in Introductio Arithmetica (Burton, page 475). One translation is:
Among simple even numbers, some are superabundant, others are deficient: these two classes are as two extremes opposed to one another; as for those that occupy the middle position between the
two, they are said to be perfect.
Nichomachus identified 6, 28, 496, and 8128 as perfect numbers.
St. Augustine of Hippo (354-430) wrote De senarii numeri perfectione ("Of the perfection of the number six") in De Civitate Dei. He wrote, in translation: "Six is a number perfect in itself, and not
because God created the world in six days; rather the contrary is true. God created the world in six days because this number is perfect, and it would remain perfect, even if the work of the six days
did not exist."
Perfect number appears in English in 1570 in Sir Henry Billingsley's translation of Euclid.
In 1674, Samuel Jeake wrote in Arithmetic (1696) "Perfect Numbers are almost as rare as perfect Men" (OED2).
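Euclid IX.36 and Nicomachus's four examples are easy to check (a minimal sketch of mine in Python):

```python
def is_perfect(n):
    # perfect = equal to the sum of its proper divisors
    return n == sum(d for d in range(1, n) if n % d == 0)

# Euclid IX.36: if 2^k - 1 is prime, then 2^(k-1) * (2^k - 1) is perfect
for k in (2, 3, 5, 7):
    n = 2**(k - 1) * (2**k - 1)
    print(n, is_perfect(n))    # 6, 28, 496, 8128 -- Nicomachus's list
```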
PERFECT SETS. Georg Cantor introduced perfecte Punktmengen (perfect point-sets) in his article “Über unendliche, lineare Punktmannichfaltigkeiten 5,” Mathematische Annalen, 21 (1883), p. 576. For an
account see J. W. Dauben Georg Cantor (1979, pp. 110ff). The French expression appears in a translation, Cantor’s “De la puissance des ensembles parfaits de points,” Acta Mathematica 4 (1884),
381-392. A JSTOR search found the English term perfect set in W. F. Osgood “Non-Uniform Convergence and the Integration of Series Term by Term,” American Journal of Mathematics, 19, (1897) p. 188.
This entry was contributed by John Aldrich. See also SET and SET THEORY.
PERIODOGRAM. Arthur Schuster introduced the term periodogram in "On the Investigation of Hidden Periodicities with Application to a Supposed 26 Day Period of Meteorological Phenomena," Terrestrial
Magnetism, 3, (1898), 13-41. He had already used the technique in his "On Lunar and Solar Periodicities of Earthquakes," Proceedings of the Royal Society, 61, (1897), 455-465 and the theory was
related to his research on optics, "On Interference Phenomena," Philosophical Magazine, 37, 509-545. (David 2001)
See also HARMONIC ANALYSIS.
PERMANENT (of a square matrix). In a paper written with M. Marcus ("Permanents", Amer. Math. Monthly, 1965, p. 577) Henryk Minc, one of the great authorities in permanents, wrote:
The name "permanent" seems to have originated in Cauchy's memoir of 1812 [B 3]. Cauchy's "fonctions symétriques permanentes" designate any symmetric function. Some of these, however, were
permanents in the sense of the definition (1.1). (...) As far as we are aware the name "permanent" as defined in (1.1) was introduced by Muir [B 38].
The paper by T. Muir is "On a class of permanent symmetric functions", Proc. Roy. Soc. Edinburgh, 11 (1882) 409-418. [B3] is "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales
et de signes contraires par suite des transpositions opérées entre les variables qu'elles renferment", J. de l'Éc. Polyt., 10 (1812) 29-112. According to J. H. van Lint in "The van der Waerden
Conjecture: Two Proofs in One Year", The Mathematical Intelligencer:
In his book Permanents [9] H. Minc mentions that the name permanent is essentially due to Cauchy (1812) although the word as such was first used by Muir in 1882. Nevertheless a referee of one of
Minc's earlier papers admonished him for inventing this ludicrous name!
[This entry was contributed by Carlos César de Araújo.]
PERMUTATION. Leibniz used the term variationes and Wallis adopted alternationes (Smith vol. 2, page 528).
In 1678 Thomas Strode, A Short Treatise of the Combinations, Elections, Permutations & Composition of Quantities, has: “By Variations, permutation or changes of the Places of Quantities, I mean, how
many several ways any given Number of Quantities may be changed.” [OED]
Lexicon Technicum, or an universal English dictionary of arts and sciences (1710) has: “Variation, or Permutation of Quantities, is the changing any number of given Quantities, with respect to their
Places.” [OED]
According to Smith vol. 2, page 528, permutation first appears in print with its present meaning in Ars Conjectandi by Jacques Bernoulli: "De Permutationibus. Permutationes rerum voco variationes..."
This seems to be incorrect.
[Mark Thakkar contributed to this entry.]
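Strode's "how many several ways any given Number of Quantities may be changed" in modern dress (a sketch using Python's standard library):

```python
from itertools import permutations

changes = list(permutations('abc'))
print(len(changes))            # 3! = 6 ways to change the places
for p in changes:
    print(''.join(p))
```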
The term PERMUTATION GROUP was coined by Galois (DSB, article: "Lagrange").
Permutation group appears in English in W. Burnside, "On the representation of a group of finite order as a permutation group, and on the composition of permutation groups," London M. S. Proc. 34.
The term PERMUTATION TEST appears in G. E. P. Box & S. L. Andersen "Permutation Theory in the Derivation of Robust Criteria and the Study of Departures from Assumption," Journal of the Royal
Statistical Society. Series B, 17, (1955), p. 3. They use the term for a "remarkable new class of tests" introduced by R. A. Fisher in The Design of Experiments (1935) and quote from p. 51 of this book:
It seems to have escaped recognition that the physical act of randomisation, which, as has been shown, is necessary for the validity of any test of significance, affords the means, in respect of
any particular body of data, of examining the wider hypothesis in which no normality of distribution is implied.
(David 2001)
PERPENDICULAR was used in English by Chaucer about 1391 in A Treatise on the Astrolabe. The term is used as a geometry term in 1570 in Sir Henry Billingsley's translation of Euclid's Elements.
PERRON-FROBENIUS THEOREM. This result (or collection of results) is named for Oskar Perron "Zur Theorie der Matrizen" Math. Ann., 64 (1907) pp. 248-263 and Georg Frobenius "Ueber Matrizen aus nicht
negativen Elementen" Sitzungsber. Königl. Preuss. Akad. Wiss. (1912) pp. 456-477. See the entry in Encyclopedia of Mathematics.
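A numerical illustration of what the theorem asserts for a matrix with positive entries (a sketch; the matrix is an arbitrary example of mine): the dominant eigenvalue is real and simple, and its eigenvector can be taken entrywise positive.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # any positive matrix will do
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
v = vecs[:, i].real
v = v / v.sum()                         # fix the sign; entries come out positive
print(vals[i].real, v)                  # ~3.618 and a positive eigenvector
```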
PETERS’ FORMULA or METHOD for estimating the standard deviation of the normal distribution using absolute deviations from the mean was widely used by astronomers in the 19^th century. The method was
proposed by Christian August Friedrich Peters in his “Über die Bestimmung des wahrscheinlichen Fehlers einer Beobachtung aus den Abweichungen der Beobachtungen von ihrem arithmetischen Mittel,”
Astronomische Nachrichten, 44, (1856), 29-32. R. A. Fisher examined Peters’ method (mean error method) in his paper A Mathematical Examination of the Methods of Determining the Accuracy of an
Observation by the Mean Error, and by the Mean Square Error (1920) and concluded that it was inferior to the mean square error method. See the entries HELMERT TRANSFORMATION and SUFFICIENCY.
The term PFAFFIAN was introduced by Arthur Cayley, who used the term in 1852: "The permutants of this class (from their connexion with the researches of Pfaff on differential equations) I shall term
'Pfaffians'." The term honors Johann Friedrich Pfaff (1765-1825).
PIECEWISE is found in 1933 in the phrase "vectors which are only piecewise differentiable" in Vector Analysis by H. B. Phillips (OED2).
The name PIE CHART is found in 1922 in A. C. Haskell, Graphic Charts in Business (OED2). Pie charts only became common in the 20^th century but they seem to have been first used by William Playfair
in 1801. See that date in Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization 1800-1849.
PIGEONHOLE PRINCIPLE. The principle itself is attributed to Dirichlet in 1834, although he apparently used the term Schubfachprinzip.
In Dirichlet's Vorlesungen über Zahlentheorie (Lectures on Number Theory, prepared for publication by Dedekind, first edition 1863), the argument is used in connection with Pell's equation but it
bears no specific name [Peter Flor, Gunnar Berg].
In 1905 in Bachmann's "Zahlentheorie," part 5, the principle is stated as a "very simple fact" on which Dirichlet is said to have based his theory of units in number fields; no name is attached to
the principle [Peter Flor].
In 1910 in Geometrie der Zahlen, Minkowski calls it "a famous method of Dirichlet" [Peter Flor].
According to Peter Flor, "the term Schubfachschluss, with or without a reference to Dirichlet, was used widely by German speaking number theorists at the universities of Vienna and Hamburg when I
studied there in the 1950s. It occurs, among others, in the number theory books by Hasse and by Aigner."
In Swedish, the principle is called (in translation) "Dirichlets box principle" [Gunnar Berg]. The French term is "le principe des tiroirs de Dirichlet," which can be translated "the principle of the
drawers of Dirichlet." In Portuguese, the term is "principio da casa dos pombos" (lit. principle of the house of the pigeons) or "das gavetas de Dirichlet" (lit. of the drawers of Dirichlet) [Julio
González Cabillón].
Pigeonhole principle occurs in English in Raphael M. Robinson's paper "On the Simultaneous Approximation of Two Real Numbers," presented to the American Mathematical Society on November 23, 1940, and
published in the Bulletin of the Society in 1941. Cf. volume 47, pp 512-513. In a footnote to this article, Robinson states:
The method used in this proof (Schubfachprinzip or "pigeonhole principle") was first used by Dirichlet in connection with a similar problem. We sketch the proof here in order to compare it with
the proof of the theorem below, which also uses that method.
This citation was provided by Julio González Cabillón.
Paul Erdös referred to Dedekind's pigeon-hole principle in "Combinatorial Problems in Set Theory," an address he delivered in 1953 before the AMS [Julio González Cabillón].
Pigeon-hole principle occurs in English in Paul Erdös and R. Rado, "A partition calculus in set theory," Bull. Am. Math. Soc. 62 (Sept. 1956):
Dedekind's pigeon-hole principle, also known as the box argument or the chest of drawers argument (Schubfachprinzip) can be described, rather vaguely, as follows. If sufficiently many objects are
distributed over not too many classes, then at least one class contains many of these objects.
E. C. Milner and R. Rado, "The pigeon-hole principle for ordinal numbers," Proc. Lond. Math. Soc., III. Ser. 15 (Oct., 1965) begins similarly:
Dirichlet's pigeon-hole principle (chest-of-drawers principle, Schubfachprinzip) asserts, roughly, that if a large number of objects is distributed in any way over not too many classes, then one
of these classes contains many of these objects.
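The application Dirichlet made of the principle, in connection with Pell's equation, is worth a sketch: among the N+1 numbers {q*alpha}, q = 0, ..., N, two must fall in the same of N subintervals of [0, 1), so some q <= N satisfies |q*alpha - p| < 1/N. A brute-force illustration (mine, not Dirichlet's):

```python
from math import sqrt

def dirichlet_approx(alpha, N):
    # Pigeonhole guarantees some 1 <= q <= N with |q*alpha - p| < 1/N;
    # a brute-force search over q certainly finds one.
    q = min(range(1, N + 1), key=lambda q: abs(q*alpha - round(q*alpha)))
    return round(q * alpha), q

alpha, N = sqrt(61), 100
p, q = dirichlet_approx(alpha, N)
print(p, q, abs(q*alpha - p))   # the error is below 1/N = 0.01
```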
PIVOT, PIVOTAL ELEMENT, PIVOTAL CONDENSATION, ETC. in numerical linear algebra. This terminology for Gaussian elimination was introduced by the Edinburgh mathematicians, E. T. Whittaker and his
student A. C. Aitken. OED2 gives the following quotations: from Whittaker and G. Robinson's Calculus of Observations (1924) v. 71 "We prepare the determinant for our subsequent operations by
multiplying some row or column by such a number p as will make one of the elements unity, and put 1/p as a factor outside the determinant. This unit element will henceforth be called the pivotal
element."; from Aitken writing in Proc. Edin. Math. Soc. III, (1933) 211 "At Stage II we choose another pivot at will ... and cross-multiply with respect to it in the same way, dividing each result,
however, by the previous pivot ...."; from Aitken's Determinants & Matrices (1939) ii. 47 "A determinant of order n being reduced by a first pivotal condensation to one of order n-1, the latter in
its turn can be reduced by a second pivotal condensation to one of order n-2, and so on" [John Aldrich].
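What "pivotal condensation" amounts to can be sketched in a few lines (an illustration of mine in Python/numpy, not Aitken's own notation): each stage factors out one pivot and reduces the determinant by one order, assuming for simplicity that every pivot is nonzero.

```python
import numpy as np

def det_by_condensation(A):
    # Each stage: factor out the pivot A[0,0] and pass to the Schur
    # complement, a determinant of one lower order.
    A = np.array(A, dtype=float)
    scale = 1.0
    while A.shape[0] > 1:
        p = A[0, 0]
        scale *= p
        A = A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:]) / p
    return scale * A[0, 0]

M = [[2.0, 1.0, 3.0],
     [1.0, 4.0, 1.0],
     [3.0, 2.0, 5.0]]
print(det_by_condensation(M), np.linalg.det(M))   # both 4.0, up to rounding
```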
PIVOTAL in Statistics. The term was introduced by R. A. Fisher in his "The Asymptotic Approach to Behrens's Integral, with Further Tables for the d Test of Significance", Annals of Eugenics, 11,
(1941), 141-172. Fisher wrote, "In Student's test the quantity t appears in two roles. First, it is the pivotal quantity the distribution of which is independent of the population sampled, and the
distribution of which is therefore accepted for the particular sample under consideration ... Secondly it is the quantity tabulated." (p. 147) Fisher only used the term "pivotal" when expounding the
fiducial argument but now the term is used more widely. [John Aldrich, based on information in David (2001)].
PLACE VALUE appears in 1847 in A Treatise on Algebra, Containing the Latest Improvements. Adapted to the Use of Schools and Colleges by Charles William Hackley: "It is clear from this, that if we add
the figures of the number without regarding their place value, the sum obtained and the proposed number will have the same minimum residue." [Google print search]
The word PLAGIOGRAPH was coined by James Joseph Sylvester (DSB).
PLANE GEOMETRY appears in English in a letter from John Collins to Oldenburg for Tschirnhaus written in May 1676: "...Mechanicall tentative Constructions performed by Plaine Geometry are much to be
preferred..." [James A. Landau].
PLATONIC SOLIDS. The five regular polyhedra are discussed by Plato in the Timaeus where they provide the basis for a theory of the universe. Their earlier history is obscure: William C. Waterhouse
writes in “The Discovery of the Regular Solids” Archive for History of Exact Science, 9, (1972-1973)
The history of the regular solids thus rests almost entirely on a scholium to Euclid which reads as follows: “In this book, the 13th, are constructed the 5 figures called Platonic, which however
do not belong to Plato. Three of these 5 figures, the cube, pyramid, and dodecahedron, belong to the Pythagoreans; while the octahedron and icosahedron belong to Theaetetus.”
This citation was provided by Paul Bien. The five solids are the subject of Book XIII of Euclid’s Elements.
In English, the OED shows a use of Platonicall bodies in 1571 by Thomas Digges and a Google books search found a use of Platonic solids in 1787 in The young geometrician’s companion: being a new and
comprehensive course of practical geometry by the Reverend Richard Turner.
See the entries CUBE, POLYHEDRON and PYRAMID.
PLATONISM. In the specific sense now widely used in discussions on the foundations of mathematics, this term was introduced by Paul Bernays (1888-1977) in Sur le platonisme dans les mathématiques, L'Enseignement Math., 34 (1935-1936), 52-69. We quote the relevant passage:
If we compare Hilbert's axiom system to Euclid's (...), we notice that Euclid speaks of figures to be constructed, whereas, for Hilbert, systems of points, straight lines, and planes exist from
the outset. (...) This example shows already that the tendency (...) consists in viewing the objects as cut off from all links with the reflecting subject. Since this tendency asserted itself
especially in the philosophy of Plato, allow me to call it "platonism".
(The translation from the French is by Charles Parsons. This entry was contributed by Carlos César de Araújo.)
PLETHYSM. According to Richard P. Stanley in Enumerative Combinatorics, the term was introduced in D. E. Littlewood, “Invariant theory, tensors and group characters,” Philos. Trans. Roy. Soc. London.
Ser. A. 239, (1944), 305–365. The term was suggested to Littlewood by M. L. Clark after the Greek word plethysmos (“multiplication” in modern Greek). [Information from this web page.]
PLUQUATERNION was coined by Thomas Kirkman (1806-1895), as he attempted to extend further the notion of quaternions.
PLUS and MINUS. From the OED2:
The quasi-prepositional use (sense I), from which all the other English uses have been developed, did not exist in Latin of any period. It probably originated in the commercial language of the
Middle Ages. In Germany, and perhaps in other countries, the Latin words plus and minus were used by merchants to mark an excess or deficiency in weight or measure, the amount of which was
appended in figures. The earliest known examples of the modern sense of minus are German, of about the same date as our oldest quotation. ... In a somewhat different sense, plus and minus had
been employed in 1202 by Leonardo of Pisa for the excess and deficiency in the results of the two suppositions in the Rule of Double Position; and an Italian writer of the 14th century used meno
to indicate the subtraction of a number to which it was prefixed.
PLUS OR MINUS. The expression "plus or minus" is very old, having been in common use by the Romans to indicate simply "more or less" (Smith vol. 2, page 402).
PLUS OR MINUS SIGN. In 1801 Mathematics by Thomas & Andrews has: "The double or ambiguous sign ± signifies plus or minus the quantity, which immediately follows it, and being placed between two
quantities, it denotes their sum, or difference." [Google print search]
The symbol ± is called the ambiguous sign in 1811 in An Elementary Investigation of the Theory of Numbers by Peter Barlow [James A. Landau].
PLUS SIGN. Positive sign is found in 1704 in Lexicon Technicum.
Affirmative sign is found in 1752 in The Elements of Algebra: In a New and Easy Method by Nathaniel Hammond. [Google print search]
Plus sign is found in 1835 in Key to professor Young's Algebra by W. H. Spiller and John Radford Young. [Google print search]
POINT. Definition 1 in the first book of Euclid’s Elements states “A point is that which has no part.” In the notes to his edition of the Elements T. L Heath (1926, vol. 1, pp. 155-6) describes how
sêmeion, the term used by Euclid and which elsewhere signified a punctuation mark, replaced stigmê, meaning a puncture. Euclid’s Latin translators used punctum. Thus Capella (c. 460) translated
Euclid’s definition as “Punctum est cuius pars nihil est.” (Quoted by Smith vol. 2, p. 274.)
The word appears in English in this sense in John Trevisa’s translation (a1398) of the encyclopaedia De Proprietatibus Rerum which was written about 1245 by Bartholomaeus Anglicus. The quotation in
the OED is, “þe lyne..bigynneþ at a poynt and endeþ at a poynt.” Trevisa’s translation was one of the first books to be printed in English (around 1495). The work in its final form (of 1582) has been
called “Shakespeare’s encyclopaedia.”
In the nineteenth and early twentieth centuries the concept of space was broadened to accommodate non-Euclidean geometries and abstract spaces—see the entry SPACE. Elements of these new spaces were called points. Thus Hausdorff introduces a topological space with the words “Unter einem topologischen Raum verstehen wir eine Menge E, worin den Elementen (Punkten) [a set E, where the elements
(points)]...” Grundzüge der Mengenlehre (1914, p. 213).
POINTLESS (geometry or topology). Forms of geometry (topology) that do not use the point as a primitive concept are called “pointless” or “point-free” or “without points.” Such structures have a long
history and titles include “Topology without Points” by K. Menger, Rice Institute Pamphlets, 27, (1940), 80-107 and, perhaps inevitably, “The Point of Pointless Topology” by P. T. Johnstone, Bulletin
of the American Mathematical Society, 8, (1983), 41-53.
POINT OF ACCUMULATION. See limit point.
The term POINT-SERIES GEOMETRY was coined by E. A. Weiss [DSB, article: "Reye"].
POINT-SET. The term was introduced in German by Georg Cantor. At first Cantor used the term Punktmannichfaltigkeit; see e.g. his “Über unendliche, lineare Punktmannichfaltigkeiten” (Part 1), Mathematische Annalen, 15 (1879), 1-7. He then changed to Punktmenge; see e.g. his “Über verschiedene Theoreme aus der Theorie der Punktmengen in einem n-fach ausgedehnten stetigen Raume,” Acta Mathematica, 7, (1885), 105-124.
The English term is used in E. H. Moore “Concerning Harnack's Theory of Improper Definite Integrals,” Transactions of the American Mathematical Society, 2, (1901), p. 297. See SET and SET THEORY.
The term POINT-SET TOPOLOGY was coined by Robert Lee Moore (1882-1974), according to the University of St. Andrews website. Moore’s Foundations of point set topology was published in 1932.
POINT-SLOPE FORM. Slope-point form is found in 1904 in Elements of the Differential and Integral Calculus by William Anthony Granville [James A. Landau].
Point-slope form is found in 1904 in The Elements of Analytic Geometry by Percey Franklyn Smith and Arthur Sullivan Gale. [Google print search]
See slope on Earliest Uses of Symbols from Geometry.
POISSON DISTRIBUTION. S. D. Poisson’s result on the limiting form of the binomial appears in the Mémoire sur la proportion des naissances des filles et des garçons (1830, pp. 261-2) and is reproduced in the Recherches sur la probabilité des jugements en matière criminelle et en matière civile (1837, pp. 205ff). Poisson’s ‘ownership’ of this distribution has been debated, for De Moivre had come very close to it in 1712 (Hald (1990, p. 214)) and Poisson "does not discuss the properties and applications of this distribution" (Hald (1998, p. 571)). L. J. Bortkiewicz discussed both in his tract Das Gesetz der kleinen Zahlen (1898); this has the famous example of cavalrymen being killed by the kick of a horse.
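In modern notation (added here for reference; this is not Poisson's own notation), the limiting result is: if X is binomial with parameters n and p, and n grows with np = λ held fixed, then
\[ \binom{n}{k} p^k (1-p)^{n-k} \longrightarrow \frac{e^{-\lambda}\lambda^k}{k!}, \qquad k = 0, 1, 2, \ldots \]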
In Britain the distribution was not well known at the beginning of the 20th century--Student actually rediscovered it in his "On the Error of Counting with a Haemacytometer," Biometrika, 5, (1907), 351-360. However Poisson received full attention in two papers published in 1914: Lucy Whitaker’s "On the Poisson Law of Small Numbers," Biometrika, 10, 36-71, and Herbert Edward Soper’s "Tables of Poisson's Exponential Binomial Limit," Biometrika, 10, 25-35. Whitaker called the distribution the Poisson-exponential, while Soper referred to Poisson's exponential binomial limit or Poisson’s exponential series.
Poisson distribution appears in 1922 in R. A. Fisher, H. G. Thornton and W. A. Mackenzie, "The Accuracy of the Plating Method of Estimating the Density of Bacterial Populations," p. 331: "When the statistical examination of these data was commenced it was not anticipated that any clear relationship with the Poisson distribution would be obtained" (OED2). Fisher’s Statistical Methods for Research Workers (1925, section 15) established Poisson as ‘core repertory.’
[This entry was contributed by John Aldrich, based on Hald (1990 and 1998) and David (1995).]
POLAR COORDINATES. According to Daniel L. Klaasen in Historical Topics for the Mathematics Classroom:
Isaac Newton was the first to think of using polar coordinates. In a treatise Method of Fluxions (written about 1671), which dealt with curves defined analytically, Newton showed ten types of
coordinate systems that could be used; one of these ten was the system of polar coordinates. However, this work by Newton was not published until 1736; in 1691 Jakob Bernoulli derived and made
public the concept of polar coordinates in the Acta eruditorum. The polar system used for reference a point on a line rather than two intersecting lines. The line was called the "polar axis," and
the point on the line was called the "pole." The position of any point in a plane was then described first by the length of a vector from the pole to the point and second by the angle the vector
made with the polar axis.
According to Smith (vol. 2, page 324), "The idea of polar coordinates seems due to Gregorio Fontana (1735-1803), and the name was used by various Italian writers of the 18th century."
Polar co-ordinates is found in English in 1816 in a translation of Lacroix's Differential and Integral Calculus: "The variables in this equation are what Geometers have called polar co-ordinates."
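In modern terms (a standard formulation, added here for reference), a point with polar coordinates $(r, \theta)$, measured from the pole along the polar axis, corresponds to the rectangular coordinates
\[ x = r\cos\theta, \qquad y = r\sin\theta, \]
and conversely $r = \sqrt{x^2 + y^2}$ with $\tan\theta = y/x$.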
POLE and POLAR. The term pôle (in projective geometry) was introduced by François Joseph Servois (1768-1847) in 1811 (Smith vol. 2, page 334). It was introduced in his first contribution to
Gergonne's Annales de mathématiques pures et appliquées (DSB).
The term polar (polaire) was introduced by Joseph-Diez Gergonne in its modern geometric sense in 1813 (Smith vol. II, page 334).
The term POLAR (with respect to a triangle) was coined by Arthur Cayley. The term is found in Cayley, "Sur quelques théorèmes de la géométrie de position," Crelle's Journal, 34 (1847), 270-275, or Cayley's collected mathematical papers Vol. 1 # 55, pp. 356-361: "...seront situés (comme on le sait) sur une même droite, qui est celle que je nomme polaire de O, relative aux côtés du triangle, et que M. Plücker a nommé "harmonicale." [...will be situated (as is known) on one and the same line, which is the one I call the polar of O, relative to the sides of the triangle, and which M. Plücker has named "harmonicale."] [Ken Pledger]
The term POLE (in complex analysis) appears in Briot & Bouquet’s Théorie des fonctions elliptiques (1859, p. 15). The concept was used by Cauchy but the term was not (Grattan-Guinness 1997, p. 388). See the Mathworld entry.
POLISH SPACE (espace polonais) was defined in Nicolas Bourbaki, Topologie Générale, Chapitre IX, deuxième édition, 1958 [Stacy Langton]. See the Wikipedia article.
PÓLYA or PÓLYA-EGGENBERGER DISTRIBUTION, FORMULA, URN MODEL etc. are terms associated with the paper by George Pólya and F. Eggenberger “Über die Statistik verketteter Vorgänge,” Zeitschrift für Angewandte Mathematik und Mechanik 3 (1923), 279-89. (Reprinted in George Pólya Collected Papers Volume IV.) The term “Pólya-Eggenberger distribution” appears in W. Feller “On a General Class of
“Contagious” Distributions,” Annals of Mathematical Statistics, 14, (1943), 389-400. [John Aldrich]
POLYGON was used in classical Greek. Euclid, however, preferred "polypleuron," designating many sides rather than many vertices.
Polygon appears in English in 1570 in Sir Henry Billingsley's translation of Euclid, folio 125. In an addition after Euclid IV.16, which Billingsley ascribes to Flussates (François de Foix, Bishop of
Aire), he mentions "Poligonon figures;" and in a marginal note explains "A Poligonon figure is a figure consisting of many sides." [Ken Pledger]
In 1571 in A Geometricall Practise, named Pantometria, Thomas Digges (d. 1595) wrote, "Polygona are such Figures as haue moe than foure sides" (OED2).
Multangle is found in 1674 in Samuel Jeake, Arith. (1696): "If 3 [angles] then called a Triangle, if 4 a Quadrangle, if more a Multangle or Polygone" (OED2).
In 1768-1771 the first edition of the Encyclopaedia Britannica has: "Every other right lined figure, that has more sides than four, is in general called a polygon."
In the 1828 Webster dictionary, the definition of polygon is: "In geometry, a figure of many angles and sides, and whose perimeter consists at least of more than four sides." In this dictionary, the
word polygon appears in the definition of the enneagon (nine sides) and the dodecagon, but not in the definitions of figures consisting of fewer than nine sides.
In 1828, Elements of Geometry and Trigonometry (1832) by David Brewster (a translation of Legendre) has: "Regular polygons may have any number of sides: the equilateral triangle is one of three
sides; the square is one of four."
POLYGONAL NUMBER and FIGURATE NUMBER. Pythagoras was acquainted at least with the triangular numbers, and very probably with square numbers, and the other polygonal numbers were treated by later
members of his school (Burton, page 102).
According to Diophantus, Hypsicles (c. 190 BC-120 BC) defined polygonal numbers.
Nicomachus discussed polygonal numbers in the Introductio.
A tract on polygonal numbers attributed to Diophantus exists in fragmentary form.
Boethius defined figurate numbers as numbers "qui circa figuras geometricas et earum spatia demensionesque versantur" [which are concerned with geometrical figures and their spaces and dimensions] (Smith vol. 2, page 24).
In 1646 Vieta (1540-1603) referred to triangular and pyramidal numbers: "In prima adfectione per unitatis crementum, in secunda per numeros triangulos, in tertia per numeros pyramidales, in quarta per numeros triangulo-triangulos, in quinta per numeros triangulo-pyramidales" [in the first affection by the increment of unity, in the second by the triangular numbers, in the third by the pyramidal numbers, in the fourth by the triangulo-triangular numbers, in the fifth by the triangulo-pyramidal numbers].
In 1665 Pascal wrote his Treatise on Figurative Numbers.
Pentagonal number appears in English in 1670 in Collins in Rigaud Corr. Sci. Men (1841): "It is likewise a pentagonal number, or composed of two, three, four, or five pentagonal numbers" (OED2).
Pyramidal number appears in English in 1674 in Samuel Jeake's Arithmetic: "Six is called the first Pyramidal Number; for the Units therein may be so placed, as to represent a Pyramis" (OED2).
Polygonal number is found in English in 1704 in Lexicon Technicum: "Polygonal Numbers, are such as are the Sums or Aggregates of Series of Numbers in Arithmetical Progression, beginning with Unity;
and so placed, that they represent the Form of a Polygon" (OED2).
Figurate number and triangular (as a noun) appear in English in 1706 in William Jones, Synopsis palmariorum matheseos: "The Sums of Numbers in a Continued Arithmetic Proportion from Unity are call'd
Figurate ... Numbers. ... In a Rank of Triangulars their Sums are called Triangulars or Figurates of the 3d Order" (OED2).
Triangular number appears in English in 1796 in Hutton's Math. Dict.: "The triangular numbers 1, 3, 6, 10, 15, &c" (OED2).
In 1811 Peter Barlow used multangular numbers in An Elementary Investigation of the Theory of Numbers [James A. Landau].
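In modern notation (added here for reference, not from the sources cited), the nth triangular number is $n(n+1)/2$ and, more generally, the nth $k$-gonal number is
\[ P_k(n) = \frac{(k-2)n^2 - (k-4)n}{2}, \]
so that $k = 3$ gives $1, 3, 6, 10, \ldots$, $k = 4$ gives the squares, and $k = 5$ gives the pentagonal numbers $1, 5, 12, 22, \ldots$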
POLYHEDRON. According to Ken Pledger, polyhedron was used by Euclid without a proper definition, just as he used "parallelogram." In I.33 he constructs a parallelogram without naming it; and in I.34
he first refers to a "parallelogrammic (parallel-lined) area," then in the proof shortens it to "parallelogram." In a similar way, XII.17 uses "polyhedron" as a descriptive expression for a solid
with many faces, then more or less adopts it as a technical term.
However, according to Smith (vol. 2, page 295), "The word 'polyhedron' is not found in the Elements of Euclid; he uses 'solid,' 'octahedron,' and 'dodecahedron,' but does not mention the general
solid bounded by planes."
In English, polyhedron is found in 1570 in Sir Henry Billingsley's translation of Euclid XII.17. Early in the proof (folio 377) Billingsley amplifies it to "...a Polyhedron, or a solide of many
sides,..." [Ken Pledger].
In English, in the 17th through 19th centuries, the word is often spelled polyedron.
POLYNOMIAL was used by François Viète (1540-1603) (Cajori 1919, page 139).
The word is found in English in 1674 in Arithmetic by Samuel Jeake (1623-1690): "Those knit together by both Signs are called...by some Multinomials, or Polynomials, that is, many named" (OED2).
[According to An Etymological Dictionary of the English Language (1879-1882), by Rev. Walter Skeat, polynomial is "an ill-formed word, due to the use of binomial. It should rather have been
polynominal, and even then would be a hybrid word."]
The term POLYOMINO was coined by Solomon W. Golomb in 1954 (Schwartzman, p. 169).
The term POLYSTAR was coined by Richard L. Francis in 1988 (Schwartzman, p. 169).
The word POLYTOPE is a translation of the German Polytop introduced for a four dimensional convex solid by Reinhold Hoppe “Regelmässige linear begrenzte Figuren von vier Dimensionen,” Archiv der
Mathematik und Physik, 67, (1882), 29–43.
The English word appears in Alicia Boole Stott “Geometrical Deduction of Semiregular from Regular Polytopes and Space Fillings,” Verhandelingen der Koninklijke Akademie van Wetenschappen te Amsterdam, 11, (1910), 3–24. See Irene Polo-Blanco “Alicia Boole Stott, a Geometer in Higher Dimension,” Historia Mathematica, 35, (2008), 123-139.
PONS ASINORUM usually refers to Proposition 5 of Book I of Euclid. From Smith vol. 2, page 284:
The proposition represented substantially the limit of instruction in many courses in the Middle Ages. It formed a bridge across which fools could not hope to pass, and was therefore known as the
pons asinorum, or bridge of fools. It has also been suggested that the figure given by Euclid resembles the simplest form of a truss bridge, one that even a fool could make.
The proposition was also called elefuga, a term which Roger Bacon (c. 1250) explains as meaning the flight of the miserable ones, because at this point they usually abandoned geometry (Smith vol. 2,
page 284).
Pons asinorum is found in English in 1751 in Smollett, Per. Pic.: "Peregrine..began to read Euclid..but he had scarce advanced beyond the Pons Asinorum, when his ardor abated" (OED2).
According to Smith, pons asinorum has also been used to refer to the Pythagorean theorem.
POPULATION and SAMPLE have been linked technical terms in statistics since the turn of the 20th century. In the 19th century population came to be applied to animals and plants, and sample, which was primarily a commercial term, came to be applied to objects of scientific interest; the OED quotes T. H. Huxley writing in 1878 (Physiography, xvi. 2), "numerous samples of the sea bottom were secured." The terms were brought into statistics by people interested in the statistical analysis of biological populations.
Population and sample acquired a statistical colouring in the work of Francis Galton and W. F. R. Weldon. In "Typical laws of heredity," Nature, 15, (1877), April 19th, p. 532 Galton wrote, "the
population ... will conform to the law of deviation [the normal distribution]." Weldon applied statistical methods to "samples" of crabs in On Certain Correlated Variations in Carcinus moenas,
Proceedings of the Royal Society, 54, (1893), 318-329.
The third founder of biometry, Karl Pearson, further abstracted the terms, brought them together and established the population-sample terminology in theoretical statistics. In "On the Probable
Errors of Frequency Constants," Biometrika, 2, (1903) p. 273 he wrote, "If the whole of a population were taken we should have certain values for its statistical constants, but in actual practice we
are only able to take a sample ...."
It soon became clear that the population of theoretical statistics was a much more complex concept than the population of a country. In his "The Probable Error of a Mean", Biometrika, 6, (1908), pp.
1-25 Student considered a "normal population" from which "random samples" are drawn. In another paper, "Probable Error of a Correlation Coefficient," (Biometrika, 6, (1908), p. 302), Student
explained that the "indefinitely large population [from which the random sample is obtained] need not actually exist," i.e. it may exist only in imagination. R. A. Fisher used the phrase
"hypothetical infinite population" in "On the Mathematical Foundations of Theoretical Statistics", (Philosophical Transactions of the Royal Society of London, Ser. A, 222, (1922), p. 311) and
"infinite hypothetical population" in "Theory of Statistical Estimation.," Proceedings of the Cambridge Philosophical Society, 22, (1925), 700-725. His prefatory note to the latter (p. 700) tries to
clarify what he meant by such a thing.
Although Student (1908) had used the phrases, "mean of the population" and "mean of the sample," it was not until the 1930s that such terms as sample mean or population standard deviation became
prominent. The new sensitivity to the population-sample distinction was largely a response to Fisher’s complaint that statisticians had not properly distinguished population and sample quantities, a
complaint that led him to introduce the terms parameter and statistic.
This entry was contributed by John Aldrich. See BIOMETRY, PARAMETER and RANDOM SAMPLE.
POSET, an abbreviation of "partially ordered set", is due to Garrett Birkhoff (1911-1996), as he himself says in the second edition (1948, p. 1) of his book Lattice Theory. The term is now firmly established [Carlos César de Araújo].
POSITIONAL NOTATION is found in 1890 in The Theory of Determinants: In the Historical Order of Development by Thomas Muir: "Taking this up in order, we observe that Vandermonde proposes for
coefficients a positional notation essentially the same as that of Leibnitz, writing 1[2] where Leibnitz wrote 12 or 1[2]." [Google print search]
Positional notation is also found in "Our Symbol for Zero" by George Bruce Halsted in American Mathematical Monthly, Vol. 10, No. 4. (Apr., 1903), pp. 89-90 [JSTOR].
POSITIVE. In the 15th century the names "positive" and "affirmative" were used to indicate positive numbers (Smith vol. 2, page 259).
In 1544 in Arithmetica integra Stifel called positive numbers numeri veri (Smith vol. 2, page 260).
Cardano (1545) called positive numbers numeri veri or veri numeri (Smith vol. 2, page 259).
Napier (c. 1600) used the adjective abundantes to designate positive numbers (Smith vol. 2, page 260).
The OED shows a use of affirmative to mean positive in 1693 by E. Halley, "Algebra" in Phil. Trans. XVII: "Which is affirmative when 2rq is less than dr - dq, otherwise negative."
Positive is found in English in the phrase "the Affirmative or Positive Sign +" in 1704 in Lexicon technicum, or an universal English dictionary of arts and sciences by John Harris.
In the French language, zero is a positive number. Trésor de la Langue Française has "Nombre positif. Nombre réel égal ou supérieur à zéro ..." [positive number: a real number equal to or greater than zero] [William C. Waterhouse].
POSITIVE DEFINITE appears in 1905 in volume I of The Theory of Functions of Real Variables by James Pierpont [James A. Landau].
POSTERIOR PROBABILITY and PRIOR PROBABILITY. Jakob Bernoulli used the terms a priori and a posteriori to distinguish two ways of deriving probabilities: deduction a priori (without experience) is possible when there are specially constructed devices, like dice; but otherwise, "what you cannot deduce a priori, you can at least deduce a posteriori--i.e., you will be able to make a deduction from
many observed outcomes of similar events." (Ars Conjectandi (1713) Part IV, Chapter 4.) Cournot uses the term in this sense in Chapter VIII, "Des probabilités à posteriori," of his Exposition de la
Théorie des Chances et des Probabilités.
In the course of the 19th century "a priori probability" and "a posteriori probability" became the standard terms in stating the theorem now called after Bayes. Although Bayes’s theorem (the theorem
on the probability of causes, the theorem on inverse probability) had an established place in expositions of probability from Laplace’s Théorie Analytique des Probabilités, (1812) onwards, it took
some decades for the terminology to become standardised.
In the English literature W. Lubbock & J. E. Drinkwater-Bethune (On Probability, 1830, p. 25) referred to "the probability [of the hypothesis] antecedent to the observations under consideration" as
its "à priori probability." W. F. Donkin added the term "a posteriori probability" ("On Certain Questions Relating to the Theory of Probabilities," Philosophical Magazine, 1, (1851), 353-368). By
1866 Isaac Todhunter was writing in his widely-used textbook, Algebra for the Use of Colleges and Schools (p. 456), that "a priori probability" and "a posteriori probability" are the "usual" terms.
Todhunter (following Donkin) wrote the formula as
\[ Q_r = \frac{P_r \, p_r}{\sum_r P_r \, p_r}, \]
where $P_r$ is "the probability of the hypothesis of the $r$th cause" (a priori probability), $p_r$ is "the probability of the event on the hypothesis of the $r$th cause" and $Q_r$ is "the probability of the hypothesis of the $r$th cause estimated after the event" (a posteriori probability). There was no standard term for $p_r$ until Harold Jeffreys adopted R. A. Fisher's term "likelihood" in the 1930s. Nor was there a standard term (or symbol) for conditional probability.
The contractions posterior probability and prior probability were introduced by Dorothy Wrinch and Harold Jeffreys "On Certain Fundamental Principles of Scientific Inquiry," Philosophical Magazine,
42, (1921), 369-390.
Howard Raiffa and Robert Schlaifer introduced the term preposterior, "choice of a terminal act after an experiment has already been performed ... we call terminal analysis, and choice of the
experiment which is to be performed ... we call preposterior analysis." (Applied Statistical Decision Theory (1961) p. x.)
This entry was contributed by John Aldrich, using David (2001) and Hald (1998, p. 162). See also BAYES, CONDITIONAL PROBABILITY, INVERSE PROBABILITY and LIKELIHOOD.
POSTFIX NOTATION is found in R. M. Graham, "Bounded Context Translation," Proceedings of the Eastern Joint Computer Conference, AFIPS, 25 (1964) [James A. Landau].
POSTULATE appears in the early translations of Euclid’s Elements and was commonly used by the medieval Latin writers (Smith vol. 2, page 280). The Greek original was αἴτημα (aitêma).
The most debated of the postulates, the parallel postulate, is postulate 5. In the notes to his edition of the Elements T. L. Heath (1926, vol. 1, p. 202) writes, "From the very beginning ... the
Postulate was attacked as such and attempts were made to prove it as a theorem or to get rid of it by adopting some other definition of parallels."
In English, postulate is found in 1646 in Pseudodoxia epidemica or enquiries into very many received tenents by Sir Thomas Browne in the phrase "the postulate of Euclide" (OED2).
See AXIOM.
POTENTIAL FUNCTION. This term was used by Daniel Bernoulli in 1738 in Hydrodynamica (Kline, page 524).
According to Smith (1906) and the Encyclopaedia Britannica (article: "Green"), the term potential function was introduced by George Green (1793-1841) in 1828 in Essay on the Application of
Mathematical Analysis to the Theory of Electricity and Magnetism: "Nearly all the attractive and repulsive forces..in nature are such, that if we consider any material point p, the effect, in a given
direction, of all the forces acting upon that point, arising from any system of bodies S under consideration, will be expressed by a partial differential of a certain function of the co-ordinates
which serve to define the point's position in space. The consideration of this function is of great importance in many inquiries... We shall often have occasion to speak of this function, and will
therefore, for abridgement, call it the potential function arising from the system S." ( Green’s Papers, p. 9)
POTENTIAL as the name of a function was introduced by Gauss in 1840, according to G. F. Becker in Amer. Jrnl. Sci. 1893, Feb. 97. [Cf. Gauss Allgem. Lehrsätze d. Quadrats d. Entfernung Wks. 1877 V. 200: "Zur bequemern Handhabung..werden wir uns erlauben dieses V mit einer besonderen Benennung zu belegen, und die Grösse das Potential der Massen, worauf sie sich bezieht, nennen." (For more convenient handling ... we shall allow ourselves to give this V a special name, and to call the quantity the potential of the masses to which it refers.)]
POWER appears in English in 1570 in Sir Henry Billingsley's translation of Euclid's Elements: "The power of a line, is the square of the same line."
POWER (meaning the cardinal number of a set) was coined by Georg Cantor (1845-1918) (Katz, page 734). Cantor used the German word Mächtigkeit. See p. 481 of his “Beiträge zur Begründung der transfiniten Mengenlehre” [Contributions to the founding of the theory of transfinite numbers], Mathematische Annalen, 46, (1895), 481-512.
POWER (of a test) is found in 1933 in J. Neyman and E. S. Pearson, "The Testing of Statistical Hypotheses in Relation to Probabilities A Priori," Proceedings of the Cambridge Philosophical Society, 29, 492-510: "The probability of rejecting the hypothesis tested, H[0], when the true hypothesis is H[i], or P(w| H[i]), may be termed the power of the critical region w with respect to H[i]." The
concept of a test being more powerful than another is introduced in the same paper, as is the concept of a uniformly more powerful test.
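In later notation (a standard formulation, added here for reference), the power of a test with critical region $w$ is written as a function of the true parameter value,
\[ \beta(\theta) = P_\theta(\text{reject } H_0) = P_\theta(X \in w), \]
and a test is uniformly most powerful if its $\beta(\theta)$ is at least as large as that of any rival test of the same size, for every alternative $\theta$.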
The term uniformly most powerful test (with a result on the existence of such tests) appears in R. A. Fisher, "Two New Properties of Mathematical Likelihood", Proceedings of the Royal Society, Series
A, vol. 144 (1934) p. 295 [James A. Landau]. Fisher had not yet started criticising Neyman and Pearson, beyond insisting that their "interesting new line of approach" would benefit if they paid more
attention to his theory of estimation.
Power function appears in J. Neyman and E. S. Pearson’s "Contributions to the Theory of Testing Statistical Hypotheses," Statistical Research Memoirs, 1, (1936), 1-37. (David 2001.)
See also HYPOTHESIS AND HYPOTHESIS TESTING.
The expression POWER OF A POINT WITH RESPECT TO A CIRCLE was coined (in German) by Jacob Steiner [Julio González Cabillón].
POWER SERIES is found in English in 1893 in Theory of Functions of Complex Variable by A. R. Forsyth: "Any one of the continuations of a uniform function, represented by a power-series, can be
derived from any other" (OED2).
PRECALCULUS is found in 1947 in Mary Draper Boeker, The Status of the Beginning Calculus Students in Pre-Calculus College Mathematics, Bureau of Publications, Teachers College, Columbia University.
Precalculus is found as a noun on Dec. 1, 1968, in the Sunday Gazette-Mail of Charleston, W. Va.: “Although he is chairman of the department, Dr. [James C.] Eaves is now teaching precalculus to about
125 freshmen, and he reportedly knows each by name.”
PREDICATE CALCULUS. The OED refers to D. Hilbert & R. Ackermann Grundzüge der theoretischen Logik (1928) ii. p. 34 for Prädikatenkalkül. A JSTOR search found the English term in 1939 in László Kalmár "On the Reduction of the Decision Problem. First Paper. Ackermann Prefix, A Single Binary Predicate," Journal of Symbolic Logic, 4, (1939), p. 7.
PREFIX (notation) is found in S. Gorn, "An axiomatic approach to prefix languages," Symbol. Languages in Data Processing, Proc. Sympos., March. 26-31, 1962, 1-21 (1962).
PRENEX NORMAL FORM. According to Webster's Third New International Dictionary, the word comes from Late Latin praenexus (tied up or bound in front), from Latin prae- pre- + nexus, (past participle of
nectere to tie, bind).
A JSTOR search finds "the equivalent prenex form" in László Kalmár, "On the Reduction of the Decision Problem. First Paper. Ackermann Prefix, A Single Binary Predicate," The Journal of Symbolic Logic, March 1939.
Prenex normal form is found in 1944 in A. Church, Ann. Math. Stud. xiii. 60 (OED2).
PRESENT VALUE appears in Edmund Halley, "An Estimate of the Degrees of the Mortality of Mankind," Philosophical Transactions of the Royal Society, XVII (1693) [James A. Landau].
PRE-WHITENING occurs in G. Hext, "A note on pre-whitening and recolouring," Stanford Univ. Dept. Statist. Tech. Rep no. 13 (1964) [James A. Landau]. The term was probably first used in R. B. Blackman & J. W. Tukey's "The Measurement of Power Spectra," Bell System Technical Journal, 37, (1958).
PRIMALITY is found in 1919 in Dickson: "T. E. Mason described a mechanical device for applying Lucas' method for testing the primality of $2^{4q+3} - 1$."
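Lucas' method survives in modern form as the Lucas–Lehmer test (stated here for reference; Dickson's source does not give it in this form): for an odd prime $p$, the Mersenne number $M_p = 2^p - 1$ is prime if and only if $s_{p-2} \equiv 0 \pmod{M_p}$, where
\[ s_0 = 4, \qquad s_{k+1} = s_k^2 - 2. \]
For example, for $p = 7$ and $M_7 = 127$ the sequence runs $4, 14, 67, 42, 111, 0 \pmod{127}$, confirming that 127 is prime.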
PRIME NUMBER. Iamblichus writes that Thymaridas called a prime number rectilinear since it can only be represented one-dimensionally.
In English prime number is found in Sir Henry Billingsley's 1570 translation of Euclid's Elements (OED2).
Some older textbooks include 1 as a prime number.
In his Algebra (1770), Euler did not consider 1 a prime [William C. Waterhouse].
In 1859, Lebesgue stated explicitly that 1 is prime in Exercices d'analyse numérique [Udai Venedem].
In 1866, Primary Elements of Algebra for Common Schools and Academies by Joseph Ray has:
All numbers are either prime or composite; and every composite number is the product of two or more prime numbers. The prime numbers are 1, 2, 3, 5, 7, 11, 13, 17, etc. The composite numbers are
4, 6, 8, 9, 10, 12, 14, 15, 16, etc.
In 1873, The New Normal Mental Arithmetic by Edward Brooks has on page 58:
Numbers which cannot be produced by multiplying together two or more numbers, each of which is greater than a unit, are called prime numbers.
In 1892, Standard Arithmetic by William J. Milne has on page 92:
A number that has no exact divisor except itself and 1 is called a Prime Number. Thus, 1, 3, 5, 7, 11, 13, etc. are prime numbers.
A list of primes to 10,006,721 published in 1914 by D. N. Lehmer includes 1.
[James A. Landau provided some of the above citations.]
PRIME NUMBER THEOREM. The theorem was proved independently by Hadamard and de la Vallée Poussin in 1896. Edmund Landau called it der Primzahlsatz, for brevity and in recognition of the theorem’s
importance: see Handbuch der Lehre von der Verteilung der Primzahlen (1909, Erster Band, p. vii.) See also Cajori 1919, page 439. The term was quickly translated into English: see the 1915 quotation
from Ramanujan in the entry DEEP THEOREM. For further information see Mathworld: PrimeNumberTheorem. [John Aldrich]
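The theorem itself, in modern notation (added here for reference): if $\pi(x)$ denotes the number of primes not exceeding $x$, then
\[ \pi(x) \sim \frac{x}{\log x}, \quad \text{i.e.} \quad \lim_{x \to \infty} \frac{\pi(x)}{x/\log x} = 1. \]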
PRIMITIVE (in group theory). The German word primitiv appears in Sophus Lie, Theorie der Transformationsgruppen (1888).
Primitive appears in J. M. Page’s exposition of Lie’s theory, “On the Primitive Groups of Transformations in Space of Four Dimensions,” American Journal of Mathematics 10, (1888), 293-346. Page
writes: “A group in the plane is primitive when with each ordinary point which we hold, no invariant direction is connected.” (OED). [John Aldrich]
PRIMITIVE FUNCTION. In Theory of Analytic Functions (Théorie des fonctions analytiques 1797), Joseph Lagrange wrote, in translation:
Let us assign to the variable of a function some increment by adding to this variable an arbitrary quantity; we can, if the function is algebraic, expand it in terms of the powers of this
quantity by using the familiar rules of algebra. The first term of the expansion will be the given function, which will be called the primitive function; the following terms will be formed of
various functions of the same variable multiplied by the successive powers of the arbitrary quantity. These new functions will depend only on the primitive function from which they are derived
and may be called the derivative functions.
The preceding was taken from Struik, A Source Book in Mathematics, p. 388. [Citation provided by Dave L. Renfro]
Lacroix used fonction primitive in Traité du calcul différentiel et integral (1797-1800). The term appears in English in the 1816 translation of this work.
PRIMITIVE RECURSIVE FUNCTION was coined by Rózsa Péter (1905-1977) in “Über den Zusammenhang der verschiedenen Begriffe der rekursiven Funktion,” Mathematische Annalen, 110, (1934), 612-632.
The English term appeared in S. C. Kleene “General Recursive Functions of Natural Numbers,” Mathematische Annalen, 112, (1936), 727-742.
[John Aldrich, Cesc Rossello, and Dirk Schlimm contributed to this entry].
The term PRIMITIVE ROOT was introduced by Leonhard Euler (1707-1783), according to Dickson, vol. I, page 181.
In "Demonstrationes circa residua ex divisione potestatum per numeros primos resultantia," Novi commentarii academiae scientiarum Petropolitanae 18 (1773), Euler wrote: "Huiusmodi radices
progressionis geometricae, quae series residuorum completas producunt, primitivas appellabo" [Heinz Lueneburg].
Primitive root is found in English in 1811 in An Elementary Investigation of the Theory of Numbers by Peter Barlow [James A. Landau].
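In modern terms (added here for reference), $g$ is a primitive root modulo a prime $p$ if its powers $g, g^2, \ldots, g^{p-1}$ run through all the nonzero residues modulo $p$. For example, 3 is a primitive root modulo 7, since
\[ 3^1 \equiv 3,\; 3^2 \equiv 2,\; 3^3 \equiv 6,\; 3^4 \equiv 4,\; 3^5 \equiv 5,\; 3^6 \equiv 1 \pmod 7. \]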
The method of PRINCIPAL COMPONENTS was introduced by H. Hotelling in "Analysis of a Complex of Statistical Variables into Principal Components," Jrnl. Educ. Psychol., XXIV, (1933). On p. 421: "We..determine the components, not exceeding n in number, and perhaps neglecting those whose contributions to the total variance are small. This we shall call the method of principal components."
The term PRINCIPAL GROUP was introduced by Felix Klein (1849-1925) (Katz, page 791).
PRINCIPAL SQUARE ROOT appears in 1898 in Text-Book of Algebra by G. E. Fisher and I. J. Schwatt, according to Manning (1970).
The term PRINCIPLE OF CONTINUITY was coined by Poncelet (Kline, page 843).
PRINCIPLE OF INDIFFERENCE/INSUFFICIENT REASON. In his Treatise on Probability (1921) J. M. Keynes re-named the principle of insufficient reason the principle of indifference, for he thought the older term "clumsy and unsatisfactory." The essence of the principle was that "equal probabilities must be assigned to each of several alternatives, if there is an absence of positive ground for assigning unequal ones."
The term principle of insufficient reason was used by Johannes von Kries in his probability textbook of 1886, according to The Emergence of Probability by Ian Hacking [Hans Fischer].
The principle is usually traced to Jakob (Jacques) Bernoulli's Ars Conjectandi (1713). Later the principle provided a justification for the use of a uniform prior in problems of inverse probability.
In the course of the 19th century its use in this role was subject to increasingly heavy criticism. R. A. Fisher echoed these criticisms when he discussed the application of the principle to Bayes's
problem of inference to the probability of success in Bernoulli trials: "Apart from evolving a vitally important piece of knowledge, that of the exact form of the distribution of p, out of complete
ignorance, it is not even a unique solution. For ... we might equally have measured probability upon an entirely different scale ..." "On the Mathematical Foundations of Theoretical Statistics" (
Phil. Trans. Royal Soc. Ser. A. 222, (1922), p. 326).
See also BAYES, INVERSE PROBABILITY and UNIFORM DISTRIBUTION.
[This entry was contributed by John Aldrich, based on Hacking (op. cit.) and Hald (1998)]
The term PRINCIPLE OF THE PERMANENCE OF EQUIVALENT FORMS was introduced by George Peacock (1791-1858) (Eves, page 377).
PRISM is found in English in Sir Henry Billingsley's 1570 translation of Euclid's Elements (OED2). See the Elements, XI, def.13.
PRISMATOID (as a geometric figure) occurs in the title Das Prismatoid, by Th. Wittstein (Hannover, 1860) [Tom Foregger].
Prismatoid is found in English in 1881 in Metrical geometry. An elementary treatise on mensuration by George Bruce Halsted: "XIV. A prismatoid is a polyhedron whose bases are any two polygons in parallel planes, and whose lateral faces are determined by so joining the vertices of these bases that each line in order forms a triangle with the preceding line and one side of either base. REMARK. This definition is more general than XIII., and allows dihedral angles to be concave or convex, though neither base contain a reentrant angle. Thus, BB' might have been joined instead of A'C" [University of Michigan Digital Library].
PRISMOID is found in 1704 in Lexicon Technicum: "I, Prismoid, is a solid Figure, contained under several Planes whose Bases are rectangular Parallelograms, parallel and alike situate" [OED2].
The PRISONER’S DILEMMA was posed by A. W. Tucker in 1950, when addressing an audience of psychologists at Stanford University, where he was a visiting professor. The OED entry includes an account it
received from Tucker, "The Prisoner's Dilemma is my brain child. I concocted it at Stanford in early 1950 as a catchy example to enliven a semi-popular talk on Game Theory... My example became known
by the ‘grapevine’, but I did not publish it." It is discussed in the 1957 book by Luce & Raiffa Games & Decisions.
PROBABILISTIC is found in Tosio Kitagawa, Sigeru Huruya, and Takesi Yazima, The probabilistic analysis of the time-series of rare event, Mem. Fac. Sci. Kyusyu Univ., Ser. A 2 (1942).
The English words PROBABILITY and CHANCE were given new meanings when the mathematics of Pascal, Fermat and Huygens was translated and developed. (The OED traces "probability" to the mid 16th century and "chance" to the turn of the 14th.)
The origins of probability theory are usually traced to the 1654 correspondence between Pascal and Fermat, Les Lettres de Blaise Pascal (pp. 188-229), or in 20th century English translation. Probability (or probabilité) does not figure in the letters and the only word a modern reader might want to translate as probability is le hasard, used by Fermat in his letter of September 25th: "La somme des hasards... ce qui fait en tout 17/27" [the sum of the chances ... which makes 17/27 in all]. Probability in its modern sense is used in the last chapter of La Logique, ou L’Art de Penser (1682) by Pascal’s friends Arnauld and Nicole. See La Logique de Port-Royal pp. 365ff.
The word kans (chance) was used repeatedly by Huygens in his Dutch work Van Rekeningh in Spelen van Geluck. (Kees Verduin) The Latin version of this work, De Ratiociniis in Ludo Aleae (1657), was translated into English as The Value of All Chances ... (1714). Here chances are possibilities or opportunities: e.g. "If the number of Chances I have to gain a, be p, and the number of Chances I have to gain b, be q. Supposing the Chances be equal; my Expectation will then be worth (ap+bq)/(p+q)." (Prop. III) The expression "chances are equal," which is used a lot, means that the probabilities of the opportunities are the same. The word probability appears once in the expression "more probability" and once in "equal probability."
The term "probability" was much more important in De Moivre's The Doctrine of Chances: or, a Method of Calculating the Probability of Events in Play (1718). De Moivre uses "probability" in its modern
sense, e.g. "CASE I^st: To find the Probability of throwing an Ace in two throws of one Die." The book's opening proposition connects chance and probability: "The Probability of an Event is greater
or less, according to the number of Chances by which it may happen, compared with the whole number of Chances by which it may happen or fail." Chances are counted and probabilities are derived from
them. In his Essay (1763) Bayes re-defined chance when he wrote, "By chance I mean the same as probability." (Definition 6)
Bayes's title, An Essay towards solving a Problem in the Doctrine of Chances, illustrates the common practice in 18th century England of referring to the subject as the "doctrine of chances." In the 19th century probability was more likely to be in the title: e.g. Laplace's Théorie Analytique des Probabilités, (1812) and, in English, Lubbock & Drinkwater-Bethune's On Probability (1830) and De
Morgan's Essay on Probabilities (1838). The phrase theory of probability came into use in English after 1860, a fashion set by Todhunter's A History of the Mathematical Theory of Probability (1865).
Bertrand set the fashion for titles in French with his Calcul des Probabilités (1889).
Since 1840 or so there has been a continuing debate on the nature of probability. Everyone with an interest in probability--mathematicians, philosophers, physicists, economists, etc.--has contributed
and a special vocabulary has evolved. The most common terms, such as subjective and objective, are used in a variety of senses and have a complex history. Poisson distinguished two concepts in his
Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile (1837): probabilité, a question of "the reason we have to believe that [an event] will or will not occur," and
chance, a question of "events in themselves and independent of the knowledge we have of them." More recently Carnap Logical Foundations of Probability (1950) used "probability[1]" for probability as
degree of confirmation and "probability[2]" for probability as relative frequency. Savage Foundations of Statistics (1954) called his probability construction "personal probability." (See I. Hacking
The Emergence of Probability and L. Daston "How Probabilities came to be Objective and Subjective," Historia Mathematica, 21, (1994), 330-344. The Poisson translations are Daston's.)
When Kolmogorov axiomatised probability in the Grundbegriffe der Wahrscheinlichkeitsrechnung (1933) he exploited the analogy between the measure of a set and the probability of an event. The
development generated new probability terms. A JSTOR search produced the following appearances in English.
Probability measure appears in J. L. Doob "Stochastic Processes with an Integral Valued Parameter," Transactions of the American Mathematical Society, 44, (1938), 87-150: "any non-negative completely
additive function of point sets, defined on a Borel field of sets of some abstract space Ω will be called a probability measure if the space Ω is itself in the field of definition and if the set
function is defined as 1 on the space Ω."
Probability space appears in J. L. Doob & R. A. Leibler "On the Spectral Analysis of a Certain Transformation," American Journal of Mathematics, 65, (1943), 263-272: "we consider an abstract space Ω
with a measure P of Lebesgue type--that is, completely additive and non-negative on some Borel field of Ω--with P(Ω) = 1; that is we consider a probability space." (p. 268)
Chance variable had a brief career around 1935-40 in the sense of random variable. See e.g. Doob's paper on martingales, "Regularity Properties of Certain Families of Chance Variables," Transactions of the American Mathematical Society, 47, (1940), 455-486.
The word probabilist has been in English since the 17th century (OED) but it has only been in common use in the sense of a specialist in probability theory since the 1950s (JSTOR). An early sighting is in a 1946 letter from R. A. Fisher: "In Paris recently I found an interesting and perhaps useful distinction being made between statisticians and probabilists, broadly speaking putting me in the first class and Cramér in the second." Statistical Inference and Analysis: Selected Correspondence of R. A. Fisher (p. 331)
This entry was contributed by John Aldrich. See also Symbols in Probability on the Symbols in Probability and Statistics page.
PROBABILITY DENSITY FUNCTION. Probability function appears in J. E. Hilgard, "On the verification of the probability function," Rep. Brit. Ass. (1872).
Wahrscheinlichkeitsdichte appears in 1912 in Wahrscheinlichkeitsrechnung by A. A. Markoff (David, 1998).
In J. V. Uspensky, Introduction to Mathematical Probability (1937), page 264 reads "The case of continuous F(t), having a continuous derivative f(t) (save for a finite set of points of discontinuity), corresponds to a continuous variable distributed with the density f(t), since $F(t) = \int_{-\infty}^{t} f(x)\,dx$" [James A. Landau].
Probability density appears in 1939 in H. Jeffreys, Theory of Probability: "We shall usually write this briefly P(dx|p) = f'(x)dx, dx on the left meaning the proposition that x lies in a particular
range dx. f'(x) is called the probability density" (OED2).
Probability density function appears in 1946 in an English translation of Mathematical Methods of Statistics by Harald Cramér. The original appeared in Swedish in 1945 [James A. Landau].
See also the Probability and Statistics section of the companion page on the history of mathematical notation.
PROBABILITY DISTRIBUTION appears in a paper published by Sir Ronald Aylmer Fisher in 1920 (p. 758) [James A. Landau].
PROBABILITY DISTRIBUTIONS and STOCHASTIC PROCESSES, NAMES FOR. Several patterns in naming can be identified. The object can be named after a person associated with it (EPONYMY), e.g. CAUCHY,
GAUSSIAN, MARKOV, POISSON, WEIBULL, WIENER, WISHART. The object can take its name from the phenomenon with which it is associated, e.g. BRANCHING PROCESS, BROWNIAN MOTION, ERROR, or from the
mathematical construction on which it is based, e.g. BETA, BINOMIAL, EXPONENTIAL, GAMMA. The mathematical construction may itself be named after a person, as in the case DIRICHLET. In some cases the
symbol used for the random variable has given its name to the distribution, e.g. CHI-SQUARED and F. Systems of distributions, e.g. the PEARSON CURVES, generate ‘family’ names for the distributions:
so the beta distribution is also known as a Pearson Type I curve.
PROBABILITY GENERATING FUNCTION. A. de Moivre used this technique when he found the number of chances of throwing s points with n dice in his Miscellanea Analytica (1730); the analysis is reproduced in the 2nd edition of the Doctrine of Chances (1738). Generating functions were used by other 18th century authors, including Thomas Simpson in On the Advantage of Taking the Mean of a Number of Observations (1755). Laplace gave the technique its name and developed it further; Book I of his Théorie Analytique des Probabilités (1812) is called Calcul des Fonctions Génératrices. See Hald (1990, pp. 210-2).
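In modern notation (added here for reference), de Moivre's dice computation amounts to reading off a coefficient of a generating function: the number of ways of throwing s points with n ordinary dice is the coefficient of $x^s$ in
\[ (x + x^2 + x^3 + x^4 + x^5 + x^6)^n = x^n \left( \frac{1 - x^6}{1 - x} \right)^n. \]
Dividing by $6^n$ turns this counting generating function into the probability generating function of the total.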
The term probability generating function has been current only since the 1940s. The earliest result from a JSTOR search was M. S. Bartlett "The Present Position of Mathematical Statistics," Journal
of the Royal Statistical Society, 103, (1940), 1-29. Perhaps the growing use of other types of generating function, including the moment generating function and the cumulant generating function, made
a more specific term desirable.
This entry was contributed by John Aldrich. See CHARACTERISTIC FUNCTION and MOMENT GENERATING FUNCTION.
PROBABILITY INTEGRAL TRANSFORMATION. The term first appears in E. S. Pearson “The Probability Integral Transformation for Testing Goodness of Fit and Combining Independent Tests of Significance,”
Biometrika, 30, (1938), 134-148. Pearson (p. 135) states that the idea had been used in recent work by R. A. Fisher, Karl Pearson and Neyman. However Stephen M. Stigler indicates an earlier use,
writing in “Simon Newcomb, Percy Daniell, and the History of Robust Estimation 1885-1920,” Journal of the American Statistical Association, 68, (1973), p. 876 that the transformation was used
“apparently for the first time” by P. J. Daniell “Observations Weighted According to Order,” American Journal of Mathematics, 42, (1920), 222-236. [John Aldrich]
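The transformation itself, stated here for reference: if a random variable $X$ has a continuous distribution function $F$, then
\[ U = F(X) \quad \text{is uniformly distributed on } (0,1), \]
which is what makes it possible to combine independent tests of significance, as in Pearson's paper.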
PROBABLE ERROR appears in a non-technical sense in 1812 in Phil. Mag.: "All that can be gained is, that the errors are as trifling as possible--that they are equally distributed--and that none of them exceed the probable errors of the observation" (OED2).
According to Hald (p. 360), Friedrich Wilhelm Bessel (1784-1846) introduced the term probable error (wahrscheinliche Fehler) without detailed explanation in 1815 in "Ueber den Ort des Polarsterns" in Astronomisches Jahrbuch für das Jahr 1818, and in 1816 defined the term in "Untersuchungen über die Bahn des Olbersschen Kometen" in Abh. Math. Kl. Kgl. Akad. Wiss., Berlin. Bessel used the term for
the 50% interval around the least-squares estimate.
Probable error is found in 1852 in Report made to the Hon. Thomas Corwin, secretary of the treasury by Richard Sears McCulloh. This book uses the term four times, but on the one occasion where a
computation can be seen the writer takes two measurements and refers to the difference between them as the "probable error" [University of Michigan Digital Library].
Probable error is found in 1853 in A dictionary of science, literature & art edited by William Thomas Brande: "... the probable error is the quantity, which is such that there is the same probability
of the difference between the determination and the true absolute value of the thing to be determined exceeding or falling short of it. Thus, if twenty measurements of an angle have been made with
the theodolite, and the arithmetical mean or average of the whole gives 50° 27' 13"; and if it be an equal wager that the error of this result (either in excess or defect) is less than two seconds,
or greater than two seconds, then the probable error of the determination is two seconds" [University of Michigan Digital Library].
Probable error is found in 1853 in A collection of tables and formulae useful in surveying, geodesy, and practical astronomy by Thomas Jefferson Lee. The term is defined, in modern terminology, as
the sample standard deviation times .674489 divided by the square root of the number of observations [James A. Landau; University of Michigan Digital Library].
Probable error is found in 1855 in A treatise on land surveying by William Mitchell Gillespie: "When a number of separate observations of an angle have been made, the mean or average of them all,
(obtained by dividing the sum of the readings by their number,) is taken as the true reading. The 'Probable error' of this mean, is the quantity, (minutes or seconds) which is such that there is an
even chance of the real error being more or less than it. Thus, if ten measurements of an angle gave a mean of 35° 18', and it was an equal wager that the error of this result, too much or too
little, was half a minute, then half a minute would be the 'Probable error' of this determination. This probable error is equal to the square root of the sum of the squares of the errors (i. e. the
differences of each observation from the mean) divided by the number of observations, and multiplied by the decimal 0.674489. The same result would be obtained by using what is called 'The weight' of
the observation. It is equal to the square of the number of observations divided by twice the sum of the squares of the errors. The 'Probable error' is equal to 0.476936 divided by the square root of
the weight" [University of Michigan Digital Library].
Probable error is found in 1865 in Spherical astronomy by Franz Brünnow (an English translation by the author of the second German edition): "In any series of errors written in the order of their
absolute magnitude and each written as often as it actually occurs, we call that error which stands exactly in the middle, the probable error" [University of Michigan Digital Library].
In 1872 Elem. Nat. Philos. by Thomson & Tait has: "The probable error of the sum or difference of two quantities, affected by independent errors, is the square root of the sum of the squares of their
separate probable errors" (OED2).
In 1889 in Natural Inheritance pp. 57-8 Galton criticized the term probable error, saying the term was "absurd" and "quite misleading" because it does not refer to what it seems to, the most probable
error, which would be zero. He suggested the term Probability Deviation be substituted, opening the way for Pearson to introduce the term standard deviation (Tankard, p. 48).
"Probable error" went out of use in the early 20th century to be replaced by "standard error": the probable error is 0.67449 times the standard error. R. A. Fisher, one of those who adopted the
standard error, remarked in Statistical Methods for Research Workers (1925, p. 48) "The common use of the probable error is its only recommendation." [John Aldrich]
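The factor 0.674489 that recurs in these definitions can be stated compactly (added here for reference): for a normally distributed error with mean $\mu$ and standard deviation $\sigma$, the probable error $r$ is defined by
\[ P(|X - \mu| \le r) = \tfrac{1}{2}, \quad \text{giving} \quad r = 0.674489\ldots \times \sigma, \]
since the upper quartile of the standard normal distribution is at $0.674489\ldots$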
See also STANDARD ERROR.
The term PROBABLE PRIME TO BASE a was suggested by John Brillhart [Carl Pomerance et al., Mathematics of Computation, vol. 35, number 151, July 1980, page 1021].
PROBIT. The OED and David (2001) refer to C. I. Bliss "The Method of Probits," Science, 79, (1934), 38-9: "These arbitrary probability units have been termed ‘probits’" (p. 39). Whether Bliss was responsible for the term himself is unclear. D. J. Finney Probit Analysis, 2nd edition (1952, pp. 42-6) traces the underlying principle back to Fechner and his work on psychophysics in 1860, but the probit terminology seems to have arrived with Bliss. R. A. Fisher collaborated with Bliss (see his "The Case of Zero Survivors in Probit Assays," Annals of Applied Biology, 22, 164-165 (1935)) and he wrote about probit analysis in his book The Design of Experiments (1935). The publicity brought the subject to the attention of statisticians generally.
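In the classical usage (added here for reference; the shift by 5 is the standard convention, though the details are not from the sources above), the probit of a proportion $p$ is the corresponding normal deviate shifted to avoid negative values:
\[ \operatorname{probit}(p) = \Phi^{-1}(p) + 5, \]
where $\Phi$ is the standard normal distribution function; modern writers usually drop the 5 and take $\operatorname{probit}(p) = \Phi^{-1}(p)$.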
PROBLEM OF THE NILE. At the end of his paper "Uncertain Inference," Proceedings of the American Academy of Arts and Science, 71: 245-258 (1936), R. A. Fisher posed the following problem:
The agricultural land of a pre-dynastic Egyptian village is of unequal fertility. Given the height to which the Nile will rise, the fertility of every portion of it is known with exactitude, but
the height of the flood affects different parts of the territory unequally. It is required to divide the area, between the several households of the village, so that the yields of the lots
assigned to each shall be in pre-determined proportion, whatever may be the height to which the river rises.
Fisher added, "If this problem is capable of a general solution, then ... one of the primary problems of uncertain inference will have reached its complete solution." Later writers usually referred
to this as Fisher's Problem of the Nile, as in the title of a 1946 paper by D. G. Kendall in the journal Nature.
PRODUCT (in multiplication). According to the OED2, Albertus Magnus (1193-1280) used productum in his Metaphysicorum.
Fibonacci (1202) used factus ex multiplicatione and also the phrase "contemptum sub duobus numeris" (Smith, vol. 1).
Art of Nombryng (about 1430) uses both product and sum for the result in multiplication: "In multiplicacioun 2 nombres pryncipally ben necessary,..the nombre multiplying and the nombre to be
multipliede... Also..the 3 nombre, the whiche is clepide product or pervenient. [...] Multiplie .3. by hym-selfe, and þe some of alle wolle be 9" (OED2).
Sum was used for the result in multiplication by Pacioli (1494), Ortega (1512; 1515), and Recorde (c. 1542; 1558) (Smith, vol. 1).
In 1542 Robert Recorde in Ground of Artes (1575) used the obsolete term offcome: "The ofcome or product" (OED2). Offcome also appears in English in 1570 in Billingsley's translation of Euclid.
According to Smith (vol. 1), Licht (1500) used simply productus, dropping the numerus from numerus productus. Clichtoveus (1503) used both numerus productus and tota summa. Fine (1530) used sum as
well as numerus productus. Glareanus (1538) used summa producta. Ramus (1569) used factus.
PROGRAM (of research) often appears in writing about mathematics, e.g. as when Cajori (vol. II, (1929), p. 283) wrote, "The contribution of Leibniz to symbolic logic was a program, rather than an
actual accomplishment. He set up an ideal for others to approach." But the word is also used in such permanent constructions as ERLANGEN PROGRAM and HILBERT'S PROGRAM.
PROGRAM and PROGRAMMING (computing). The OED gives two meanings for the noun program: (1) "A sequence of operations that a machine can be set to perform automatically" with earliest reference from
1945: J. P. Eckert et al. Description of ENIAC (PB 86242) (Moore School of Electr. Engin., Univ. of Pennsylvania) 1 "The intended use of the ENIAC is to compute large families of solutions all based
on the same program of operations." (2) "A series of coded instructions which when fed into a computer will automatically direct its operation in carrying out a specific task" with earliest reference
from 1946: Nature 20 Apr. 527/2 "Control of the programme of the operation of the machine [sc. ENIAC] is also through electrical circuits."
The verb to program appears in J. P. Eckert et al. Description of ENIAC (PB 86242) (Moore School of Electr. Engin., Univ. of Pennsylvania) B-4 "In this fashion, problems involving numbers of
multiplications far in excess of 24 can be programmed."
PROGRAM and PROGRAMMING (optimization). Programming appears in the title, "The Programming of Interdependent Activities: General discussion" by Marshall K. Wood and George B. Dantzig in Econometrica, 17, July-October, 1949. Program appears in Part II of the paper by Dantzig alone, "Programming of Interdependent Activities: II Mathematical Model," Econometrica, 17, p. 203: "A set of values [satisfying the constraints] is called a feasible program." [James A. Landau].
Linear programming. Programming in a Linear Structure is the title of a work by George B. Dantzig published in 1948. Linear programming was used in 1949 by Dantzig in "Programming of Interdependent Activities: II Mathematical Model," Econometrica, 17, p. 203: "It is our purpose now to discuss the kinds of restrictions that fit naturally into linear programming" (OED2).
According to Linear Programming and Network Flows by Mokhtar S. Bazaraa, John J. Jarvis, and Hanif D. Sherali (2nd Edition, 1990), the term linear programming was coined by the economist and
mathematician Tjalling Charles Koopmans (1910-1985) in the summer of 1948 while he and George B. Dantzig strolled near the beach in Santa Monica, California.
In an interview of Merrill Flood conducted by Albert Tucker on May 14, 1984, Flood indicated that he and John Tukey coined the term linear programming:
Flood: One of the friendly arguments that Tjallings Koopmans and I had concerned an appropriate name for what is now known as linear programming theory. When I was responsible for organizing the
December meeting of the Allied Social Science Associations in Cleveland, probably 1947, I wanted to include a session of what was then commonly referred to as input-output analysis, after the
work of Wassily Leontief. Tjallings agreed to organize such a session for the meeting, and we met in California to discuss the arrangements just prior to Neyman's Second Berkeley Symposium.
Actually we discussed this while enroute from Stanford to Berkeley in a car whose other passengers were John Tukey, Francis Dresch, and a Stanford mathematician (Spencer?) who was driving. I knew
a bit about Leontief's work because of the work under Marshall Wood, by George Dantzig and others, that had been pushed and encouraged by Duane Evans, who was then at the Bureau of Labor
Statistics - because of my position as Chief Civilian Scientist on the War Department General Staff, with some minor responsibility for the effort in the Air Force under Marshall Wood. When
Tjallings and I were trying to decide what to call the session in Cleveland I was unhappy with the input-output analysis title and wanted something that was broader and peppier, partly because of
the related Air Force work. Tjallings proposed "activity analysis" as a name for the session, with some support from the economist Dresch, but Tukey and I were not satisfied. As you know, John
Tukey is very good at creating good names for things, and between us John and I soon settled upon "linear programming" as an excellent name for the session. As I recall vaguely now Tjallings did
not call the Cleveland session "linear programming" but his own 1948 Chicago conference went by that name even though he used 'activity analysis' in the title of his published proceedings. I
forget just how Tukey and I arrived at the name 'linear programming', but it has certainly stuck. I doubt that Tukey even remembers the California incident now.
Tucker: The first paper by George Dantzig and Marshall Wood was called "Programming in a Linear Structure".
Flood: Well, it is possible that is where we got the idea.
Tucker: And the two words, interchanged, were pulled out of that. I think that's the official story.
Flood: When was that paper? Was that before that, do you think? It may be the other way around.
Tucker: Well, that paper appears in the Activity Analysis volume. But it appeared earlier. The Activity Analysis volume was published in 1951. I think that the paper had appeared about two years
before that.
Flood: That would be later. I suspect that it came the other way around. I'm remembering vaguely the conversation Tukey and I had, and I don't remember any awareness of any such terminology by
Dantzig and company.
The entire interview is available at http://infoshare1.princeton.edu:2003/libraries/firestone/rbsc/finding_aids/mathoral/pmc11.htm. West Addison assisted with this entry.
Nonlinear programming appears in the title "Nonlinear Programming" by H. W. Kuhn and A. W. Tucker in Jerzy Neyman (ed.), Proceedings of the Second Berkeley Symposium on Mathematical Statistics and
Probability (1950) [James A. Landau].
Mathematical programming occurs in the title "Mathematical Programming," by A. Henderson and R. Schlaifer, Harvard Business Review 32, May-June 1954 [James A. Landau].
Dynamic programming is found in Richard Bellman, Dynamic Programming of Continuous Processes, The RAND Corporation, Report R-271, July 1954 [James A. Landau].
Quadratic programming is found in 1958 in the title On Quadratic Programming by E. W. Barankin and R. Dorfman [James A. Landau].
PROGRESSION. Boethius (c. 510), like the other Latin writers, used the word progressio (Smith vol. 2, page 496).
PROJECTION. The use of the term in functional analysis and matrix theory can be traced to M. H. Stone’s Linear Transformations in Hilbert Space (1932). Stone used the word in two ways, first (p. 23)
for the image and then (pp. 70-1) for the transformation producing the image. After proving a theorem about the action of such transformations he wrote, “In the terminology of Hilbert and his
followers such a transformation or operator and its associated matrix ... are called "Einzeltransformation" or "Einzeloperator" or "Einzelmatrix": this terminology does not seem to admit a graceful
or apt translation into English so that we shall use the term "projection", being justified by the theorem.”
See the entry HILBERT SPACE.
PROJECTIVE GEOMETRY. In his account of the origins of projective geometry Katz (1993) mentions Pascal, Desargues and Monge before saying that J.-V. Poncelet composed the first text in synthetic
projective geometry: his Traité des Propriétés Projectives of 1822. See here for an extract from Poncelet’s Introduction.
The OED finds the term projective geometry used in English in 1885 in Charles Leudesdorf's Elements of Projective Geometry, a translation of Cremona's Elementi di geometria proiettiva of 1873.
PROPER FRACTION appears in English in 1674 in Samuel Jeake Arithmetic (1701): "Proper Fractions always have the Numerator less than the Denominator, for then the parts signified are less than a Unit
or Integer" (OED2).
PROPER VALUE and VECTOR. See Eigenvalue.
PROPORTION. See ratio and proportion.
PROPORTIONAL HAZARD MODEL. The model and the name were introduced by D. R. Cox in "Regression Models and Life Tables (with discussion)," Journal of the Royal Statistical Society, B 34 (1972), 187-220.
PROPOSITIONAL FUNCTION appears in 1905 without explanation in a footnote in Bertrand Russell "On Denoting," Mind, 14, (1905), p. 480. Russell is more expansive in "The Theory of Implication,"
American Journal of Mathematics, 28, (1906), p. 163: "Such expressions are functions whose values are propositions; hence we call them propositional functions."
PROPOSITIONAL CALCULUS occurs in 1903 in the The Principles of Mathematics by Bertrand Russell, p. 12: "It is not with such entities that we are concerned in the propositional calculus, but with
genuine propositions." (OED2).
PROTRACTOR appears in 1658 in Edward Phillips' New World of English Words: "Protractor, a certain Mathematical instrument made of brasse, consisting of the Scale and Semicircle, used in the surveying
of Land" [OED2].
PSEUDO-PARALLEL was apparently coined by Eduard Study (1862-1930) in 1906 in Ueber Nicht-Euklidische und Linien Geometrie.
The term PSEUDOPRIME appears in Paul Erdös, "On pseudoprimes and Carmichael numbers," Publ. Math., 4, 201-206 (1956).
The term was also used by Ivan Niven (1915-1999) in "The Concept of Number," in Insights into Modern Mathematics, 23rd Yearbook, NCTM, Washington (1957), according to Kramer (p. 500). She seems to
imply Niven coined the term.
PSEUDOSPHERE was coined by Eugenio Beltrami (1835-1900). He referred to a surface of negative curvature as pseudo-spherical.
PSYCHOMATHEMATICS. Henry Blumberg coined this term and used it in the article "On the technique of generalization" (AMM, 1940, pp. 451-462). He wrote: "By Psychomathematics, I understand, namely,
that union of mathematics and psychology - using the latter in a broad, non-technical sense - whose function it is to explain how mathematical ideas arise, and to formulate heuristically helpful
principles in mathematical exploration" [Carlos César de Araújo].
PSYCHOMETRICS is defined in the OED as "The science of measuring mental capacities and processes; the application of methods of measurement to the various branches of psychology." The OED's earliest
quotation is from 1930: Proc. & Addr. Amer. Assoc. Stud. Feeble-Minded XV. 94: "To most persons who know the term at all, psychometrics is fairly synonymous with the use of intelligence
tests." The subject also embraced the field of psychophysics, so named by Gustav Theodor Fechner (1801-87) in 1859 as Psychophysik. The OED gives a quotation from Francis Galton: "Most of you are
aware of the recent progress of what has been termed Psycho-physics, or the science of subjecting mental processes to physical measurements and to physical laws." Rep. Brit. Assoc. Adv. Sci. 1877 II.
PURE IMAGINARY is found in 1857 in "A Memoir Upon Caustics" by Arthur Cayley in the Philosophical Transactions of the Royal Society of London: "... the radius may be either a real or a pure imaginary
distance ..." [University of Michigan Historical Math Collection].
PURE and APPLIED MATHEMATICS. In Book II of The Advancement of Learning (1605) Francis Bacon distinguished pure and mixed mathematics in much the way modern writers distinguish pure and applied mathematics:
The Mathematics are either pure or mixed. To the Pure Mathematics are those sciences belonging which handle quantity determinate, merely severed from any axioms of natural philosophy; and these
are two, Geometry and Arithmetic; the one handling quantity continued, and the other dissevered. Mixed hath for subject some axioms or parts of natural philosophy, and considereth quantity
determined, as it is auxiliary and incident unto them. For many parts of nature can neither be invented with sufficient subtilty, nor demonstrated with sufficient perspicuity, nor accommodated
unto use with sufficient dexterity, without the aid and intervening of the mathematics; of which sort are perspective, music, astronomy, cosmography, architecture, enginery, and divers others.
In the Mathematics I can report no deficience, except it be that men do not sufficiently understand the excellent use of the Pure Mathematics, in that they do remedy and cure many defects in the
wit and faculties intellectual. ... And as for the Mixed Mathematics, I may only make this prediction, that there cannot fail to be more kinds of them, as nature grows further disclosed.
Leonhard Euler used the term pure mathematics in 1761 in the title "Specimen de usu observationum in mathesi pura."
The first edition of the Encyclopaedia Britannica (1768-1771) has: "Pure mathematics have one peculiar advantage, that they occasion no disputes among wrangling disputants, as in other branches of
knowledge; and the reason is, because the definitions of the terms are premised, and every body that reads a proposition has the same idea of every part of it."
In the course of the 19th century the term mixed mathematics was replaced by applied mathematics. The Quarterly Journal of Pure and Applied Mathematics started publication in 1857 and its title
echoed that of Liouville’s journal, Journal des mathématiques pures et appliquées, ou Recueil mensuel de mémoires sur les diverses parties des mathématiques (Tome 1) founded in 1836. In the 19th and
20th centuries applied mathematics was usually mathematics applied to physics or even just to mechanics, so its scope was not quite the same as Bacon’s "music, astronomy, cosmography, architecture,
enginery, and divers others."
[This entry was contributed by John Aldrich.]
PYRAMID. According to Smith (vol. 2, page 292), "the Greeks probably obtained the word 'pyramid' from the Egyptian. It appears, for example, in the Ahmes Papyrus (c. 1550 B. C.). Because of the
pyramidal form of a flame the word was thought by medieval and Renaissance writers to come from the Greek word for fire, and so a pyramid was occasionally called a 'fire-shaped body.'"
The PYTHAGOREAN THEOREM (named after Pythagoras, ca. 569 BC-ca. 475 BC) is the 47th Proposition of the first book of Euclid’s Elements. Euclid did not mention Pythagoras but, according to Heath's
edition of the Elements, later writers, including Apollodorus, Cicero, Proclus, Plutarch and Athenaeus, referred to the proposition as a discovery of Pythagoras.
The term Pythagorean theorem appears in English in 1726 in A New Mathematical Dictionary, 2nd ed., by Edmund Stone.
Pythagorean axiom appears in 1912 in G. Kapp, Electr.: "The well-known Pythagorean axiom that the sum of the squares of the kathetes in a rectangular triangle is equal to the square of the
hypotenuse" (OED2).
Some early twentieth-century U. S. dictionaries have Pythagorean proposition, rather than Pythagorean theorem.
[Randy K. Schwartz, John G. Fauvel, Leo Rogers, and John Aldrich contributed to this entry.]
PYTHAGOREAN TRIPLE. Pythagorean triad appears in 1909 in Webster's New International Dictionary.
Pythagorean number triplet appears in 1916 in Historical Introduction to Mathematical Literature by George Abram Miller: "Pythagorean number triplets appear also in some of the Hindu writings which
antedate the lifetime of Pythagoras."
Pythagorean triple is found in March 1937 in The Mathematics Teacher: "Later in the book the quest for primitive Pythagorean triples, a beautiful illustration, by the way, of the methods of
mathematical reasoning, leads just as naturally to a consideration of 'Fermat's last theorem' and other topics in the theory of numbers." The term may be much older, however.
| {"url":"http://jeff560.tripod.com/p.html","timestamp":"2014-04-18T18:17:05Z","content_type":null,"content_length":"147777","record_id":"<urn:uuid:84d2d47e-f399-4644-8ab2-82f1d595e5bc>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Max/Min Online Webwork Problem
April 10th 2007, 11:07 AM #1
Here is the question:
Thus I went about trying to solve this problem in this manner. Note that the Minimum value is correct but the system will not accept 10 as an answer for the maximum value.
remember that absolute max is different from local max. the derivative gives you the local max. the absolute max is the highest point in the interval, IT DOES NOT HAVE TO BE A CRITICAL POINT.
check the end points, we get:
(15, 28585) and
(-6, -2222)
so the absolute max is 28585
Bingo! You were right, but what did you do to maximize each coordinate? Did you stick the endpoints of the interval that x was between back into f(x)?
yes, i found f(-6) and f(15). if one of those is lower than the y-value of all critical points, then it is the absolute min, if one is higher than all the y-values of the critical points, it is
the absolute max.
so always remember to check the endpoints for absolute max and mins
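A minimal sketch of that recipe in Python (added for reference; the original problem statement was an image and did not survive extraction, so the cubic below is only a stand-in: substitute the real f, interval, and critical points):

    # Absolute extrema on [a, b]: evaluate f at both endpoints and at every
    # critical point inside the interval, then take the smallest/largest value.
    def absolute_extrema(f, critical_points, a, b):
        candidates = [a, b] + [x for x in critical_points if a < x < b]
        values = {x: f(x) for x in candidates}
        x_min = min(values, key=values.get)
        x_max = max(values, key=values.get)
        return (x_min, values[x_min]), (x_max, values[x_max])

    f = lambda x: x**3 - 2*x**2 + 5            # stand-in function
    print(absolute_extrema(f, [0, 4/3], -6, 15))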
| {"url":"http://mathhelpforum.com/calculus/13524-max-min-online-webwork-problem.html","timestamp":"2014-04-23T23:39:26Z","content_type":null,"content_length":"42485","record_id":"<urn:uuid:9e77a18e-83c7-49ff-a66d-dfc22e099df8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
How can I avoid the game being rigged?
... And post 2 of 2
agentcom wrote:So, how did I know that Balch was a bad player looking at his dice stats? Well, the key is in the bottom half of that image--the Battle Outcomes stats. (Sidenote: the Dice Outcomes
stats are useless. You should really never be looking at them. It doesn't matter what you roll, it matters whether that is higher than what the opponent rolls.)
First, a little refresher on how the dice work in relation to these statistics. a 3v2 roll is going to be the most common. That's when you have a 4-stack or bigger and you're rolling against a
2-stack or bigger. You know that every time one of these types of rolls happens, 2 troops are going to die. You don't know whether they will be your opponent's or yours, but you do know that 2
troops will be taken off the board. So we can figure out how many total rolls a player has made by totaling up his kills and losses and dividing it by 2. The same can be done for the rest, but if
any player is only rolling 1, then the division is unnecessary.
Now, you don't have any control over what kinds of decisions that your opponent is making, so let's forget about the second row of the second table, which is the defensive stats. And let's not
worry about the total either. Only look at the first row of the second table: the Assault row. I'm sure a lot of you figured out instantly where I was going with this when I posted the table.
This guy is making a seemingly large number of high-risk attacks, by which I mean attacks where he has less than a 4-stack and therefore less than 3 dice to roll. The numbers seemed very high to
me, so that's when I made my comment in the forum.
But then I got to thinking, what is a "good" distribution of attacks? So, I took a somewhat random sample of the following players (their scoreboard ranks are in parentheses):
Kaskavel (1); 100mates (2); Chariot of Fire (20); Jippd (26); Pirlo (190); Agentcom (191)
I then compared the results to balch who was the complainer in previous post.
I've presented the results in graphic form below. There are some interesting differences that get obscured by the scale, but you can get the big picture by looking at it.
Notice that there are some differences among the good players. CoF apparently really likes the 3v2 roll (more on this below). Jippd is the opposite and seems to seek out the 3v1 roll (which,
remember, is rolling a stack of at least 4 against a single). But the biggest relative difference by far is in balch's chart. The disadvantageous rolls that barely even show up in the charts of
the other players are clearly being depended on by this player. He is rolling less than a 4-stack a whopping 18.6% of the time that he clicks the "assault" button. The average for the other 7
players, by comparison, is between 5% and 6%. And you can see that it's hurting his game because he ends up with relatively fewer opportunities to roll 3 dice against the defender.
I mentioned that I'd come back to CoF and his high percentage of 3v2 rolls. I think this is reflective of lots of large-map, large-team, no spoils games. Instead of trying to take territs (like
rolling 3 dice against 1 die in order to card), he is oftentimes looking to inflict the most damage. That is often done by creating lots of 4-stacks and trimming everything that you can rather
than taking any territs. The other players' game choices also probably affect their stats.
I may come back to this at some point and look at a bigger sample or a sample more appropriate to players of team games, but I just thought I'd mention it here. And perhaps some of you may want
to look at your dice stats. If you're making disadvantageous rolls more than 6 or 7% of the time, you are outside this range and might want to re-evaluate your strategy. And out of that amount,
over half of those disadvantageous attempts should be 2 dice versus 1 die (i.e. an attack from a territ with 3 troops to a territ with 1 troop).
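For reference, the theoretical baseline behind those percentages can be checked by brute force. The sketch below is an editorial addition, not from the thread; it assumes standard Risk-style dice (the attacker's top two of three dice are compared with the defender's two, and ties kill the attacker):

    from itertools import product

    # Enumerate all 6**5 equally likely combinations of 3 attack and 2 defense dice.
    attacker_losses = 0
    for atk in product(range(1, 7), repeat=3):
        for dfn in product(range(1, 7), repeat=2):
            top_atk = sorted(atk, reverse=True)[:2]   # attacker's best two dice
            top_dfn = sorted(dfn, reverse=True)       # defender's two dice
            # highest vs highest, second-highest vs second-highest
            attacker_losses += sum(1 for a, d in zip(top_atk, top_dfn) if a <= d)

    rolls = 6**5                                      # 7776 combinations
    print(attacker_losses / (2 * rolls))              # ~0.4605

On these assumptions the attacker should absorb about 46.0% of the two deaths in a fair 3v2 roll, which is the yardstick for figures like the 47.13% and 44.27% quoted further down.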
Re: How can I avoid the game being rigged as fucked?
Balch wrote:Just now, right this instant, rolling 3 dice 5 times in a row against two defenders, I lost 2 attackers 5 consecutive times .0214% odds.
I think what you meant to say is given the amount of scenarios that play out in any given day, it would actually be more statistically improbable for something rare not to happen. But this isn't
one thing, it's every fucking game. That's not more likely, that's even less likely.
You know what the odds of flopping a royal flush in a poker game (Omaha) are? 0.00923%. Does that mean it will never happen? No it doesn't, because it happened against me last week. Wanna take it
even further? What are the odds that on top of this statistical anomaly, I had a straight flush myself? Made my head spin but it still happened.
People always recall bad luck, they forget that sometimes they get very lucky too.
Re: How can I avoid the game being rigged as fucked?
Thanks for that, agentcom! Now I know why my score has fallen so far recently.
Can you lead a nation to World Cup glory? Join the FCCFP!!
High score: 2007 - Jan. 1, 2014 - thanks to Game 13172870
Game 13890915 - in which I helped clinch the NC4 title for LHDD
Re: How can I avoid the game being rigged as fucked?
Agent, how long did it take to make that graph?
Art by pershy
Re: How can I avoid the game being rigged as fucked?
You look at making desperate rolls as a cause, but it's the effect. I get steamrolled by rolls so hard I consistently get put into 'I need a miracle or I'm done' scenarios.
Re: How can I avoid the game being rigged as fucked?
And regardless of playstyle, there's a lot of red, and even more negatives in that luck column, so thanks for posting that. With the exception of one, all the green is in the desperation columns that
don't really matter anyways, because the game is over at that point. So thanks for support.
So yeah, in 3v2s, the most seen roll, when rolling offense my troops incur 47.13% of the deaths, but when it's my opponent rolling offense against me, they only incur 44.27% of the losses. A 3% swing
is pretty big, not sure if you knew that.
And to the broski with his royal flush, being a hard poker player and everything: Cool story bro. Get a royal flush every night you play, and then we'll talk.
Re: How can I avoid the game being rigged as fucked?
Balch wrote:And regardless of playstyle, there's a lot of red, and even more negatives in that luck column, so thanks for posting that. With the exception of one, all the green is in the
desperation columns that don't really matter anyways, because the game is over at that point. So thanks for support.
So yeah, in 3v2s, the most seen roll, when rolling offense my troops incur 47.13% of the deaths, but when it's my opponent rolling offense against me, they only incur 44.27% of the losses. A 3%
swing is pretty big, not sure if you knew that.
And to the broski with his royal flush, being a hard poker player and everything: Cool story bro. Get a royal flush every night you play, and then we'll talk.
whine whine. Agentcom is right. Admit it. Sometimes you need to do some 3v2's, but not nearly as much as you do.
Highest Rank: Colonel (2680) - 23-07-12
Re: How can I avoid the game being rigged as fucked?
Balch wrote:And regardless of playstyle, there's a lot of red, and even more negatives in that luck column, so thanks for posting that. With the exception of one, all the green is in the
desperation columns that don't really matter anyways, because the game is over at that point. So thanks for support.
So yeah, in 3v2s, the most seen roll, when rolling offense my troops incur 47.13% of the deaths, but when it's my opponent rolling offense against me, they only incur 44.27% of the losses. A 3%
swing is pretty big, not sure if you knew that.
And to the broski with his royal flush, being a hard poker player and everything: Cool story bro. Get a royal flush every night you play, and then we'll talk.
dude, you only played 25 games on this site so far, and anyone that paid attention at maths knows that 25 is far too small a sample to draw any sound statistical conclusions from. And on the royal
flush, unfortunately my opponent flopped it, not me.
Re: How can I avoid the game being rigged as fucked?
Why is 3v1 a disadvantageous roll? It has greater than 50% chance of success.
Re: How can I avoid the game being rigged as fucked?
Here are my stats:
3v2 60%
3v1 36%
2v2 0.2%
2v1 3%
1v2 0.1%
1v1 0.04%
I am most similar to Goranz, except he makes more 1v1s than I do. I guess he is playing more 1v1s than I am, since that is usually when I use those.
Re: How can I avoid the game being rigged as fucked?
sempaispellcheck wrote:Thanks for that, agentcom! Now I know why my score has fallen so far recently.
MoB Deadly wrote:Agent, how long did it take to make that graph?
I just made a spreadsheet that allows me to paste in the data from the whole row and it spits out the percentages. So, then it was just a matter of telling excel to make a chart out of the right
columns and with the right labels. I could actually tell you pretty much exactly how long that project took because of the time stamps on the original posts (that I quoted above) ... brb ... Looks
like it took me about 45 minutes to format the spreadsheet, make the chart and type all of that second post. But with the spreadsheet that I have now, I could add other players to it in maybe 30
seconds per player.
Balch wrote:You look at making desperate rolls as a cause, but it's the effect. I get steamrolled by rolls so hard I consistently get put into 'I need a miracle or I'm done' scenarios.
If I had to guess, I would say that you're going into desperation mode too soon rather than just playing it out. There's a time for desperation, sure, but it's not every time you get hit with a bit
of bad luck.
DoomYoshi wrote:Why is 3v1 a disadvantageous roll? It has greater than 50% chance of success.
I thought this might come up ... I kind of lumped it in there. It's really halfway between. It's not the best situation (that would be a 4v1), but it's not bad. All other things being equal, you
would prefer to wait on making that attack until you can drop a troop there, but sometimes you have to (say to bring the other player below 12). If I had to make a rule (based on this very limited
sample), it would be that those rolls of 3v1 (2 dice v 1 die) should probably be around 3% and all the rest of the short rolls should add up to another 2 or 3%.
And your numbers look right to me, and I was thinking the same thing ... that it looks like Goranz.
Re: How can I avoid the game being rigged as fucked?
With atmospheric noise anything is possible!
Re: How can I avoid the game being rigged as fucked?
Well, that'd be an inaccurate guess. I outplay most of my opponents until at some point, sometimes early, sometimes a little later, they just steamroll me, usually killing 10-20 troops while keeping
their casualties under 2.
Re: How can I avoid the game being rigged as fucked?
88% of turns taken <---- likely has something to do with your win / loss ratio.
Highest Rank: 26 Highest Score: 3480
Re: How can I avoid the game being rigged as fucked?
Balch wrote:Well, that'd be an inaccurate guess. I outplay most of my opponents until at some point, sometimes early, sometimes a little later, they just steamroll me, usually killing 10-20
troops while keeping their casualties under 2.
Since you started playing you have a net point gain, which suggests that you are ignoring the fact that the dice have also favoured you on many occasions. It would seem that the problem with you is
arrogance rather than bad dice, so get off your high horse and stop whining.
Im a TOFU miSfit | {"url":"http://www.conquerclub.com/forum/viewtopic.php?p=3965036","timestamp":"2014-04-21T00:05:27Z","content_type":null,"content_length":"172110","record_id":"<urn:uuid:52aebfd6-a516-404d-961d-782ef385cde0>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Review of Irreligion (2008)
Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up. New York, NY: Hill and Wang. 176 pp.
This short book by John Allen Paulos presents twelve arguments for taking religion seriously and offers brief refutations of each of them. These are not meant to be definitive; rather, they are
skeptical musings, intending to point out in a few words the holes in the arguments. It is not intended for theologians. It is more a book to be left around, hoping that someone will pick it up and
absorb the skeptical attitude that it promotes.
Irreligion is quite unusual in that it combines religion with mathematics and humor. It might not be the right book to give to your elderly aunt or uncle who goes to church on Sunday and has no
concept that religious ideas can be intellectual play. For such a person the short book Atheism: A Very Short Introduction by Julian Baggini might be the thing. It is earnest and polite and even has
references to other books. Paulos doesn't even bother with references.
Perhaps the right audience for Paulos's book is a young person who already has something of a skeptical attitude about other matters, but who needs exposure to such ideas in the religious context.
This young person need not be an expert on mathematics, as the mathematical ideas are introduced in a gentle way. It must be admitted that many of them are only tangentially relevant to the central
theological point under discussion, but in places--such as the Bible Code discussion--they are at the heart of the issue.
Since many of the arguments are familiar to anyone with any acquaintance with the philosophy of religion, there is no point to going through all of them in this review. Instead, I will concentrate on
a few points with more substantial mathematical content. So it will be perhaps heavier reading than the book, but not as wide-ranging and funny. (Too bad. Buy the book.)
The book begins with four classical arguments. These are the first cause argument, the argument from design, an appeal to the anthropic principle, and the ontological argument. Along with design
arguments the author considers some creationist arguments. A favorite is that a particular biological outcome depends on a very long sequence of mutations, and that the probability of such a sequence
is extraordinarily tiny.
It is not just creationists who are confused by such issues. A way of clarifying the situation is to distinguish between outcomes of an experiment and events that can happen as a result of such
outcomes. An "outcome" is a complete and precise specification of how an experiment could result. An "event" is something that can happen (or not happen) as a result of the experiment.
Consider the experiment of tossing a coin three times. Each toss can come up heads, indicated by H, or tails, indicated by T. One possible outcome might be THH. There are a total of eight such
outcomes. On the other hand, an event might be something like getting T on the second toss. This event would occur for outcomes HTH, TTH, HTT, TTT. Another event might be obtaining exactly two H's.
The second one would occur for outcomes HHT, HTH, THH.
Once you know the outcome of the experiment, you know whether or not the event happened. With the outcome THH there is no tail on the second toss, so the first event (getting T on the second toss)
does not happen. With the same outcome there are exactly two heads, so the second event (obtaining two H's) does happen.
As an aside, some people will enjoy the challenge of figuring out the number of events associated with the coin tossing experiment (three tosses). I will spoil their pleasure right now: the answer is
256. It comes out this way if you count two trivial events, one that happens no matter what, and one that can never happen, by definition. Perhaps most of these 256 events are not particularly
natural or interesting, but surely some are of intense interest to gamblers. That leaves another challenge, to figure out how the number 256 arises. (It should not be so hard.)
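(A hint that is not in the review, for readers who want one: an event here amounts to a subset of the eight outcomes, namely the set of outcomes for which the event happens, and an 8-element set has 2^8 = 256 subsets, the two trivial events included.)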
Probability theory assigns probability to events. That is, you ask a sensible question about the experiment, and you ask for the chance that when the experiment is concluded that the answer is yes.
On the other hand, the conclusion of the experiment is a particular outcome.
The sticky point is that each outcome also corresponds to a very special kind of event. This is the event that is said to happen only with this designated outcome and with none other. In most
realistic situations, the probability of this kind of outcome-event is very, very small. In ten tosses of a fair coin, the probability of each pattern is already only (½)^10, which is 1/1024, less
than one in a thousand. For twenty tosses, the probability (½)^20 is already less than one in a million. These are not the kinds of events that occur in typical probability calculations. After all,
when you are using probability for prediction, you need to calculate the probability of an event that has a natural specification before the experiment is done.
Now a warning: the three paragraphs that follow are more mathematical than the rest of this review. Skip this part if you wish. However, some readers may enjoy seeing how computing probabilities of
unlikely events can help to assess evidence. Paulos only hints at this part of the story, but it helps complete the picture.
The apparent paradox is that as experiments accumulate more and more evidence, the outcome-events become less and less probable. So why do we feel that we are getting useful information?
One answer was suggested by the eighteenth-century British mathematician Thomas Bayes. Consider two theories competing to explain the evidence. Each one is reasonably plausible before the evidence is
considered. Look at the probability of the evidence given the first theory and the probability of the evidence given the second theory. These may each be such small numbers as to be almost
meaningless. But their ratios help decide which theory to believe. And ratios of very small numbers can be huge, decisively tipping the choice one way or the other. There are legitimate criticisms of
Bayes's method, and some statisticians prefer to speak in somewhat different terms of "likelihood ratio." However formulated, it is quite reasonable evidence-based reasoning.
It is worth an example. Alice is a geneticist with a theory that says a certain genetic marker should occur ¾ of the time, in the long run. Bob is a another geneticist with a competing theory;
according to him, the marker should occur ¼ of the time, again in the long run. The budget is limited, so the experiment consists of only 20 observations. The outcome is a certain pattern in which
the marker does or does not occur. So it is something like the coin toss with twenty tosses. In fact, the presence of the marker may be indicated by H and the absence by T, and this makes an outcome
look even more similar to that of the coin toss experiment. The difference is that the probability of the H marker is ¾ for each observation on Alice's theory, or else the probability of the H marker
is ¼ for each observation in Bob's version. (By contrast, the probability of heads is ½ for each toss of the coin).
The experiment is conducted, and the outcome is HHTTH HHHTH HHHTT HTHHH. The marker showed up 14 times. This is an experimental number. What does it mean? It looks like Alice might be the one who is
right. Let's see. On her theory the outcome-event would have probability (¾)^14 * (¼)^6, a very small number. On Bob's theory it would have probability (¼)^14 * (¾)^6, another tiny number. The
particular outcome is quite unexpected on either theory. But the ratio of Alice's number to Bob's number (after everything cancels out) is the same as the ratio of 3^14 to 3^6, which is 3^8, quite an
impressive tilt in favor of Alice. Given that the evidence came out exactly the way that it did, her theory looks much more reasonable as an explanation of it. For this kind of reasoning ratios of
small numbers count, not the small numbers themselves. This is small comfort to the creationist who only cares about those irrelevant small numbers.
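A few lines of Python make the arithmetic concrete (this simply re-runs the reviewer's example numerically and adds nothing beyond it):

    # Likelihood of the observed pattern (14 markers in 20 trials)
    # under Alice's theory (p = 3/4) and under Bob's (p = 1/4).
    p_alice = (3/4)**14 * (1/4)**6
    p_bob = (1/4)**14 * (3/4)**6

    print(p_alice)          # ~4.4e-06, tiny
    print(p_bob)            # ~6.6e-10, far tinier
    print(p_alice / p_bob)  # 6561.0, i.e. 3**8, tilting decisively toward Alice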
The next part of the book treats four general types of subjective arguments: coincidence, prophecy, subjectivity, and interventions. Some of the same probability issues arise in these contexts.
Paulos writes this about coincidence:
As I've written elsewhere, the most amazing coincidence imaginable would be the complete absence of all coincidences. The above litany is intended to illustrate that there are an indeterminate number
of ways for such events to come about, even though the probability of any one of them is tiny. And, as with creationists' probabilistic arguments, after such events occur people glom onto their tiny
probability and neglect to ask the more pertinent question: How likely is something vaguely like this to occur? (p. 58)
This is again the same issue. In order to make an honest predictive use of probability, do not look at outcomes, but at events. Each event of interest should be specified without prior knowledge of
the outcome, so that it cannot be tailored to order. Quick summary: no cheating.
The final part of the book presents four arguments that have a psychological flavor; Paulos calls these redefinition, cognitive tendency, universality (morality), and gambling. Pascal's wager, in the
gambling chapter, has a mathematical aspect. Blaise Pascal was a seventeenth-century French mathematician, physicist, and religious philosopher. He presented his case in an eloquent and somewhat
confusing way, and of course in French. Here is how Paulos presents it in plainer words:
In the case of Pascal's wager we can perform similar calculations to determine the expected values of the two choices (to believe or not to believe). Each of these expected values depends on the
probability of God's existence and the payoffs associated with the two possibilities: yes, He does, or no, He doesn't. If we multiply whatever huge numerical payoff we put on endless heavenly bliss
by even a tiny probability, we obtain a product that trumps all other factors, and gambling prudence dictates that we should believe (or at least try hard to do so) (p. 134).
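To see why the product trumps everything, plug in illustrative numbers (mine, not Pascal's or Paulos's): with a heavenly payoff valued at 10^100 and a probability of God's existence as small as 10^-6, the expected value of believing is still 10^-6 × 10^100 = 10^94, which dwarfs any finite payoff on the other side. The multiplication is trivial; all the work is done by the premises.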
This argument has holes that could be discovered by a child with the motivation to think about it critically, and Paulos dismisses it easily. There is also a large scholarly literature; see the
Pascal's wager entry in the online Stanford Encyclopedia of Philosophy. Thus it is troubling that the recent book What's So Great About Christianity by Dinesh D'Souza makes vital use of the argument.
Its role there is to fill a gap in reasoning, when D'Souza wants to pass from a prime-mover God to a personal God. Sometimes even educated people fall for this stuff.
Paulos's book has an index that displays a wide range of mathematical ideas, described almost always very briefly. In addition to Bayes's theorem, there are Boolean satisfiability, random branchings,
cellular automata, the Gödel incompleteness theorem, Ramsey theory, Turing machines, and so on. It may take more courage to write about such things than about God and morality.
The book is frankly critical of religion, but the author writes from his personal outlook and has a cheerful and positive view. If you read it, look for insights and humor, not systematic exposition.
My favorite quip is the surrealists' two-word argument for the existence of God. Their argument: Pipe cleaner. Indeed, a glance at "La trahison des images" might be a good point of departure for arguments for the existence of God.
In conclusion, Irreligion is not one of the battleship tomes on philosophy of religion. It is a sporty day-sailor, ready to take advantage of the afternoon breeze. Let the battleship stand off; it
may never be needed.
Copyright ©2008 William Faris. The electronic version is copyright ©2008 by Internet Infidels, Inc. with the written permission of William Faris. All rights reserved. | {"url":"http://infidels.org/library/modern/william_faris/irreligion.html","timestamp":"2014-04-20T08:45:00Z","content_type":null,"content_length":"20622","record_id":"<urn:uuid:a93915f0-310c-4f07-a1b4-7a7566cc3ca5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00305-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Help! Which word goes with each definition?
And you didn't find those words?
I see, yes those definitions on your assignment are . . . less than precise.
For example, B) The number is not derived from compound interest, but it is used in that function; it is also not the inverse of LN: it is the base of the natural logarithm.
And for D) numbers aren't raised to logarithms. These incorrect statements in the definitions make it difficult (and also teach you incorrect things, which is unfortunate). Anyway, for number
one, you matched 'exponential function' to 'an exponential function in the form . . .' That cannot be correct because you are matching a general to a specific.
#s 3 and 8, 'domain' and 'range' are referring specifically to exponential growth here.
#s 7 and 9, 'decay' and 'growth' are specific exponential functions.
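(For reference, beyond what the thread spells out: exponential growth has the form f(x) = a*b^x with a > 0 and b > 1, exponential decay is the same form with 0 < b < 1, and in both cases the domain is all real numbers while the range is y > 0.)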
lets see so u have to match the meanings right
| {"url":"http://openstudy.com/updates/5087e81de4b058e80cf659e0","timestamp":"2014-04-18T03:53:40Z","content_type":null,"content_length":"47230","record_id":"<urn:uuid:763f44f8-99d0-49c0-ad01-54460b6b87e9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: 4 Multivariate extremes
4.1 Introduction
In this section we consider the problems we face if we wish to model the extremal behaviour
of two or more (dependent) processes simultaneously. There are several reasons why we may
wish to do this:
· to model the extreme behaviour of a particular variable over several nearby locations (e.g.
rainfall over a network of sites);
· to model the joint extremes of two or more different variables at a particular location (e.g.
wind and rain at a site);
· to model the joint behaviour of extremes which occur as consecutive observations in a
time series (e.g. consecutive hourly maximum wind gusts during a storm).
All of these problems suggest fitting an appropriate limiting multivariate distribution to the
relevant data. However, as we shall see, the derivation of such a multivariate distribution is
not as easy as we might hope. The analogy with the Normal distribution as a model for means
breaks down as we move into n dimensions! It is not even clear what the `relevant data' should
be! Most of the increased complexity is apparent in the move from 1 to 2 dimensions, so we
will focus largely on bivariate problems.
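As a concrete picture of what the data for the componentwise-maxima approach of the next section look like, here is a small sketch (not from the notes; the gamma marginals, the dependence recipe, and the block-by-year choice are all illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # 50 years of daily rainfall at two nearby sites, shape (50, 365)
    site1 = rng.gamma(shape=0.4, scale=8.0, size=(50, 365))
    site2 = 0.6 * site1 + 0.4 * rng.gamma(shape=0.4, scale=8.0, size=(50, 365))

    # Componentwise annual maxima: one (M1, M2) pair per year.
    # The two maxima in a given year need not come from the same day,
    # which is one reason the choice of 'relevant data' is delicate.
    M = np.column_stack([site1.max(axis=1), site2.max(axis=1)])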
4.2 Componentwise maxima models
4.2.1 Example: network of rainfall measurements
Suppose we want to study the joint extremes of daily rainfall accumulations at the network of 8 | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/334/0132008.html","timestamp":"2014-04-19T19:47:23Z","content_type":null,"content_length":"8405","record_id":"<urn:uuid:7f53fca1-c9e9-4065-a314-a426cadcc49e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
Find a Bapchule Precalculus Tutor
...In my experience, even the most abstract mathematical or scientific concepts can be learned by anyone, you just need to understand what they are doing and why. I think my two main strengths as
a tutor are my ability to impart this understanding using visual aids and analogies, and my ability to ...
14 Subjects: including precalculus, chemistry, physics, calculus
...I began to tutor students in math, reading, and writing to improve their test scores. After every student finished their math homework, I reviewed each problem on the board and taught them how
to evaluate each one. When it came to reading, I sat down with students while they read and corrected them if they mispronounced a word.
67 Subjects: including precalculus, Spanish, English, writing
...I'm currently enrolled in a graduate program in system dynamics from Worcester Polytechnical Institute. Although my industry and career are focused on technical problem solving, I am also
deeply committed to understanding the broader scope of problems including literature, philosophy, business a...
62 Subjects: including precalculus, English, reading, writing
...Pima Community College, Tucson, Arizona - Advanced Certificate Hazardous Materials. The Professional Certifications listed below require significant mathematics application ability. Water
distribution and treatment technology involves constant application of Algebra, Geometry, Trigonometry, and other Mathematics, for the determination of length, volume, mass, velocity, and rate.
23 Subjects: including precalculus, chemistry, calculus, physics
...I'm an adjunct geology professor at Mesa Community College. I have a Master's degree in geology from Arizona State University. My Bachelor's degree is in both biology and geology, from the
University of Western Ontario.
28 Subjects: including precalculus, English, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Bapchule_Precalculus_tutors.php","timestamp":"2014-04-17T20:04:14Z","content_type":null,"content_length":"24087","record_id":"<urn:uuid:602cdbbe-9548-4fe3-9e4e-5cce08a8f696>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficiency (statistics)
In statistics, efficiency is a term used in the comparison of various statistical procedures and, in particular, it refers to a measure of the optimality of an estimator, of an experimental design or
of a hypothesis testing procedure.^[2] Essentially, a more efficient estimator, experiment or test needs fewer samples than a less efficient one to achieve a given performance. This article primarily
deals with efficiency of estimators.
The relative efficiency of two procedures is the ratio of their efficiencies, although often this term is used where the comparison is made between a given procedure and a notional "best possible"
procedure. The efficiencies and the relative efficiency of two procedures theoretically depend on the sample size available for the given procedure, but it is often possible to use the asymptotic
relative efficiency (defined as the limit of the relative efficiencies as the sample size grows) as the principal comparison measure.
Efficiencies are often defined using the variance or mean square error as the measure of desirability.
The efficiency of an unbiased estimator, T, of a parameter θ is defined as
$e(T) = \frac{1/\mathcal{I}(\theta)}{\mathrm{var}(T)}$
where $\mathcal{I}(\theta)$ is the Fisher information of the sample. Thus e(T) is the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao bound can be
used to prove that e(T) ≤ 1.
Efficient estimators
If an unbiased estimator of a parameter θ attains $e(T) = 1$ for all values of the parameter, then the estimator is called efficient.
Equivalently, the estimator achieves equality in the Cramér–Rao inequality for all θ.
An efficient estimator is also the minimum variance unbiased estimator (MVUE). This is because an efficient estimator maintains equality on the Cramér–Rao inequality for all parameter values, which
means it attains the minimum variance for all parameters (the definition of the MVUE). The MVUE estimator, even if it exists, is not necessarily efficient, because "minimum" does not mean equality
holds on the Cramér–Rao inequality.
Thus an efficient estimator need not exist, but if it does, it is the MVUE.
Asymptotic efficiency
For some estimators, they can attain efficiency asymptotically and are thus called asymptotically efficient estimators. This can be the case for some maximum likelihood estimators or for any
estimators that attain equality of the Cramér–Rao bound asymptotically.
Consider a sample of size $N$ drawn from a normal distribution of mean $\mu$ and unit variance, i.e., $X_n \sim \mathcal{N}(\mu, 1).$
The sample mean, $\overline{X}$, of the sample $X_1, X_2, \ldots, X_N$, defined as
$\overline{X} = \frac{1}{N} \sum_{n=1}^{N} X_n \sim \mathcal{N}\left(\mu, \frac{1}{N}\right).$
The variance of the mean, 1/N (the square of the standard error) is equal to the reciprocal of the Fisher information from the sample and thus, by the Cramér–Rao inequality, the sample mean is
efficient in the sense that its efficiency is unity (100%).
Now consider the sample median, $\widetilde{X}$. This is an unbiased and consistent estimator for $\mu$. For large $N$ the sample median is approximately normally distributed with mean $\mu$ and
variance $\pi/(2N)$, i.e.,^[3]
$\widetilde{X} \sim \mathcal{N}\left(\mu, \frac{\pi}{2N}\right).$
The efficiency for large $N$ is thus
$e\left(\widetilde{X}\right) = \left(\frac{1}{N}\right) \left(\frac{\pi}{2N}\right)^{-1} = 2/\pi \approx 64\%.$
Note that this is the asymptotic efficiency — that is, the efficiency in the limit as sample size $N$ tends to infinity. For finite values of $N,$ the efficiency is higher than this (for example, a
sample size of 3 gives an efficiency of about 74%).
The sample mean is thus more efficient than the sample median in this example. However, there may be measures by which the median performs better. For example, the median is far more robust to
outliers, so that if the Gaussian model is questionable or approximate, there may be advantages to using the median (see Robust statistics).
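The 64% figure is easy to reproduce by simulation (a sketch, not part of the article; the seed and sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    N, reps = 1000, 20000
    samples = rng.normal(loc=0.0, scale=1.0, size=(reps, N))

    var_mean = samples.mean(axis=1).var()          # roughly 1/N
    var_median = np.median(samples, axis=1).var()  # roughly pi/(2N)
    print(var_mean / var_median)                   # ~0.637, i.e. 2/pi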
Dominant estimators
If $T_1$ and $T_2$ are estimators for the parameter $\theta$, then $T_1$ is said to dominate $T_2$ if:
1. its mean squared error (MSE) is smaller for at least some value of $\theta$
2. the MSE does not exceed that of $T_2$ for any value of $\theta$.
Formally, $T_1$ dominates $T_2$ if
$\mathrm{E} \left[ (T_1 - \theta)^2 \right] \leq \mathrm{E} \left[ (T_2-\theta)^2 \right]$
holds for all $\theta$, with strict inequality holding somewhere.
Relative efficiency
The relative efficiency of two estimators is defined as
$e(T_1,T_2) = \frac {\mathrm{E} \left[ (T_2-\theta)^2 \right]} {\mathrm{E} \left[ (T_1-\theta)^2 \right]}$
Although $e$ is in general a function of $\theta$, in many cases the dependence drops out; if this is so, $e$ being greater than one would indicate that $T_1$ is preferable, whatever the true value
of $\theta$.
An alternative to relative efficiency for comparing estimators is the Pitman closeness criterion. This replaces the comparison of mean-squared-errors with comparing how often one estimator produces
estimates closer to the true value than another estimator.
Efficiency of an estimator may change significantly if the distribution changes, often dropping. This is one of the motivations of robust statistics – an estimator such as the sample mean is an
efficient estimator of the population mean of a normal distribution, for example, but can be an inefficient estimator of a mixture distribution of two normal distributions with the same mean and
different variances. For example, if a distribution is a combination of 98% N(μ, σ) and 2% N(μ, 10σ), the presence of extreme values from the latter distribution (often "contaminating outliers")
significantly reduces the efficiency of the sample mean as an estimator of μ. By contrast, the trimmed mean is less efficient for a normal distribution, but is more robust (less affected) by changes
in distribution, and thus may be more efficient for a mixture distribution. Similarly, the shape of a distribution, such as skewness or heavy tails, can significantly reduce the efficiency of
estimators that assume a symmetric distribution or thin tails.
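The contaminated-normal example above can likewise be simulated (again a sketch rather than part of the article; the 98%/2% mixture mirrors the one just described):

    import numpy as np

    rng = np.random.default_rng(2)
    N, reps = 1000, 20000
    clean = rng.normal(0.0, 1.0, size=(reps, N))
    wide = rng.normal(0.0, 10.0, size=(reps, N))
    x = np.where(rng.random((reps, N)) < 0.02, wide, clean)  # 2% outliers

    print(x.mean(axis=1).var())        # ~0.0030, inflated by the contamination
    print(np.median(x, axis=1).var())  # ~0.0016, so the median now beats the mean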
Uses of inefficient estimators
While efficiency is a desirable quality of an estimator, it must be weighed against other desiderata, and an estimator that is efficient for certain distributions may well be inefficient for other
distributions. Most significantly, estimators that are efficient for clean data from a simple distribution, such as the normal distribution (which is symmetric, unimodal, and has thin tails) may not
be robust to contamination by outliers, and may be inefficient for more complicated distributions. In robust statistics, more importance is placed on robustness and applicability to a wide variety of
distributions, rather than efficiency on a single distribution. M-estimators are a general class of solutions motivated by these concerns, yielding both robustness and high relative efficiency,
though possibly lower efficiency than traditional estimators for some cases. These are potentially very computationally complicated, however.
A more traditional alternative are L-estimators, which are very simple statistics that are easy to compute and interpret, in many cases robust, and often sufficiently efficient for initial estimates.
See applications of L-estimators for further discussion.
Hypothesis tests
For comparing significance tests, a meaningful measure of efficiency can be defined based on the sample size required for the test to achieve a given power.
Pitman efficiency^[5] and Bahadur efficiency (or Hodges–Lehmann efficiency)^[6]^[7] relate to the comparison of the performance of statistical hypothesis testing procedures. The Encyclopedia of
Mathematics provides a brief exposition of these three criteria.
Experimental design
For experimental designs, efficiency relates to the ability of a design to achieve the objective of the study with minimal expenditure of resources such as time and money. In simple cases, the
relative efficiency of designs can be expressed as the ratio of the sample sizes required to achieve a given objective.^[8]
See optimal design for further discussion.
1. Nikulin, M.S. (2001), "Efficiency of a statistical procedure", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
2. Williams, D. (2001) Weighing the Odds, CUP. ISBN 052100618X (p. 165)
3. Nikitin, Ya.Yu. (2001), "Efficiency, asymptotic", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
4. Arcones, M.A. "Bahadur efficiency of the likelihood ratio test", preprint
5. Canay, I.A. & Otsu, T. "Hodges-Lehmann Optimality for Testing Moment Condition Models"
6. Dodge, Y. (2006) The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9
• Everitt, Brian S. (2002). The Cambridge Dictionary of Statistics. Cambridge University Press. ISBN 0-521-81099-X.
• Lehmann, Erich L. (1998). Elements of Large-Sample Theory. New York: Springer Verlag. ISBN 978-0-387-98595-4.
• Nikitin, Ya.Yu. (2001), "Efficiency, asymptotic", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4 | {"url":"http://blekko.com/wiki/Efficiency_(statistics)?source=672620ff","timestamp":"2014-04-19T21:18:05Z","content_type":null,"content_length":"34717","record_id":"<urn:uuid:db80abf3-d4c7-421e-8584-d2abc3efc95d>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus I Homework Help
July 27th 2006, 11:06 AM #1
Junior Member
Jan 2006
Let f(x) = 2 for x ≤ -1,
f(x) = ax + b for -1 < x < 3,
f(x) = -2 for x ≥ 3.
Find a and b such that f is continuous.
Let f(x) = 2 for x ≤ -1,
f(x) = ax + b for -1 < x < 3,
f(x) = -2 for x ≥ 3.
Find a and b such that f is continuous.
Hello, Nimmy,
I've attached a diagram to show you what you have to calculate:
In short: There are 2 points which are connected by a straight line. So you know 2 points of this line. Use the 2-point-formula of a line to get the equation.
(You should get f(x)=-x+1, -1 < x < 3)
Let f(x) = 2 for x ≤ -1,
f(x) = ax + b for -1 < x < 3,
f(x) = -2 for x ≥ 3.
Find a and b such that f is continuous.
If $x \neq -1$ and $x \neq 3$ it is certainly continuous, because there f is given by constant and linear functions, which are always continuous.
However we need to check $x=-1$ and $x=3$.
Let us do -1 first,
By definition of continuity we need that,
$\lim_{x\to -1} f(x)=f(-1)$
$\lim_{x\to -1^-}f(x)=\lim_{x\to -1^+}f(x)=2$
But the limit from the left is same as,
$\lim_{x\to -1^-}2=2$
And the limit from the right is same as,
$\lim_{x\to -1^+}ax+b=-a+b$
Similarly, if we do this for $x=3$ we have,
From these two equations we have,
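A quick mechanical check of those two equations, using Python's sympy (my addition, not part of the original thread):

from sympy import symbols, solve

a, b = symbols('a b')
# continuity at x = -1 gives -a + b = 2; continuity at x = 3 gives 3a + b = -2
print(solve([-a + b - 2, 3*a + b + 2], [a, b]))  # {a: -1, b: 1}, i.e. f(x) = -x + 1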
| {"url":"http://mathhelpforum.com/calculus/4338-calculus-i-homework-help.html","timestamp":"2014-04-18T19:53:29Z","content_type":null,"content_length":"40006","record_id":"<urn:uuid:d9f2db1c-d233-4e23-8b88-7565c28b018e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
local subrings of matrix ring
When is a subring (containing 1) of a matrix ring $M_n(k)$ over a field $k$ local? I would be grateful for every reference concerning this matter. Thank you!
ra.rings-and-algebras ho.history-overview
The ring of matrices $\left( \begin{array}{cc} a & b \\ 0 & a \end{array} \right)$.
This ring is isomorphic to the algebra of dual numbers (http://en.wikipedia.org/wiki/Dual_number), which is local.
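A small computational illustration (mine, not from the thread) of why this works, using sympy: a matrix of this form is invertible exactly when $a \neq 0$, and every non-unit squares to zero, which is the dual-number relation $\varepsilon^2 = 0$.

from sympy import Matrix, symbols

a, b = symbols('a b')
M = Matrix([[a, b], [0, a]])
eps = Matrix([[0, 1], [0, 0]])     # image of the dual-number generator
print(eps**2)                      # zero matrix: eps^2 = 0
print(M.det())                     # a**2, so M is a unit iff a != 0
print((M - a*Matrix.eye(2))**2)    # zero: the non-units form a nilpotent (maximal) ideal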
The idea of @boris-novikov shows that the algebra of all upper triangular matrices such that the entries of the main diagonal are equal is a local ring. The dimension of this algebra is $\frac{n^2-n}{2}+1$. I am wondering if one can find a local subalgebra of $M_n(k)$ whose dimension is greater than this number.
| {"url":"http://mathoverflow.net/questions/123870/local-subrings-of-matrix-ring","timestamp":"2014-04-21T05:20:11Z","content_type":null,"content_length":"53375","record_id":"<urn:uuid:e0ea68a5-c576-4bf1-b6fa-0ff1d9c9db52>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Effective Reconstruction of Data Perturbed by Random Projections
January 2012 (vol. 61 no. 1)
pp. 101-117
Yingpeng Sang, Hong Shen, Hui Tian, "Effective Reconstruction of Data Perturbed by Random Projections," IEEE Transactions on Computers, vol. 61, no. 1, pp. 101-117, January, 2012.
Random Projection (RP) has raised great concern among the research community of privacy-preserving data mining, due to its high efficiency and utility, e.g., keeping the Euclidean distances among the data points. It was shown in [33] that, if the original data set composed of $m$ attributes is multiplied by a mixing matrix of size $k\times m$ ($m>k$) which is random and orthogonal on expectation, then the $k$ series of perturbed data can be released for mining purposes. Given the data perturbed by RP and some necessary prior knowledge, to our knowledge, little work has been done in reconstructing the original data to recover some sensitive information. In this paper, we choose several typical scenarios in data mining with different assumptions on prior knowledge. For the cases that an attacker has full or zero knowledge of the mixing matrix $R$, respectively, we propose reconstruction methods based on Underdetermined Independent Component Analysis (UICA) if the attributes of the original data are mutually independent and sparse, and propose reconstruction methods based on Maximum A Posteriori (MAP) if the attributes of the original data are correlated and nonsparse. Simulation results show that our reconstructions achieve high recovery rates and outperform reconstructions based on Principal Component Analysis (PCA). Successful reconstructions essentially mean the leakage of privacy, so our work identifies the possible risks of RP when it is used for data perturbation.
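A small numerical sketch (mine, not from the paper) of the perturbation scheme the abstract describes: multiply $m$-attribute data by a random $k\times m$ matrix scaled to be orthogonal on expectation, and check that pairwise Euclidean distances roughly survive.

import numpy as np

rng = np.random.default_rng(0)
m, k, n = 50, 20, 100
X = rng.normal(size=(m, n))               # original data, one record per column
R = rng.normal(size=(k, m)) / np.sqrt(k)  # random mixing matrix, E[R^T R] = I
Y = R @ X                                 # the released, perturbed data

def pairwise(A):
    d = A[:, :, None] - A[:, None, :]
    return np.sqrt((d**2).sum(axis=0))

rel_err = np.abs(pairwise(Y) - pairwise(X)) / (pairwise(X) + 1e-12)
print(np.median(rel_err))  # small relative to 1 (on the order of 1/sqrt(k)):
                           # distances are approximately preserved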
[1] N. Adam and J. Worthmann, "Security-Control Methods for Statistical Databases: A Comparative Study," ACM Computing Surveys, vol. 21, no. 4, pp. 515-556, 1989.
[2] Privacy-Preserving Data Mining: Models and Algorithms, C. Aggarwal and P.S. Yu, eds. Springer, 2008.
[3] D. Agrawal and C. Aggarwal, "On the Design and Quantification of Privacy Preserving Data Mining Algorithms," Proc. 20th ACM SIGMOD-SIGACT-SIGART Symp. Principles of Database Systems (PODS), pp.
247-255, 2001.
[4] R. Agrawal and R. Srikant, "Privacy-Preserving Data Mining," Proc. 2000 ACM SIGMOD Conf. Management of Data, pp. 439-450, 2000.
[5] S. Agrawal and J.R. Haritsa, "A Framework for High-Accuracy Privacy-Preserving Mining," Proc. 21st Int'l Conf. Data Eng. (ICDE '05), pp. 193-204, 2005.
[6] M. Atallah, E. Bertino, A. Elmagarmid, M. Ibrahim, and V. Verykios, "Disclosure Limitation of Sensitive Rules," Proc. Workshop Knowledge and Data Eng. Exchange (KDEX '99), pp. 45-52, 1999.
[7] P. Bofill and M. Zibulevsky, "Underdetermined Blind Source Separation Using Sparse Representations," Signal Processing, vol. 81, no. 11, pp. 2353-2362, 2001.
[8] X. Cao and R. Liu, "General Approach to Blind Source Separation," IEEE Trans. Signal Processing, vol. 44, no. 3, pp. 562-571, Mar. 1996.
[9] K. Chen, G. Sun, and L. Liu, "Towards Attack-Resilient Geometric Data Perturbation," Proc. SIAM Int'l Conf. Data Mining (SDM '07), Apr. 2007.
[10] S.S. Chen, D.L. Donoho, and M.A. Saunders, "Atomic Decomposition by Basis Pursuit," SIAM Rev., vol. 43, no. 1, pp. 129-159, 2001.
[11] R. Cramer, I. Damgard, and J. Nielsen, "Multiparty Computation from Threshold Homomorphic Encryption," EUROCRYPT '01: Proc. Int'l Conf. the Theory and Application of Cryptographic Techniques:
Advances in Cryptology, pp. 280-300, 2001.
[12] T. Dalenius and S.P. Reiss, "Data-Swapping: A Technique for Disclosure Control," J. Statistical Planning and Inference, vol. 6, pp. 73-85, 1982.
[13] S. Dasgupta, D. Hsu, and N. Verma, "A Concentration Theorem for Projections," Proc. 22nd Conf. Uncertainty in Artificial Intelligence, pp. 1-17, 2006.
[14] S. Dasgupta, "Learning Mixtures of Gaussians," Proc. 40th Ann. IEEE Symp. Foundations of Computer Science (FOCS), pp. 634-644, 1999.
[15] W. Du and Z. Zhan, "Building Decision Tree Classifier on Private Data," Proc. IEEE ICDM Workshop Privacy, Security and Data Mining (PSDM '02), pp. 1-8, 2002.
[16] A. Evfimievski, J. Gehrke, and R. Srikant, "Limiting Privacy Breaches in Privacy Preserving Data Mining," Proc. 22nd ACM Symp. Principles of Database Systems (PODS '03), pp. 211-222, 2003.
[17] S.E. Fienberg and J. McIntyre, "Data Swapping: Variations on a Theme by Dalenius and Reiss," Proc. Privacy in Statistical Databases, pp. 14-29, 2004.
[18] O. Goldreich, Foundations of Cryptography: Volume 2, Basic Applications. Cambridge Univ. Press, 2004.
[19] A. Gretton, K. Fukumizu, C. Teo, L. Song, B. Scholkopf, and A. Smola, "A Kernel Statistical Test of Independence," Advances in Neural Information Processing Systems, pp. 585-592, MIT Press,
[20] S. Guo and X. Wu, "Deriving Private Information from Arbitrarily Projected Data," Proc. 11th Pacific-Asia Conf. Knowledge Discovery and Data Mining (PAKDD '07), May 2007.
[21] J.A. Halderman, S.D. Schoen, N. Heninger, W. Clarkson, W. Paul, J.A. Calandrino, A.J. Feldman, J. Appelbaum, and E.W. Felten, "Lest We Remember: Cold Boot Attacks on Encryption Keys," Proc. 17th
USENIX Security Symp., pp. 45-60, 2008.
[22] Z. Huang, W. Du, and B. Chen, "Deriving Private Information from Randomized Data," Proc. ACM SIGMOD Int'l Conf. Management of Data, pp. 37-48, 2005.
[23] A. Hyvärinen and E. Oja, "Independent Component Analysis: Algorithms and Applications," Neural Networks, vol. 13, pp. 411-430, 2000.
[24] S. Jha, L. Kruger, and P. McDaniel, "Privacy Preserving Clustering," Proc. 10th European Symp. Research in Computer Security (ESORICS), pp. 397-417, 2005.
[25] M. Kantarcioglu and C. Clifton, "Privacy-Preserving Distributed Mining of Association Rules on Horizontally Partitioned Data," IEEE Trans. Knowledge and Data Eng., vol. 16, no. 9, pp. 1026-1037,
Sept. 2004.
[26] A. Kankainen and N. Ushakov, "A Consistent Modification of a Test for Independence Based on the Empirical Characteristic Function," J. Math. Sciences, vol. 89, no. 5, pp. 1-10, 1998.
[27] H. Kargupta, S. Datta, Q. Wang, and K. Sivakumar, "On the Privacy Preserving Properties of Random Data Perturbation Techniques," Proc. Third IEEE Int'l Conf. Data Mining (ICDM '03), pp. 99-106,
[28] S. Kotz, T.J. Kozubowski, and K. Podgórski, The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance. Birkhäuser, 2001.
[29] E. Lefons, A. Silvestri, and F. Tangorra, "An Analytic Approach to Statistical Databases," Proc. Ninth Int'l Conf. Very Large Data Bases (VLDB), 1983.
[30] N. Li, T. Li, and S. Venkatasubramanian, "T-Closeness: Privacy Beyond K-Anonymity and L-Diversity," Proc. IEEE 23rd Int'l Conf. Data Eng. (ICDE '07), pp. 106-115, 2007.
[31] C.K. Liew, U.J. Choi, and C.J. Liew, "A Data Distortion by Probability Distribution," ACM Trans. Database Systems, vol. 10, no. 3, pp. 395-411, 1985.
[32] Y. Lindell and B. Pinkas, "Privacy Preserving Data Mining," Proc. Advances in Cryptology (CRYPTO '00), pp. 36-54, 2000.
[33] K. Liu, H. Kargupta, and J. Ryan, "Random Projection-Based Multiplicative Data Perturbation for Privacy Preserving Distributed Data Mining," IEEE Trans. Knowledge and Data Eng., vol. 18, no. 1,
pp. 92-106, Jan. 2006.
[34] K. Liu, C. Giannella, and H. Kargupta, "An Attacker's View of Distance Preserving Maps for Privacy Preserving Data Mining," Proc. Principles of Data Mining and Knowledge Discovery (PKDD '06),
pp. 297-308, 2006.
[35] K. Liu, "Multiplicative Data Perturbation for Privacy Preserving Data Mining," PhD thesis, Univ. of Maryland, Jan. 2007.
[36] J. Löfberg, "YALMIP : A Toolbox for Modeling and Optimization in MATLAB," Proc. IEEE Int'l Symp. Computer Aided Control Systems Design, pp. 284-289, Sept. 2004.
[37] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, "L-Diversity: Privacy Beyond K-Anonymity," Proc. 22nd IEEE Int'l Conf. Data Eng. (ICDE '06), p. 24, 2006.
[38] K.V. Mardia, "Measures of Multivariate Skewness and Kurtosis with Applications," Biometrika, vol. 57, no. 3, pp. 519-530, 1970.
[39] C.J. Mecklin and D.J. Mundfrom, "An Appraisal and Bibliography of Tests for Multivariate Normality," Int'l Statistical Rev., vol. 72, no. 1, pp. 123-138, 2004.
[40] P.D. O'Grady, B.A. Pearlmutter, and S.T. Rickard, "Survey of Sparse and Non-Sparse Methods in Source Separation," Int'l J. Imaging Systems and Technology, vol. 15, no. 1, pp. 18-33, 2005.
[41] S.R.M. Oliveira and O.R. Zaïane, "A Privacy-Preserving Clustering Approach Toward Secure and Effective Data Analysis for Business Collaboration," Computers and Security, vol. 26, no. 1,
pp. 81-93, 2007.
[42] K.B. Peterson and M.S. Pederson, "The Matrix Cookbook," http://matrixcookbook.com/, Nov. 2008.
[43] S. Rizvi and J. Haritsa, "Maintaining Data Privacy in Association Rule Mining," Proc. 28th Int'l Conf. Very Large Databases (VLDB), Aug. 2002.
[44] Y. Sang, H. Shen, and H. Tian, "Reconstructing Data Perturbed by Random Projections when the Mixing Matrix Is Known," Proc. European Conf. Machine Learning and Principles and Practice of
Knowledge Discovery in Databases (ECML PKDD), pp. 334-349, Sept. 2009.
[45] Y. Sang, H. Shen, and H. Tian, "Privacy Preserving Tuple Matching in Distributed Database," IEEE Trans. Knowledge and Data Eng., vol. 21, no. 12, pp. 1767-1782, Dec. 2009.
[46] Y. Sang and H. Shen, "Efficient and Secure Protocols for Privacy Preserving Set Operations," ACM Trans. Information and System Security, vol. 13, no. 1, 2009.
[47] Y. Saygin, V.S. Verykios, and C. Clifton, "Using Unknowns to Prevent Discovery of Association Rules," ACM SIGMOD Record, vol. 30, no. 4, pp. 45-54, 2001.
[48] L. Sweeney, "K-Anonymity: A Model for Protecting Privacy," Int'l J. Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557-570, 2002.
[49] F.J. Theis, E.W. Lang, and C.G. Puntonet, "A Geometric Algorithm for Overcomplete Linear ICA," Neurocomputing, vol. 56, pp. 381-398, 2004.
[50] E.O. Turgay, T.B. Pedersen, Y. Saygin, E. Savas, and A. Levi, "Disclosure Risks of Distance Preserving Data Transformations," Proc. 20th Int'l Conf. Scientific and Statistical Database
Management (SSDBM '08), pp. 79-94, 2008.
[51] V. Verykios, A. Elmagarmid, B. Elisa, D. Elena, Y. Saygin, and E. Dasseni, "Association Rule Hiding," IEEE Trans. Knowledge and Data Eng., vol. 16, no. 4, pp. 434-447, Apr. 2004.
[52] Z. Yang, S. Zhong, and R.N. Wright, "Privacy-Preserving Classification of Customer Data without Loss of Accuracy," Proc. SIAM Int'l Conf. Data Mining (SDM), 2005.
[53] M. Zibulevsky and B.A. Pearlmutter, "Blind Source Separation by Sparse Decomposition in a Signal Dictionary," Neural Computation, vol. 13, no. 4, pp. 863-882, 2001.
Index Terms:
Privacy-preserving data mining, data perturbation, data reconstruction, underdetermined independent component analysis, Maximum A Posteriori, principal component analysis.
Yingpeng Sang, Hong Shen, Hui Tian, "Effective Reconstruction of Data Perturbed by Random Projections," IEEE Transactions on Computers, vol. 61, no. 1, pp. 101-117, Jan. 2012, doi:10.1109/TC.2011.83
| {"url":"http://www.computer.org/csdl/trans/tc/2012/01/ttc2012010101-abs.html","timestamp":"2014-04-23T22:34:15Z","content_type":null,"content_length":"61818","record_id":"<urn:uuid:8e8e5945-0544-4393-8f2d-dec4d1ed7ed4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Norco, CA Math Tutor
Find a Norco, CA Math Tutor
...I was exposed to an educational institution where spelling was a "MUST" in the learning environment. I may not be a "Spelling Bee" material, but I'm pretty competitive. I have elementary math
8 Subjects: including algebra 1, reading, English, grammar
...I believe in teaching students solid fundamentals. If I notice a deficiency I will regress to the most basic level to form fundamentals and build on them. For tests or quizzes I review sample
tests with the students and discuss test-taking strategy.
24 Subjects: including algebra 1, algebra 2, grammar, European history
...I will give you, or your child one-on-one instruction until each concept is understood. I have taught many PSAT and SAT prep courses, and I have helped numerous students achieve their best
score. I have taught 6th grade language arts, math, history and science for twelve years.
35 Subjects: including algebra 2, ACT Math, grammar, GED
...I am majoring in Liberal Arts with an emphasis on Education. I have a strong background in computers, physics, and math up to multivariable calculus. I have been described in the past by
students as patient, knowledgeable, and adaptable.
20 Subjects: including geometry, reading, physics, prealgebra
...That was over forty years ago. I played in elementary school, junior high, high school, college, and graduate school. As part of my bachelor's degree in Liberal Studies, I not only was in the
CSUSB (California State University at San Bernardino) band, I also took music history and theory classes.
47 Subjects: including calculus, ESL/ESOL, French, English
Related Norco, CA Tutors
Norco, CA Accounting Tutors
Norco, CA ACT Tutors
Norco, CA Algebra Tutors
Norco, CA Algebra 2 Tutors
Norco, CA Calculus Tutors
Norco, CA Geometry Tutors
Norco, CA Math Tutors
Norco, CA Prealgebra Tutors
Norco, CA Precalculus Tutors
Norco, CA SAT Tutors
Norco, CA SAT Math Tutors
Norco, CA Science Tutors
Norco, CA Statistics Tutors
Norco, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Norco_CA_Math_tutors.php","timestamp":"2014-04-17T19:32:00Z","content_type":null,"content_length":"23454","record_id":"<urn:uuid:8fd96003-0dde-4244-8c14-fb73c066b52c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00175-ip-10-147-4-33.ec2.internal.warc.gz"} |
Some Good Math about Autism
I've talked before about the sloppy statistics used to analyze autism prevalence. Well, now there's some really good math looking at the same topic, but demonstrating how to do it right.
Orac has a great post about a study of autism prevalence refuting the notion that there is an epidemic of autism. He does a great job describing the results; I've got nothing to add, but I'll be glad to discuss the mathematical methodologies described in the paper if anyone is interested.
1 Comment:
• I'll take you up on that offer and ask you to discuss the *good math* in the Shattuck study as compared to the *other math* in the Geier so-called study. The difference, I believe, would be very
instructive for those of us who ended our math education in the 1960's in college.
By T.H.E.Probe, at 8:59 AM
| {"url":"http://goodmath.blogspot.com/2006/04/some-good-math-about-autism.html","timestamp":"2014-04-17T04:20:59Z","content_type":null,"content_length":"21205","record_id":"<urn:uuid:855f9a0b-0f9e-4dd5-a2d9-23e8e36c79f2>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
General Linear Inverse Monoid
Let $V$ be a finite dimensional vector space over some field (say, $\mathbb C$). Consider the set $GLI(V)$ of all linear isomorphisms between subspaces of $V$. This is a monoid under natural
multiplication (in fact an inverse monoid). Its elements can be represented by triples: two elements of the Grassmannian of $V$ of degree $k\le n$ representing the domain and the range, and a
non-singular $k\times k$-matrix representing the map. I am interested in developing a theory of representations of finite inverse monoids (pseudogroups) in $GLI(V)$. What is the structure of $GLI(V)$
from the algebraic geometry or geometric topology point of view?
Edit: It looks like the question is not completely clear. For comparison, if somebody gives me a group and asks what can I say about it, I would try to decide whether the group is finite or infinite,
solvable or not, hyperbolic or not, what is the derived subgroup and the lower central series, is it residually finite and what is the profinite competion, etc. I want a similar analysis of $GLI$
(but from the algebraic geometry point of view). One of the goals is to study representation varieties of groupoids (=pseudogroups, inverse semigroups). These varieties are complicated even for easy
finite groupoids. The starting point would be to understand $GLI$ itself.
ag.algebraic-geometry gt.geometric-topology
I think you answered your own question -- it's a disjoint union of principal bundles over products of Grassmannians. What more do you want? – Vivek Shende Oct 1 '10 at 11:32
I want to know standard information about it (say, what is the co-homology ring, singularities, etc.) If somebody considered such varieties before, I would like to see a reference. From the
topological point of view, I would like to see if there exists a boundary similar to the boundary of Lie groups. – Mark Sapir Oct 1 '10 at 13:31
I'm a little confused as to why you call it a monoid and then a groupoid. Do some people have more flexible notions of monoids -- that the binary operation need not be defined on the entire
product? – Ryan Budney Oct 1 '10 at 19:14
@Ryan: If you take an inverse monoid, and remove the 0, you get a groupoid. So the difference is cosmetic. – Mark Sapir Oct 1 '10 at 19:21
2 Answers
Some small comments.
Let $n=dim(V)$, so I'll think of $V$ as $\mathbb R^n$; then as a space, $GLI(V)$ you could think of as
$$ V_{n,k} \times_{O_k} V_{n,k} $$
where $V_{n,k}$ is the Stiefel manifold of orthonormal $k$-frames in the vector space $V$, i.e. this is the space $V_{n,k}^2$ mod the diagonal action of $O_k$.
So you could view it as a bundle over $G_{n,k}^2$ with fiber $O_k$, or as a bundle over $G_{n,k}$ with fiber $V_{n,k}$. $G_{n,k}$ is the Grassmannian of $k$-dimensional subspaces of $\mathbb R^n$.
The map $V_{n,k} \times_{O_k} V_{n,k}$ to $GLI(V)$ is given by sending a pair $(A,B) \in V_{n,k} \times V_{n,k}$ to: the span of $A$, the span of $B$, and the corresponding linear isometry represented by $B\circ A^{-1}$, where we think of $A$ and $B$ as representing isometric embeddings $\mathbb R^k \to \mathbb R^n$.
So the homotopy-type of this space is at least fairly reasonable, as $V_{n,k}$ is highly connected. I think this bundle likely has a lot of other nice properties lurking near the surface. Is this the kind of thing you're asking about? In particular, as a bundle over $G_{n,k}^2$ you'd have some nice Schubert-cell type constructions, i.e. you could view $V_{n,k} \times_{O_k} V_{n,k}$ as the "diagonal" $V_{n,k}$ subspace union "Schubert cells".
Ryan, thanks. Yes, this kind of information is most helpful. Also it would be nice to have references to places where all that is explained in details. I have heard about Schubert
cells, but never read about it, for example. – Mark Sapir Oct 1 '10 at 19:30
The basics appears in Milnor and Stasheff's book "Characteristic Classes". If you just want the idea of Schubert cells rather than any actual details, I wrote up a computation of the
Euler Characteristic of $G_{n,k}$ using related ideas, here: en.wikipedia.org/wiki/Grassmannian#Schubert_cells – Ryan Budney Oct 1 '10 at 21:05
@Ryan: Thanks again! – Mark Sapir Oct 1 '10 at 21:24
I'm no expert, but paging through "Linear Algebraic Monoids" by Lex Renner suggests to me that it has a lot of information you could use.
No, the book is about submonoids of $M_n$ (the monoid of all $n\times n$-matrices). – Mark Sapir Sep 25 '10 at 23:49
In any case, the question is not about representations of monoids or algebraic groups, it is about alg. geom. and geom. topological properties of one particular object. – Mark Sapir Sep 27 '10 at 12:02
| {"url":"http://mathoverflow.net/questions/39976/general-linear-inverse-monoid?sort=oldest","timestamp":"2014-04-17T01:35:10Z","content_type":null,"content_length":"68680","record_id":"<urn:uuid:5b7e98e1-bf66-45b8-ae40-080464826e67>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Graphing Linear Equations
Date: 07/14/98 at 11:20:20
From: Meagan
Subject: Graphing equations
I am totally lost when it comes to graphing equations. Please tell me
some ways I can figure it out easier.
Date: 08/01/98 at 14:15:16
From: Doctor Margaret
Subject: Re: Graphing equations
Hi Meagan,
Thanks a lot for writing to us. Although your question was not very
specific, I will try to give you some answers about graphing in
This was actually the area of mathematics that got me really
interested. A graph of an equation is a picture of it. In the case of
linear equations, we can tell how fast the equation is increasing or
decreasing just by looking at the picture of the line it makes in the
xy plane, that is, the area pictured by the x-axis (horizontal line)
and the y-axis (vertical line), which intersect each other at zero.
The equations that we graph in this case will have two variables, an x
and a y. These variables occur in what we call an "ordered pair," that
is, a set of parentheses like this: (x, y).
Because you sound as if you are just starting out with graphing, let's
see the easiest equation to graph, which is a straight line that can
tilt up or down. The equation looks like this:
y = mx + b
in writing.
To graph a linear equation, you have to find the ordered pair solutions
of the equation. Do this by choosing any value of x and finding the
corresponding value of y. Repeat this procedure, choosing different
values for x, until you have found the number of solutions desired.
Since the graph of a linear equation in two variables is a straight
line, and a straight line is determined by two points, it is necessary
to find at least two solutions. I like to find three and if they all
line up, then I know I'm right. For example:
Graph: y = 2x + 1.
One of the best choices you can make for x is to make it equal to zero.
This give you the place on the y axis where the line intersects it.
y = 2(0) + 1 = 1
The first ordered pair is (0,1).
Doing this two more times for x = 1 and x = -2 we have:
y = 2(1) + 1 = 3, giving us (1,3)
y = 2(-2) + 1 = -3, giving us (-2,-3)
Now we can graph the line in the xy plane. We have three ordered pairs,
(0,1), (1,3) and (-2,-3).
For (0,1) we count to zero on the x axis and up one for y. Make a dot.
(I'll do this with a *.) For (1,3) we count one to the right on the x
axis and up three for y. Another dot.
I'll leave the third pair for you. But you will be counting to the left
and down because the numbers are negative. Here is the picture for the
first two:
|3 * (1,3)
| |
|2 |
| |
*1--- (0,1)
-3 -2 -1 0| 1 2 3
Now draw a line through the three dots you graphed, and you have your
graph of a straight line.
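Here is a quick way to reproduce this picture with modern tools (my addition; Python/matplotlib was of course not part of the original 1998 answer):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 100)
plt.plot(x, 2*x + 1)                   # the line y = 2x + 1
plt.plot([0, 1, -2], [1, 3, -3], 'o')  # the three ordered pairs, including
                                       # the third one left as an exercise
plt.axhline(0, color='k')              # x axis
plt.axvline(0, color='k')              # y axis
plt.title('y = 2x + 1')
plt.show()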
The easiest way is to practice until you get used to it. Please write
back if you need more help.
- Doctor Margaret, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/54470.html","timestamp":"2014-04-18T08:26:47Z","content_type":null,"content_length":"8191","record_id":"<urn:uuid:11c8a041-9d75-495c-911b-0f2049e29391>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Notice of paper
We wish to announce the availability of the following paper, at the
URL given below.
The logic of linear functors
by Richard Blute, J.R.B. Cockett, and R.A.G. Seely
It has been commonplace to base logics and type theories on
categorical doctrines; with this paper we propose a more general
paradigm, suggesting that logics and type theories may be based on
functorial doctrines. This paradigm is very general - in particular,
it subsumes all of usual categorical logic and type theory as
degenerate cases. So as to be able to make precise claims, we
illustrate this idea with a special case, developing the logic of
linear functors with sufficient detail to see that this one doctrine
is general enough to deal with basic linear modal logic and with the
Abrusci-Ruet mixed non-commutative linear logic. We emphasise the
case where the logic is based on a single functor, but it will be
clear that one could also base it on a family of functors, which would
allow one to deal with process logic as considered by Hennessy and
The paper's abstract is given in full below.
The paper may be found at
or go to R.A.G. Seely's home page
and click on the appropriate link.
The logic of linear functors
by Richard Blute, J.R.B. Cockett, and R.A.G. Seely
This paper describes a family of logics whose categorical semantics is
based on functors with structure rather than on categories with
structure. This allows the consideration of logics which contain
possibly distinct logical subsystems whose interactions are mediated
by functorial mappings. For example, within one unified framework, we
shall be able to handle logics as diverse as modal logic, ordinary
linear logic, and the "noncommutative logic" of Abrusci and Ruet, a
variant of linear logic which has both commutative and noncommutative
Although this paper will not consider in depth the categorical basis
of this approach to logic, preferring instead to emphasize the
syntactic novelties that it generates in the logic, we shall focus on
the particular case when the logics are based on a linear functor, to
give a definite presentation of these ideas. However, it will be
clear that this approach to logic has considerable generality.
There have been several individual attempts to develop logics with
distinct but related subsystems of connectives, such as the
Abrusci--Ruet noncommutative logic and the bunch logic of O'Hearn and
Pym; generally these are presented in terms of "bunching" the formulas
in the sequents of the logics via different "punctuation". The
present functor logic, by contrast, uses a system of "formula blocks",
which represent the functorial action and which give finer control
over what logical features may be displayed. By displaying a family
of functor logics, we illustrate how different logical systems can be
developed along these lines, logics which are primarily distinguished
by the degree to which they permit nesting of formula blocks. Our
examples will include a basic logic where there is essentially no
nesting, a system of linear modal logic, which allows nesting, but has
only one system of connectives, and the Abrusci--Ruet logic, where
nesting is virtually unrestricted and there are two subsystems of
connectives. We finish by showing how to translate between the
``bunch'' style of logic and our ``formula block'' or functor logic
using the Abrusci--Ruet noncommutative logic as an example.
R.A.G. Seely | {"url":"http://www.seas.upenn.edu/~sweirich/types/archive/1999-2003/msg00633.html","timestamp":"2014-04-18T00:13:36Z","content_type":null,"content_length":"5839","record_id":"<urn:uuid:b09ed78c-aeea-4669-b679-cbe8b5d674e0>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
QCanvasSpline Class Reference
[canvas module]
The QCanvasSpline class provides multi-bezier splines on a QCanvas. More...
#include <qcanvas.h>
Inherits QCanvasPolygon.
Public Members
Detailed Description
The QCanvasSpline class provides multi-bezier splines on a QCanvas.
A QCanvasSpline is a sequence of 4-point bezier curves joined together to make a curved shape.
You set the control points of the spline with setControlPoints().
If the bezier is closed(), then the first control point will be re-used as the last control point. Therefore, a closed bezier must have a multiple of 3 control points and an open bezier must have one
extra point.
The beziers are not necessarily joined "smoothly". To ensure this, set control points appropriately (general reference texts about beziers will explain this in detail).
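The joined-cubic-Bezier structure described above is easy to sketch outside Qt. The following Python snippet (an illustration only, not the Qt implementation) evaluates a multi-bezier spline from a flat control-point list using the same convention: a closed spline reuses its first point as the last, so consecutive groups of four points (overlapping by one) form the cubic segments.

def cubic(p0, p1, p2, p3, t):
    # one cubic Bezier segment, evaluated at parameter t in [0, 1]
    s = 1.0 - t
    return (s**3*p0[0] + 3*s*s*t*p1[0] + 3*s*t*t*p2[0] + t**3*p3[0],
            s**3*p0[1] + 3*s*s*t*p1[1] + 3*s*t*t*p2[1] + t**3*p3[1])

def spline_points(ctrl, closed=True, samples=16):
    pts = list(ctrl) + [ctrl[0]] if closed else list(ctrl)
    out = []
    for i in range(0, len(pts) - 3, 3):      # segments share endpoints
        for j in range(samples):
            out.append(cubic(*pts[i:i+4], j / samples))
    return out

# a closed spline with 6 control points (a multiple of 3), giving two segments
print(spline_points([(0, 0), (1, 2), (3, 2), (4, 0), (2, -2), (0, -1)])[:3])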
Like any other canvas item splines can be moved with QCanvasItem::move() and QCanvasItem::moveBy(), or by setting coordinates with QCanvasItem::setX(), QCanvasItem::setY() and QCanvasItem::setZ().
See also Graphics Classes and Image Processing Classes.
Member Function Documentation
QCanvasSpline::QCanvasSpline ( QCanvas * canvas )
Create a spline with no control points on the canvas canvas.
See also setControlPoints().
QCanvasSpline::~QCanvasSpline ()
Destroy the spline.
bool QCanvasSpline::closed () const
Returns TRUE if the control points are a closed set; otherwise returns FALSE.
QPointArray QCanvasSpline::controlPoints () const
Returns the current set of control points.
See also setControlPoints() and closed().
int QCanvasSpline::rtti () const [virtual]
Returns 8 (QCanvasItem::Rtti_Spline).
See also QCanvasItem::rtti().
Reimplemented from QCanvasPolygon.
void QCanvasSpline::setControlPoints ( QPointArray ctrl, bool close = TRUE )
Set the spline control points to ctrl.
If close is TRUE, then the first point in ctrl will be re-used as the last point, and the number of control points must be a multiple of 3. If close is FALSE, one additional control point is
required, and the number of control points must be one of (4, 7, 10, 13, ...).
If the number of control points doesn't meet the above conditions, the number of points will be truncated to the largest number of points that do meet the requirement.
Example: canvas/canvas.cpp.
This file is part of the Qt toolkit. Copyright © 1995-2007 Trolltech. All Rights Reserved.
| {"url":"http://idlebox.net/2007/apidocs/qt-x11-free-3.3.8.zip/qcanvasspline.html","timestamp":"2014-04-20T11:03:00Z","content_type":null,"content_length":"6079","record_id":"<urn:uuid:7d20ecbd-ad50-4b73-90eb-ed82b289e2a9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00125-ip-10-147-4-33.ec2.internal.warc.gz"}
accounts payable aging report
One way ..
Put in R7:
Copy R7 down to R200. R7:R200 will return all the various aging labels that
you could then use in SUMPRODUCT formulas as desired, e.g. one that
returns the total of amounts in col Q for dates in col C within 30-60 days old.
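The worksheet formulas themselves are not present in this archived copy of the post; as a stand-in, here is the same aging-bucket logic sketched in Python/pandas rather than Excel (the column names 'date' and 'amount' are my assumptions, not from the original):

import pandas as pd

df = pd.DataFrame({'date': pd.to_datetime(['2024-01-05', '2024-02-10']),
                   'amount': [100.0, 250.0]})          # hypothetical invoices
age = (pd.Timestamp('2024-03-01') - df['date']).dt.days
df['bucket'] = pd.cut(age, bins=[0, 30, 60, 90, 10**6],
                      labels=['0-30', '30-60', '60-90', '90+'])
print(df.groupby('bucket', observed=False)['amount'].sum())  # totals per aging bucket
| {"url":"http://www.excel-answers.com/microsoft/Excel-Worksheet/31295629/accounts-payable-aging-report.aspx","timestamp":"2014-04-18T10:34:40Z","content_type":null,"content_length":"8790","record_id":"<urn:uuid:bb566817-fc28-4ad4-ba5a-7f9f023b9176>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00126-ip-10-147-4-33.ec2.internal.warc.gz"}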
Does there exist a holomorphic function which takes given values on the positive integers?
Inspired of course by What's a natural candidate for an analytic function that interpolates the tower function? I am minded to ask what looks to me like a more natural question: given a sequence
$a_1,a_2,a_3,\ldots$ of complex numbers, is there always a holomorphic function $f$ defined on the entire complex plane, with $f(n)=a_n$ for $n=1,2,3,\ldots$? No idea what the answer is myself, but
wouldn't surprise me if it were well-known and even easy.
cv.complex-variables puzzle
Sure. More generally, for any Stein space $X$ and discrete set $S$ in $X$ and effective divisor $D$ supported on $S$, the surjective map of coherent sheaves $O_X \rightarrow O_D$ has coherent kernel and so induces a surjection on global sections. So by description of $O_D(X)$ via discreteness of $S$, there exists holomorphic $f$ on $X$ whose germ at each point of $S$ has whatever "initial part of Taylor expansion" we wish. In dimension 1 can play similar game with meromorphic $f$ holomorphic outside $S$ and Laurent tails at $S$ (generalizing Mittag-Leffler theorem) – BCnrd Apr 8 '10 at 12:24
Typo correction above: $D$ isn't a divisor when $X$ has dimension $> 1$. I meant it to be a 0-dimensional analytic space structure on $S$ (of which there are zillions of choices as "multiplicity" grows). Presumably this intent was clear. – BCnrd Apr 8 '10 at 15:03
This question was asked twice before: mathoverflow.net/questions/2944, mathoverflow.net/questions/7328/… – Jonas Meyer Apr 8 '10 at 15:47
@Jonas: But now with a completely different solution. :) – BCnrd Apr 8 '10 at 18:24
As Brian Conrad said, the important property is that $\mathbb{Z}\subset \mathbb{C}$ is a discrete subset. You can take any collection of isolated points $\Omega\subset \mathbb{C}$ and define an
entire function with any values you want at the points of $\Omega$. My favorite consequence: If you define the sum of divisors function for the Gaussian integers, there is an entire function that
outputs the sum of the divisors of the input when the input is of the form $a+bi$, $a, b\in \mathbb{Z}$. – Matt Apr 8 '10 at 23:58
2 Answers
This is Exercise 6, Page 26, of Knopp's Problem Book in the Theory of Functions, Volume 2: For any sequence of complex numbers $z_n$ with no finite limit point, and for any sequence of complex numbers $w_n$, there is an entire function mapping $z_n$ to $w_n$. The proof goes like this: Use the Weierstrass Factor Theorem to construct a function $W$ with simple zeros at the $z_n$. Use the Mittag-Leffler theorem to construct a function $M$ with simple poles at the $z_n$ with residues $\frac{w_n}{W'(z_n)}$. Then the function $W\cdot M$ does the job.
Ah. Of course, good old Knopp would have it (+1). – Harald Hanche-Olsen Apr 8 '10 at 19:19
Probably well-known. Easy? I'd venture to guess that an expression like $$\sum_{n=1}^\infty b_n\frac{e^{c_n(z-n)}}{(n-1)!}\prod_{k=1}^{n-1}(z-k)$$ can be made to work. You'll have to pick the $b_n$ successively to make the $N$'th partial sum equal to $a_N$, and real constants $c_n$ large enough to obtain uniform convergence to the left of any fixed vertical line. E.g., so that the $n$'th term has absolute value less than $2^{-n}$ when $\operatorname{Re} z<n/2$.
| {"url":"https://mathoverflow.net/questions/20711/does-there-exist-a-holomorphic-function-which-takes-given-values-on-the-positive/20724","timestamp":"2014-04-18T18:46:26Z","content_type":null,"content_length":"63295","record_id":"<urn:uuid:cd4d5b1d-dc2e-433e-8a7a-a9e3c262156a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Reality, locality, and “free will”
In 1964, John Bell devised a testable prediction (now known as Bell’s inequality) based on two reasonable assumptions: that the measurement of one particle cannot instantaneously influence another,
distant particle (locality) and that particles have properties before you measure them (reality). Numerous experiments have since shown that Bell’s inequality is violated, forcing one to conclude
that, contrary to the view held by Einstein, Podolsky, and Rosen, quantum mechanics cannot be both local and real.
But what of other assumptions built into Bell’s inequality? In a paper appearing in Physical Review Letters, Michael Hall at the Australian National University in Canberra considers an assumption,
called measurement independence, in the following experimental paradigm: A source emits two particles in an entangled state and sends them to two distant laboratories, where two experimenters
randomly choose apparatus settings that measure a system property. The measurement outcomes can be correlated in a way that violates Bell’s inequality, but measurement independence assumes that the
experimenters freely choose apparatus settings, independent of any properties of the systems that they measure. By relaxing this assumption, Michael Hall constructs a local and real model that
describes the correlations of the experiment. He shows that locality and reality can be retained with a 14% reduction of the experimenters' "free will"—that is, the assumption of measurement
independence need not be given up completely. – Sonja Grondalski | {"url":"http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.105.250404","timestamp":"2014-04-18T21:34:34Z","content_type":null,"content_length":"4861","record_id":"<urn:uuid:fb8c23af-1fd4-4465-b90e-bab9cf213a16>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
Battle Tower (For all your Match-Seeking Needs!)
Open Challenge!
1v1 singles
2 Day DQ
1 Sub
1 Recoveries/3 Chills
A big forest; giant trees cover the sky from view, restricting flying very high and stopping Sky Attack completely. The place is crawling with Bug-type Pokemon (there is a 40% chance for a missed attack to hit a bug and have it call a Beedrill to attack your Pokemon for 8 damage). In addition, it is night, making it VERY hard to see (-5% Acc).
Restricted moves: Sky Attack, Fly, Dive, Surf
Grass and Dark moves get +1 BAP
May 6, 2011
This needs a subref. Please. It started December 3 and only one round has been done. Please.
Also, open challenge:
1v1 Singles (I'll be using a not-very-well-trained FE)
1 Day DQ (I would like this to be a flashmatch, but I don't feel like going on IRC to officially arrange a flashmatch; either way, it should be fast)
0 Chill/Recoveries
Tournament Arena
The Royal Guard
Sep 20, 2011
Can Iz join that Royale Engineer?
I'll take both CSFP and MogRunner's battles The rules are the same
No switching
All abilities
No items
Nov 8, 2011
EDIT: Actually, dropping out to keep my other match open.
Aug 4, 2010
In for engi's if possible
Aug 19, 2009
Challenging deadfox081 to a match!
6v6 Singles
NFEs only (middlemons OK)
3 day DQ
2 recovers/5 chills
Arena: The type changing arena! (thanks Engineer!)
It's the standard arena, except somebody has implanted a strange device in the middle of the field! For the duration of the first round, this device is off. After the end of each round, though,
it beeps and suddenly alters the elemental makeup of the arena, giving certain benefits to certain types and detriments to some other ones. Each time the makeup is altered, a random type is
chosen. All attacking moves whose types hit the chosen type for super-effective damage have 2 more BAP, while all attacking moves whose types hit the chosen type for not very effective or zero
damage have 2 less BAP.
I'll let him choose the rest (items, abilities, subs)
Jun 8, 2009
Two things:
1) The battle here still needs a ref.
2) This post is still accurate.
LouisCyphre heralds disaster.
May 10, 2010
Already agreed to ref Flamestrike's "giant" match.
I challenge someone to a Beginner's Battle!! Please take note that this is my first match.
3 Day DQ
0 recovery moves, 5 chills
Arena: The Randomizer at the ASB arena!
This is the standard arena for ASB, but with a twist!
The Randomizer is a small disk that randomly applies different effects. Each effect has an equal chance of activation at the end of each round. The match begins with effect 5 in place.
The effects:
1). The moat around the arena becomes frozen. Any pokes in the water when this happens are completely immobilized unless they melt the ice. It unfreezes after the round has finished.
2). A Flock of Togekiss land close by and cause all added effects to be doubled.
3). A Chansey runs out, uses Heal Bell, and runs back.
4). Gravity is activated in the Arena
5). All other Randomizer effects disappear this round
Note: whoever refs this should make this interesting :)
Jun 14, 2011
1v1 Training NFE Singles
1 Day DQ
0 Chill/Recoveries
Tournament Arena
Mar 31, 2011
Lets go with All abilities, No items, 2 subs.
Dogfish44 Banned from 22 Casinos Moderator
Jan 1, 2009
1v1 Training Singles - Strong FE
1 Day DQ
ASB Arena
0 Recovers
0 Chills
Items preferably enabled for training or fully on, but your choice.
Dropping my challenge to accept the king of Serponstuff!
No switch
No items
All abilities
I will use weak mons
^guessing that is me?
Its_A_Random Is armed with many Honedges Moderator
Mar 19, 2010
Yes KS, he's referring to you, & I'll ref you two. Expect a PM in a few minutes (Don't send me your teams until you receive the PM).
Ice-eyes Simper Fi
Feb 15, 2010
The Most Boring Challenge Ever
Beginner Battle
3v3 Singles
ASB Arena
2 Subs
2 Recoveries / 5 Chills
Because I was told that I could have two matches going at once by the guy, I'll accept Ice Eye's challenge!!
No switches (because it seems to be the standard)
No items (because I have zip)
All abilities (I have a feeling that I'll need 'em)
Feb 23, 2011
Open Challenge
2v2 Doubles
Weak Fully Evolved mons
2 Day DQ
Arena: Nothing too biased or complex, but Acceptor / Ref's choice
The Wanderer
Nov 2, 2011
I'll take this. PM me your critters.
Need Refs please
Yarnus of Bethany
Mar 5, 2010
5v5 doubles
1 day DQ
Unown Soup (2 of each letter)
0 reCoveries /2 chill s
One problem day on which DQ is extended to 3 days.
Sep 21, 2009
items: training
Dropping my match with Blaziken-Master Zeo to grab a fight with Leetfoot; ref can choose arena
Trainer items
switch= ha ha
All abilities
As I can have a max of 3 matches at once, I challenge waterwarrior to a Beginner's battle in the Danger Room!!!
3v3 singles
0 recoveries/ 5 chills
1 sub
3 day DQ time
The arena!!!!
We will fight in the Danger Room of the X-Men!!!
The arena begins as a normal artificial battlefield. No digging, no grass, no water. It is lit up pretty well, so acc gets a 10% increase.
Unfortunately, Logan (read Wolverine) has been drinking and is in a sadistic mood today. He will change the room at his will!!
Here is what he will change:
1). An endless fall through the sky. Unless you can fly or have psychic powers, dodging is almost immpossible!
2). Omaha Beach: blood, bullets, salt water, ally soldiers, Nazis, and lots of sand and bodies. The water is too dangerous to go into, so no swimming. At the end of each action, one of the
following happens: 10% chance of a mortar strike (20 damage), 50% chance of bullet strikes (5 damage), and a 40% chance of nothing happening.
3). Juggernaut is rampaging in a park. Grass and water (in a fountain). There is a swingset and a slide as well as monkey bars. At the end of the round, 50% chance of Juggernaut striking you
for 25 damage.
4). The system malfunctions and send us to the Digital World!! This is in the Ref's hands about what happens now D: at the end of round 4, there is a 45% chance for us to be transported back
to the starting portion of this battle. Then the cycle begins again!
Each area changes at the end of each round. Areas 1-3 have a 30% chance of activating.
Area 4 has a 10% chance of activating. | {"url":"http://www.smogon.com/forums/threads/battle-tower-for-all-your-match-seeking-needs.86086/page-316","timestamp":"2014-04-20T06:25:09Z","content_type":null,"content_length":"101836","record_id":"<urn:uuid:c8c44e0e-a444-4c67-92cb-3dd1a3afcc4c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lawrenceville, GA Prealgebra Tutor
Find a Lawrenceville, GA Prealgebra Tutor
...I am also a professional actor with both film and theatre credits. I have a B.A. in Musical Theatre from Birmingham-Southern College. I was required to take music theory, and four years of
voice lessons.
34 Subjects: including prealgebra, reading, writing, calculus
...My teaching experience includes working at a local community college, teaching elementary school math, high school math (Algebra I & II), adult high school geometry and substituting math
classes for a local middle school. I also have experience tutoring adults and children one-on-one. At this time, I mainly tutor online-virtually or in my home.
10 Subjects: including prealgebra, Spanish, geometry, algebra 1
...I am currently working at a pharmaceutical company as a chemist. Currently I am a Gokuldham Vidhayala Teacher where I teach Gujarati language to children ages 4-14 in Indian Temple. My past
experiences include following: Instructor of Chemistry: (Georgia State University) where I taught general...
26 Subjects: including prealgebra, reading, English, writing
...Since then, I have helped students prepare for these standardized tests. I am currently a high school science teacher who helps tutor all science subjects in school. I have helped students
prepare for both the ACT and the SAT with great success.
15 Subjects: including prealgebra, chemistry, geometry, biology
...I am flexible with my time, but I do have a 24-hour cancellation policy. I do offer makeup sessions and will work with your schedule as much as possible. I look forward to hearing from you and
assisting you with achieving success in all your endeavors.I have set up from scratch three different businesses with Quickbooks Pro and was the bookkeeper for those businesses.
16 Subjects: including prealgebra, geometry, accounting, algebra 2
Related Lawrenceville, GA Tutors
Lawrenceville, GA Accounting Tutors
Lawrenceville, GA ACT Tutors
Lawrenceville, GA Algebra Tutors
Lawrenceville, GA Algebra 2 Tutors
Lawrenceville, GA Calculus Tutors
Lawrenceville, GA Geometry Tutors
Lawrenceville, GA Math Tutors
Lawrenceville, GA Prealgebra Tutors
Lawrenceville, GA Precalculus Tutors
Lawrenceville, GA SAT Tutors
Lawrenceville, GA SAT Math Tutors
Lawrenceville, GA Science Tutors
Lawrenceville, GA Statistics Tutors
Lawrenceville, GA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Alpharetta prealgebra Tutors
Athens, GA prealgebra Tutors
Atlanta prealgebra Tutors
Buford, GA prealgebra Tutors
Dacula prealgebra Tutors
Decatur, GA prealgebra Tutors
Duluth, GA prealgebra Tutors
Grayson, GA prealgebra Tutors
Johns Creek, GA prealgebra Tutors
Marietta, GA prealgebra Tutors
Norcross, GA prealgebra Tutors
Roswell, GA prealgebra Tutors
Sandy Springs, GA prealgebra Tutors
Snellville prealgebra Tutors
Suwanee prealgebra Tutors | {"url":"http://www.purplemath.com/Lawrenceville_GA_prealgebra_tutors.php","timestamp":"2014-04-18T21:47:06Z","content_type":null,"content_length":"24404","record_id":"<urn:uuid:2161e85d-e317-4eec-851c-3485190a5fe2>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: [SI-LIST] : Antenna Problem on the Board
From: Chris Rokusek (crokusek@innoveda.com)
Date: Mon Jun 04 2001 - 18:13:01 PDT
I like your power analogy except that it assumes each I(f) is halved which
is not the case. If a driver of 20 ohms feeds a 50 ohm line with/without a
30 ohm series terminator, the current is reduced to .71 (not .5) its
original value. Squaring that is .5 which is multiplied by twice the time
ending up back at one. Seems like we agree to me!
For reference:
E(f) = k * I(f) * f^2 * A / r
where k is a constant,
I(f) is current at a given frequency,
f is frequency,
A is loop area (sep distance times length),
r is antenna distance.
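A quick numeric check of the figures in this thread (my sketch, not from the original mail), using the values quoted above: a 20 ohm driver, 30 ohm series terminator, 50 ohm line.

Rdrv, Rser, Z0, V = 20.0, 30.0, 50.0, 1.0
i_par = V / (Rdrv + Z0)           # without the series terminator
i_ser = V / (Rdrv + Rser + Z0)    # with it
print(i_ser / i_par)              # 0.70 -- the "0.71" quoted above
print(2 * (i_ser / i_par)**2)     # ~0.98: half the current squared, twice the
                                  # travel time, so roughly the same energy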
Now in attempting to map the series/parallel question into this formula I
reasoned to double the area in the sense that the wave (albeit with less
current than parallel case) travels down and then back. It happens to be at
the same location. I don't see how this thinking is drastically different
than creating an actual route with twice the length (and loop area) that
folds back and runs along side itself to terminate near its source (with
adjusted feed current and end termination). This reasoning implies that the
reduction in current must outweigh the increase in loop area to be beneficial.
If you don't like messing with "A", then another way to map this problem
into the simple formula is to look at the Fourier equivalent I(f) of the
signal resulting from each of the cases assuming some periodic switching
frequency (the formula is for sine wave of freq "f"). In the parallel case,
the signal content is that of a gorgeous trapezoidal waveform--each segment
sees a trapezoidal blip travel down it per cycle. However in the series
termination, it looks much different. The energy is not distributed among
harmonics in the same proportions, each small segment sees something
different than the others over a given cycle. Aren't the odds spectacular
that some of these small segments will sing in chorus at some frequency at
some angle and some distance away (at your FCC test).
This is all just hearsay anyway until someone publishes some results.
Best Regards,
Chris Rokusek
> -----Original Message-----
> From: owner-si-list@silab.eng.sun.com
> [mailto:owner-si-list@silab.eng.sun.com]On Behalf Of S. Weir
> Sent: Monday, June 04, 2001 1:52 PM
> To: si-list@silab.eng.sun.com
> Subject: RE: [SI-LIST] : Antenna Problem on the Board
> Chris,
> I really don't follow your reasoning:
> At 11:16 AM 6/4/01 -0700, you wrote:
> >Richard,
> >
> >Your first paragraph sounds good to me.
> >
> >Something else that to consider is that with parallel
> termination, the wave
> >flows down the line without a reflection but with source termination the
> >wave has to travel _twice_ as far before it is absorbed. This
> seems loosely
> >like doubling the loop area. Sounds like a good case for
> >simulation/measurement.
> >
> >Chris Rokusek
> >Innoveda
> The current imparted is one half for each direction, Vdelta/(Rt + Zl)
> versus Vdelta/(Zl) so we have one quarter the power for twice the
> time. This says one-half the total energy, and the peak amplitude is one
> quarter. With the same geometries for each, what condition would
> ever give
> rise to the series termination radiating more than the parallel
> termination?
> Regards,
> Steve.
| {"url":"http://www.qsl.net/wb6tpu/si-list/1153.html","timestamp":"2014-04-17T01:08:56Z","content_type":null,"content_length":"9414","record_id":"<urn:uuid:1c1b55de-c88f-4630-9848-be8dbcaed104>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
When is a morphism proper?
A morphism of varieties over $\mathbb{C}$, $f:V\to W$ is proper if it is universally closed and separated. One way to check properness is the valuative criterion.
What other methods do we have for determining if a morphism is proper? Particularly, I'm interested in quasi-projective varieties, but ones that aren't actually projective. And while a completely
algebraic, valid over all fields or for schemes answer would also be good, I'm looking at complex varieties, and may be able to assume that the singularities are all finite quotient singularities.
ag.algebraic-geometry complex-geometry
If the direct image functor of a morphism preserves coherent sheaves on noetherian schemes, then this morphism is proper. The other direction is well known. – Shizhuo Zhang Apr 10 '10 at 1:46
It's equivalent for the analytified map to be proper in the usual topological sense; see the Expose on analytic/algebraic stuff near the end of SGA1. This allows arbitrary schemes of finite type. – BCnrd Apr 10 '10 at 2:52
2 Answers
Assume $V$ and $W$ are quasiprojective. Let $i:V\to X$ be a locally closed embedding with $X$ projective (for instance $X$ could be $P^n$). Consider the induced map $g:V\to X\times W$;
this is also a locally closed embedding. Then $f$ is proper iff $g$ is a closed embedding, or equivalently if $g(V)$ is closed.
As for the topological approach, use the definition of properness given by Charles Staats. Let $f:X\to Y$ be a continuous map of Hausdorff second countable topological spaces. The base change $f':X'\to Y'$ of $f$ by a continuous map $g:Y'\to Y$ is defined by letting $X'$ be the set of pairs $(x,y')$ in $X\times Y'$ such that $f(x)=g(y')$ (with the induced topology from $X\times Y'$), and $f':X'\to Y'$ the obvious projection. Then $f$ is proper if and only if all its base changes are closed. This may not be logically relevant, but I find it very comforting. To connect the two cases note that, given a locally closed embedding of complex algebraic varieties, it is closed in the Zariski topology iff it is closed in the Euclidean topology.
There's a purely topological notion of properness, in which a continuous map is proper if and only if the inverse image of every compact set is compact. I have been told that this corresponds with the algebraic notion in the case of complex varieties, although I do not have a reference.
I guess one can find the reference in Serre's GAGA; if I remember correctly, he attributed this result to Chow. – Li Yutong May 8 '13 at 19:40
| {"url":"http://mathoverflow.net/questions/20879/when-is-a-morphism-proper","timestamp":"2014-04-19T15:25:46Z","content_type":null,"content_length":"57216","record_id":"<urn:uuid:ece79ae3-5411-4b26-991c-7bc628e383d1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating Rates of Change
January 13th 2013, 10:20 AM #1
Estimating Rates of Change
I need help with the question: a diver is on the 10 m platform; the diver's height above the water, in metres, at time t can be modelled using the equation h(t) = 10 + 2t -4.9t^2. Estimate the
rate at which the diver's height above the water is changing as the diver enters the water.
I know that the time that the diver hits the water is around 1.65 seconds and I tried to find the average rate of change between 0 seconds and 1.65 seconds, but the rate that I calculated was
around -6 m/s.
I know that the correct answer is about 14 m/s but I'm not sure how to solve for this answer.
Any suggestions would be appreciated, please and thank you!
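One way to see where the 14 m/s comes from: the average rate over the whole interval from 0 to about 1.65 s mixes the slow start with the fast finish, which is why it comes out near -6 m/s. Shrinking the interval toward the entry time approximates the instantaneous rate. A quick numerical sketch of that idea (JavaScript, with the entry time taken as roughly 1.647 s, the positive root of h(t) = 0):
// h(t) = 10 + 2t - 4.9t^2; the diver enters the water near t = 1.647 s
function h(t) { return 10 + 2 * t - 4.9 * t * t; }

var tEnter = 1.647;
var widths = [1.647, 0.1, 0.01, 0.001];
for (var i = 0; i < widths.length; i++) {
    var dt = widths[i];
    var avgRate = (h(tEnter) - h(tEnter - dt)) / dt;
    console.log("interval width " + dt + " s: " + avgRate.toFixed(2) + " m/s");
}
// width 1.647 gives about -6.1 m/s (the average over the whole fall);
// width 0.001 gives about -14.1 m/s, the rate as the diver enters.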
Re: Estimating Rates of Change
Hey misiaizeska.
What kind of techniques are taught in your course? I'm guessing it's pre-calculus, because an easy way would be to use calculus.
In this kind of situation it's a good idea that we know what you are being taught, since other solutions may confuse you or get you into trouble with your marks.
{"url":"http://mathhelpforum.com/pre-calculus/211253-estimating-rates-change.html","timestamp":"2014-04-17T17:10:49Z","content_type":null,"content_length":"31632","record_id":"<urn:uuid:79b050d9-bf79-4126-b7ac-fe43720e66bf>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
physics @ berkeley
The vacuum landscape of string theory can solve the cosmological constant problem, explaining why the energy of empty space is observed to be at least 60 orders of magnitude smaller than several
known contributions to it. It leads to a "multiverse" in which every type of vacuum is produced infinitely many times, and of which we have observed but a tiny fraction. This conceptual revolution
has raised tremendous challenges in particle physics and cosmology. To understand the low-energy physics we observe, and to test the theory, we will need novel statistical tools and effective
theories. We must also solve a long-standing fundamental problem in cosmology: how to define probabilities in an infinite universe where every possible outcome, no matter how unlikely, will be
realized infinitely many times. This "measure problem" is inextricably tied to the quantitative prediction of the cosmological constant. | {"url":"http://physics.berkeley.edu/index.php?option=com_dept_management&act=events&Itemid=451&task=view&id=735","timestamp":"2014-04-18T05:31:31Z","content_type":null,"content_length":"30271","record_id":"<urn:uuid:760e09dc-54aa-4874-bf7d-2b0ed3d9963d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
Create X And Y Vectors From -5 To +5 With A Spacing ... | Chegg.com
Create x and y vectors from -5 to +5 with a spacing of 0.5. Use the meshgrid
function to map x and y onto two new two-dimensional matrices
called X and Y . Use your new matrices to calculate vector Z , with magnitude
Z = sin(sqrt(x^2+y^2))
(a) Use the mesh plotting function to create a three-dimensional plot of Z .
(b) Use the surf plotting function to create a three-dimensional plot of Z .
Compare the results you obtain with a single input ( Z ) with those
obtained with inputs for all three dimensions (X, Y, Z) .
(c) Modify your surface plot with interpolated shading. Try using different
colormaps .
(d) Generate a contour plot of Z .
(e) Generate a combination surface and contour plot of Z .
Computer Science | {"url":"http://www.chegg.com/homework-help/questions-and-answers/create-x-y-vectors-5-5-spacing-05-use-meshgrid-function-map-x-y-onto-two-new-two-dimension-q2634121","timestamp":"2014-04-18T11:16:06Z","content_type":null,"content_length":"22166","record_id":"<urn:uuid:e1677328-b0c5-476d-8836-aa36004f19fa>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evolution of the density profiles of dark matter haloes
Author: Reed, Darren; Governato, Fabio; Verde, Licia; Gardner, Jeffrey; Quinn, Thomas; Stadel, Joachim; Merritt, David; Lake, George Abstract: We use numerical simulations in a [lambda]CDM cosmology
to model density profiles in a set of sixteen dark matter haloes with resolutions of up to seven million particles within the virial radius. These simulations allow us to follow robustly the
formation and evolution of the central cusp over a large mass range of 1011 to 1014 M⊙, down to approximately 0.5% of the virial radius, and from redshift 5 to the present, covering a larger range in
parameter space than previous works. We confirm that the cusp of the density profile is set at redshifts of two or greater and remains remarkably stable to the present time, when considered in
non-comoving coordinates. Motivated by the diversity and evolution of halo profile shapes, we fit our haloes to the two parameter profile, ρ(r) ∝ 1 / [(cr/r_vir)^γ (1 + cr/r_vir)^(3−γ)], where the steepness of the cusp is given by the asymptotic inner slope parameter, γ, and its radial extent is described by the concentration parameter, c (with c defined as the virial radius divided by the concentration radius). In our simulations, we find γ ≈ 1.4 − 0.08 log10(M/M*) for haloes of 0.01M* to 1000M*, with a large scatter of Δγ ∼ ±0.3, where M* is the redshift-dependent characteristic mass of collapsing haloes; and c ≈ 8(M/M*)^(−0.15), with a large M/M*-dependent scatter roughly equal to ±c. Our redshift zero haloes have inner slope
parameters ranging approximately from r−1 (i.e. Navarro, Frenk, & White) to r−1.5 (i.e. Moore et al. ), with a median of roughly r−1.3. This two parameter profile fit works well for all types haloes
in our simulations, whether or not they show evidence of a steep asymptotic cusp. We also model a cluster in power law cosmologies of P ∝ kn, with n = (0, -1, -2, -2.7). Here we find that the
concentration radius and the inner cusp slope are a both function of n, with larger concentration radii and shallower cusps for steeper power spectra. We have completed a thorough resolution study
and find that the minimum resolved radius is well described by the mean interparticle separation over a range of masses and redshifts. The trend of steeper and more concentrated cusps for smaller M/
M* haloes clearly shows that dwarf sized [lambda]CDM haloes have, on average, significantly steeper density profiles within the inner few percent of the virial radius than inferred from recent
observations. Code to reproduce this profile can be downloaded from http://www.icc.dur.ac.uk/∼reed/profile.html (Refer to PDF file for exact formulas). | {"url":"https://ritdml.rit.edu/handle/1850/1490","timestamp":"2014-04-18T12:32:30Z","content_type":null,"content_length":"18271","record_id":"<urn:uuid:e1c512b7-7a38-4cc4-9a91-b3c5ee9e759f>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
Factor Theorem?
Hi Katy
According to the factor theorem,
For a polynomial P(x), x - a is a factor if P(a) = 0.
By the rational root theorem, the only candidates for rational roots of P(x) = x^3 + 3x^2 + 13x - 15 are the divisors of 15, namely x = ±1, ±3, ±5, ±15.
Substituting x = 1 in the polynomial:
P(1) = 1 + 3 + 13 - 15 = 2
Substituting x = -1:
P(-1) = -1 + 3 - 13 - 15 = -26
and likewise none of the other candidates gives zero.
Since the remainder is not zero for any candidate, the cubic has no rational root, and hence no linear factor over the rationals. So the given polynomial P(x) = x^3 + 3x^2 + 13x - 15 has no such factor.
I think the polynomial you have given is unfactorizable (over the rationals).
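For what it's worth, all the candidates can be checked at once with a quick brute-force sketch (JavaScript):
function P(x) { return x * x * x + 3 * x * x + 13 * x - 15; }

var candidates = [1, -1, 3, -3, 5, -5, 15, -15];
for (var i = 0; i < candidates.length; i++) {
    console.log("P(" + candidates[i] + ") = " + P(candidates[i]));
}
// P(1) = 2, P(-1) = -26, P(3) = 78, P(-3) = -54, and so on:
// none is 0, so the cubic has no rational root and no linear factor.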
Last edited by deepu (2005-12-31 08:49:26) | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=21950","timestamp":"2014-04-18T00:25:35Z","content_type":null,"content_length":"26329","record_id":"<urn:uuid:21a5c4ec-57da-4fdb-a464-1f8ce139dad9>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Increasing/decreasing, minima/maxima
October 12th 2009, 05:43 PM #1
Increasing/decreasing, minima/maxima
Okay, so I'm trying to find whether the graph is increasing/decreasing, and I need the minima/maxima.
My original function was x^3 - 4x
First derivative: 3x^2 - 4
I've gotten down to sqrt(4/3). Now, from here I am confused on exactly how to get what I mentioned above (increasing, etc.) I can do it with other types of functions but I don't exactly know what
to do in a situation like this. Any help, please?
And as a side question, can someone calculate the concavity and inflection points and see if they get (0,0) for inflection and (0, inf) for concave up and (-inf, 0) for concave down? I want to
check if I am at least doing that right.
it's +sqrt(4/3) and -sqrt(4/3)
then you set your intervals: the sign of the first derivative tells you where on the intervals the function is increasing/decreasing, and the second derivative tells you what kind of concavity f has on each interval
The first derivative has two zeros: $\pm \sqrt{\frac{4}{3}}$, so you'll need to study the sign of the first derivative in the intervals from negative infinity to $-\sqrt{\frac{4}{3}}$, between $-
\sqrt{\frac{4}{3}}$ and $\sqrt{\frac{4}{3}}$, and from $\sqrt{\frac{4}{3}}$ to positive infinity. The original function will be increasing when the first derivative is positive, and decreasing
when negative.
As far as inflection points go, these are points where the second derivative changes sign.
Good luck!
Okay, I figured it would be something like that. Now how would I put that in notation?
Is this correct?
Increasing on (-inf, -sqrt(4/3)) and (+sqrt(4/3), inf); decreasing between those, on (-sqrt(4/3), +sqrt(4/3)).
Cool, I get it. Thanks guys!
{"url":"http://mathhelpforum.com/calculus/107660-increasing-decreasing-minima-maxima.html","timestamp":"2014-04-18T05:31:49Z","content_type":null,"content_length":"43449","record_id":"<urn:uuid:9b100519-b925-42d8-8af3-3281dc0dc8e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Several problems regarding Ideals
February 19th 2008, 07:52 AM #1
Several problems regarding Ideals
Hello, I am stumped on these problems, and I am wondering if anybody could help me.
1. Prove or disprove that in the ring Z[X], <2X> = <2,X>
2. Suppose that I is an ideal of Q[X] which contains both X^{2} + 2X + 4 and X^{3} - 3. Show that I = Q[X].
3. Prove that in the ring Z[X], <2>U<X> is not an ideal
To show that $\left< 2x \right> = \left< 2,x \right>$ we need to show $\left< 2x \right> \subseteq \left<2 ,x\right>$ and $\left<2,x\right> \subseteq \left<2x\right>$.
2. Suppose that I is an ideal of Q[X] which contains both X^{2} + 2X + 4 and X^{3} - 3. Show that I = Q[X].
Note $x^3 - 3 = (x-2)(x^2+2x+4) + 5$; this means $\gcd (x^3 - 3,x^2+2x+4) = \gcd (x^2+2x+4, 5) = 1$ by the Euclidean algorithm. Thus, by relative primeness there exist $f(x),g(x)\in \mathbb{Q}[x]$ such that $f(x)(x^2+2x+4) + g(x) (x^3-3) = 1$. Since $I$ is an ideal it means $f(x)(x^2+2x+4) + g(x) (x^3-3) = 1 \in I$. All ideals which contain $1$ have to be the improper ideal, i.e. $I = \mathbb{Q}[x]$.
3. Prove that in the ring Z[X], <2>U<X> is not an ideal
Find two elements $a,b\in \left< 2\right> \cup \left< x \right>$ so that $a+b \notin \left<2\right> \cup \left< x \right>$.
thanks for the help
I tried that, but I believe that they aren't equal, because $x^3 + 2x^2 + 2$ is an element of $\left<2,x\right>$ (since it equals $2(x^2+1) + x(x^2)$), but $x^3 + 2x^2 + 2$ is not an element of $\left<2x\right>$, unless I'm wrong.
also, on another problem,
1. Prove that in the ring Z[X], <2> ∩ <X> = <2X>.
I understand why this would be true. Would I prove this by the properties of an ideal? By writing <2> = {2f(X) : f(X) in Z[X]} and <X> = {Xg(X) : g(X) in Z[X]}, then taking a point p in <2> ∩ <X> and proving that p can be written as 2X times some polynomial? Is that correct?
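A sketch of one direction, in case it helps: any $f \in \left<2\right> \cap \left<X\right>$ satisfies $f = 2g$, so every coefficient of $f$ is even, and $f = Xh$, so $f$ has zero constant term. Writing $f = a_1 X + a_2 X^2 + \cdots$ with every $a_i$ even gives $f = 2X\left(\frac{a_1}{2} + \frac{a_2}{2}X + \cdots\right) \in \left<2X\right>$. The reverse inclusion $\left<2X\right> \subseteq \left<2\right> \cap \left<X\right>$ is immediate, since every $2Xf(X)$ is a multiple of both $2$ and $X$.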
{"url":"http://mathhelpforum.com/advanced-algebra/28613-several-problems-regarding-ideals.html","timestamp":"2014-04-25T09:10:44Z","content_type":null,"content_length":"43345","record_id":"<urn:uuid:44f426d7-f0b7-48a0-bbe2-8da8f290ec4e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
aperture angle
(E-Mail Removed)
> The sensor has a max resolution of 1280 x 960 (1.3 Mio). Is it
> possible to get the sensor size of this information [...] ?
No, this is of no use in computing the physical area of the sensor.
But earlier, you wrote:
>>> Focal Length: 4.5mm (35mm equivalent 37mm)
This is the data you need to compute sensor size. By dividing 37 by
4.5, we find a crop factor equal to 8.2 - so the diagonal of your
camera's sensor is 5.3 mm - which means the imager's dimensions are
something like 4.24 mm x 3.18 mm.
> Furthermore I'm a little bit confused: The aperture angle depends on
> the aperture size, isn't it?
I've never heard the term "aperture angle", so I am not sure what sort
of formula this would be.
But if you want to know the angle "seen" by your camera's - that
particular datum is usually called "field of view" (FOV) and depends
on focal length and sensor diameter (but not on aperture). You'll
find the formula explained on this webpage:
For your camera, the FOV at f=4.5mm is 2*atan(5.3/2/4.5), which is
1.06 radians or ~ 61 degrees. It is the same FOV that a lens
with f=37mm would have with a 24x36 mm imager.
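For what it's worth, the same arithmetic as a small JavaScript sketch (the 43.3 mm value is the diagonal of a 24x36 mm frame; all figures approximate):
// Diagonal field of view from focal length and sensor diagonal
function fovDegrees(focalMm, diagonalMm) {
    var radians = 2 * Math.atan(diagonalMm / (2 * focalMm));
    return radians * 180 / Math.PI;
}

var fullFrameDiagonal = Math.sqrt(24 * 24 + 36 * 36); // ~43.3 mm
var cropFactor = 37 / 4.5;                            // ~8.2
var sensorDiagonal = fullFrameDiagonal / cropFactor;  // ~5.3 mm

console.log(fovDegrees(4.5, sensorDiagonal));   // ~61 degrees
console.log(fovDegrees(37, fullFrameDiagonal)); // the same ~61 degrees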
- gisle hannemyr [ gisle{at}hannemyr.no -
Kodak DCS460, Canon Powershot G5, Olympus 2020Z | {"url":"http://www.velocityreviews.com/forums/t415609-aperture-angle.html","timestamp":"2014-04-25T06:37:24Z","content_type":null,"content_length":"59654","record_id":"<urn:uuid:5e720751-1d90-4b18-ae4a-2866c0db9abf>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
PROPs representations, free module analog
An ordinary operad with one output can obviously be regarded as a free module over itself. Is there an analogous construction for operads with many outputs (PROPs)? This may be a difficult question, but what is the conceptual reason for that in the context of operads? Also, please point me to ways of producing representations of such structures. Some references and results beyond ordinary operads are welcome.
operads rt.representation-theory co.combinatorics
1 Answer
If I understand your first question, you are looking for a pasting schemes for PROPs.
For analogy, in a (non symmetric) free operad $\mathcal{P}$, pasting schemes are planar rooted trees where internal nodes are labeled on the generators of $\mathcal{P}$. The arity of a tree
is the number of its leaves and the composition $S \circ_i T$ of two trees $S$ and $T$ is the grafting of the root of $T$ at the $i$th leaf of $S$.
In a free PROP $\mathcal{R}$ generated by a set $G$ of generators, you can regard an element $g \in G$ with $p$ inputs and $q$ outputs as a node of a directed graph with $p$ incoming edges
and $q$ outcoming edges. Since these edges do not connect two nodes but a node with nothing, let us call these legs. Moreover, the incoming (resp. outcoming) legs are bijectively labeled on
$\lbrace1, \dots, p\rbrace$ (resp. $\lbrace1, \dots, q\rbrace$).
In $\mathcal{R}$, the horizontal composition $g \star h$ of two generators is simply the juxtaposition of their graphs with a natural renumbering of the legs of $h$. Besides, the vertical
composition $g \circ h$, defined only when $g$ has as many inputs $r$ than outputs in $h$, consists in connecting the $i$-th incoming leg of $g$ with the $i$-th outcoming leg of $h$, for
all $1 \leq i \leq r$.
Hence, you can deduce that pasting schemes of $\mathcal{R}$ are directed graphs labeled on $G$ with no directed cycle and such that incoming (resp. outcoming) legs are bijectively labeled on an initial segment of $\mathbb{N} \setminus \lbrace 0 \rbrace$.
For your second question, an algebra over a PROP is just a vector space (or a set, or any other adequate category) equipped with operations and cooperations.
For instance, consider the PROP $\mathcal{B}$ in the category of vector spaces generated by an element $\mu$ with two inputs and one output and an element $\Delta$ with one input and two
outputs, submitted to the following relations: $$\mu \circ (\mu \star I) = \mu \circ (I \star \mu),$$ $$(\Delta \star I) \circ \Delta = (I \star \Delta) \circ \Delta,$$ and $$\Delta \circ \
mu = ((\mu \star \mu) \cdot 1324) \circ (\Delta \star \Delta),$$ where $I$ is the unit and $\cdot$ is the action of the symmetric group on $\mathcal{B}$.
Now, algebras over $\mathcal{B}$ are (non unital) bialgebras, that are vectors spaces $V$ with two (co)associative operations $\mu : V \otimes V \to V$ and $\Delta : V \to V \otimes V$ such
that, for any $x, y \in V$, $$\Delta(x \; \mu \; y) = \Delta(x) . \Delta(y),$$ where $.$ in the right hand side is the tensor wise product using $\mu$.
You can find more detail in the following paper:
Markl, Martin. Operads and PROPs. Handbook of algebra. Vol. 5, 87--140, Handb. Algebr., 5, Elsevier/North-Holland, Amsterdam, 2008.
1. regard operations as trees, then module structure comes from substitution of output leaves to inputs. 2. yes – Bad English Feb 17 '13 at 17:33
Welcome to MO, Samuele! – Yannic Feb 18 '13 at 0:10
Thanks Yannic, I hope all is fine for you ! – Samuele Giraudo Feb 18 '13 at 18:54
{"url":"https://mathoverflow.net/questions/121838/props-representations-free-module-analog/122073","timestamp":"2014-04-17T12:39:52Z","content_type":null,"content_length":"55568","record_id":"<urn:uuid:bc0ed302-904c-492a-b931-42eebc417604>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Named Groups
Mathematica provides permutation representations for many important finite groups. Some of these groups are members of infinite families, parametrized by one or more integers; other groups are
uniquely distinguished by their special properties and are frequently named after their discoverers.
Mathematica provides information on the following infinite families of groups, and on some groups not belonging to parametrized families.
Named infinite families of groups.
Mathieu Groups
The following five Mathieu groups were the first five sporadic simple groups to be discovered, in the second half of the nineteenth century, and are multiply transitive groups, all being subgroups of
the largest one. Mathematica provides default permutation representations for them.
Other Sporadic Simple Groups
There are 26 sporadic simple groups (27 if the Tits group is included). Apart from the five Mathieu groups, Mathematica provides permutation representations for those of intermediate support length.
The largest ones are too big to be handled as permutation groups in practice, and it is more efficient to represent them as matrix groups. These are the 13 groups (including the Tits group) for which
representations on domains of less than 50000 points are known.
Intermediate sporadic simple groups.
Some sporadic groups are related to symmetries of the Leech lattice, a particular lattice in a Euclidean 24-dimensional space. These are sometimes known as the "second generation" of the sporadic
simple groups. | {"url":"http://reference.wolfram.com/mathematica/tutorial/NamedGroups.html","timestamp":"2014-04-16T19:31:27Z","content_type":null,"content_length":"62282","record_id":"<urn:uuid:f687e3f0-f597-4ee2-a013-de924ecbf8c7>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00246-ip-10-147-4-33.ec2.internal.warc.gz"} |
Harvey, IL Algebra Tutor
Find a Harvey, IL Algebra Tutor
...I spent a summer teaching at a high school math camp, I have been a coach for the Georgia state math team for a few years now, and I was a grader for the IU math department. I've tutored many
people I've encountered, including friends, roommates, and people who've sat next to me on trains, aside...
13 Subjects: including algebra 2, algebra 1, calculus, statistics
...I have prepared students in the math portion of the ACT test. I have worked with students who were failing math very late in the year and helped them attain a final grade of B or C. However,
my greatest thrill is seeing the change in a student’s confidence in themselves as they go from failure to success.
14 Subjects: including algebra 1, algebra 2, GED, geometry
...Since I have been a middle school wrestling coach I have seen true confusion in young kids, but I have been able to simplify the most abstract ideas, and show step by step how things work. I
know that my technique is not perfect but it gets better every day because I have the ability to see what wo...
11 Subjects: including algebra 2, precalculus, algebra 1, SAT math
...My approach has proved positive for my kids, and I look forward to sharing it with yours. I have completed my background check, which is viewable next to my profile online. I am available for
an interview prior to any tutoring begins.
18 Subjects: including algebra 2, algebra 1, geometry, ASVAB
Hi! I am a friendly and patient tutor who is dedicated to my teaching and takes pride in building a good teacher-student rapport with my students. I generally teach school-going students and
adult learners.
7 Subjects: including algebra 1, algebra 2, chemistry, prealgebra
{"url":"http://www.purplemath.com/harvey_il_algebra_tutors.php","timestamp":"2014-04-19T23:58:45Z","content_type":null,"content_length":"23802","record_id":"<urn:uuid:04093da4-806e-43d5-97d3-1513a7872b59>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Rigid Body Collisions in golf
Are you meaning to say simulate the golf ball as a sphere? The problem is that although the ball's flight is restricted to 2 dimensions, it could contact the hole outside of these 2 dimensions.
There's also a nasty complication as the ball clears the leading edge of the hole. In general it will be deflected towards the centre of the hole, and computing this will get into horrible
complications with gyroscopic effects and moments of inertia.
To have some hope, let's ignore that, as we can if the line is not too far off the hole's centre.
Let B be the radius of the ball, R the radius of the hole, H the horizontal displacement from the hole's centre to the path of the ball, V the velocity. Let X, Y and Z be the co-ordinates of the point on the ball that strikes the far edge of the hole, relative to the centre of the ball at the moment of impact. Let T be the time from leaving the near edge to striking the far edge, and L = √(R² - H²).
We have:
X² + Y² + Z² = B²
(X+H)² + (VT - L + Y)² = R²
Z = gT²/2 - B
(Quick check: X = Y = T = 0, Z = -B is a solution.)
We have 4 unknowns, X, Y, Z, T, but only 3 equations. We can eliminate any two of X, Y, Z (X and Z say) to leave an equation for the third as a function of T. This will be a quartic.
Here's the clever bit. In principle we can factorise the quartic to obtain (Y-a)(Y-b)(Y-c)(Y-d) = 0, where a, b, c, d are functions of T, possibly imaginary. At the moment of impact, a real root
appears. In fact, it will be a repeated root, e.g. a in:
(Y-a)(Y-a)(Y-c)(Y-d) = 0
Look at what happens if we differentiate the LHS wrt Y. It still has a factor Y-a, so the expression obtained by differentiating the quartic is also 0 at moment of impact. This will be a cubic. We
can combine the quartic and cubic (i.e. take the remainder of the quartic modulo the cubic) to obtain a quadratic in Y, then repeat the process with the cubic and quadratic to obtain Y (still as a
function of T). Finally eliminate Y to obtain T.
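If the quartic bookkeeping is more than you want, here is a purely numerical sketch (JavaScript; my own formulation, not taken from the posts above, and assuming the line actually crosses the hole, H < R). The idea: at time T the ball's centre has dropped to height B - gT²/2, so its cross-section at rim height is a circle of radius a(T) = √(B² - (B - gT²/2)²); the ball strikes the far edge at the first T where that circle becomes internally tangent to the rim circle, i.e. where d(T) + a(T) = R, with d(T) = √(H² + (VT - L)²) the horizontal distance from ball centre to hole centre:
// B: ball radius, R: hole radius, H: offset of the line from hole centre,
// V: horizontal speed, g: gravity (all SI units)
function impactTime(B, R, H, V, g) {
    var L = Math.sqrt(R * R - H * H);
    function f(T) {
        var zc = B - g * T * T / 2;                // height of ball centre
        if (zc < -B) return NaN;                   // fully below the rim: holed
        var a = Math.sqrt(B * B - zc * zc);        // cross-section at rim height
        var d = Math.sqrt(H * H + (V * T - L) * (V * T - L));
        return d + a - R;                          // zero at tangency with the rim
    }
    var dt = 1e-5;
    for (var T = dt; ; T += dt) {
        var v = f(T);
        if (isNaN(v)) return null;                 // clean drop, no far-edge contact
        if (v >= 0) return T;                      // first far-edge contact
    }
}

// A fast putt clips the far edge at roughly T = 0.03 s;
// at 1 m/s the same ball drops cleanly and null is returned.
console.log(impactTime(0.0214, 0.054, 0, 3, 9.81));
console.log(impactTime(0.0214, 0.054, 0, 1, 9.81));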
The details I leave as an exercise for the reader | {"url":"http://www.physicsforums.com/showthread.php?p=3908157","timestamp":"2014-04-17T09:48:33Z","content_type":null,"content_length":"65183","record_id":"<urn:uuid:7bc486da-13e8-4db0-b5ca-ed35dffce547>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
David wants to hang a mirror in his room but the mirror and frame must not have an area larger than 8 square feet. The mirror is 2 feet wide and 3 feet long. Which quadratic equation represents the
area of the mirror and frame combined? (The frame must have equal width of x on each side.)
A:2x^2+14x-2=0 B:3x^2+10x-8=0 C:4x^2+10x-2=0 D:x^2+7x-8=0
The answer is C because \[(2x+2)(2x+3)=8\rightarrow4x^2+10x+6=8\] \[4x^2+10x-2=0\]
{"url":"http://openstudy.com/updates/4fcb9ea5e4b0c6963ad51a4e","timestamp":"2014-04-19T13:08:14Z","content_type":null,"content_length":"30241","record_id":"<urn:uuid:7893636a-09e5-422c-9e74-5f180282b7bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Researchers propose a new system for quantum simulation
Posted: Sep 03, 2013
(Nanowerk News) Researchers from the universities in Mainz, Frankfurt, Hamburg and Ulm have proposed a new platform for quantum simulation. In a theoretical paper recently published in Physical
Review Letters ("Emulating Solid-State Physics with a Hybrid System of Ultracold Ions and Atoms"), they show that a combined system of ultracold trapped ions and fermionic atoms could be used to
emulate solid state physics. This system may outperform possibilities of existing platforms as a number of phenomena found in solid state systems are naturally included, such as the fermionic
statistics of the electrons and the electron-sound wave interactions.
Quantum simulation was first proposed by Richard Feynman in 1982. He realized that a calculation of quantum systems is well beyond the ability of any existing computer technology. This is because
quantum mechanics features superpositions and entanglement; its dynamics follows many pathways simultaneously. Even the most powerful classical computers lack the computing power to keep track of all
those possible outcomes even for small quantum systems. Feynman proposed using an easily accessible and easily controllable laboratory quantum system to mimic the quantum system of interest. This
idea is reminiscent of using a crash test dummy to simulate the dynamics of a collision in the classical world.
In their recent paper, the authors calculate that an ion crystal and a degenerate Fermi gas mimic a solid state system built up of atomic cores and electrons, making it a quantum simulator of such a
system. The researchers show that a phase transition from a conducting to an insulating state can occur in their solid state look-alike. This unexpected many-body quantum effect is known as the
Peierls transition and relies on lattice phonons and the fermionic statistics of the atoms. The authors expect that the system could be further expanded to study for instance phonon-mediated
atom-atom interactions, thus simulating the phonon-mediated electron-electron interactions responsible for superconductivity.
{"url":"http://www.nanowerk.com/news2/newsid=32051.php","timestamp":"2014-04-17T07:40:58Z","content_type":null,"content_length":"35167","record_id":"<urn:uuid:6405fc48-4282-4d4c-a874-07028396f54>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Benbrook, TX Precalculus Tutor
Find a Benbrook, TX Precalculus Tutor
I have been tutoring or teaching for almost 18 years. Previously I have tutored or taught PSAT, SAT, GRE, English, Spanish, and Latin, all math through introductory calculus, an introduction to
Thomistic Metaphysics, and European history. I have a BA in Spanish Literature and a BS in Mathematics from the University of Texas.
40 Subjects: including precalculus, English, calculus, Spanish
...You will also learn different ways to solve triangles. Ratio and proportion are very important concepts to learn so that you can prove congruency. Other things to master are perimeter, area,
volume and surface area.
7 Subjects: including precalculus, calculus, geometry, algebra 1
I have been a math teacher in public schools since August 2005. I have taught algebra I, algebra II, geometry, and 6th grade math. I have had student success on state-administered tests, including
over 80% of my students passing the algebra I end-of-course exam and over 96% passing the geometry en...
14 Subjects: including precalculus, calculus, geometry, statistics
...I absolutely love teaching math to students of every level, but I prefer middle school and high school. I have three years of teaching/tutoring experience in a one-on-one setting. While a
student at Trinity, I tutored freelance between 7-10 students regularly.
14 Subjects: including precalculus, chemistry, geometry, Microsoft Word
My name is Jose and I'm from Fort Worth. I am a sophomore at Texas Tech University in Lubbock. I am a Cell and Molecular Biology major.
7 Subjects: including precalculus, geometry, biology, algebra 1 | {"url":"http://www.purplemath.com/benbrook_tx_precalculus_tutors.php","timestamp":"2014-04-16T05:00:01Z","content_type":null,"content_length":"24161","record_id":"<urn:uuid:317d9ae3-e60c-4589-bbcb-9ca9c7315aac>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Practical Guide To Numbers in JavaScript
Dealing with numbers, strings and JavaScript can be frustrating for a beginner. This is a down-and-dirty explanation of converting strings to numbers, detecting if a string is a number, and handy
functions for manipulating numbers.
How many times have you gotten a number from a FORM field on your web page, tried to add it to a number only to get the wrong value back!? One of the most basic tasks of any program is converting
between different data types. JavaScript is pretty flexible, but there are still some rules.
In most programming languages, a number is a primitive data type, meaning it's just a value stored in memory. In JavaScript, a number is both a primitive data type and a whole class of data and
functions. Whether or not a primitive number or object-oriented number is used depends on how you are using it. A few quick examples are in order.
Creating Number Variables in JavaScript
var num1 = new Number("4");
var num2 = 4;
There are two basic ways of declaring number variables in JavaScript: By calling the Number class constructor directly using the new command, and by simply assigning a number to a variable. The most
efficient method is by simply assigning a number to a variable, as this creates a primitive value number. The new Number() method creates an instance of the Number class and adds functions and properties to the number variable. The browser will try converting any data passed to the Number() function into a number, and if you're not careful you'll get a NaN value, which means the value is
Not A Number.
Once you have a number variable declared, you'll want to do some math. JavaScript gives us the basic mathematical operators, like +, -, * and / for add, subtract, multiply and divide. Below is a list
of some of the mathematical operators in JavaScript:
Common JavaScript Mathematical Operators
Operator Name Description
+ Addition The sum of two numbers.
++ Increment Increase the number by 1: num1++;
+= Add-to Add a number to the current value: num += 2;
- Subtraction The difference between two numbers.
-- Decrement Decrease the number by 1: num1--;
-= Subtract-from Subtract from the current value: num1 -= 2;
* Multiplication The multiplication of two numbers
/ Division Division of two numbers
% Modulus Returns the remainder of the division of two numbers
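A few of these in action:
Basic mathematical operators at work
var num = 10;
num++;                    // num is now 11
num += 4;                 // num is now 15
var quotient = num / 4;   // 3.75
var remainder = num % 4;  // 15 divided by 4 is 3, remainder 3
alert(quotient);          // Shows 3.75
alert(remainder);         // Shows 3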
All the operators work as expected except for the addition operator. In JavaScript, the + operator returns the sum of two numbers, or joins two strings together. The following code example
illustrates when the browser knows whether to join a string or perform a mathematical operation:
Using + to join two strings, or add two numbers
var sum1 = 5 + 2;
var sum2 = 5 + "2";
alert(sum1); // Shows the number 7
alert(sum2); // Shows the string "52"
When you add a string to a number, the browser automatically converts the number to a string, and then joins the two strings. The operation 5 + "2" is interpretted by the browser as "5" + "2",
joining the two strings together and converting the end result into a string. This is an easy way to convert a number to a string, but is little help for converting a string to a number.
In most programming languages, you have two basic types of numbers: Integers, which represent whole numbers like 1, 8 or -3; and floating point numbers like 33.8 and -6.5. JavaScript has a single number type: every number is a double-precision (64-bit IEEE 754) floating point value, good for roughly 15 to 17 significant decimal digits.
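Because those values are stored in binary, some decimal fractions can't be represented exactly, which produces a classic surprise:
Floating point precision quirks
var sum = 0.1 + 0.2;
alert(sum);        // Shows 0.30000000000000004
alert(sum == 0.3); // Shows false
Rounding the result for display (with the toFixed method, covered later in this article) is the usual way to hide this quirk.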
While the addition operator adds two numbers or joins two strings, the subtraction operator only subtracts two numbers. The browser also makes your life easier by automatically converting data types:
var diff1 = 5 - 2;
var diff2 = "5" - 2;
alert(diff1) // Shows the number 3
alert(diff2) // Shows the number 3
When you try subtracting a number from a string, the browser automatically converts the string to a number, and then performs the mathematical operation.We've seen how mathematical operators can
convert between strings and numbers, but you aren't limited to those methods. There are many more which can be used for different reasons.
Converting Strings to Numbers
There are four main ways to convert a string to a number: The parseFloat, parseInt and Number functions, and subtracting a number from a string. First, let's revisit the Number() function.
Earlier we learned the Number() function takes a value and converts it to a number, and is also the class constructor used for all number variables in JavaScript. It seems like the obvious choice,
but let's take a closer look.
var num1 = Number("2.25");
var num2 = Number("Price: $2.25");
var num3 = Number("three");
var num4 = Number("200.8 dollars");
alert(num1); // Shows the number 2.25
alert(num2); // Shows NaN (Not A Number)
alert(num3); // Shows NaN
alert(num4); // Shows NaN
Right away we can see a problem. The Number() function successfully converts a string to a number only as long as the string only contains numeric characters (those being 0 through 9, the decimal
point or period, and the negative sign or minus). If there are any other characters, the Number function returns a NaN value, which literally means "Not A Number." If we have full control of what
gets put in a string variable, the Number() function may be the right choice, but if you are taking input from the user, we may get non-numeric characters. We need a way to remove non-numeric
characters from a string, then convert them to a number.
Cleaning and converting strings to numbers using the parseFloat() and parseInt() functions
JavaScript gives us two built in functions that help remove non-numeric characters from a string. The parseFloat function returns a floating point number, or decimal number, and the parseInt function
returns an integer number. Later on we'll see why special care must be taken with the parseInt function.
Using parseFloat
var num1 = parseFloat("2.25");
var num2 = parseFloat("Price: $2.25");
var num3 = parseFloat("three");
var num4 = parseFloat("200.8 million dollars");
alert(num1); // Shows the number 2.25
alert(num2); // Shows NaN (Not A Number)
alert(num3); // Shows NaN (Not A Number)
alert(num4); // Shows the number 200.8
We run into a similar problem with parseFloat that we have with the Number function. If the string contains non-numeric characters before the numeric characters, a NaN value is returned. It does,
however, remove non-numeric characters after the numeric characters, and returns a floating point number. Likewise, if only numeric characters are in the string, a floating point number is returned.
Lastly, if no numeric characters exist in the string, a NaN value is returned.
Using parseInt
var num1 = parseInt("2.25");
var num2 = parseInt("Price: $2.25");
var num3 = parseInt("three");
var num4 = parseInt("200.8 million dollars");
var num5 = parseInt("0x10");
alert(num1); // Shows the number 2
alert(num2); // Shows NaN (Not A Number)
alert(num3); // Shows NaN (Not A Number)
alert(num4); // Shows the number 200
alert(num5); // Shows the number 16
The parseInt function works exactly the same as the parseFloat function, with two notable differences: It returns a number rounded down to the next integer, and it doesn't always return NaN when you
give it non-numeric characters. In fact, it appears quite confusing. Giving parseInt the string "0x10" returns the number 16 ... um ... what? Sixteen? Giving it the characters zero-x-one-zero is
equal to the number 16? Yes. The parseInt function takes up to two parameters: The string to convert to a number, and an optional second parameter specifying the number system the string should be
interpretted as, called the radix. When using parseInt, always pass 10 as the second parameter, meaning the parseInt function should interpret the string as a base 10 number. A base 10 number uses
the numerals 0 through 9, and every tenth value is put at a new decimal spot. We don't have a number to represent the number ten, instead we use 10: We have one group of ten, and zero groups of one.
When the string passed to the parseInt function begins with "0x" and no second parameter is given, the parseInt function treats the string like a base 16, or hexadecimal number. The hexadecimal
number 10 is equal to the decimal number 16. In hex, you have one group of 16, and zero groups of 1. The equivalent number in the decimal or base 10 system is 16: one group of ten, and 6 groups of
The proper way to use parseInt
var num1 = parseInt("2.25", 10);
var num2 = parseInt("0x10", 10);
alert(num1); // Shows the number 2
alert(num2); // Shows the number 0
In the example above, the parseInt function is told that the string should be treated as a base 10 number, so it parses the leading "0", stops at the "x", and returns 0 instead of 16. Since humans use the decimal system when counting, you should always pass 10 as the second parameter to the parseInt function. If 10 is not passed as the second parameter and a user accidentally types in 0x15 instead of 0.15, you would get 21 instead
of the number zero, and your calculations will be incorrect.
Number, parseFloat and parseInt Summary
Function Parameters Return Value
Number String[required] Number, or NaN if the string contains any non-numeric characters.
parseFloat String[required] Number, or NaN if the string does not begin with a numeric character. Characters after the number are ignored.
parseInt String[required], Number[optional] The first parameter is a string; the optional second parameter is the radix, the number system the string should be interpreted as. Use 10 unless you want a different number system. The integer part of the number is returned, or NaN if the string does not begin with a numeric character.
toNumber(): Clean strings and convert them to numbers
The parseFloat and parseInt functions still don't allow us to remove all non numeric characters. Unfortunately, JavaScript doesn't have a function for this, so we'll build one. You can probably
search the Internet for something similar, and many times it is named toNumeric or toNumber.
Function definition for toNumber
/**
 * toNumber Converts a string to a number
 * @param mixed (required) String (or number) to convert to a number
 * @param bool (optional) Make it an integer and drop the decimal
 * @param bool (optional) Round number up or down to nearest int?
 * @return float, int or NaN NaN if the value is not a number
 */
function toNumber(str, isInteger, roundNum) {
    var num;
    var strType = typeof(str);
    if (strType == "string") {
        // Strip non-numeric chars and convert to number
        num = Number(str.replace(/[^0-9-.]/g, ""));
    } else if (strType == "number") {
        // Already a number, use it as-is
        num = str;
    } else {
        // Return NaN for anything else
        return NaN;
    }
    if (isNaN(num)) {
        return NaN;
    } else if (roundNum) {
        // Check roundNum first so rounding wins over flooring
        return Math.round(num);
    } else if (isInteger) {
        return Math.floor(num);
    } else {
        return num;
    }
}
The toNumber function removes non-numeric characters from a string, and also allows you to specify whether or not the number is a floating point number or integer, and if it should be rounded to the
nearest integer instead of dropping the decimal altogether. If the string passed to the toNumber function contains no numeric characters, a NaN value is returned. Let's see how we can use this
How the toNumber function reacts to various input
var num1 = toNumber("2.25");
var num2 = toNumber("Price: $2.25");
var num3 = toNumber("200.8 million dollars");
var num4 = toNumber("103.8", true);
var num5 = toNumber("103.8", true, true);
var num6 = toNumber("no numbers here");
alert(num1); // Shows the number 2.25
alert(num2); // Shows the number 2.25
alert(num3); // Shows the number 200.8
alert(num4); // Shows the number 103
alert(num5); // Shows the number 104
alert(num6); // Shows NaN
As mentioned earlier, users can type whatever they want into form fields, so you must clean the data before you can do any math with it. The toNumber function does several things:
1. If a number variable is given, the number is returned as a floating point, integer or rounded floating point number.
2. Removes all non-numeric characters from the string, both before and after the number characters.
3. You can specify if the number is an integer, and also if it should be rounded up or down to the nearest integer
4. If the string variable given does not contain numeric characters, a NaN value is returned. You can outright test for a NaN value if you want to tell the user to enter a number.
Let's create a sample script that shows how to use this function.
Using toNumber to check for user errors
// Ask the user to enter a number
var userNum = prompt("Type a number:");
// Convert what the user typed into a number
var num = toNumber(userNum);
if (isNaN(num)) {
// Error: User typed no numeric characters
alert("Only numeric characters are allowed");
} else {
// User typed at least one number, now show the sum
num = num + 8;
alert("The sum is " + num);
This shows the basic process for taking data from the user, cleaning it, and detecting possible errors.
1. The variable userNum is gotten from the user by way of the prompt function. The prompt function returns a string of what the user typed into the prompt pop up box.
2. We pass the userNum variable to the toNumber function.
3. The toNumber function returns a value, and is assigned to the num variable.
4. If the num variable is NaN, then the user didn't type in any numeric characters. Show an error message.
5. Otherwise, the num variable is a number, and we can show the sum.
Drawbacks to the toNumber() function
At first this function seems like a great idea — remove all non-numeric characters from the string, and then convert it to a number. What if the user accidentally types "5/8" when they meant to type
in "5.8"? The toNumber function would return the number 58, which is vastly larger than the 5.8 the user intended on entering. It's up to you as the programmer to decide if this fault tolerance is
acceptable in your script. While posting on WebDeveloper forums, I got involved in a discussion about converting strings to numbers. In order to reduce the impact of user errors on your script, the
following basic algorithm is recommended when converting strings to numbers:
1. Convert the string to a number first. Use the Number() function.
2. Test the converted number to ensure it is actually a number using the isNaN() function.
3. If the number is Not A Number, alert the user to his or her error and stop processing.
4. If the number is a number, then continue processing.
Using isNaN() to detect non-numbers
The function definition for toNumber introduced us to a new number-related function: isNaN. The isNaN function stands for "IS Not A Number" and is a native function to JavaScript. It takes a string
or number as a parameter and will return true if the string cannot be converted to a number, or if the number passed is a NaN value. In addition, if you were to use the Number, parseFloat or parseInt functions to convert a string to a number, and a NaN value is returned by any of those functions, isNaN will return true.
The isNaN function is complementary to parseFloat and parseInt, and is a direct complement to Number. When the Number function returns an actual number, the isNaN function returns false. When the
Number function returns a NaN value, the isNaN function returns true. Be aware that there are times when a string passed to isNaN will return true, meaning the string is not a number, when that same
string passed to either parseFloat or parseInt will return a number. Always convert strings to numbers first, then test using isNaN.
How isNaN reacts to various input
var num1 = isNaN("2.25");
var num2 = isNaN("Price: $2.25");
var num3 = isNaN("three");
var num4 = isNaN("200.8 million dollars");
var num5 = parseFloat("no numbers"); // Returns NaN value
alert(num1); // Shows false
alert(num2); // Shows true
alert(num3); // Shows true
alert(num4); // Shows true
alert(isNaN(num5)); // Shows true
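Here's a concrete case of the discrepancy described above:
When isNaN and parseInt disagree
var str = "12px";
alert(isNaN(str));               // Shows true: "12px" is not a number
alert(parseInt(str, 10));        // Shows 12: parseInt stops at the "p"
alert(isNaN(parseInt(str, 10))); // Shows false: 12 is a number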
If you don't want to use the toNumber function above, use the Number function to convert a string to a number, then use the isNaN function to check for a NaN value.
Using isNaN to check for user errors
var userNum = prompt("Enter a number:");
var num = Number(userNum);
if (isNaN(num)) {
alert("Only enter numeric characters.");
} else {
num = num + 8;
alert("The sum is " + num);
We've learned several ways to convert strings to numbers and account for user errors. We know that parseFloat gives us a floating point number, and parseInt gives us an integer, but parseInt drops
the decimal all together. The number 8.9 becomes 8, when it should be 9 when rounded to an integer. We need a way to create a rounded integer, and the Math object provides us with a round function.
Rounding numbers properly using Math.round
Any floating point number whose decimal is less than .5 must be rounded down. If the decimal is .5 or greater, the number should be rounded up. This was something we learned in grade school, but the
parseInt function failed that class. It just drops the decimal place all together. Let's use Math.round instead.
var num1 = Math.round(3.75);
var num2 = Math.round(3.33);
alert(num1); // Shows the number 4
alert(num2); // Shows the number 3
Since the decimal in 3.75 is greater-than or equal-to .5, the Math.round function returns the integer number 4. The number 3.33 has a decimal less than .5, so the Math.round function returns the
integer number 3. Math.round only returns integers, however. If you want to round a number to a certain decimal place, we've got to use a native function to all Number variables in JavaScript, which
we will use in our next example.
A simple form to calculate the tax
We've explored several methods of converting strings to numbers, and dealing with invalid data. Now let's create a practical example using the methods outlined above, and introduce one more method
for formatting the display of numbers. First, let's create our HTML form:
Markup for our tax form
<form method="get" action="" id="frmTax">
Price: <input type="text" name="price" value="">
Tax Rate: <input type="text" name="taxRate" value="0.06">
<input type="button" value="Calculate Tax"
It's a pretty simple form. Before some of you scream, "It's not accessible! Burn in Hell!" know that this is just a simple example. Web accessibility is beyond the scope of this tutorial. We've got
two text fields: One for the price, and another for the tax rate. Lastly, we have a button that calls the calcTax function onclick, and passes a Document Object Model node reference to the FORM tag.
Example 1: Function definition for calcTax, using toNumber()
function calcTax(form) {
var price = toNumber(form.elements["price"].value);
var taxRate = toNumber(form.elements["taxRate"].value);
var total = 0;
if (isNaN(price)) {
alert("Only enter numeric characters for the price");
} else if (isNaN(taxRate)) {
alert("Only enter numeric characters for the tax rate");
} else {
total = price + (price * taxRate);
alert("The total price is: $" + total.toFixed(2));
Example 2: Function definition for calcTax, using native functions
function calcTax(form) {
var price = Number(form.elements["price"].value);
var taxRate = Number(form.elements["taxRate"].value);
var total = 0;
if (isNaN(price)) {
alert("Only enter numeric characters for the price");
} else if (isNaN(taxRate)) {
alert("Only enter numeric characters for the tax rate");
} else {
total = price + (price * taxRate);
alert("The total price is: $" + total.toFixed(2));
Now is a good time to note that the values of form fields are always strings. A user might type a number into a text field, but JavaScript still sees it as a string, which is where the toNumber,
isNaN, Number, parseFloat and parseInt functions come into play. A closer look at both examples shows few differences in how they are written. The main difference is the level of tolerance for user
generated errors. In Example 1, the toNumber function is used. This function removes all non-numeric characters from the string. If a user accidentally types a non numeric character before any
numeric characters in one of the text boxes, the toNumber function is able to recover from that and still return a number.
In Example 2, if the user enters even one non-numeric character before a numeric character in one of the text fields, the calcTax function will alert the user of an error. In each example, the line
of code with the alert() function call contains another golden nugget for working with numbers: the toFixed method of all Number variables.
Using the toFixed method to format numbers
This function is available for any Number variable in JavaScript, and allows you to specify how many decimal places you want to show. This function returns a string, and so most times is only useful
for displaying output to a user. It takes one parameter, an integer number of how many decimal places should be displayed.
var num = 8.75249;
alert(num.toFixed(0)); // Shows the string 9
alert(num.toFixed(1)); // Shows the string 8.8
alert(num.toFixed(2)); // Shows the string 8.75
alert(num.toFixed(3)); // Shows the string 8.752
alert(num.toFixed(4)); // Shows the string 8.7525
The toFixed function works similarly to the Math.round function. In the first example, passing zero to toFixed returns 9, which is the same number that Math.round would return. The difference here is anything the toFixed function returns is a string, not a number.
In the second example, the number 1 is passed, so only one decimal place is shown. The digit after the 7 is 5, so the 7 is rounded up to 8, giving a final string of "8.8". There may be a time, however, when you need to round a number to a certain number of decimal places and still have it be a number.
The round() function
There is no native round function in JavaScript that takes a number, rounds it to a certain number of decimal places, then returns a number. We'll call this function round().
Function definition for round()
/**
 * round Rounds a number to X decimal places
 * @param number (req) Number to round
 * @param int (opt) Number of decimal places. Optional.
 * @return mixed Null if not a number, or the rounded number
 */
function round(num, decimals) {
    if (typeof(num) != "number") {
        return null;
    }
    if (typeof(decimals) != "number") {
        decimals = -1; // No decimal count given
    }
    if (decimals > 0) {
        var multiplier = Math.pow(10, decimals);
        return Math.round(num * multiplier) / multiplier;
    } else if (decimals == 0) {
        return Math.round(num);
    } else {
        return num;
    }
}
The round() function takes up to two parameters. The first is the number that should be rounded. The optional second parameter is the number of decimal places that should be rounded to. If this
second parameter is omitted, the full floating point number is returned. If zero is passed as the second parameter, then the number is rounded up or down properly to an integer. If the second
parameter is 1 or more, then the number is rounded to that many decimal places. The function returns an actual number, but if you give the round() function invalid data, null is returned.
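A few quick calls show how round() behaves:
round(8.75249, 2);   // 8.75 (a number, not a string)
round(8.75249, 0);   // 9
round(8.75249);      // 8.75249 (no rounding when decimals is omitted)
round("8.75249", 2); // null (invalid data)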
Problems with round()
Normally you should only round a number to a decimal place when outputting the number to the user, and the function you'd want to use for that is the toFixed method of the Number class. If a calculation algorithm explicitly calls for a rounded number, then using the round() function above is acceptable. If your algorithm does not explicitly call for a rounded number, then only round the number when outputting it to the user.
Preventing user number entry errors with textFieldToNumber
So far we've discussed converting strings to numbers, and accounting for user errors, but wouldn't it be nice to have a function that prevents user errors from ever occurring? Let's create one more function that cleans non-numeric characters from a string, and then apply it to a form text field.
function textFieldToNumber(el) {
    // Strip out anything that can't be part of a number, then put
    // the cleaned-up string back into the text field.
    el.value = el.value.replace(/[^0-9.\-]/g, "");
    el = null; // release the DOM reference
}
It's a pretty simple function. It strips out non-numeric characters with a regular expression and puts the stripped-down string back into the text field. Putting a number into a form field automatically converts it to a string. As you recall from earlier, form field values only ever contain strings. Next, we just need to attach the textFieldToNumber function to a form field.
<form action="" method="get">
    <input type="text" onblur="textFieldToNumber(this);" name="age">
</form>
When the user presses the tab key or clicks off the form field, the onblur event fires and executes the textFieldToNumber function. A Document Object Model node reference to the form field is passed to the function using the keyword this. The function removes any non-numeric characters from the form field value, and then puts the pure number back in the form field. Since JavaScript can be disabled, make sure any server-side script that receives data from your form double-checks the input.
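If you prefer to keep event handlers out of your markup, the same function can be attached from a script instead; here is a minimal sketch for modern browsers, assuming the form field shown above is already on the page:
// Attach the handler without an inline onblur attribute.
var ageField = document.querySelector('input[name="age"]');
ageField.addEventListener("blur", function () {
    textFieldToNumber(this);
});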
A Quick Recap
We've gone over a lot so far. Numbers in JavaScript are both a primitive and a complex data type. The Number class has more functions and data members than this tutorial covers. Mathematical operators in
JavaScript don't always do math. The addition operator (+) also joins two strings, and if you try adding a number to a string, the browser automatically converts the number to a string, and then
joins the two strings.
We also learned the four main ways of converting strings to numbers: by subtracting a number from a string, using the Number class constructor, using the parseFloat function or using the parseInt
function. The parseInt function needs the radix, or number base, passed to it each time. If you skip the radix, you might pass it what you think is a base 10 number with some funky input and get back a base-16, or hexadecimal, number.
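A few calls show why the radix matters:
parseInt("0x10");     // 16 - no radix, so the "0x" prefix triggers hexadecimal
parseInt("0x10", 10); // 0  - with a radix of 10, parsing stops at the "x"
parseInt("10", 16);   // 16 - "10" read as a base-16 number
parseInt("08", 10);   // 8  - always safe; some older engines treat "08" as octal otherwise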
The toNumber function was written to convert strings to numbers in a more flexible manner than parseFloat and parseInt. The native isNaN function can detect if a variable doesn't contain a number
value, and is used for detecting user errors in the data. Once valid numbers have been tested for, we touched on how to round numbers using Math.round and the toFixed method of the Number class. We
also created a round() function that returns a number rounded to a certain decimal place, rather than using the toFixed method which returns a string. Lastly, the custom function textFieldToNumber()
can be attached to a form field and prevents user errors when entering numbers.
There is much more about numbers and JavaScript that this tutorial hasn't covered; however, these are the basics and all you should need 90 percent of the time. Other useful resources are listed below.
More Information on Numbers in JavaScript
• Math Object Reference at W3Schools.com
• Number Class Reference at W3Schools.com
• parseInt Function Reference at W3Schools.com
• parseFloat Function Reference at W3Schools.com
• ECMA Script Specification (Downloadable PDF). ECMAScript is sort of like "Standardized JavaScript."
□ parseInt(): Section 15.1.2.2
□ parseFloat(): Section 15.1.2.3
□ isNaN(): Section 15.1.2.4
□ Number() called as a function: Section 15.7.1
□ Number() as class constructor: Section 15.7.2
□ Math Object: Section 15.8
1 comment:
Raymund said...
Great tutorial, it's a big help for my project, thanx
Math in the Movies: District 9
In keeping with the theme of discussing movies before I see them, I'd like to say a few words about the upcoming film District 9. You can see the trailer below, if you haven't heard of it (although if you live in LA it's difficult to plead ignorance, since the viral marketing has been on full blast all summer).
It’s natural to ask what a film about aliens living in South African refugee camps has to do with mathematics. Aside from the obvious (no doubt any intergalactic species must have a good working
knowledge of mathematics), I'd like to point you to an aspect of the marketing campaign for the film that's featured on the official website. If you look in the lower right, you will see a link to a site that immediately aroused my interest:
Maths From Outer Space
The purpose of this website is best summarized in its own words:
Maths From Outer Space wants to redefine what it means to be human! Our scientists have found a way to enhance the spatial and logic capabilities of the human body… In other words, we’ve found a
way to make you smarter! Would you like to see if you are qualified to take part in this exciting endeavor?
From here, you can click through to take a math test. This is remarkable for a few reasons. First of all, the fact that a film like this would even incorporate a math test as part of its marketing
strategy is pretty interesting. But not only that, by the end of the quiz the difficulty level of the questions went far beyond my expectations. This is a summer movie about aliens, after all, and
yet their math quiz ends with questions like this:
Nothing in the quiz goes beyond the level of calculus, but even this level of sophistication is fairly surprising. After all, not even films with subject matter that focuses on mathematics give math
quizzes, let alone math quizzes involving calculus.
Unfortunately, it’s not perfect. First of all, there are some mistakes in the quiz – what is one to do when none of the options given is correct?
The “correct” answer is the first one. Perhaps if aliens had mastered the concept of the derivative, they wouldn’t have gotten trapped in the slums of Johannesburg.
Even worse is the fact that even if you answer all the questions correctly, there is no payoff. When you click to learn more about the “enrollment details,” you’re sent to a bogus link.
How disappointing for the student who dreams of one day applying his math skills to uncover the secrets of advanced alien technologies.
Overall, though, I must give kudos to District 9 for its proactive stance on the integration of mathematics and film (then again, coming from a distributor called QED International, is it really a
surprise?). If only more summer blockbusters would follow this lead. Perhaps other studios will take note, and next year will feature an even more seamless integration between pop entertainment and
post-secondary school mathematics.
The future of summer entertainment? One can only hope so.
Good things about not being in LA all summer: I had never heard of this movie before it came out this week.
This isn't related to your "District 9" post, per se, but commenting seemed the easiest way to contact you. I am a mathematician currently co-editing a book on math and popular culture, and I was
wondering if you might be interested in submitting an essay to our collection? You can find our call for papers at http://www.plu.edu/~sklarjk/cfp.doc. For more info, please email me at
sklarjk@plu.edu. (If anyone reading this comment wants to contribute, feel free to write me as well!)
Jessica Sklar
Associate Professor of Mathematics
Pacific Lutheran University
Archive for the 'Science' category
Popper vs Kuhn
on 2014-01-12
in Argumentation and Science
“For Popper scientific communities are politically virtuous because they permit unfettered criticism. A scientific community is, by (Popper’s) definition, an open society. Kuhn had to be
shouted down because he seemed to deny this claim.”
Page 920 of B. Larvor [2000]: Review of I. Lakatos and P. Feyerabend: “For and Against Method“. British Journal for the Philosophy of Science, 51: 919-922.
Does evo-psych explain anything at all?
on 2013-09-28
in Culture, Religion and Science
Evolutionary psychology and sociology have long struck me as arrant nonsense, because they ignore human free will and self-reflection, and thus our ability to rise above our own nature. There are
no pianos on the savanna, as I have remarked before, so an evolutionary psychologist will have a major challenge to explain a desire to play the piano in evolutionary terms.
Christopher Booker, in a review of E. O. Wilson’s new book, The Social Conquest of Earth, takes a similar view of the flaws of evolutionary theory when applied to human behaviours:
“It is our ability to escape from the rigid frame of instinct which explains almost everything that distinguishes human beings from any other form of life. But one looks in vain to Wilson to
recognise this, let alone to explain how it could have come about in terms of Darwinian evolutionary theory. No attribute of Darwinians is more marked than their inability to grasp just how much
their theory cannot account for, from all those evolutionary leaps which require a host of interdependent things to develop more or less simultaneously to be workable, that peculiarity of human
consciousness which has allowed us to step outside the instinctive frame and to ‘conquer the Earth’ far more comprehensively than ants.
But it is this which also gives us our disintegrative propensity, individually and collectively, to behave egocentrically, presenting us with all those problems which distinguish us from all the
other species which still live in unthinking obedience to the dictates of nature. All these follow from that split from our selfless ‘higher nature’, with which over the millennia our customs,
laws, religion and artistic creativity have tried their best to re-integrate us.
Nothing is more comical about Darwinians than the contortions they get into in trying to explain those ‘altruistic’ aspects of human nature which might seem to contradict their belief that the
evolutionary drive is always essentially self-centred (seen at its most extreme in Dawkins’s ‘selfish gene’ theory). Wilson’s thesis finally crumbles when he comes up with absurdly reductionist
explanations for the emergence of the creative arts and religion. Forget Bach’s B Minor Mass or the deeper insights of the Hindu scriptures — as a lapsed Southern Baptist, he caricatures the
religious instinct of mankind as little more than the stunted form of faith he escaped from.
His attempt to unravel what makes human nature unique is entirely a product of that limited ‘left-brain thinking’ which leads to cognitive dissonance.
Unable to think outside the Darwinian box, his account lacks any real warmth or wider understanding. Coming from ‘the most celebrated heir to Darwin’, his book may have won wide attention and
praise. But all it really demonstrates is that the real problem with Darwinians is their inability to see just how much their beguilingly simple theory simply cannot explain.”
Influential Books
on 2013-08-12
in Africa, Books, Literature, Politics, Religion and Science
This is a list of non-fiction books which have greatly influenced me – making me see the world differently or act in it differently. They are listed chronologically according to when I first
encountered them.
• 2009 – J. Scott Turner [2007]: The Tinkerer’s Accomplice: How Design Emerges from Life Itself. (Harvard UP) (Mentioned here.)
• 2008 – Pierre Delattre [1993]: Episodes. (St. Paul, MN, USA: Graywolf Press)
• 2006 – Mark Evan Bonds [2006]: Music as Thought: Listening to the Symphony in the Age of Beethoven. (Princeton UP)
• 2006 – Kyle Gann [2006]: Music Downtown: Writings from the Village Voice. (UCal Press)
• 2001 – George Leonard [2000]: The Way of Aikido: Life Lessons from an American Sensei.
• 2000 – Stephen E. Toulmin [1990]: Cosmopolis: The Hidden Agenda of Modernity. (University of Chicago Press)
• 1999 – Michel de Montaigne [1580-1595]: Essays.
• 1997 – James Pritchett [1993]: The Music of John Cage. (Cambridge UP, UK)
• 1996 – George Fowler [1995]: Dance of a Fallen Monk: A Journey to Spiritual Enlightenment. (New York: Doubleday)
• 1995 – Chungliang Al Huang and Jerry Lynch [1992]: Thinking Body, Dancing Mind. (New York: Bantam Books)
• 1995 – Jon Kabat-Zinn [1994]: Wherever You Go, There You Are.
• 1995 – Charlotte Joko Beck [1993]: Nothing Special: Living Zen.
• 1993 - George Leonard [1992]: Mastery: The Keys to Success and Long-Term Fulfillment.
• 1990 – Trevor Leggett [1987]: Zen and the Ways. (Tuttle)
• 1989 – Grant McCracken [1988]: Culture and Consumption.
• 1989 – Teresa Toranska [1988]: Them: Stalin’s Polish Puppets. Translated by Agnieszka Kolakowska.(HarperCollins) (Mentioned here.)
• 1988 – Henry David Thoreau [1865]: Cape Cod.
• 1988 – Rupert Sheldrake [1988]: The Presence of the Past: Morphic Resonance and the Habits of Nature.
• 1988 - Dan Rose [1987]: Black American Street Life: South Philadelphia, 1969-1971. (U Penn Press)
• 1987 – Jay Neugeboren [1968]: Reflections at Thirty.
• 1982 – John Miller Chernoff [1979]: African Rhythm and African Sensibility: Aesthetics and Social Action in African Musical Idioms. (University of Chicago Press)
• 1981 – Walter Rodney [1972]: How Europe Underdeveloped Africa. (London: Bogle-L’Overture Publications)
• 1980 – Andre Gunder Frank [1966]: The Development of Underdevelopment. (Monthly Review Press)
• 1980 – Paul Feyerabend [1975]: Against Method: Outline of an Anarchistic Theory of Knowledge.
• 1979 – Aldous Huxley [1945]: The Perennial Philosophy.
• 1978 – Christmas Humphreys [1949 ]: Zen Buddhism.
• 1977 – Raymond Smullyan [1977]: The Tao is Silent.
• 1976 – Bertrand Russell [1951-1969]: The Autobiography. (London: George Allen & Unwin)
• 1975 – Jean-Francois Revel [1972]: Without Marx or Jesus: The New American Revolution Has Begun.
• 1974 – Charles Reich [1970]: The Greening of America.
• 1973 – Selvarajan Yesudian and Elisabeth Haich [1953]: Yoga and Health. (NY: Harper)
String theorists in knots
on 2013-06-23
in Mathematics, Religion and Science
Last week’s Observer carried a debate over the status of string theory by a theoretical physicist, Michael Duff, and a science journalist, James Baggott. Mostly, they talk past each other. There
is much in what they say that could provoke comment, but since time is short, I will only comment on one statement.
Duff’s final contribution includes these words:
“Finally, you offer no credible alternative. If you don’t like string theory the answer is simple: come up with a better one.”
This is plain wrong for several reasons. First, we would have no scientific progress at all if critics of scientific theories first had to develop an alternative theory before they could advance
their criticisms. Indeed, public voicing of criticisms of a theory is one of the key motivations for other scientists to look for alternatives in the first place. So Duff has the horse and the
cart backwards here.
Secondly, “come up with a better one“? “better“? What means “better“? Duff has missed precisely the main point of the critics of string theory! We have no way of knowing – not even in
principle, let alone in practice – whether string theory is any good or not, nor whether it accurately describes reality. We have no experimental evidence by which to assess it, and most likely
(since it posits and models alleged additional dimensions of spacetime that are inaccessible to us) not ever any way to obtain such empirical evidence. As I have argued before, theology has more
empirical support – the personal spiritual experiences of religious believers and practitioners – than does string theory. So, suppose we did come up with an alternative theory to string theory:
how then could we tell which theory was the better of the two?
Pure mathematicians, like theologians, don’t use empirical evidence as a criterion for evaluating theories. Instead, they use subjective criteria such as beauty, elegance, and self-coherence.
There is nothing at all wrong with this. But such criteria ain’t science, which by its nature is a social activity.
Green intelligence
on 2013-05-18
in Human intelligence and Science
Are plants intelligent? Here are 10 reasons for thinking so. I suspect the reason we don’t naturally consider the activities of plants to be evidence of intelligent behaviour is primarily
because the timescales over which these activities are undertaken is typically longer than for animal behaviours. We humans have trouble seeing outside our own normal frames of reference. (HT:
Bayesianism in science
on 2013-01-29
in Decision theory, Probability theory, Science and Uncertainty
Bayesians are so prevalent in Artificial Intelligence (and, to be honest, so strident) that it can sometimes be lonely being a Frequentist. So it is nice to see a critical review of Nate Silver’s
new book on prediction from a frequentist perspective. The reviewers are Gary Marcus and Ernest Davis from New York University, and here are some paras from their review in The New Yorker:
Silver’s one misstep comes in his advocacy of an approach known as Bayesian inference. According to Silver’s excited introduction,
Bayes’ theorem is nominally a mathematical formula. But it is really much more than that. It implies that we must think differently about our ideas.
Lost until Chapter 8 is the fact that the approach Silver lobbies for is hardly an innovation; instead (as he ultimately acknowledges), it is built around a two-hundred-fifty-year-old theorem
that is usually taught in the first weeks of college probability courses. More than that, as valuable as the approach is, most statisticians see it as only a partial solution to a very large problem.
A Bayesian approach is particularly useful when predicting outcome probabilities in cases where one has strong prior knowledge of a situation. Suppose, for instance (borrowing an old example that
Silver revives), that a woman in her forties goes for a mammogram and receives bad news: a “positive” mammogram. However, since not every positive result is real, what is the probability that she
actually has breast cancer? To calculate this, we need to know four numbers. The fraction of women in their forties who have breast cancer is 0.014, which is about one in seventy. The fraction
who do not have breast cancer is therefore 1 – 0.014 = 0.986. These fractions are known as the prior probabilities. The probability that a woman who has breast cancer will get a positive result
on a mammogram is 0.75. The probability that a woman who does not have breast cancer will get a false positive on a mammogram is 0.1. These are known as the conditional probabilities. Applying
Bayes’s theorem, we can conclude that, among women who get a positive result, the fraction who actually have breast cancer is (0.014 x 0.75) / ((0.014 x 0.75) + (0.986 x 0.1)) = 0.1,
approximately. That is, once we have seen the test result, the chance is about ninety per cent that it is a false positive. In this instance, Bayes’s theorem is the perfect tool for the job.
This technique can be extended to all kinds of other applications. In one of the best chapters in the book, Silver gives a step-by-step description of the use of probabilistic reasoning in
placing bets while playing a hand of Texas Hold ’em, taking into account the probabilities on the cards that have been dealt and that will be dealt; the information about opponents’ hands that
you can glean from the bets they have placed; and your general judgment of what kind of players they are (aggressive, cautious, stupid, etc.).
But the Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be. For example, in a notorious series of experiments, Stanley Milgram showed
that many people would torture a victim if they were told that it was for the good of science. Before these experiments were carried out, should these results have been assigned a low prior
(because no one would suppose that they themselves would do this) or a high prior (because we know that people accept authority)? In actual practice, the method of evaluation most scientists use
most of the time is a variant of a technique proposed by the statistician Ronald Fisher in the early 1900s. Roughly speaking, in this approach, a hypothesis is considered validated by data only
if the data pass a test that would be failed ninety-five or ninety-nine per cent of the time if the data were generated randomly. The advantage of Fisher’s approach (which is by no means perfect)
is that to some degree it sidesteps the problem of estimating priors where no sufficient advance information exists. In the vast majority of scientific papers, Fisher’s statistics (and more
sophisticated statistics in that tradition) are used.
Unfortunately, Silver’s discussion of alternatives to the Bayesian approach is dismissive, incomplete, and misleading. In some cases, Silver tends to attribute successful reasoning to the use of
Bayesian methods without any evidence that those particular analyses were actually performed in Bayesian fashion. For instance, he writes about Bob Voulgaris, a basketball gambler,
Bob’s money is on Bayes too. He does not literally apply Bayes’ theorem every time he makes a prediction. But his practice of testing statistical data in the context of hypotheses and beliefs
derived from his basketball knowledge is very Bayesian, as is his comfort with accepting probabilistic answers to his questions.
But, judging from the description in the previous thirty pages, Voulgaris follows instinct, not fancy Bayesian math. Here, Silver seems to be using “Bayesian” not to mean the use of Bayes’s
theorem but, rather, the general strategy of combining many different kinds of information.
To take another example, Silver discusses at length an important and troubling paper by John Ioannidis, “Why Most Published Research Findings Are False,” and leaves the reader with the impression
that the problems that Ioannidis raises can be solved if statisticians use a Bayesian approach rather than following Fisher. Silver writes:
[Fisher’s classical] methods discourage the researcher from considering the underlying context or plausibility of his hypothesis, something that the Bayesian method demands in the form of a prior
probability. Thus, you will see apparently serious papers published on how toads can predict earthquakes… which apply frequentist tests to produce “statistically significant” but manifestly
ridiculous findings.
But NASA’s 2011 study of toads was actually important and useful, not some “manifestly ridiculous” finding plucked from thin air. It was a thoughtful analysis of groundwater chemistry that began
with a combination of naturalistic observation (a group of toads had abandoned a lake in Italy near the epicenter of an earthquake that happened a few days later) and theory (about ionospheric
disturbance and water composition).
The real reason that too many published studies are false is not because lots of people are testing ridiculous things, which rarely happens in the top scientific journals; it’s because in any
given year, drug companies and medical schools perform thousands of experiments. In any study, there is some small chance of a false positive; if you do a lot of experiments, you will eventually
get a lot of false positive results (even putting aside self-deception, biases toward reporting positive results, and outright fraud)—as Silver himself actually explains two pages earlier.
Switching to a Bayesian method of evaluating statistics will not fix the underlying problems; cleaning up science requires changes to the way in which scientific research is done and evaluated,
not just a new formula.
It is perfectly reasonable for Silver to prefer the Bayesian approach—the field has remained split for nearly a century, with each side having its own arguments, innovations, and work-arounds—but
the case for preferring Bayes to Fisher is far weaker than Silver lets on, and there is no reason whatsoever to think that a Bayesian approach is a “think differently” revolution. “The Signal and
the Noise" is a terrific book, with much to admire. But it will take a lot more than Bayes's very useful theorem to solve the many challenges in the world of applied statistics." [Links in original.]
It is also worth adding here that there is a very good reason the experimental sciences adopted Frequentist approaches (what the reviewers call Fisher’s methods) in journal publications. That reason is that
science is intended to be a search for objective truth using objective methods. Experiments are – or should be – replicable by anyone. How can subjective methods play any role in such an
enterprise? Why should the journal Nature or any of its readers care what the prior probabilities of the experimenters were before an experiment? If these prior probabilities make a difference
to the posterior (post-experiment) probabilities, then this is the insertion of a purely subjective element into something that should be objective and replicable. And if the actual numeric values of
the prior probabilities don’t matter to the posterior probabilities (as some Bayesian theorems would suggest), then why does the methodology include them?
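For readers who want to check the mammogram arithmetic quoted in the review above, a few lines of JavaScript (using the review's numbers) reproduce it:
// Posterior probability of cancer given a positive mammogram,
// computed with Bayes' theorem and the figures quoted above.
var pCancer = 0.014;            // prior: fraction of women in their forties with breast cancer
var pNoCancer = 1 - pCancer;    // 0.986
var pPosGivenCancer = 0.75;     // true-positive rate
var pPosGivenNoCancer = 0.1;    // false-positive rate
var posterior = (pCancer * pPosGivenCancer) /
                (pCancer * pPosGivenCancer + pNoCancer * pPosGivenNoCancer);
// posterior is roughly 0.096, i.e. about a ninety per cent chance
// that a positive result is a false positive, as the review says.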
PhD Vivas
on 2013-01-10
in Computer Science and Science
Awhile back, I posted some advice from my own experiences on doing a PhD. Since then, several people have asked me for advice about the viva voce (or oral) examination, which most PhD programs
require at the end of the degree. Here are some notes I wrote for a candidate recently.
It is helpful to think about the goals of the examiners. In my opinion, they are trying to achieve the following goals:
1. First, they simply want to understand what your dissertation says. This means they will usually ask you to clarify or explain things which are not clear to them.
2. Then, they want to understand the context of the work. This refers to the previous academic literature on the subject or on related subjects, so they will generally ask about that literature.
They may consider some topic to be related to your work which you did not cover; in that case, you would normally be asked to add some text on that topic.
3. They want to assess if the work makes a contribution to the related literature. So they will ask what is new or original in your dissertation, and why it is different from the past work of
others. They will also want to be able to separate what is original from what came before (which is sometimes hard to do in some dissertations, due to the writing style of the candidate or the
structure of the document). To the extent that Computer Science is an engineering discipline, and thus involves design, originality is usually not a problem: few other people will be working in
the same area as you, and none of them would have made precisely the same sequences of design choices in the same order for the same reasons as you did.
4. They will usually want to assess if the new parts in the dissertation are significant or important. They will ask you about the strengths and weaknesses of your research, relative to the past
work of others. They will usually ask about potential future work, the new questions that arise from your work, or the research that your work or your techniques make possible. Research or
research techniques which open up new research vistas or new application domains are usually looked upon favourably.
5. Goals #3 and #4 will help the examiners decide if the written dissertation is worth receiving a PhD award, since most university regulations require PhD dissertations to present an original and
significant contribution to knowledge.
6. The examiners will also want to assess if YOU yourself wrote the document. They will therefore ask you about the document, what your definitions are, where things are, why you have done certain
things and not others, why you have made certain design choices and not others, etc. Some examiners will even give the impression that they have not read your dissertation, precisely to find out
if you have!
7. Every dissertation makes some claims (your “theses”). The examiners will generally approach these claims with great scepticism, questioning and challenging you, contesting your responses and
arguments, and generally trying to argue you down. They want to see if you can argue in favour of your claims, to see if you are able to justify and support your claims, and how you handle
criticism. After all, if you can’t support your claims, no one else will, since you are the one proposing them.
The viva is not a test of memory, so you can take a copy of your thesis with you and refer to it as you wish. Likewise, you can take any notes you want. The viva is also not a test of
speed-thinking, so you can take your time to answer questions or to respond to comments. You can ask the examiners to explain any question or any comment which you don’t understand. It is OK to
argue with the examiners (in some sense, it is expected), but not to get personal in argument or to lose your temper.
The viva is one of the few occasions in a research career when you can have an extended discussion about your research with people interested in the topic who have actually read your work. Look
forward to it, and enjoy it!
on 2012-12-15
in Religion and Science
Do we each have a soul that incarnates in different bodies over time? Most scientists in my experience dismiss any such idea, like they do most everything they cannot yet explain. But a true
scientist would (a) keep an open mind on the question, while (b) devising a scientific test of the claim. And here’s where things become difficult – and interesting. Exactly how would one test the
hypothesis of reincarnation?
If reincarnation occurs, then there is a connection between bodies in different historical time zones. Yet there seems to be no way that such bodies could communicate their special connectedness to
one another. In the case that reincarnation occurs, is there some way for instance that I could communicate with my future self (or selves), and only that person or people, in a way that they could
recognize came from me (their own past incarnation) and no one else? Thus far, I have not been able to imagine such a communication channel or message. It may be possible to design a message that
is public and seen by all, yet is only understood correctly by a particular recipient, as with the signal sent by the USSR’s Strategic Missile Command to the leadership of the USA during the August
1991 coup.
It would seem that no such inter-carnate communication is possible between incarnations of the same soul. Yet all the scientific tests of the hypothesis of reincarnation I can imagine would
require some form of direct communications between separate human incarnations of the same soul, in the case there was reincarnation. Suggestions for experiments most welcome.
Music and Physics on the Strand
on 2012-10-19
in History, Music and Science
The Music Shop at no. 436 Strand
Monday 22 October 2012, 6.00pm-7.30pm
Venue: King’s College London
Strand Building 2:39 (English Seminar Room)
Introduced by Clare Pettitt
From the age of fourteen until his late teens, Charles Wheatstone worked in his uncle’s musical instrument shop on the Strand, modifying instruments and conducting experiments in acoustics at the
back of the shop until he left to take up a scientific career, later moving down the road to become Professor of Experimental Philosophy at King’s College London and inventing the stereoscope,
improving the concertina (Wheatstone’s musical instrument makers is still a going concern and makes concertinas) and inventing, with Cooke, the telegraph. When he was only 19 years old in September
1821, Wheatstone caused quite a sensation by inventing and exhibiting the ‘Enchanted Lyre or Aconcryptophone’ at his father’s music school/shop on Pall Mall and subsequently at the Adelaide Gallery
of Practical Science on the Strand.
This session will concentrate on the crossover between musical, commercial and scientific culture and will ask whether it is possible to map the multiple utility of spaces on the Strand (shops which are schools which are galleries which are scientific workshops etc.) onto the radical rearrangement of the senses in this period which made possible new technologies of seeing, hearing and communication.
[Text from here, where references and suggestions for further reading may also be found.]
The sociology of cosmology
on 2011-11-04
in Science
Physicist Per Bak:
“I once raised this issue among a group of cosmologists at a high table dinner at the Churchill College at Cambridge. “Why is that you guys are so conservative in your views, in the face of the
almost complete lack of understanding of what is going on in your field?” I asked. The answer was as simple as it was surprising. “If we don’t accept some common picture of the universe, however
unsupported by facts, there would be nothing to bind us together as a scientific community. Since it is unlikely that any picture that we use will be falsified in our lifetime, one theory is as
good as any other.” The explanation was social, not scientific.” (Bak, page 86)
Per Bak [1999]: How Nature Works: The Science of Self-Organized Criticality. (New York, USA: Copernicus)
how to solve this triangle / area of trapezoid problem
March 22nd 2013, 10:30 AM #1
how to solve this triangle / area of trapezoid problem
HERE IS THE DIAGRAM
AMCmath problem | Flickr - Photo Sharing!
In triangle ABC, medians CE and AD intersect at P, PE=1.5, PD =2, and DE=2.5. What is the area of AEDC?
PLEASE HELP ME. Show ME HOW TO DO THIS.
THE CHOICES ARE
[A] 13 [B] 13.5 [C] 14 [D] 14.5 [E] 15
Re: how to solve this triangle / area of trapezoid problem
Two things to notice/know:
The centroid (intersection of the medians) splits each median in a 2:1 ratio. You can use this to find the length of CP and AP.
Triangle DPE, with sides 1.5, 2, and 2.5, is a right triangle (it's a 3-4-5 triangle scaled by one half) with a right angle at P. That makes finding the area of the four triangles inside that trapezoid easy.
Re: how to solve this triangle / area of trapezoid problem
HERE IS THE DIAGRAM
AMCmath problem | Flickr - Photo Sharing!
In triangle ABC, medians CE and AD intersect at P, PE=1.5, PD =2, and DE=2.5. What is the area of AEDC?
PLEASE HELP ME. Show ME HOW TO DO THIS.
THE CHOICES ARE
[A] 13 [B] 13.5 [C] 14 [D] 14.5 [E] 15
1. $AC = 2 \cdot DE$ Why?
2. Use proportions to get PC: the centroid divides each median in the ratio 2:1, so $PC = 2 \cdot PE = 3$.
With PA similarly: $PA = 2 \cdot PD = 4$. (For confirmation only: PC = 3, PA = 4)
3. To calculate the height x in the triangle ACP use Pythagoras:
$\begin{array}{rcl}x^2+y^2&=&3^2 \\ x^2+(5-y)^2&=&4^2 \end{array}$
Solve for x (For confirmation only x = 2.4)
4. The quadrilateral AEDC is a trapezium whose parallel sides are 5 and 2.5 and whose height can be calculated by using x and the known proportions.
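Carrying the outline through (my own arithmetic, for anyone checking): the distance from P to DE is $\frac{1.5 \cdot 2}{2.5} = 1.2$, so the height of the trapezium is $2.4 + 1.2 = 3.6$ and the area is $\frac{1}{2}(5 + 2.5)(3.6) = 13.5$, which is choice [B].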
Presburgerness of predicates regular in two number systems
- Bull. Belg. Math. Soc , 1994
"... We survey the properties of sets of integers recognizable by automata when they are written in p-ary expansions. We focus on Cobham’s theorem which characterizes the sets recognizable in
different bases p and on its generalization to N^m due to Semenov. We detail the remarkable proof recently given ..."
Cited by 68 (4 self)
Add to MetaCart
We survey the properties of sets of integers recognizable by automata when they are written in p-ary expansions. We focus on Cobham’s theorem which characterizes the sets recognizable in different
bases p and on its generalization to N^m due to Semenov. We detail the remarkable proof recently given by Muchnik for the theorem of Cobham-Semenov, the original proof being published in Russian. 1
- ACM Transactions on Computational Logic (TOCL , 2005
"... This article considers finite-automata-based algorithms for handling linear arithmetic with both real and integer variables. Previous work has shown that this theory can be dealt with by using
finite automata on infinite words, but this involves some difficult and delicate to implement algorithms. T ..."
Cited by 26 (6 self)
Add to MetaCart
This article considers finite-automata-based algorithms for handling linear arithmetic with both real and integer variables. Previous work has shown that this theory can be dealt with by using finite
automata on infinite words, but this involves some difficult and delicate to implement algorithms. The contribution of this article is to show, using topological arguments, that only a restricted
class of automata on infinite words are necessary for handling real and integer linear arithmetic. This allows the use of substantially simpler algorithms, which have been successfully implemented.
- ICALP'98, LNCS 1443 , 1998
"... If read digit by digit, a n-dimensional vector of integers represented in base r can be viewed as a word over the alphabet r n . It has been known for some time that, under this encoding, the
sets of integer vectors recognizable by finite automata are exactly those de nable in Presburger arithmetic ..."
Cited by 23 (6 self)
Add to MetaCart
If read digit by digit, an n-dimensional vector of integers represented in base r can be viewed as a word over the alphabet r^n. It has been known for some time that, under this encoding, the sets of integer vectors recognizable by finite automata are exactly those definable in Presburger arithmetic if independence with respect to the base is required, and those definable in a slight extension of Presburger arithmetic if only a specific base is considered. Using the same encoding idea, but moving to infinite words, finite automata on infinite words can recognize sets of real vectors. This leads to the question of which sets of real vectors are recognizable by finite automata, which is the topic of this paper. We show that the recognizable sets of real vectors are those definable in the theory of reals and integers with addition and order, extended with a special base-dependent predicate that tests the value of a specified digit of a number. Furthermore, in the course of proving that sets of vectors defined in this theory are recognizable by finite automata, we show that linear equations and inequations have surprisingly compact representations by automata, which leads us to believe that automata accepting sets of real vectors can be of more than theoretical interest.
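To make the digit-by-digit encoding concrete, here is a small illustrative sketch (my own, not taken from any of the cited papers):
// Encode an integer vector positionally in base r: write each component
// in base r, pad to a common length, then read the digits in parallel.
// Each letter of the resulting word is one column of digits, i.e. an
// element of the alphabet {0, ..., r-1}^n.
function encodeVector(vector, base) {
    var digits = vector.map(function (n) { return n.toString(base); });
    var width = Math.max.apply(null, digits.map(function (d) { return d.length; }));
    digits = digits.map(function (d) {
        while (d.length < width) { d = "0" + d; }
        return d;
    });
    var word = [];
    for (var i = 0; i < width; i++) {
        word.push(digits.map(function (d) { return d.charAt(i); }));
    }
    return word;
}
encodeVector([5, 3], 2); // [["1","0"], ["0","1"], ["1","1"]]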
, 2008
"... This article studies the expressive power of finite-state automata recognizing sets of real numbers encoded positionally. It is known that the sets that are definable in the first-order additive
theory of real and integer variables 〈R, Z, +, < 〉 can all be recognized by weak deterministic Büchi auto ..."
Cited by 4 (1 self)
Add to MetaCart
This article studies the expressive power of finite-state automata recognizing sets of real numbers encoded positionally. It is known that the sets that are definable in the first-order additive
theory of real and integer variables 〈R, Z, +, < 〉 can all be recognized by weak deterministic Büchi automata, regardless of the encoding base r> 1. In this article, we prove the reciprocal
property, i.e., that a subset of R that is recognizable by weak deterministic automata in every base r> 1 is necessarily definable in 〈R, Z, +, <〉. This result generalizes to real numbers the
well-known Cobham’s theorem on the finite-state recognizability of sets of integers. Our proof gives interesting insight into the internal structure of automata recognizing sets of real numbers,
which may lead to efficient data structures for handling these sets.
, 2002
"... We survey definability and decidability issues related to first-order fragments of arithmetic, with a special emphasis on Presburger and Skolem arithmetic and their (un)decidable extensions.
Cited by 2 (0 self)
Add to MetaCart
We survey definability and decidability issues related to first-order fragments of arithmetic, with a special emphasis on Presburger and Skolem arithmetic and their (un)decidable extensions.
"... This work studies the properties of finite automata recognizing vectors with real components, encoded positionally in a given integer numeration base. Such automata are used, in particular, as
symbolic data structures for representing sets definable in the first-order theory 〈R, Z, +, ≤〉, i.e., the ..."
Cited by 1 (0 self)
Add to MetaCart
This work studies the properties of finite automata recognizing vectors with real components, encoded positionally in a given integer numeration base. Such automata are used, in particular, as
symbolic data structures for representing sets definable in the first-order theory 〈R, Z, +, ≤〉, i.e., the mixed additive arithmetic of integer and real variables. They also lead to a simple
decision procedure for this arithmetic. In previous work, it has been established that the sets definable in 〈R, Z, +, ≤ 〉 can be handled by a restricted form of infinite-word automata, weak
deterministic ones, regardless of the chosen numeration base. In this paper, we address the reciprocal property, proving that the sets of vectors that are simultaneously recognizable in all bases, by
either weak deterministic or Muller automata, are those definable in 〈R, Z, +, ≤〉. This result can be seen as a generalization to the mixed integer and real domain of Semenov’s theorem, which
characterizes the sets of integer vectors recognizable by finite automata in multiple bases. It also extends to multidimensional vectors a similar property recently established for sets of numbers.
As an additional contribution, the techniques used for obtaining our main result lead to valuable insight into the internal structure of automata recognizing sets of vectors definable in 〈R, Z, +, ≤
〉. This structure might be exploited in order to improve the efficiency of representation systems and decision procedures for this arithmetic.
, 2009
"... Abstract This work studies the properties of finite automata recognizing vectors with real components, encoded positionally in a given integer numeration base. Such automata are used, in
particular, as symbolic data structures for representing sets definable in the first-order theory 〈R, Z, +, ≤〉, i ..."
Add to MetaCart
Abstract This work studies the properties of finite automata recognizing vectors with real components, encoded positionally in a given integer numeration base. Such automata are used, in particular,
as symbolic data structures for representing sets definable in the first-order theory 〈R, Z, +, ≤〉, i.e., the mixed additive arithmetic of integer and real variables. They also lead to a simple
decision procedure for this arithmetic. In previous work, it has been established that the sets definable in 〈R, Z, +, ≤ 〉 can be handled by a restricted form of infinite-word automata, weak
deterministic ones, regardless of the chosen numeration base. In this paper, we address the reciprocal property, proving that the sets of vectors that are simultaneously recognizable in all bases, by
either weak deterministic or Muller automata, are those definable in 〈R, Z, +, ≤〉. This result can be seen as a generalization to the mixed integer and real domain of Semenov’s theorem, which
characterizes the sets of integer vectors recognizable by finite automata in multiple bases.
Source Code for the Wrox book - C# Data Security Handbook
March 17th, 2013, 12:40 PM
Source Code for the Wrox book - C# Data Security Handbook
I am looking for the source code for the Wrox book C# Data Security Handbook (year 2003). Do let me know if you have it or if anybody you know has it. The Wrox website says the title has been sold to Apress, and the Apress link doesn't work.
Thanks in advance.
March 17th, 2013, 07:11 PM
Re: Source Code for the Wrox book - C# Data Security Handbook
Follow the instructions here:
March 18th, 2013, 03:05 AM
Re: Source Code for the Wrox book - C# Data Security Handbook
Thanks Arjay. But the link doesn't contain the book "C# Data Security Handbook" with ISBN - 81-7366-612-1 or 9798173666123. Please suggest if there is any other source.
March 18th, 2013, 12:49 PM
Re: Source Code for the Wrox book - C# Data Security Handbook
I suggest you contact Apress directly. Since they have taken over Wrox, they are the only ones that would be able to provide the source for these books.
Physics 561, Theoretical Nuclear Physics, Winter 2003
Instructor: Aurel Bulgac
The class will meet TuTh 10:30 - 11:50 am in PAB B109
This course web page will likely change throughout the quarter and you are
advised to make a bookmark and consult it periodically.
There are no notes available at this time.
Besides regular lectures I would like to set up a weekly moderated meeting as well, where
students will discuss various problems, typically only partially covered in lectures but
assigned as home study projects.
I shall post here as well various problems, related material and additional information.
• The Nuclear Many-Body Problem
Peter Ring and Peter Schuck, Springer 1980.
Unfortunately, this text is out of print. However, I have reserved two copies of the book in the Physics Library.
Additional texts:
• Nuclear Structure, vols. 1 and 2,
A. Bohr and B. Mottelson
This is a classic reference and I shall selectively use various topics from these books, mainly from volume 2.
• The Theory of Finite Fermi System and the Properties of Atomic Nuclei
A.B. Migdal
• Methods of Quantum Field Theory in Statistical Physics
A.A. Abrikosov, L.P. Gorkov and I.E. Dzyaloshinski
• Quantum Many-Body Systems
J.W. Negele and H. Orland
• Quantum Theory of Many Particle Systems
A.L. Fetter and J.D. Walecka
I shall selectively use various topics from these monographs, in particular the diagrammatic technique and path integrals.
• Brief introduction to the diagrammatic technique
• Hartree-Fock (HF) method, symmetries, dilute systems, Brueckner HF,
infinite nuclear matter, effective interactions, density functional theory
• Bogoliubov quasiparticles, Hartree-Fock-Bogoliubov (HFB),
BCS approximation
• Time-dependent Hartree-Fock (TDHF), Random Phase Approximation (RPA),
broken symmetries and spurious modes, sum rules, polarizabilities, correlations,
constraint HF, finite temperature properties, level density
• Boson expansion methods, Generator Coordinate Method (GCM), Adiabatic TDHF,
restoration of broken symmetries, vibrational and rotational spectra, fission
• Semiclassical methods, Thomas-Fermi, Strutinsky shell-corrections
• Path integral methods
• Nuclear reactions, Distorted Wave Born Approximation, Eikonal approximation,
optical model, nuclear resonances and random matrices, heavy ion reactions, kinetic equations
The level of detail will vary; some topics will be treated in more detail than others.
UK General election results October 1931 [Archive]
UK General Election results October 1931
27th October 1931
595 Constituencies, 615 MPs: See below
Source: The Times
Thanks to Andrew Stidwill for his hard work in inputting and checking the results for 1931
The question of turnout is slightly complicated by the existence of two-member seats and some STV elections. Normally, in single-member seats, turnout is interpreted as the proportion of the
registered electorate that casts valid votes, and is seen as a measure of the propensity to vote. In those seats where each elector had two votes, it is not clear from the published figures how many
people actually voted, and so the figure given for turnout in these constituencies is actually the total number of valid votes as a proportion of the total possible number of votes (twice the
registered electorate).
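As a concrete illustration of the two-member calculation (with made-up figures, not taken from the results):
// Turnout in a two-member seat, as defined above: total valid votes
// as a proportion of the total possible votes (twice the electorate).
var registeredElectors = 50000; // hypothetical
var validVotesCast = 72000;     // each elector may cast up to two votes
var turnout = validVotesCast / (2 * registeredElectors); // 0.72, i.e. 72%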
Constituencies Seats
Single-member constituencies, excluding Universities 573 573
Two-member constituencies, excluding Universities 15 30
Single-member University constituencies 3 3
Two-member University constituencies 3 6
Three-member University constituencies 1 3
Stress and Strain Analysis of Functionally Graded Rectangular Plate with Exponentially Varying Properties
Indian Journal of Materials Science
Volume 2013 (2013), Article ID 206239, 7 pages
Research Article
Stress and Strain Analysis of Functionally Graded Rectangular Plate with Exponentially Varying Properties
^1Mechanical Engineering Department, University of Tehran, Tehran, Iran
^2Department of Mechanical Engineering, Yasooj Branch, Islamic Azad University, Yasooj, Iran
Received 26 June 2013; Accepted 16 July 2013
Academic Editors: S. Banerjee and D. L. Sales
Copyright © 2013 Amin Hadi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
The bending of a rectangular plate made of functionally graded material (FGM) is investigated by using three-dimensional elasticity theory. The governing equations obtained here are solved by static analysis for plates whose properties vary exponentially along one direction. The value of Poisson's ratio has been taken as a constant. The influence of the functionally graded variation on the stress and displacement fields is studied through a numerical example. The exact solution shows that the graded material properties have significant effects on the mechanical behavior of the plate.
1. Introduction
Recently, a new category of composite materials known as functionally graded materials (FGMs) has attracted the interest of many researchers. FGMs are heterogeneous composite materials in which the mechanical properties vary continuously in a certain direction. They are used in many engineering applications in the aviation, rocketry, missile, chemical, aerospace, and mechanical industries, and composites made of FGMs have therefore attracted considerable attention in recent years.
Several studies have been performed to analyze the behavior of functionally graded beams, plates, and shells. Hadi et al. [1, 2] studied Euler-Bernoulli and Timoshenko beams made of functionally graded material subjected to transverse loading, in which Young's modulus of the beam varies according to a specific function. Reddy [3] has analyzed the static behavior of functionally graded rectangular plates based on his third-order shear deformation plate theory. Cheng and Batra [4] have related the deflections of a simply supported functionally graded polygonal plate given by the first-order shear deformation theory and a third-order shear deformation theory to those of an equivalent homogeneous Kirchhoff plate. Cheng and Batra [5] also presented results for the buckling and steady-state vibrations of a simply supported functionally graded polygonal plate based on Reddy's plate theory. Loy et al. [6] studied the vibration of functionally graded cylindrical shells by using Love's shell theory.
Analytical 3D solutions for plates are useful because they provide benchmark results for assessing the accuracy of various 2D plate theories and finite element formulations. Cheng and Batra [7] used the method of asymptotic expansion to study the 3D thermoelastic deformations of a functionally graded elliptic plate. Recently, Vel and Batra [8] have presented an exact 3D solution for the thermoelastic deformation of functionally graded simply supported plates of finite dimensions. Reiter et al. [9] performed detailed finite element studies of discrete models containing simulated particulate and skeletal microstructures and compared the results with those computed from homogenized models in which effective properties were derived by the Mori-Tanaka and the self-consistent methods.
Tanigawa [10] used a layerwise model to solve a one-dimensional transient heat conduction problem and the associated thermal stress problem of an inhomogeneous plate. He further formulated the
optimization problem of the material composition to reduce the thermal stress distribution. Tanaka et al. [11, 12] have designed FGM property profiles by using sensitivity and optimization methods
based on the reduction of thermal stresses. Jin and Noda [13] used the minimization of thermal stress intensity for a crack in a metal-ceramic functionally gradient material as a criterion for
optimizing material property variation. In the same context, they also studied both the steady-state [14] and the transient [15] heat conduction problems, but neglected the thermomechanical coupling (see also [16, 17]).
The response of functionally graded ceramic-metal plates has been investigated by Praveen and Reddy [18] using a plate finite element that accounts for transverse shear strains, rotatory inertia, and moderately large rotations in the von Kármán sense. Reddy and Chin [19] have studied the dynamic thermoelastic response of functionally graded cylinders and plates. Najafizadeh and Eslami [20] presented the buckling analysis of a radially loaded solid circular plate made of functionally graded material.
In this paper, the bending of a rectangular plate made of functionally graded material, with properties varying exponentially and subjected to prescribed pressures on its top and bottom faces, is investigated by using three-dimensional elasticity theory.
2. Analysis
The equilibrium of a weightless, transversely isotropic elastic FGM plate is considered. The geometry of the elastic FGM plate in relation to the coordinate axes is shown in Figure 1. The plate is assumed to be under the action of pressures applied to its top and bottom faces.
To account for the variation of the material properties through the thickness, an exponential relationship is adopted for the modulus of elasticity, with the inhomogeneity constant determined empirically. The equilibrium equations in three dimensions are written in terms of the three normal stress components and the three shear stress components.
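In the usual form of such models, writing z for the thickness coordinate, E_0 for the modulus of elasticity at z = 0, and k for the empirically determined inhomogeneity constant (these symbol names are assumed here), the exponential law and the equilibrium equations in the absence of body forces read
E(z) = E_0 e^{kz},
\partial\sigma_x/\partial x + \partial\tau_{xy}/\partial y + \partial\tau_{xz}/\partial z = 0,
\partial\tau_{xy}/\partial x + \partial\sigma_y/\partial y + \partial\tau_{yz}/\partial z = 0,
\partial\tau_{xz}/\partial x + \partial\tau_{yz}/\partial y + \partial\sigma_z/\partial z = 0.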
Denoting the three displacement components in the coordinate directions, the six strain components are expressed through the usual strain-displacement relations, comprising three normal strains and three shear strains.
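With u, v, and w denoting the displacement components (names again assumed), the standard small-strain relations are
\varepsilon_x = \partial u/\partial x, \quad \varepsilon_y = \partial v/\partial y, \quad \varepsilon_z = \partial w/\partial z,
\gamma_{xy} = \partial u/\partial y + \partial v/\partial x, \quad \gamma_{yz} = \partial v/\partial z + \partial w/\partial y, \quad \gamma_{xz} = \partial u/\partial z + \partial w/\partial x.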
The stress-strain relations involve the shear modulus, expressed in terms of the modulus of elasticity and Poisson's ratio; the value of Poisson's ratio is taken as constant.
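For an isotropic solid with constant Poisson's ratio \nu and graded modulus E(z), these take the standard Lamé form (symbols as assumed above)
\sigma_x = \lambda(z)(\varepsilon_x + \varepsilon_y + \varepsilon_z) + 2G(z)\varepsilon_x (and cyclically for \sigma_y, \sigma_z), \quad \tau_{xy} = G(z)\gamma_{xy} (likewise for \tau_{yz}, \tau_{xz}),
with G(z) = E(z)/[2(1+\nu)] and \lambda(z) = \nu E(z)/[(1+\nu)(1-2\nu)].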
The boundary conditions of the problem are prescribed along the sides of the plate and on its top and bottom faces. A displacement field is chosen that satisfies the side boundary conditions (7) identically. Substituting this displacement field (9) into the strain-displacement relations (5), and the resulting expressions for the stress components into the equilibrium equations (3), reduces the problem to a system of second-order ordinary differential equations (10), whose general solution is expressed in terms of the roots of the associated characteristic equation and arbitrary integration constants.
Once the integration constants are determined, the resulting displacement field (15) is obtained; substituting (15) into (5) and (6), the stress components (16a)–(16f) are calculated.
The resultant moments per unit length are obtained using relations (16a)–(16f), and the transverse shearing forces per unit length follow from the same relations by definition.
3. Results and Discussion
In the following, the obtained solution is employed to analyze the effect of material inhomogeneity on the elastic field in the rectangular plate. Consider a rectangular plate of given length, width, and thickness, subjected to pressures on its top and bottom faces. Poisson's ratio is assumed to have a constant value of 0.3, and dimensionless, normalized variables are used. Figure 2 plots the dimensionless modulus of elasticity through the thickness for different values of the inhomogeneity parameter. According to this figure, at a fixed position the dimensionless modulus of elasticity increases as the parameter decreases for one sign of the parameter, and increases with the parameter for the other.
Figure 3 displays the nondimensional displacement of the plate for different values of the parameter. For some values the displacement varies only slightly through the plate, so a plane-strain assumption is reasonable there; for others the variation is large.
Figure 4 shows the nondimensional in-plane displacement; it decreases as the parameter increases. Figure 5 shows the nondimensional displacement in the y direction.
Figure 6 shows the variation of the nondimensional stress through the nondimensional thickness. The plot confirms that the boundary conditions on the top and bottom surfaces are satisfied; moreover, at a fixed position, the stress decreases as the parameter increases.
Figure 7 shows the variation of another stress component for different values of the parameter; the figure indicates that this component is negligible in proportion to the other components and can therefore be neglected.
4. Conclusion
Closed-form solutions are valuable for simplified versions of real engineering problems. In this paper, a rectangular plate made of functionally graded material with exponentially varying properties has been studied using 3D elasticity theory, and exact solutions for the stresses and displacements have been presented.
To show the effect of inhomogeneity on the stress distributions, different values of the material inhomogeneity parameter were considered. The results show that material inhomogeneity has a significant influence on the mechanical behavior of an exponentially graded solid rectangular plate.
1. A. R. Daneshmehr, A. Hadi, and S. M. N. Mehrian, “Investigation of elastoplastic functionally graded Euler-Bernoulli beam subjected to distribute transverse loading,” Journal of Basic and Applied
Scientific Research, vol. 2, no. 10, pp. 10628–10634, 2012.
2. A. Hadi, A. R. Daneshmehr, S. M. N. Mehrian, M. Hosseini, and F. Ehsani, “Elastic analysis of functionally graded Timoshenko beam subjected to transverse loading,” Technical Journal of Engineering and Applied Sciences, vol. 3, no. 13, pp. 1246–1254, 2013.
3. J. N. Reddy, “Analysis of functionally graded plates,” International Journal for Numerical Methods in Engineering, vol. 47, no. 1–3, pp. 663–684, 2000.
4. Z.-Q. Cheng and R. C. Batra, “Deflection relationships between the homogeneous Kirchhoff plate theory and different functionally graded plate theories,” Archives of Mechanics, vol. 52, no. 1, pp. 143–158, 2000.
5. Z.-Q. Cheng and R. C. Batra, “Exact correspondence between eigenvalues of membranes and functionally graded simply supported polygonal plates,” Journal of Sound and Vibration, vol. 229, no. 4, pp. 879–895, 2000.
6. C. T. Loy, K. Y. Lam, and J. N. Reddy, “Vibration of functionally graded cylindrical shells,” International Journal of Mechanical Sciences, vol. 41, no. 3, pp. 309–324, 1999.
7. Z.-Q. Cheng and R. C. Batra, “Three-dimensional thermoelastic deformations of a functionally graded elliptic plate,” Composites Part B: Engineering, vol. 31, no. 2, pp. 97–106, 2000.
8. S. S. Vel and R. C. Batra, “Exact solution for thermoelastic deformations of functionally graded thick rectangular plates,” AIAA Journal, vol. 40, no. 7, pp. 1421–1433, 2002.
9. T. Reiter, G. J. Dvorak, and V. Tvergaard, “Micromechanical models for graded composite materials,” Journal of the Mechanics and Physics of Solids, vol. 45, no. 8, pp. 1281–1302, 1997.
10. Y. Tanigawa, “Theoretical approach of optimum design for a plate of functionally gradient materials under thermal loading,” in Thermal Shock and Thermal Fatigue Behavior of Advanced Ceramics,
vol. 241 of Nato Science Series E, pp. 171–180, 1992.
11. K. Tanaka, Y. Tanaka, K. Enomoto, V. F. Poterasu, and Y. Sugano, “Design of thermoelastic materials using direct sensitivity and optimization methods. Reduction of thermal stresses in functionally gradient materials,” Computer Methods in Applied Mechanics and Engineering, vol. 106, no. 1-2, pp. 271–284, 1993.
12. K. Tanaka, Y. Tanaka, H. Watanabe, V. F. Poterasu, and Y. Sugano, “An improved solution to thermoelastic material design in functionally gradient materials: scheme to reduce thermal stresses,” Computer Methods in Applied Mechanics and Engineering, vol. 109, no. 3-4, pp. 377–389, 1993.
13. Z. H. Jin and N. Noda, “Minimization of thermal stress intensity factor for a crack in a metal-ceramic mixture,” Ceramic Transactions: Functionally Gradient Materials, vol. 34, pp. 47–54, 1993.
14. N. Noda and Z. H. Jin, “Thermal stress intensity factors for a crack in a strip of a functionally gradient material,” International Journal of Solids and Structures, vol. 30, no. 8, pp. 1039–1056, 1993.
15. Z. H. Jin and N. Noda, “Transient thermal stress intensity factors for a crack in a semi-infinite plate of a functionally gradient material,” International Journal of Solids and Structures, vol. 31, no. 2, pp. 203–218, 1994.
16. Y. Obata and N. Noda, “Steady thermal stresses in a hollow circular cylinder and a hollow sphere of a functionally gradient material,” Journal of Thermal Stresses, vol. 17, no. 3, pp. 471–487, 1994.
17. Y. Obata, N. Noda, and T. Tsuji, “Steady thermal stresses in a functionally gradient material plate,” Transactions of the JSME, vol. 58, pp. 1689–1695, 1992.
18. G. N. Praveen and J. N. Reddy, “Nonlinear transient thermoelastic analysis of functionally graded ceramic-metal plates,” International Journal of Solids and Structures, vol. 35, no. 33, pp. 4457–4476, 1998.
19. J. N. Reddy and C. D. Chin, “Thermomechanical analysis of functionally graded cylinders and plates,” Journal of Thermal Stresses, vol. 21, no. 6, pp. 593–626, 1998.
20. M. M. Najafizadeh and M. R. Eslami, “Buckling analysis of circular plates of functionally graded materials under uniform radial compression,” International Journal of Mechanical Sciences, vol. 44, no. 12, pp. 2479–2493, 2002.
33Cxx Hypergeometric functions
• 33C05 Classical hypergeometric functions, 2F1 (1)
• 33C10 Bessel and Airy functions, cylinder functions, 0F1 (1)
• 33C15 Confluent hypergeometric functions, Whittaker functions, 1F1
• 33C20 Generalized hypergeometric series, pFq
• 33C45 Orthogonal polynomials and functions of hypergeometric type (Jacobi, Laguerre, Hermite, Askey scheme, etc.) [See also 42C05 for general orthogonal polynomials and functions]
• 33C47 Other special orthogonal polynomials and functions
• 33C50 Orthogonal polynomials and functions in several variables expressible in terms of special functions in one variable
• 33C52 Orthogonal polynomials and functions associated with root systems
• 33C55 Spherical harmonics (3)
• 33C60 Hypergeometric integrals and functions defined by them (E, G, H and I functions)
• 33C65 Appell, Horn and Lauricella functions
• 33C67 Hypergeometric functions associated with root systems
• 33C70 Other hypergeometric functions and integrals in several variables
• 33C75 Elliptic integrals as hypergeometric functions
• 33C80 Connections with groups and algebras, and related topics
• 33C90 Applications
• 33C99 None of the above, but in this section
strange exponentiation performance
I was doing some thinking about exponentiation algorithms along with a
friend, and I happened upon some interesting results. In particular, I was able to outperform the ** operator in at least one case with a recursive algorithm. This leads me to believe that perhaps the ** operator should tune its algorithm based on its inputs or some such thing. Here is my data:
>>> def h(b,e):
.... if(e==0): return 1
.... if(e==1): return b
.... t = h(b,e >> 1)
.... return t*t*h(b,e & 1)
Above is the recursive exponentiation algorithm. I tried some test data
and it appears to work. This just popped into my head out of nowhere and I
optimized it with some trivial optimizations (I used e>>1 instead of e/2;
I used e&1 instead of e%2).
>>> def f(b,e):
.... n = 1
.... while(e>0):
.... if(e & 1): n = n * b
.... e >>= 1
.... b *= b
.... return n
I then made this algorithm which I thought basically unwrapped the
recursion in h(). It seems to work also.
Then, the more trivial exponentiation algorithm:
>>> def g(b,e):
.... n = 1
.... while(e>0):
.... n *= b
.... e -= 1
.... return n
For consistency, I wrapped ** in a function call:
>>> def o(b,e):
.... return b**e
then I made a test function to time the computation:
>>> def test(func,b,e):
.... t1 = time.time()
.... x = func(b,e)
.... t2 = time.time()
.... print t2-t1
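(A sketch of the same measurement with the standard-library timeit module, in case the wall-clock approach proves noisy; number=1 because the operands become huge longs:)
>>> import timeit
>>> timeit.Timer('h(19, 100000)', 'from __main__ import h').timeit(number=1)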
now, I compared:
>>> test(f,19,100000)
>>> test(g,19,100000)
>>> test(h,19,100000)
>>> test(o,19,100000)
now, g() was blown out of the water, as expected, but the others were
close enough for another test at a higher "e" value.
>>> test(f,19,500000)
>>> test(h,19,500000)
>>> test(o,19,500000)
Now, that is the interesting part. How did ** not measure up to h()? It's
also interesting that f(), which is supposed to be a more efficient
version of h(), is lagging behind.
I would like help explaining the following:
(1) How did my less-than-perfectly-optimized recursive algorithm win
against the ** operator?
(2) How can I unwrap and optimize h()? From what I understand, recursion
is never supposed to be the most efficient. I suspect there are some
hidden inefficiencies in using while(), but I'd like to know the specifics.
If my algorithm h() is better, why can't ** use a quick test to change
algorithms based on inputs? Or is mine better in all cases?
BTW: python2.3.2 compiled with gcc 3.3.2 on linux2.4.19 all on debian &
i386. I have an AMD athlon xp 1800+.
I ran all test cases several times and results were very consistent.
Also note that I'm not exactly an algorithms expert, I just happened upon
these results while chatting with a friend.
Jeff Davis
Use of Plural with Decimal Numbers
Date: 02/27/2002 at 09:37:12
From: Eric Derobert
Subject: Use of plural with decimal numbers
Hello !
For natural numbers, the use of plural is of course easy :
0 point, 1 point, 2 points, 3 points...
But what happens when using decimal numbers between 1 and 2 ?
What are the rules for 1.0001, 1.1, 1.5 or 1.999 ?
Should we write
1.0001, 1.1, 1.5, 1.999 point
1.0001, 1.1, 1.5, 1.999 points ?
Thank you !
Date: 02/27/2002 at 12:32:59
From: Doctor Peterson
Subject: Re: Use of plural with decimal numbers
Hi, Eric.
I believe you are asking whether, in English, we treat various numbers
as singular or plural. My understanding is that we consider ONLY the
number 1 as singular; in particular, zero is a plural: we say "0
degrees" or "0 (no) apples," not "0 degree" or "0 apple." We do not
use fractions as adjectives at all, but say "half (of) an apple" or
"two thirds of a degree" with the fraction standing alone as a noun
phrase, so it would not be quite accurate to say that a proper
fraction is singular. With a mixed number, we tend to use a plural:
"one and a half apples."
Applying these ideas to decimals, I would say
0.5 apple (half an apple, five tenths of an apple)
0.9 apple
1.1 apples
1.5 apples
In other words, anything greater than one can be taken as plural, as
well as zero and possibly negative numbers. But in many cases the
plural might be used with decimals smaller than 1; I have no strong
objection to "0.5 meters".
I have not found a reference to support any of my advice, which is
based mostly on what feels right to me.
- Doctor Peterson, The Math Forum
Date: 02/27/2002 at 13:09:41
From: Eric Derobert
Subject: Use of plural with decimal numbers
Dear Dr Peterson,
In fact, I did some research after my first question, and the rule appears
to be different in English and in French (I am from France).
In French the rule is to use plural for a noun when the number before
is greater or equal to 2 (see this official reference from Quebec-
Canada : http://www.olf.gouv.qc.ca/index.html?/sommaire.html ).
From the
Guide to Grammar and Writing
I found a different rule : when the absolute value of a fractional or
decimal expression is 1 or less, the word it modifies should be
singular (that's exactly what you apply in your 4 apples examples).
"When fractional or decimal expression are 1 or less, the word
they modify should be singular: 0.7 meter, 0.22 cubic foot,
0.78 kilometer. Precede decimal fractions with a value less than
one with a leading zero before the decimal point."
But also, when you say that singular applies only to 1, the examples
you propose seem very familiar : I've heard "0 degrees", and I've read
"0.5 meters"...
Apparently, the rules then depend on languages and uses... But
perhaps, some official rule exists for English as a whole ?
How interesting !
Kind regards.
Date: 02/27/2002 at 14:31:13
From: Doctor Peterson
Subject: Re: Use of plural with decimal numbers
Hi, Eric.
English (and especially American English) has no "official" rules for
anything, though there are many "experts" who claim to give such
rules. But with or without regulation, this is definitely a linguistic
issue rather than one of mathematics or logic. Each language has a
different set of rules for number as well as other aspects of grammar
- for example, some languages have not only a singular and a plural,
but a "dual" for exactly two, which would raise additional questions
when applied to, say, 2.001. Different rules may be equally logical,
and different languages may have different needs.
Here is another reference I found (from Germany, but referring to English usage):
Vocabulary Box: SI Units of Measure(ment)
With a few exceptions, plurals of spelled-out units are formed
conventionally. Use the singular form with numbers less than or
equal to 1, and use the plural form with numbers greater than 1.
(In spoken English, however, decimal numbers take the plural
form even if they are less than 1, e.g. "a quarter second" but
"zero point two five seconds.")
This shows that both of the rules I gave have their place.
- Doctor Peterson, The Math Forum
Date: 02/27/2002 at 16:57:13
From: Doctor Peterson
Subject: Re: Use of plural with decimal numbers
Hello, Eric!
I looked again and found that although your Quebec URL referred only
to the main page (because the source uses a frame), another link takes
me to a FAQ with the answer you described:
Here is a translation:
Agreement of the noun after a fractional number less than two
The noun that follows a fractional number less than two, such as
one and a half, does not take the plural. Contrary to the English
use, the name million, when one writes 1,5 million, remains in the
singular. Moreover, the oral pronunciation shows that it is about
a unit: one million five hundred thousand.
The rule of agreement is the following: a noun preceded by a
number takes the plural only when this number is equal to or
greater than two.
Here are some examples:
1,3 billion people (it is necessary to say: a billion three
hundred million people);
1,47 meter (it is necessary to say: one meter 47);
An average of 1,25 child by household (it is necessary to say:
a child and a quarter).
Note that "the English use" for "million" differs from the American
usage; the English say "1.5 millions," but we Americans use the
singular no matter how many million it is.
I suspect that the reason for the French rule lies in the way you say
the number. You say the whole part, then the unit (million, meter,
child), then the fractional part (three hundred million, 47, quarter).
So if the whole part is one, you use the singular, and if it is two or
more you use the plural. We think of the entire number as more than
one and therefore plural; you use only the whole part as a direct
modifier, which is NOT more than one in these cases, though a fraction
is then added to it. The difference is entirely in the grammar, which
determines the choice of words.
Another page gives some related ideas on fractions:
La foire aux questions linguistiques
In English we tend always to think of the fraction as a number in
itself, rather than as a phrase, and to emphasize agreement in meaning
rather than form; we would never say "2/3 of the road are open," even
though grammatically the subject of the verb is the plural "thirds."
But we would say "2/3 of the students are present," because there must
be more than one student. On the other hand, the British would say
"2/3 of the class are present," while an American might prefer
"2/3 of the class is present," because we tend to treat collective
nouns as singular. Language has its reasons, but they are not always
in agreement.
Of course, this is entirely a grammatical matter in French, and mostly
a logical issue in English (that is, we aren't thinking of grammar,
but only of whether more than one is under consideration). So in
French this usage with fractions disagrees with the "two or more are
plural" rule, because the grammar differs in the two cases, whereas in
English we more consistently focus on the meaning.
I've enjoyed looking into this issue, even though it is really a
question for Dr. Linguistics.
- Doctor Peterson, The Math Forum
Date: 02/28/2002 at 05:23:05
From: Eric Derobert
Subject: Use of plural with decimal numbers
Dear Dr Peterson,
Thank you for spending such an amount of time to answer this
paramathematical topic ! This is very complete.
I think you are right when emphasizing the role of the way of saying
numbers : "1.50 meters" in English and "1 metre 50" in French, which
is written "1,50 metre" for consistency.
Kind regards, and thank you again!
Using Math to make Guinness
By Atif Kukaswadia
Posted: December 2, 2013
Let me tell you a story about William Sealy Gosset. William was a Chemistry and Math grad from Oxford University in the class of 1899 (they were partying like it was 1899 back then). After
graduating, he took a job with the brewery of Arthur Guinness and Son, where he worked as a mathematician, trying to find the best yields of barley.
But this is where he ran into problems.
One of the most important assumptions in (most) statistical tests is that you have a large enough sample size to draw inferences about your data. You can't say much if you only have 1
data point. 3? Maybe. 5? Possibly. Ideally, we want at least 20-30 observations, if not more. It’s why when a goalie in hockey, or a batter in baseball, has a great game, you chalk it up to being a
fluke, rather than indicative of their skill. Small sample sizes are much more likely to be affected by chance and thus may not be accurate of the underlying phenomena you’re trying to measure.
Gosset, on the other hand, couldn’t create 30+ batches of Guinness in order to do the statistics on them. He had a much smaller sample size, and thus “normal” statistical methods wouldn’t work.
Gosset wouldn’t take this for an answer. He started writing up his thoughts, and examining the error associated with his estimates. However, he ran into problems. His mentor, Karl Pearson, of Pearson
Product Moment Correlation Coefficient fame, while supportive, didn’t really appreciate how important the findings were. In addition, Guinness had very strict policies on what their employees could
publish, as they were worried about their competitors discovering their trade secrets. So Gosset did what any normal mathematician would.
He published under a pseudonym. In a startlingly rebellious gesture, Gosset published his work in Biometrika titled “The Probable Error of a Mean.” (See, statisticians can be badasses too). The name
he used? Student. His paper for the Guinness company became one of the most important statistical discoveries of the day, and Student's t-distribution is now an essential part of any introductory
statistics course.
So why am I telling you this? Well, I’ve talked before about the importance of storytelling as a way to frame scientific discovery, and I’ve also talked about the importance of mathematical literacy
in a modern society. This piece forms the next part of that spiritual trilogy. Math is typically taught in a very dry, very didactic format – I recite Latin to you, you remember it, I eventually give
you a series of questions to answer, and that dictates your grade in the class. Often, you’re only actually in the class because it’s a mandatory credit you need for high school or your degree
program. There’s very little “discovery” occurring in the math classroom.
Capturing interest thus becomes of paramount importance to instructors, especially in math which faces a societal stigma of being “dull,” “boring” and “just for nerds.” A quick search for “I hate
math” on Twitter yields a new tweet almost every minute from someone expressing those sentiments, sometimes using more “colourful” language (at least they’re expanding their vocabulary?).
There are lots of examples of these sorts of interesting anecdotes about math. The “Scottish book” was a book named after the Scottish Café in Lviv, Ukraine, where mathematicians would leave a
potentially unsolvable problem for their colleagues to tackle. Successfully completing these problems would result in you receiving a prize ranging from a bottle of brandy to, I kid you not, a live
goose (thanks Mariana for that story!) The Chudnovsky Brothers built a machine in their apartment that calculated Pi to two billion decimal places. I asked for stories on Twitter and @physicsjackson
responded with:
@MrEpid While Gauss was in primary school he discovered 1+2+…+100=5050 by adding (1+100)+(2+99)+…+(50+51)=101+…+101. In his head.
— Mark G. Jackson (@physicsjackson) December 1, 2013
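Gauss's pairing trick generalizes directly: the n terms of 1 + 2 + ... + n form n/2 pairs that each sum to n + 1, so
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}, \qquad \text{e.g.} \quad \frac{100 \cdot 101}{2} = 5050.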
There’s also the story of Amalie Noether, the architect behind Noether’s theorem, which basically underpins all modern physics. Dr Noether came to prominence at a time when women were largely
excluded from academic positions, yet rose through the ranks to become one of the most influential figures of that time, often considered at the same level of brilliance as Marie Curie. Her
mathematical/physics contemporaries included David Hilbert, Felix Klein and Albert Einstein, who took up her cause to help her get a permanent position, and often sought out her opinion and thoughts.
Indeed, after Einstein published his theory of general relativity, it was Noether who took it to the next level, linking the symmetry of time to the conservation of energy. But don't take my word for it – Einstein himself said:
In the judgment of the most competent living mathematicians, Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.
While stories highlight the importance of these discoveries, they also highlight the diversity that exists within the scientific community. Knowing that the pantheon of science and math heroes
includes people who aren’t all “math geniuses” can make math much more engaging and interesting. Finally, telling stories of the people behind math can demystify the science, and engage youth who may
not consider math as a career path.
Math on the Level and Math Lessons For A Living Education
I am looking into both of these. We currently use Math-u-see though I am considering changing.
If I choose Math Lessons For A Living Education I would just use those with my kindy and keep my oldest doing math u see for now.
Math on the Level reviews say high prep and lots of teacher time involved. Do you find this more so than other math programs? I mean, Math-U-See is no prep but does require the parent to teach the concept. Well, I actually just let my son watch the video and then do the lessons on his own unless he asks for help.
We use MOTL and love it. It is high prep for me, although it doesn't really take that much time. But what I love about it, and what makes it worth the extra time, is that it goes entirely at the pace and "maturation level" of my boys. I teach only what they're ready for at the time, and the amount of review depends on how much they need, not how much some unknown outsider thinks they need. I can customize their learning to suit their needs. It also gives me lots of ways to teach each concept so I can choose what works for them and their learning styles.
I am also looking into MOTL. I joined the MOTL yahoo group and it seems like most of the Moms take 5-8 minutes per child to prep each night. That doesn't sound bad to me, because I currently need to
look the math lesson book and teacher's manual over each night and decide what to teach and how to teach it and pick out problems. Plus I hate not using ALL of a book I paid for!!
Anyway, you might want to join that yahoo group and ask any specific questions. The MOTL author is on there and posts quite a bit.
I don't know anything about MOTL, but Math Lessons for a Living Education has been a lot of fun for my kindy girl.
I purchased the MOTL starter kit last year, and we just haven't used it to its full potential. I think it is a wonderful program in theory, just not practical for our family right now. I might be
interested in selling if you are interested. You can email me at jennaraymond@comcast.net
I really think Math Lessons for a Living Education will be great for my 4 almost 5yo. He will be starting more school type work after we move but he is not ready for a more "formal" kindy.
I am really thinking about MOTL for my older son and then all the rest after. He really wants to try something else.
How many kiddos do you have? I will have 5 doing school eventually so I really can't prep a whole lot.
I only have 2 that I'm homeschooling, both 7yo boys. One of the 7yo's was adopted from China last year so we're really working from the ground up. I probably spend about 5 minutes total most days in
I am sure it would take longer until I get the hang of it, but 5-10 min is workable.
He is begging for a change, and I don't see why a 7yo should hate math so much he would offer to scrub a toilet to avoid it.
I just think Math-U-See is still too workbook-like for us!
Ohh, one more question: do you use it with the organizer?
I am not wanting to hijack the post here, but just wanted to say that we use Math U See with our five school aged children. I have had to go more slowly with two of them, as they struggled also. I am
glad we stuck with it, however. Our oldest (twelve, turning thirteen) has a very good comprehension of math now, and when and how to use it - and he even likes math. Initially with him, I tried a
couple of other programs (though I do not remember what they were) after his struggles with Alpha, but we ended up coming back to MathU See. Our other child who struggles with math is only seven and
a half, but she has really come to love math with Math U See. With her, we ended up trying a math page (or half of one) every couple of days, and just playing a few games in between.
I know you will discover what is best for your children, but I wanted to share our experience.
Thanks for that insight. I am still trying to decide and will use the summer to do so.
He gets the concepts fine, though he doesn't remember his math facts that well. He can do them on his fingers or if he thinks it through... he is just bored of the way it presents the info. Perhaps I am just going too slow for him and should allow him to move on faster.
I have been re-reading An Easy Start in Arithmetic by Ruth Beechick. She talks about presenting concepts in grades 1 and 2 and focusing more on memorization in grade 3. I think this will work for us. Also, I will use our move to focus on this, as we won't be able to do our whole schooling day. I am just focusing on math and reading during that time, starting next month until we are settled at the next place.
I use the MOTL organizer loosely. I don't keep quite the meticulous records that they show you, but I do use the list of what we've covered and mark what each child is working on, how often for
review and when it's mastered. The suggestions for teaching each concept are very good and varied. I also get ideas from Ruth Beechick, Games for Math by Peggy Kaye, and the living math website.
Living books to introduce a concept are very effective with my boys, as are hands-on activities and real-world examples. Those make the greatest difference, I think, and the lack of them is why I struggled with math throughout school. I also try to give them problems that require them to think a bit differently about a concept. For example, with telling time, I might ask them what time it is and
what time would it be in 10 minutes.
What about Professor B for your older? This is a quote from their site:
Children learn a story because its verbalization permits their perception of its internal connections and flow. When teachers’ verbalizations in their math lessons permit children’s perception of
internal connections, their mastery of math is as inevitable as learning a story. Memorization in math education conditions children into becoming “non-thinkers.” Emphasis on structures conditions
children into becoming thinkers. http://www.profb.com/homeschoolers.aspx?Unique=3/6/2011
Crystal Oscillator Question (Transistor Based Amp)
Woot!!! I think I figured it out!
Up until this point, most -- if not all -- of the output traces I have shown in this thread were obtained through the use of my scope's AUTOSET feature. For reasons currently unknown to me at the
time of this writing, my scope chose ALT triggering. When I changed it to SINGLE triggering, I got the waveforms below. The INPUT (in this case Vb) and OUT- signals are indeed 180 degrees out of
phase as I expected!
Next step: Adding the crystal.... stay tuned. :D
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:So the good news is that I am getting amplification, but why isn't the output inverted? I double-checked my connections and Channel 2 on my scope (yellow trace) does indeed connect
to OUT-.
One thing I noticed from the photo: the two channels are at different scales. CH1 is at 100mV/div, CH2 is at 500mV/div. That gives the appearance of amplification without it necessarily being there.
When the output from the inverting end of a transistor doesn't invert, the usual reason is 'saturation'. The bias voltage is a bit too high, the collector voltage is nailed about 200mV above the
emitter voltage, and you're basically pouring current through a diode. AC coupling the input actually makes it harder to see, since the 200mV offset gets lost when you throw away the DC information.
.7v is a bit high for a bias voltage.. the usual 1mA bias point is somewhere around .6v, and every 60mV of offset sends ten times as much current through the transistor. Being .1v high at the base means you're getting about 46 times as much current as you want.
Rather than setting the bias pot for a specific base voltage, adjust it until the voltage at the bottom of Rc is VCC/2.
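A quick sanity check of those figures, as a Python sketch (it assumes the standard diode law I proportional to e^(Vbe/Vt), with Vt of roughly 26mV at room temperature):

import math

VT = 0.026   # thermal voltage at room temperature, volts
dv = 0.10    # how far above the intended bias point the base sits, volts

# collector current scales as e^(dV/Vt), i.e. roughly 10x per 60mV
print("mV per decade: %.0f" % (VT * math.log(10) * 1000))          # ~60
print("current multiplier for 100mV: %.0fx" % math.exp(dv / VT))   # ~47x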
Re: Crystal Oscillator Question (Transistor Based Amp)
mstone@yawp.com wrote:Long answer: When you design a transistor amplifier, you need to choose certain parameters so you can calculate the rest. One of the middle values that helps to calculate
all the others is called 'g.m', where 'g' is the traditional symbol for conductance (the inverse of resistance).
g.m describes the transistor's 'transconductance'.. how much the collector current changes in response to a change in base voltage:
g.m = dI.c / dV.b
where 'd' means 'the change in'. (the actual equations use greek letters, subscripts, and other things I can't type, so 'g.m' means 'g-subscript-m')
There's also an equation that defines g.m in terms of 'thermal voltage'. When you get down to the electron level, the difference between 'heat' and 'voltage' gets kind of fuzzy. They're both
essentially 'energy per electron'. Thermal voltage (V.t) allows us to include the energy from thermal effects in electrical calculations, and it's *really* important when you start working with
semiconductors. The good news is that we can choose a temperature, calculate Vt, and treat it as a constant from then on. At room temperature (25C or 300 Kelvin), Vt =~ .0258v or about 26mV.
The thermal equation for g.m is:
g.m = I.c / V.t
where I.c is the quiescent current. For I.c = 1mA, g.m =~ .0385 siemens.
If g.m is the transistor's conductance, 1/g.m will be its effective resistance.. 1/.0385s =~ 26 ohms (assuming I.c = 1mA). That value is called 'r.e'.
For a common emitter amplifier, the gain (A) is inherently limited by the ratio of your collector resistor (r.c) to r.e:
A.max = r.c / r.e
Dividing by r.e is the same as multiplying by g.m though, so:
A.max = r.c * g.m
g.m is the ratio of collector current to thermal voltage, so:
A.max = r.c * I.c / V.t
but 'r.c * I.c' is just the voltage across r.c when the amp sits at its operating point. That gives us:
A.max = V.rc / V.t
which tells us how much headroom we need in order to get a certain amount of gain. The interesting bit is that A.max is completely independent of the supply voltage. It's just a ratio of
resistance to current that has to exist before you can get a certain amount of gain.
So.. if you know V.rc you can calculate the maximum possible gain easily (divide by .026v or multiply by 38.5s). If you know how much gain you want, you can get V.rc by running the equation the
other way (V.rc = A*.026v). Either way, you end up with a voltage and need to choose a resistor. Choosing I.c = 1mA makes that simple.
Engineers like those equations so much that lots of common transistors are specifically designed to perform best near I.c = 1mA. The 2N3904 (my usual choice for such jobs) has a typical current
gain of 300 when I.c = 1mA.
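The quoted chain of equations reduces to a few lines of arithmetic; a minimal Python sketch (the 5k collector resistor is an assumed value, purely for illustration):

VT = 0.026    # thermal voltage, volts
IC = 1e-3     # quiescent collector current, amps
RC = 5000.0   # collector resistor, ohms (assumed)

gm = IC / VT        # transconductance, siemens (~0.0385)
re = 1.0 / gm       # effective emitter resistance, ohms (~26)
a_max = RC * gm     # maximum gain, = r.c/r.e = V.rc/V.t
print("gm = %.4f S, re = %.1f ohms, A.max = %.0f" % (gm, re, a_max))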
Are you talking about Ebers-Moll here, mstone? I started reviewing the section about Ebers-Moll in AoE and... well... let's just say that I think I need to read that section a few more times.
That said, I tried an experiment varying the frequency on my amp circuit and saw that gain diminished with higher frequency. Thinking ahead, I imagine that this will be fine when I add a 32kHz
crystal to my circuit, but it is going to start becoming problematic when I get over 1MHz or so. The gain was definitely <1 at 4MHz, which means that it isn't going to oscillate unless I up the gain
or something.
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:Are you talking about Ebers-Moll here, mstone?
Hybrid-pi, actually.. the small signal model. Ebers-Moll is a large-signal model, where 'large' and 'small' are measured relative to the thermal voltage Vt (26mV at room temperature).
BJTs and diodes follow roughly the same rules, the most noticeable being "if the voltage across the diode rises by Vt, about 2.7 times as much current will flow through it." The actual function is
exponential -- e^(Vbe/Vt) for BJTs -- so for changes larger than Vt, the exponential effects are too big to ignore.
For changes smaller than Vt, the exponential effects are less noticeable. You can approximate the transistor's behavior with a resistor circuit, and the math will be much easier to do. There are
several such approximations, and the hybrid-pi model is a compromise between the most popular ones.
mauifan wrote:I started reviewing the section about Ebers-Moll in AoE and... well... let's just say that I think I need to read that section a few more times.
One of the big problems with explanations of transistor models (my own included) is that it's horribly tempting to plop out a page full of equations without taking the time to say, "look.. here's
what we're trying to do".
From that perspective, Ebers-Moll is, "look.. here's how you get the numbers you need to design your bias network."
Hybrid-pi is, "look.. here's how you calculate gain."
mauifan wrote:That said, I tried an experiment varying the frequency on my amp circuit and saw that gain diminished with higher frequency.
That's normal. There's a bit of lag between "something going into the transistor" and "something coming out".. usually a few nanoseconds. The difference between the input and output during that lag
is stored energy that the output hasn't caught up to yet.
Oscillating signals change direction though, so at some point the input will start going down while the output is still trying to go up. They'll meet somewhere in the middle of the stored energy,
then the input will start pulling the output down rather than up. That means some of the stored energy will never make it to the output, and instead will be cancelled by the input.
For slow moving signals, the difference between input and output is practically zero, so there's practically no stored energy to lose. As the signal moves faster, the input can get farther ahead of
the output during the lag. That means there's more stored energy to lose. If the input makes a full cycle during the lag, practically all the input turns into stored energy, gets cancelled by the
input, and is lost.
mauifan wrote:Thinking ahead, I imagine that this will be fine when I add a 32kHz crystal to my circuit, but it is going to start becoming problematic when I get over 1MHz or so. The gain was
definitely <1 at 4MHz, which means that it isn't going to oscillate unless I up the gain or something.
Here's an Application Note that discusses crystal oscillators, and shows a good all-rounder build around a transistor (figure 1e, page 2):
http://cds.linear.com/docs/en/applicati ... an12fa.pdf
Here's another that has an op amp circuit specifically designed for 32kHz watch crystals (figure 27, page 16):
http://cds.linear.com/docs/en/applicati ... /an75f.pdf
During the banging-the-head stage of learning a new circuit, reference designs come in handy. 'Doubting the design' is a major part of the learning process, but if you do it too long you find
yourself questioning things like Ohm's Law. A set of references you can trust keeps things from getting.. well, staying.. silly.
Re: Crystal Oscillator Question (Transistor Based Amp)
Thanks for your response, mstone. I will have to read through it a couple of times. :D
In the meantime -- and perhaps while you were typing your post -- I did a quick experiment. Obviously I did something wrong, because it didn't work as I expected. :cry:
I verified that my transistor amp circuit was still working. Indeed I still get amplification at OUT- that is 180 degrees out of phase with the input. I then connected OUT- on the amp to INx in the
circuit below. Likewise, I connected the amp's IN to OUTx. I fired up my scope, turned on the power to my circuit... and NOTHING. The junction at Vb was at .7V. OUT- varied with Rc (OUT- was @3V with Rc=4.8k).
For this test, I used a 32kHz crystal with C1 and C2 as shown.
As I recall from the op amp thread, a 32kHz crystal has an ESR of about 32k. The reactance of a 22pF cap at 32kHz is about 225k. Therefore, the capacitive reactance should "dominate" the ESR and cause a near 90 degree phase shift. Given that there are two caps in this "pi network," shouldn't there be a phase shift close to 180 degrees -- and thus invert the output so that it ends up [almost] in phase with the input?
Re: Crystal Oscillator Question (Transistor Based Amp)
mstone@yawp.com wrote:That's normal. There's a bit of lag between "something going into the transistor" and "something coming out".. usually a few nanoseconds. The difference between the input
and output during that lag is stored energy that the output hasn't caught up to yet.
Oscillating signals change direction though, so at some point the input will start going down while the output is still trying to go up. They'll meet somewhere in the middle of the stored energy,
then the input will start pulling the output down rather than up. That means some of the stored energy will never make it to the output, and instead will be cancelled by the input.
For slow moving signals, the difference between input and output is practically zero, so there's practically no stored energy to lose. As the signal moves faster, the input can get farther ahead
of the output during the lag. That means there's more stored energy to lose. If the input makes a full cycle during the lag, practically all the input turns into stored energy, gets cancelled by
the input, and is lost.
So a real-world BJT transistor is an "ideal" transistor with tiny capacitors (perhaps 1-2pF?) across the junctions (see picture below)? If so, it makes perfect sense to me why gain falls off at
higher frequencies: Higher frequencies would bypass the ideal transistor through the caps.
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:So a real-world BJT transistor is an "ideal" transistor with tiny capacitors (perhaps 1-2pF?) across the junctions?
Spot on.. that's even a good guess for the capacitor values. You'll find exact values listed in the 'Small Signal Characteristics' section of a transistor's datasheet. I use 2N3904s a lot, and the
values there are 4pF at the input and 8pF at the output.
mauifan wrote:If so, it makes perfect sense to me why gain falls off at higher frequencies: Higher frequencies would bypass the ideal transistor through the caps.
Your mental model is almost perfect, but you haven't included the resistors. The full spread looks like this:
and isn't nearly as bad as it seems at first glance.
R.b, R.c, and R.e you already know.. they're the base, collector, and emitter resistors around the transistor. R.pi is the effective resistance between the base and emitter, R.o is the effective
resistance between the collector and the emitter. R.l is the load resistance.
C.mu, C.pi, and C.l are the capacitors you postulated. C.mu is the capacitance between the base and collector, C.pi is the capacitance between the base and emitter. There is some
capacitance between the collector and emitter, but the transistor can't tell the difference between that and the capacitance of the load, so we lump it in with the load and call the whole thing C.l.
Whenever you put resistors and capacitors together, you get RC time constants. The RC time constant of R.b feeding current into C.pi and C.mu is what causes high-frequency attenuation.
For capacitive bypassing to happen, charge would have to go from C.mu to C.l. That path crosses the transistor's collector though, and the collector doesn't stand still. When the voltage at the
transistor's base rises, the voltage at the collector falls. That leads to a phenomenon called 'Miller capacitance'.
Conceptually, the Miller effect is kind of like a seesaw. If we assume the circuit is arranged for a gain of 9 and raise the base voltage by 1mV, the voltage at the collector will fall by 9mV. C.mu
sits between the base and collector, so we can imagine a point 1/10th of the way through it where the voltage never changes.. roughly like the fulcrum in the middle of a lever:
Fixed points like that create information barriers. Circuits can only see changes that can be measured, so if a point never changes, the circuits on opposite sides of that point can't see each other.
That's not an exotic idea BTW.. we treat Vcc and GND that way all the time. Putting one of those points in the middle of a component has interesting effects though.
Operationally, a capacitor implements the idea "if we send current in, the voltage rises". We measure the size of the capacitor in terms of the ratio between 'current that went in' and 'amount the
voltage rose'. 'One microfarad' means 'one microamp of current changes the voltage by 1v per second.'
If we put a 1uF cap in the position of C.mu then send 1uA of current into it for a second, the voltage across C.mu will indeed change by 1v. 0.9v of that change will happen on the side that R.b can't
see, though. As far as R.b is concerned, it sent 1uA of current into C.mu, and only saw C.mu's voltage rise by 0.1v. Plugging those numbers into the equation for capacitance gives us 10uF.
More generally, every test we can do on the R.b side of the circuit will tell us that C.mu behaves exactly like a 10uF capacitor connected between the transistor's base and GND.
That's the Miller effect.. as far as the input is concerned, you can replace a capacitor at C.mu with one (gain+1) times larger going to GND:
Doing that removes any capacitive path to the output, so we can't get capacitive bypassing that way.
In theory, we could get capacitive bypassing through the new capacitor to GND (or the effective equivalent of it), but that would only happen for fast-moving signals on the transistor side of the
input resistor (R.b). The fast-moving signal is on the other end of R.b though. Only the slow-moving parts make it through to the top of the capacitor, and the transistor only sees what happens at
the top of the cap.
Taking that into account, the model ends up looking like this:
where C.M is the Miller equivalent to C.mu and the diamond in the middle controls the current through R.c based on the current through R.pi. With all the other pieces in place, that's all that's left
for the transistor to do.
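Putting rough numbers to that model, a Python sketch (the 4pF/8pF values are the 2N3904 figures quoted earlier; mapping them onto C.pi/C.mu, and the gain and base resistance, are assumptions for illustration):

import math

c_pi = 4e-12    # base-emitter capacitance (the quoted 'input' figure), farads
c_mu = 8e-12    # base-collector capacitance (the quoted 'output' figure), farads
gain = 9.0      # stage gain, as in the seesaw example above (assumed)
r_b = 10e3      # resistance feeding the base (assumed)

c_miller = (gain + 1) * c_mu    # C.mu as seen from the input side
c_in = c_pi + c_miller          # total effective input capacitance
f_pole = 1.0 / (2 * math.pi * r_b * c_in)
print("C.in = %.0f pF, input pole near %.0f kHz" % (c_in * 1e12, f_pole / 1e3))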
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:Thanks for your response, mstone. I will have to read through it a couple of times. :D
In the meantime -- and perhaps while you were typing your post -- I did a quick experiment. Obviously I did something wrong, because it didn't work as I expected. :cry:
I verified that my transistor amp circuit was still working. Indeed I still get amplification at OUT- that is 180 degrees out of phase with the input. I then connected OUT- on the amp to INx in
the circuit below. Likewise, I connected the amp's IN to OUTx. I fired up my scope, turned on the power to my circuit... and NOTHING. The junction at Vb was at .7V. OUT- varied with Rc (OUT- was
@3V with Rc=4.8k).
For this test, I used a 32kHz crystal with C1 and C2 as shown.
As I recall from the op amp thread, a 32kHz crystal has an ESR of about 32k. The reactance of a 22pF cap at 32kHz is about 225k. Therefore, the capacitance reactance should "dominate" the ESR and
cause a near 90 degree phase shift. Given that there are two caps in this "pi network," shouldn't there be a phase shift close to 180 degrees -- and thus invert that output so that the input is
[almost] in phase?
I ran the above filter through a SPICE simulation and found that I wasn't getting anywhere near the phase shift I thought I would. But by the same token, I am scratching my head trying to figure out why.
Based on my understanding, the phase shift is the "inverse tangent" of Xc/R, aka:
angle = tan{-1} (Xc/R)
(Please excuse the poor ASCII graphics.)
So again... at f=32kHz, R=ESR=32k and C=22pF, Xc = 226k. Thus the phase shift for "stage 1" is about 82 degrees. I have a second 22pF cap with R2 at close to zero, so why don't I get a phase shift of
something close to 180? What did I do wrong?
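For reference, the arithmetic behind those figures as a Python sketch (32kHz as in the estimate above, though the watch crystal is really 32.768kHz):

import math

f = 32e3      # frequency, Hz
esr = 32e3    # crystal ESR, ohms
c = 22e-12    # one load capacitor, farads

xc = 1.0 / (2 * math.pi * f * c)            # ~226k ohms
theta = math.degrees(math.atan(xc / esr))   # ~82 degrees
print("Xc = %.0f k, atan(Xc/R) = %.0f degrees" % (xc / 1e3, theta))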
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:Based on my understanding, the phase shift is the "inverse tangent" of Xc/R
Ooh.. subtle one.
You're mistaken, but only by a point of semantics: -atan(Xc/R) is the formula for phase angle, which is different from phase shift. They're related, but it takes some translating to get from one to
the other.
Phase angle represents the angle formed by the vector sum of the component impedances:
where the vectors exist in the 'complex frequency' plane.
If you get far enough into calculus, you learn that complex exponentials are the cheat codes for math. Using exponents reduces multiplication and division to addition and subtraction. The function e^
x is its own integral and derivative, so it's one of the few functions that allows us to find usable solutions for differential equations. And by one of the most beautiful coincidences in
mathematics, complex exponentials reduce trigonometry to basic algebra. Once you get used to translating back and forth, it's easy to do things with complex exponentials that would be really
cumbersome if the ideas were expressed in any other form.
We use vectors because those allow us to draw pictures that give us some intuition into what the complex exponentials are doing. In this case we have the boring (Real) axis where calculations are
straightforward, and the 'complex frequency' axis, which allows us to represent things that depend on frequency. We draw resistors as vectors on the Real axis because resistors don't behave
differently at different frequencies. We draw capacitors on the complex frequency axis because their behavior does depend on frequency. The vector sum tells us how a combination of resistors and
capacitors will behave at a given frequency.
Phase angle is a convenient way to represent the results of that sum.
We really don't care about the angle though. What we care about are its components.. the relative sizes of the R and Xc vectors. Those allow us to calculate the output of an RC filter as a sine wave
modified by an attenuation factor and a phase shift:
The whole "find the arctangent of the impedances then get the sine and cosine" business is really just syntactic sugar.. theta = atan(Xc/R), sin(theta), and cos(theta) are easier to write (and harder
to miscopy) than "divide by the square root of the sum of the squares".
The physical interpretation of "theta equals 82 degrees" is "when the input is at 90 degrees, the output (lagging behind) will be at 82 degrees." The phase shift is the difference between those
angles, or "8 degrees". You can also say that the input and output curves will cross when the output is at 90 degrees and the input is at 98 degrees:
Re: Crystal Oscillator Question (Transistor Based Amp)
Thank you VERY much, mstone. I had to read your post a couple of times, but I think I more or less understood it -- or at least the phase angle math.
What I am less clear about is how you got the "phaseshift = cos(theta)" part. It has been a while since I last had a need to worry about trigonometry, though I do remember tan(theta) = sin(theta) /
cos(theta)... or something like that. :D
Also... to revive a page from my thread on oscillators using op amps, a Pierce Oscillator looks like this:
In terms of the above diagram, U1 is simply the transistor amplifier. I get how the transistor shifts the signal 180 degrees. But I am far less clear about how the "pi network" (aka R1, C1, C2, X1)
also shifts the signal 180 degrees.
Wikipedia wrote:Resonator
The crystal in combination with C1 and C2 forms a pi network band-pass filter, which provides a 180 degree phase shift and a voltage gain from the output to input at approximately the resonant
frequency of the crystal. To understand the operation, note that at the frequency of oscillation, the crystal appears inductive. Thus, it can be considered a large, high Q inductor. The
combination of the 180 degree phase shift (i.e. inverting gain) from the pi network, and the negative gain from the inverter, results in a positive loop gain (positive feedback), making the bias
point set by R1 unstable and leading to oscillation.
I am somewhat of a visual/intuitive learner, and... well... if the crystal is acting as an inductor (which in itself makes sense to me given that it is designed to resonate at a specified load
capacitance), won't it basically act to cancel out the impedance of the capacitors at resonance rather than add another 180 degrees of phase shift? Doesn't this mean that the combo of crystal and
load capacitance would be predominantly resistive, therefore resulting in little phase shift?
Last edited by mauifan on Sat Feb 16, 2013 12:17 am, edited 3 times in total.
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:I am somewhat of a visual/intuitive learner, and... well... if the crystal is acting as an inductor (which in itself makes sense to me), won't it basically act to cancel out the
impedance of the capacitors rather than add another 180 degrees of phase shift?
This reminds me....
To further confuse myself, I created a test circuit consisting of X1, C1, and C2 as shown in the Wikipedia Crystal Oscillator circuit. (In this case, X1 was a 32kHz crystal, C1 and C2 were 22pF.) I
connected the output of my signal generator to one side of the crystal and my scope probe to the other side. I saw attenuation (presumably due to the crystal's high ESR), but the output and input
pretty much stayed in phase as I varied the signal frequency between 1kHz and 20MHz.
Why didn't I see a phase shift around 32kHz as Wikipedia promised? :D Per datasheet, the load capacitance for my crystal is 12.5pF.
A second experiment I tried was... well... I realized that C1 and C2 were effectively in series from the perspective of X1. With C1=C2=22pF, the total capacitance across X1 was about 11pF (perhaps a
little more due to the breadboard wiring). I replaced the crystal with a 180uH inductor and calculated that the resonant frequency of this tank circuit was about 3.5MHz. I fed a sine wave signal into
one side of the tank and monitored the output signal. I saw a phase shift approaching 180 degrees, but it occurred at a much higher frequency than 3.5MHz. What????
In my mind, I am aware of the following "rules" --
• In an inductor, voltage leads current by up to 90 degrees
• In a capacitor, voltage lags current by up to 90 degrees
• In a resistor, voltage and current are in phase
However, I am just not "seeing" how the pi network affects phase shift in a practical sense.
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:What I am less clear about is how you got the "phaseshift = cos(theta)" part. It has been a while since I last had a need to worry about trigonometery, though I do remember tan
(theta) = sin(theta) / cos(theta)... or something like that.
Looks like I flew past an important detail.. sorry.
You're exactly right that tan(theta) = sin(theta)/cos(theta). More generally, the function tan(theta) describes a ratio between two values, and the inverse tangent, atan(y/x), can reduce any ratio to
an angle. If we find ourselves needing the components of the ratio (the x and y), we can unpack them with y=sin(theta) and x=cos(theta).
It's really just a trick of notation. It's handy though, and since we're dealing with sine waves, experience has shown that it's useful.
Last edited by adafruit_support_mike on Sun Feb 17, 2013 3:29 pm, edited 1 time in total.
Re: Crystal Oscillator Question (Transistor Based Amp)
mauifan wrote:In my mind, I am aware of the following "rules" --
□ In an inductor, voltage leads current by up to 90 degrees
□ In a capacitor, voltage lags current by up to 90 degrees
□ In a resistor, voltage and current are in phase
Hmm.. the word 'leads' is technically accurate but misleading.. it suggests time running backwards.
Try these:
- In a resistor, voltage and current are in phase
- In a capacitor, voltage lags behind current
- In an inductor, current lags behind voltage
Stated that way, putting a capacitor and inductor together gives you two lags, not a lag and a lead cancelling each other out.
Mechanically, current is equivalent to the momentum of a moving weight, and voltage is equivalent to the tension in a stretched or compressed spring. If you connect them, a moving weight has momentum,
but its motion stretches or compresses the spring. Tension in the spring applies force to the weight, and that changes the weight's speed.
By definition, the weight's speed keeps increasing as long as the spring tension keeps pulling it, so the weight will reach its highest speed at the point where the spring stops pulling. Also by
definition, the spring's tension will increase as long as the weight keeps moving away from that neutral point, so the spring will reach its highest tension or compression at the points where the weight
stops moving.
The tension and momentum always oppose each other, and each reaches its maximum (and can change the other most strongly) when the other reaches its minimum. You can say either one leads or lags the
other, but trying to decide which one lags while the other one is leading gets confusing.
Re: Crystal Oscillator Question (Transistor Based Amp)
And now for the main issue:
mauifan wrote:Why didn't I see a phase shift around 32kHz as Wikipedia promised? :D
In this case, you need a frequency microscope. ;-)
A crystal's range of interesting frequency-related behavior is extremely narrow.. usually a few millionths of the resonant frequency. For a 32kHz crystal, that's less than 1Hz. The crystal's
impedance is nearly constant outside that range, and changes by a factor of a few hundred thousand within that range. That huge change within a narrow frequency band is what makes crystals such good
timing elements.
If your frequency generator can do small frequency steps, look at the range between 32766 Hz and 32770 Hz.. that's the broad range that includes device tolerances and thermal effects. When you find
where the response happens, narrow it down to a range maybe 250mHz wide.
If your frequency generator won't do that, you can bit-bang a square wave with an Arduino; that should work. The period of a 32kHz wave is about 30 microseconds, so counting operations with an 8MHz
microcontroller should give you about 120 steps between even-Hertz frequencies in that range.
Re: Crystal Oscillator Question (Transistor Based Amp)
And this one:
mauifan wrote:A second experiment I tried was... well... I realized that C1 and C2 were effectively in series from the perspective of X1. With C1=C2=22pF, the total capacitance across X1 was
about 11pF (perhaps a little more due to the breadboard wiring). I replaced the crystal with a 180uH inductor and calculated that the resonant frequency of this tank circuit was about 3.5MHz. I
fed a sine wave signal into one side of the tank and monitored the output signal. I saw a phase shift approaching 180 degrees, but it occurred at a much higher frequency than 3.5MHz.
The capacitors are only in series if you feed the input into the end of one of them. If you feed it into the node where the capacitor and inductor meet, they're effectively in parallel. That makes
the effective capacitance 44pF. The equation for a CLC pi filter cancels out the factor of two though, so you got the resonant frequency right anyway.
Thing is, the phase shift of a CLC pi filter is only 90 degrees at resonant frequency. The filter only approaches 180 degree phase shift asymptotically, but most of the shift shows up over the first
decade (10x multiple) of frequency. In this case, you should see about 170 degrees of shift at 35MHz.
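If you want to see that numerically, here's a rough Python sketch (mine, not part of the original reply). It models only the dominant path of the test circuit above: the 180uH coil (plus an assumed 10-ohm winding resistance) driving one 22pF leg, with the source treated as ideal so the input-side capacitor drops out of the ratio. The exact resonance shifts with the model, but the printout shows the qualitative claim: roughly 90 degrees of lag near resonance, approaching 180 degrees well above it:

import cmath, math

def lag_deg(f, L=180e-6, C2=22e-12, r=10.0):
    # Series (L + r) into a shunt C2, driven by an ideal source, so
    # H = ZC2 / (ZL + ZC2). r is an assumed coil resistance.
    w = 2 * math.pi * f
    zL = r + 1j * w * L
    zC2 = 1 / (1j * w * C2)
    return -math.degrees(cmath.phase(zC2 / (zL + zC2)))

for f in (0.25e6, 2.53e6, 25e6):   # below, near, and above LC resonance
    print(f"{f/1e6:5.2f} MHz: lag = {lag_deg(f):6.1f} deg")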
{"url":"http://adafruit.com/forums/viewtopic.php?f=8&t=36508&start=15","timestamp":"2014-04-21T02:05:47Z","content_type":null,"content_length":"89685","record_id":"<urn:uuid:0a007faf-879d-4366-b2fa-7a86b2b804db>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
If Astronauts Could Travel At V = 0.960c, We On ... | Chegg.com
If astronauts could travel at v = 0.960c, we on Earth would say it takes (4.20/0.960) = 4.38 years to reach Alpha Centauri, 4.20 light-years away. The astronauts disagree.
(a) How much time passes on the astronauts' clocks? ___ years
(b) What is the distance to Alpha Centauri as measured by the astronauts? ___ light-years
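A minimal worked check using time dilation and length contraction (this sketch is my addition, not part of the original page):

import math

v_over_c = 0.960
earth_time = 4.20 / v_over_c                        # 4.375 years in Earth's frame
gamma = 1 / math.sqrt(1 - v_over_c ** 2)            # Lorentz factor = 1/0.28
print(f"(a) proper time: {earth_time / gamma:.2f} years")         # about 1.23
print(f"(b) contracted distance: {4.20 / gamma:.2f} light-years")  # about 1.18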
{"url":"http://www.chegg.com/homework-help/questions-and-answers/astronauts-could-travel-v-0960c-earth-would-say-takes-420-0960-438years-reach-alpha-centau-q566323","timestamp":"2014-04-21T11:22:20Z","content_type":null,"content_length":"21460","record_id":"<urn:uuid:69c3c282-4588-4840-961f-62ddff4cfbf1>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
Erdős-Ginzburg-Ziv theorem
If $a_1,a_2,\ldots,a_{2n-1}$ is a set of integers, then there exists a subset $a_{i_1},a_{i_2},\ldots,a_{i_n}$ of $n$ integers such that
$$a_{i_1}+a_{i_2}+\cdots+a_{i_n}\equiv 0 \pmod{n}.$$
The theorem is also known as the EGZ theorem.
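A brute-force check of the statement for small n (my illustration; it is enough to test residues modulo n, since only the sums mod n matter):

from itertools import combinations, product

def egz_holds(n, values):
    # Some n of the 2n-1 integers must sum to 0 mod n.
    return any(sum(c) % n == 0 for c in combinations(values, n))

for n in (2, 3, 4):
    assert all(egz_holds(n, v) for v in product(range(n), repeat=2 * n - 1))
print("EGZ verified exhaustively for n = 2, 3, 4")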
{"url":"http://planetmath.org/ErdHosGinzburgZivTheorem","timestamp":"2014-04-17T21:34:06Z","content_type":null,"content_length":"38992","record_id":"<urn:uuid:09d39df8-7f8e-4c54-8e62-dcc900b8074a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Portola Valley ACT Tutor
Find a Portola Valley ACT Tutor
...As an undergrad at Harvey Mudd, I helped design and teach a class on the software and hardware co-design of a GPS system, which was both a challenging and rewarding experience. I offer tutoring
for all levels of math and science as well as test preparation. I will also proofread and help with technical writing, as I believe good communication skills are very important.
27 Subjects: including ACT Math, chemistry, calculus, physics
...I have a Bachelor's degree in mathematics from the University of Santa Clara and a Master's degree in mathematics/engineering from Stanford University. I'm a patient tutor with a positive,
collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variabl...
22 Subjects: including ACT Math, calculus, statistics, geometry
...The subject matter of linear algebra was studied when I was in college and then used throughout my graduate studies and career as a research scientist. In general, I teach students how to solve
problems, and then lead students through understanding why the subject is introduced and what is the c...
15 Subjects: including ACT Math, calculus, statistics, physics
...I have also coached many students and professionals on speech construction, practice, delivery, and temperament when speaking publicly. I have over ten years of experience helping people manage
their careers in start-ups and Fortune 500 companies. I have led recruiting, training and career dev...
24 Subjects: including ACT Math, calculus, trigonometry, public speaking
...I've taken and received an A- in both Linear Algebra and Intermediate Linear algebra. It was my favorite class that I took in my Math major, and I would feel very comfortable tutoring it. I
tutored this subject informally with peers in the math center on campus.
35 Subjects: including ACT Math, reading, calculus, statistics | {"url":"http://www.purplemath.com/portola_valley_act_tutors.php","timestamp":"2014-04-19T20:24:36Z","content_type":null,"content_length":"23963","record_id":"<urn:uuid:f422a79f-8f01-4fe0-aa0c-88a5f6b68139>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00432-ip-10-147-4-33.ec2.internal.warc.gz"} |
The secrets of long multiplication
Posted by: Alexandre Borovik | May 18, 2011
In the current discussion about the National Curriculum Review, there is a lot of talk of the need to pay attention to long division, and some more cautious suggestions that perhaps we need to start
with long multiplication. This gives me a pretext to repeat an observation which I discuss, in more pedagogical detail, in Sections 9 and 10 of my paper on academia.com.
The following set of formulae continues to circle the Blogosphere:
1 * 1 = 1
11 * 11 = 121
111 * 111 = 12321
1111 * 1111 = 1234321
11111 * 11111 = 123454321
111111 * 111111 = 12345654321
1111111 * 1111111 = 1234567654321
11111111 * 11111111 = 123456787654321
111111111 * 111111111 = 12345678987654321
It was accompanied by the usual comments about the intrinsic beauty of mathematics. Indeed, the pattern is beautiful — no doubt about that. But the example nicely illustrates a difference between amateur
and professional approaches to mathematics: professionals are interested not so much in beautiful patterns but in reasons why the patterns cannot be extended without loss of their beauty. In our
case, the pattern breaks at the next step:
1111111111 * 1111111111 = 1234567900987654321
The result is no longer symmetric. The reason for that is the interference of a carry, the transfer of a unit from one column of digits to another column of more significant digits. The carry arising
from the addition of two digits a and b is defined by the rule
c(a, b) = 1 if a + b > 9, and c(a, b) = 0 otherwise.
One can easily check that this is a 2-cocycle from Z/10Z to Z and is responsible for the extension of additive groups
0 -> 10Z -> Z -> Z/10Z -> 0.
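Both facts are easy to check by machine; here is a small Python sketch (mine, not the author's) verifying the 2-cocycle identity for the carry and watching the palindrome break at ten ones:

def c(a, b):
    # Carry cocycle: 1 if adding digits a and b overflows, else 0.
    return 1 if a + b > 9 else 0

# 2-cocycle condition for the extension 0 -> 10Z -> Z -> Z/10Z -> 0,
# checked over all digit triples:
for a in range(10):
    for b in range(10):
        for d in range(10):
            assert c(a, b) + c((a + b) % 10, d) == c(b, d) + c(a, (b + d) % 10)

# And the pattern itself: repunit squares stay palindromic only up to 9 ones.
for k in range(1, 11):
    s = str(int("1" * k) ** 2)
    print(k, s, "palindrome" if s == s[::-1] else "carry broke it")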
This is exactly what cocycles (and cohomology) were invented for: they describe the obstacles to continuing a certain pattern in the behavior of algebraic or topological objects. The appearance of
cohomology in an elementary arithmetic entertainment piece is inevitable.
And this is why long multiplication is such a pivotal concept of elementary mathematics.
Posted in Uncategorized | Tags: Math, Multiplication algorithm | {"url":"https://micromath.wordpress.com/2011/05/18/the-secrets-of-long-multiplication/?like=1&source=post_flair&_wpnonce=b467bcc1c8","timestamp":"2014-04-19T02:38:40Z","content_type":null,"content_length":"60166","record_id":"<urn:uuid:38849c81-4ced-4bee-8c80-cc15d81efb48>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
Middle School Mathematics - V
1. Of three numbers, the first is thrice the second and the second is four times the third. If the sum of the three numbers is 255, find the numbers.
2. A man is four times as old as his son. Twenty years hence, he will be just twice as old as his son. Find their present ages.
3. The length of a rectangle is four times its width. If its perimeter is 120 centimeters, find its length and width.
4. 5 years ago, Arnold was thrice as old as his brother. Now the difference between their ages is 16. Find their present ages.
5. The length of a rectangle exceeds its breadth by 7 centimeters. If the perimeter of the rectangle is 126 centimeters, find its dimensions.
6. A number consists of two digits whose sum is 9. If 45 is added to the number, the digits are reversed. Find the number.
7. The difference between two numbers is 18. If four times the smaller is less than three times the larger by 18, find the numbers.
8. A number consists of two digits whose sum is 8. If 36 is subtracted from the number, the digits are reversed. Find the number.
9. Mary has four times as many 25 Cent coins as she has 50 Cent coins. If she has 7½ dollars in all, how many 50 Cent coins does she have?
10. Find the value of x in x:7::7:10.
11. A fort had enough food for 120 soldiers for 200 days. After 5 days, 30 soldiers leave the fort. How long will the remaining food last now?
12. If 15 men can dig a trench in 60 days, how many men will be needed to dig a similar trench in 25 days?
13. 40% of a number is 360. What is 25% of the number?
14. A man sold two chairs at $990 each. On one he gains 10% and on the other he loses 10%. Find the gain or loss in the whole transaction.
15. What money will amount to $174 in 2 years at the rate of 8% per annum simple interest?
16. In how much time will the simple interest on $600 be $300 at 10% per annum?
17. An alloy of tin and copper consists of 20 parts of tin and 100 parts of copper. Find the percentage of tin in the alloy.
18. In an isosceles right triangle ABC, right angle is at B. Find the other two angles.
19. The three angles of a triangle are 4x, 5x, and 6x. Find the angles.
20. The three angles of a triangle are (2x-4), (3x-5), and (7x-3). Find the angles.
21. The three angles of a quadrilateral are 78°, 105°, and 120°. Find the fourth angle.
22. Which of the following are Pythagorean triplets?
(a) 12, 16, 20 (b) 9, 12, 15 (c) 10, 12, 34 (d) 14, 28, 50 (e) 21, 72, 75
23. In a parallelogram ABCD, angle B = (3x+10)°, angle D= (4x-25)°. Find the value of x.
24. The four angles of a quadrilateral are in the ratio 2:3:5:8. Find the angles.
25. The three angles of a quadrilateral are 54°, 80°, and 116°. Find the fourth angle.
26. The perimeter of a square is 120 meters. Find its area and also the length of its diagonal.
27. A playground is 100 meters long and 70 meters broad. How much distance does a girl run when she runs 5 times around the playground?
28. When two dice are thrown, how many outcomes are there?
29. What is the value of
30. What is the value of ?
Character is who you are when no one is looking. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=9007","timestamp":"2014-04-19T02:06:30Z","content_type":null,"content_length":"12073","record_id":"<urn:uuid:ed01e297-9a3d-4cd7-931a-314203a07de3>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiple-choice Questions — Select One or More Answer Choices
These questions are multiple-choice questions that ask you to select one or more answer choices from a
list of choices. A question may or may not specify the number of choices to select.
Tips for Answering
Note whether you are asked to indicate a specific number of answer choices or all choices that apply. In
the latter case, be sure to consider all of the choices, determine which ones are correct, and select all of
those and only those choices. Note that there may be only one correct choice.
In some questions that involve inequalities that limit the possible values of the answer choices, it may be
efficient to determine the least and/or the greatest possible value. Knowing the least and/or greatest
possible value may enable you to quickly determine all of the choices that are correct.
Avoid lengthy calculations by recognizing and continuing numerical patterns.
Sample Questions
Directions: Select one or more answer choices according to the specific question directions.
If the question does not specify how many answer choices to select, select all that apply.
The correct answer may be just one of the choices or may be as many as all of the choices, depending on
the question.
No credit is given unless you select all of the correct choices and no others.
If the question specifies how many answer choices to select, select exactly that number of choices.
Which two of the following numbers have a product that is greater than 60?
(A) −9
(B) −7
(C) 6
(D) 8
For this type of question, it is often possible to exclude some pairs of answer choices. In this question, the product must be positive, so the only possible products are (−9)(−7) = 63 and (6)(8) = 48. The correct answer consists of choices A (−9) and B (−7).
Which of the following integers are multiples of both 2 and 3?
Indicate all such integers.
(A) 8
(B) 9
(C) 12
(D) 18
(E) 21
(F) 36
You can first identify the multiples of 2, which are 8, 12, 18 and 36, and then among the multiples of 2
identify the multiples of 3, which are 12, 18 and 36. Alternatively, if you realize that every number that is
a multiple of 2 and 3 is also a multiple of 6, you can check which choices are multiples of 6. The correct
answer consists of choices C (12), D (18) and F (36).
Each employee of a certain company is in either Department X or Department Y, and there are more
than twice as many employees in Department X as in Department Y. The average (arithmetic mean)
salary is $25,000 for the employees in Department X and is $35,000 for the employees in Department Y.
Which of the following amounts could be the average salary for all of the employees in the company?
Indicate all such amounts.
(A) $26,000
(B) $28,000
(C) $29,000
(D) $30,000
(E) $31,000
(F) $32,000
(G) $34,000
One strategy for answering this kind of question is to find the least and/or greatest possible value.
Clearly the average salary is between $25,000 and $35,000, and all of the answer choices are in this
interval. Since you are told that there are more employees with the lower average salary, the average
salary of all employees must be less than the average of $25,000 and $35,000, which is $30,000. If there
were exactly twice as many employees in Department X as in Department Y, then the average salary for all employees would be, to the nearest dollar, the following weighted mean:
(2(25,000) + 1(35,000))/3 ≈ $28,333,
where the weight for $25,000 is 2 and the weight for $35,000 is 1. Since there are more than twice as
many employees in Department X as in Department Y, the actual average salary must be even closer to
$25,000 because the weight for $25,000 is greater than 2. This means that $28,333 is the greatest
possible average. Among the choices given, the possible values of the average are therefore $26,000
and $28,000. Thus, the correct answer consists of choices A ($26,000) and B ($28,000).
Intuitively, you might expect that any amount between $25,000 and $28,333 is a possible value of the
average salary. To see that $26,000 is possible, in the weighted mean above, use the respective weights
9 and 1 instead of 2 and 1. To see that $28,000 is possible, use the respective weights 7 and 3.
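Those weight choices are quick to verify (a small sketch of mine, not part of the original explanation):

def weighted_avg(wx, wy, x=25_000, y=35_000):
    # Average salary when Department X has wx employees and Y has wy.
    return (wx * x + wy * y) / (wx + wy)

print(weighted_avg(2, 1))   # 28333.33..., the supremum when X has more than 2x the staff
print(weighted_avg(9, 1))   # 26000.0
print(weighted_avg(7, 3))   # 28000.0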
Which of the following could be the units digit of 57^n, where n is a positive integer?
Indicate all such digits.
(A) 0
(B) 1
(C) 2
(D) 3
(E) 4
(F) 5
(G) 6
(H) 7
(I) 8
(J) 9
The units digit of 57^n is the same as the units digit of 7^n for all positive integers n. To see why this is true for n = 2, compute 57^2 by hand and observe how its units digit results from the units digit of 7^2. Because this is true for every positive integer n, you need to consider only powers of 7. Beginning with 7^1 and proceeding consecutively, the units digits of 7^1, 7^2, 7^3, 7^4 and 7^5 are 7, 9, 3, 1 and 7, respectively. In this sequence, the first digit, 7, appears again, and the pattern of four digits, 7, 9, 3, 1, repeats without end. Hence, these four digits are the only possible units digits of 7^n and therefore of 57^n. The correct answer consists of choices B (1), D (3), H (7) and J (9).
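The repeating cycle is easy to confirm by machine; a two-line Python check (my own addition, not part of the original explanation):

print(sorted({pow(7, n, 10) for n in range(1, 101)}))   # [1, 3, 7, 9]
print([pow(57, n, 10) for n in range(1, 9)])            # 7, 9, 3, 1, 7, 9, 3, 1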
{"url":"http://www.docstoc.com/docs/146259854/Multiple-choice-Questions-%E2%80%94-Select-One-or-More-Answer-Choices","timestamp":"2014-04-17T21:34:37Z","content_type":null,"content_length":"61315","record_id":"<urn:uuid:4d081056-2145-44c6-b5e1-9c89f8f9a56c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Plato Center Algebra 2 Tutor
Find a Plato Center Algebra 2 Tutor
...I have also utilized the program to create company financials and track ordering trends. Microsoft Outlook is an email client that has grown to become a personal information manager. It can
work as a stand alone application or it can be linked with a network.
39 Subjects: including algebra 2, reading, English, calculus
...I already have a mathematician's intuitions, and I know so many ways to push students to greater understanding. I recently received teacher training at a commercial tutoring center. They are
experts at teaching study skills for the long-term; I have been applying these methods for the past few months, with dramatic results.
21 Subjects: including algebra 2, chemistry, calculus, statistics
...I continue to write as a hobby and have a particular interest in writing reviews, essays, poetry and song lyrics. Personally, I believe that the process of learning to write well can not help
but improve one's critical thinking and analytical skills, useful tools for any educational pursuit or c...
17 Subjects: including algebra 2, reading, writing, English
...Regarding textbooks: as a Greek tutor, I worked out of Hansen and Quinn's "Greek: an Intensive Course." For Latin, I am familiar with many texts typically used in high schools, including: Latin
for Americans, Jenney's, Ecce Romani, Cambridge, and Oxford. I am deeply passionate about the Classic...
20 Subjects: including algebra 2, reading, writing, English
...I am a retired computer systems professional with an undergraduate degree in Mathematics and a Masters in Computer Science. I have also completed certification testing for an Illinois State
teaching certificate in Mathematics for grades 6 – 12. I have extensive knowledge and experience with com...
14 Subjects: including algebra 2, geometry, algebra 1, trigonometry | {"url":"http://www.purplemath.com/plato_center_il_algebra_2_tutors.php","timestamp":"2014-04-20T08:50:55Z","content_type":null,"content_length":"24137","record_id":"<urn:uuid:2852cb1d-5a95-44c8-83e0-42b256e8666d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00164-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Convert the Base of an Exponent with Logarithms
Introduction to the Logarithm in Mathematics
The logarithm is the inverse operation of exponentiation.
Let’s define “log[b](b^x) = x” for non-zero real numbers ‘b’ and ‘x’. The image here shows how this is usually printed.
We retain the term “base” for logarithms, just as it was for exponents.
In the coin-toss experiment, 2^10 = 1024; so log[2](1024) = 10.
One trivial result is that "log[b](b) = 1" for all non-zero values of 'b', since "b = b^1".
‘e’ is Euler’s Number and Napier’s Constant for Natural Logarithms
Say hello to our little friend ‘e’, a favourite of the famous mathematicians Euler and Napier.
The constant ‘e’ is approximately 2.71828…; it is a non-rational number that is the base for natural logarithms.
Mathematicians find ‘e’ and natural logarithms so useful that they reserve the notation “ln(x) = log[e](x)”.
As the image shows, “ln(e)” is defined as the “definite integral from 1 to ‘e’ of dx/x”. As well, since “ln(e) = log[e](e)”, 1 must be the value of that definite integral.
The Reason to use a Logarithm to Convert an Exponent’s Base
At the outset, my problem was to solve for ‘x’ in the equation “2^m = 10^x”, where ‘m’ is a known positive integer and ‘x’ is a positive real.
Thanks to logarithms, this is equivalent to solving “log[2](2^m) = log[10](10^x)”.
The General Rule for Converting Logarithms
The general rule for converting a logarithm from base ‘b’ to natural logarithms in base ‘e’ is “log[b](y) = ln(y)/ln(b)”.
We will use this rule in a later section.
The General Rule for the Logarithm of an Exponential Term
In general, "log[b](x^m) = m*log[b](x)".
Therefore “ln(2^m) = m*ln(2)”, which we will need in the next section.
Deriving the Exponents’ Conversion from Base-2 to Base-10
Let’s step through the process for my original problem.
1. “2^m = 10^x”, to solve for ‘x’.
2. “m = log[2](2^m)” on the left side.
3. “x = log[10](10^x)” on the right side.
4. Apply “log[b](y) = ln(y)/ln(b)” to each side.
5. “log[2](2^m) = ln(2^m)/ln(2)” on the left.
6. “log[10](10^x) = ln(10^x)/ln(10)” on the right.
7. Obviously “log[10](10^x) = x”, to be substituted soon…
8. But “2^m = 10^x” in /#1/, so we substitute “2^m” for “10^x” in /#6/ as follows…
9. “x = ln(2^m)/ln(10)” by lines /7/ and /8/.
10. “x = m*ln(2)/ln(10)”, by the rule for the logarithm of an exponential.
A bit of work on a calculator gives the conversion factor, "ln(2)/ln(10) = 0.69314718055994530941723212145818 / 2.3025850929940456840179914546844 = 0.30102999566398119521373889472446" to a large
degree of accuracy.
I’m likely to use 0.301 as the conversion factor.
Let’s check my original problem, which was “2^10 = 1024 = 10^x”. My conversion is “x = 10 * 0.301 = 3.01″.
On the calculator, “10^3.01 = 1023.2929922807541309662751748199″, which is fairly close to 1024.
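The same conversion takes one line in code (a small sketch of mine, not from the article):

import math

def base2_to_base10_exponent(m):
    # Solve 2**m == 10**x for x, i.e. x = m * ln(2) / ln(10).
    return m * math.log(2) / math.log(10)

x = base2_to_base10_exponent(10)
print(x)        # 3.0102999566398116
print(10 ** x)  # ~1024.0, even closer than the 0.301 shortcut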
© Copyright 2012 Mike DeHaan, All Rights Reserved. Written for Decoded Science.
{"url":"http://www.decodedscience.com/how-to-convert-the-base-of-an-exponent-with-logarithms/18524/2","timestamp":"2014-04-19T17:02:10Z","content_type":null,"content_length":"66923","record_id":"<urn:uuid:c79a4574-0f3b-4a68-879e-65ee77e7c765>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Decision-Theoretic Bidding Based on Learned Density Models in Simultaneous, Interacting Auctions
Auctions are becoming an increasingly popular method for transacting business, especially over the Internet. This article presents a general approach to building autonomous bidding agents to bid in
multiple simultaneous auctions for interacting goods. A core component of our approach learns a model of the empirical price dynamics based on past data and uses the model to analytically calculate,
to the greatest extent possible, optimal bids. We introduce a new and general boosting-based algorithm for conditional density estimation problems of this kind, i.e., supervised learning problems in
which the goal is to estimate the entire conditional distribution of the real-valued label. This approach is fully implemented as ATTac-2001, a top-scoring agent in the second Trading Agent
Competition (TAC-01). We present experiments demonstrating the effectiveness of our boosting-based price predictor relative to several reasonable alternatives.
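A compact runnable sketch of the paper's core bidding rule may help before diving into the extracted text below: it implements the sampled marginal-utility bid of Equation 8 / Table 1 on the camera-and-flash example of Section 2.3. This is my illustration, not the authors' code; the value table and price weights come from that example, while the function names and the brute-force opt are assumptions:

import random
from itertools import product

# Toy valuation from Section 2.3: good 0 = flash, good 1 = camera.
VALUES = {(0, 0): 0, (1, 0): 10, (0, 1): 50, (1, 1): 100}

def value(h):
    return VALUES[tuple(min(x, 1) for x in h)]          # duplicate units add nothing

def profit(G, H, Y):
    cost = sum(g * y for g, y in zip(G, Y) if g)        # skip g=0 to avoid 0*inf
    return value([g + h for g, h in zip(G, H)]) - cost

def opt(H, Y):
    # Brute-force stand-in for the paper's opt(H, Y) procedure (toy-sized).
    return max(product((0, 1), repeat=len(H)), key=lambda G: profit(G, H, Y))

def marginal_utility_bid(i, H, samples):
    # Equation 8 / Table 1: average profit difference with vs. without good i.
    diffs = []
    for Y in samples:
        Y = list(Y)
        Y[i] = float("inf")                             # no further units of good i
        H_w = list(H)
        H_w[i] += 1                                     # holdings if we win good i
        diffs.append(profit(opt(H_w, Y), H_w, Y) - profit(opt(H, Y), H, Y))
    return sum(diffs) / len(diffs)

random.seed(0)
camera_prices = random.choices([40, 70, 95], weights=[1, 2, 1], k=10_000)
samples = [(0.0, c) for c in camera_prices]             # the flash's own price is unused
print(marginal_utility_bid(0, [0, 0], samples))         # ~30, matching Section 2.3

With the 25/50/25 price distribution the expected bid is exactly 0.25·50 + 0.5·30 + 0.25·10 = 30, and the sampled average converges there.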
Extracted Text
Journal of ArtificialIntelligence Research 19 {2003} 209-242Submitted 12/02;
published 9/03
Decision-Theoretic Bidding Based on Learned Density Models in Simultaneous,
Peter Stonepstone@cs.utexas.edu
Dept. of ComputerSciences, The University of Texasat Austin
1 University Station C0500, Austin, Texas 78712-1188 USA
Robert E. Schapire schapire@cs.princeton.edu Department of Computer Science,
Princeton University
35 Olden Street, Princeton, NJ 08544 USA
Michael L. Littmanmlittman@cs.rutgers.edu
Dept.of Computer Science, Rutgers University Piscataway, NJ 08854-8019 USA
Janos A. Csirik janos@pobox.com D. E. Shaw & Co.
120 W 45thSt, New York, NY 10036 USA
David McAllester mcallester@tti-chicago.edu Toyota TechnologicalInstituteat
1427 East 60th Street, Chicago IL, 60637 USA
Abstract Auctions are becoming an increasinglypopular method for transacting
business, espe- cially over the Internet. Thisarticle presents a general
approach to buildingautonomous
bidding agents to bid in multiple simultaneous auctions for interacting
goods. A core
component of ourapproach learns a model of the empiricalpricedynamics based
on past
data and uses the model to analytically calculate, to the greatest extent
possible, optimal
bids. We introduce a new and general boosting-based algorithm for conditionaldensity
estimation problems of this kind,i.e., supervisedlearning problems in whichthe
goal is to
estimate the entireconditional distribution of the real-valuedlabel. This
approach is fully implemented as ATT ac-2001, a top-scoring agent in the
second Trading Agent Competition
{TAC-01}. We present experimentsdemonstrating the effectiveness of our boosting-based
price predictor relative to several reasonable alternatives.
Auctions are anincreasingly popular method for transacting business, especially
over the
Internet.In an auction for a single good, it isstraightforward to create
automated bidding strategies|an agent could keep biddinguntil reaching a
target reserve price, or itcould
monitor the auction and place awinning bid just before the closing time
{knownas sniping}.
When bidding for multiple interacting goods in simultaneousauctions, on
the other
hand, agents mustbe able to reason about uncertainty and make complex value
ments. For example, an agent bidding on one's behalf in separate auctions
for a camera and flash may end up buying the flash andthen not being able
to find an affordablecamera.
Alternatively, if bidding forthe same good in several auctions, it maypurchase
two flashes
when only one was needed.
fl2003AI Access Foundation and Morgan KaufmannPublishers. All rights reserved.Stone,
Schapire, Littman, Csirik, & McAllester
This article makesthree main contributions. The first contribution is a
general ap-
proachto buildingautonomous biddingagentsto bid in multiple simultaneous
auctions for interacting goods. Westart with the observation that the key
challenge in auctions is the
prediction of eventual prices of goods: withcomplete knowledge of eventual
prices, there are direct methods for determining the optimalbids to place.
Our guiding principle is to havethe agent model itsuncertainty in eventual
prices and, tothe greatest extent possible,
analyticallycalculate optimal bids.
To attack theprice prediction problem, we propose a machine-learning approach:
examplesof previous auctions and the prices paid in them,then use machine-learning
odsto predict these prices based on available features in the auction. Moreover,forour
strategy, weneeded to be able to model the uncertainty associatedwith predicted
in other words, we needed to be able to sample from a predicteddistribution
of prices
given the currentstate of the game. This can be viewed asa conditional density
estimation problem, that is, a supervised learningproblem in which the goal
is to estimate the entire
distribution of a real-valued label given a description of currentconditions,
typically in the
formof a feature vector. The second main contribution of this article is
a new algorithm for solving such general problems based onboosting {Freund
& Schapire, 1997; Schapire&
Singer, 1999}.
Thethird contribution of this article is a complete descriptionof a prototype
tation of ourapproach in the form of ATT ac-2001, a top-scoring agent
1 in the second Trading
Agent Competition{TAC-01}that was held in Tampa Bay, FL on October 14, 2001
{Well- man, Greenwald, Stone, & Wurman, 2003a}. The TACdomain was the main
motivation for the innovations reported here.ATT ac-2001 builds on top ofATT
ac-2000 {Stone, Littman, Singh, & Kearns, 2001}, the top-scoring agentat
TAC-00, but introduces a fundamentally
new approach to creating autonomousbiddingagents.
We present detailsof ATT ac-2001 as an instantiation ofits underlying principles
that we believe have applications in a wide variety of bidding situations.
ATT ac-2001 uses a predic-
tive,data-driven approach to biddingbased on expected marginal values of
all available
goods. In this article, we present empirical results demonstrating the robustnessand
tiveness of ATT ac-2001's adaptive strategy. We also report on ATT ac-2001's
performance at
TAC-01 and TAC-02 and reflect on some of the key issues raised during the
The remainder of the article isorganized as follows. In Section 2, wepresent
general approach to biddingfor multiple interacting goods in simultaneous
auctions. In
Section3, we summarize TAC, the substrate domain for our work. Section 4
describes our boosting-basedprice predictor. InSection 5, we give the details
ofATT ac-2001. In Section 6, we present empirical results including asummary
of ATT ac-2001's performancein TAC-
01, controlledexperiments isolating the successful aspects ofATT ac-2001,
and controlled experiments illustrating someof the lessons learned during
the competition.A discussion
andsummary of related workis provided in Sections 7 and 8. 2. General Approach
In a wide variety of decision-theoreticsettings, it is useful to be able
to evaluate hypothetical
situations.In computer chess, for example, a static board evaluator is used
to heuristically1. Top-scoringby one metric, and second place by another.
210Decision-TheoreticBidding with Learned Density Models
measure which player is ahead and by how much in a given board situation.The
is similar in auction domains,and our bidding agent ATT ac-2001uses a situation
analogous to the static board evaluator, whichestimates the agent's expectedprofit
in a
hypothetical futuresituation. This \profit predictor" has a widevariety
of uses in the agent. For example, to determine the value of an item, the
agent compares thepredicted profit
assuming the item is alreadyowned to the predicted profit assuming that
theitem is not
Given prices for goods, one can oftencompute a set of purchases and an allocationthat
maximizes profit.
2 Similarly, if closing prices are known, they can be treated as fixed,
and optimal bids can be computed {bid high foranything you want to buy}.
So,one natural
profit predictor is simply tocalculate the profit of optimal purchases underfixed
prices. {Thepredicted prices can, of course, be differentin different situations,
e.g., previous closing prices can be relevantto predicting future closing
prices.} A more sophisticated approach to profitprediction is to construct
a model of the prob- ability distribution over possiblefuture prices and
to place bids that maximizeexpe cte d
profit. Anapproximate solution to this difficult optimizationproblem can
be created by
stochastically sampling possible prices and computing a profit prediction
as above for each sampled price. A sampling-based schemefor profit prediction
is important for modeling
uncertainty and the value of gaining information, i.e., reducingthe price
Section2.1 formalizes this latter approach within asimplifiedsequential
auction model. This abstraction illustrates some of thedecision-making issues
in our full sampling-based approach presented in Section 2.2.The full setting
that our approach addresses isconsid-
erablymore complex than the abstractmodel, but our simplifying assumptions
allow usto
focus on a core challenge of thefull scenario. Our guiding principle is
to make decision-
theoretically optimal decisions given profit predictions for hypothetical
2.1 SimplifiedAbstraction
In the simple model, thereare n items to be auctioned off insequence {first
item 0, then
item1, etc.}. The bidder must place a bidr
for each itemi, and after each bid, a closing pricey
is chosen for the corresponding item from a distribution specific to the
item. If the
bid matches or exceeds the closing price,r
025 y i
, the bidder holds itemi, h
= 1.Otherwise,
the bidder does not hold theitem, h
= 0.The bidder's utility v{H} is a function of its
final vector of holdings H = {h 0
; : : : ; h n0001
} and its costis a function of the holdings and
the vector of closing prices, H001Y . We willformalize the problem of optimal
bid selection
and develop a seriesof approximations to make the problem solvable.2. The
problem iscomputationally difficult in general, buthas beensolved effectively
in the non-trivial TAC setting {Greenwald & Boyan, 2001; Stone et al., 2001}.
3.An alternative approach would be toabstractly calculate the Bayes-Nash
equilibrium{Harsanyi, 1968}
for the game and playthe optimal strategy. We dismissed thisapproach because
of its intractability in realistically complex situations, including TAC.
Furthermore, even if we wereable to approximate the
equilibrium strategy, it is reasonable to assume that our opponents would
not play optimal strategies. Thus, we could gain additional advantageby tuning
our approach to our opponents'actual behavior as
observed in theearlier rounds, which is essentially the strategywe adopted.
211Stone, Schapire, Littman, Csirik, & McAllester
2.1.1 ExactValue
What is the valueof the auction, that is, the bidder'sexpected profit {utility
minus cost} for biddingoptimally for the rest of theauction? If a bidder
knows this value, it can make
its next bid to be one that maximizes its expected profit.The value is a
function of the bidder's current holdings Hand the current item to bid on,i.
It can be expressed as value{i; H } = max r
: : :max
{v{G+ H } 000 G 001Y }; {1}
wherethe components of G are the new holdingsas a result of additional winnings
g j
025 y
j . Note that H onlyhas non-zero entries for items that havealready been
{8j025 i; H
= 0} and G only has non-zero entriesfor items that have yet to be sold {8j
< i; G
= 0}. Note also thatG and Y are fully specified whenthe g
and y j
{forj 025 i} are boundsequentially by the expectation and maximization
operators. The
idea here is that the bidsr
through r n0001
are chosen tomaximize value in the context of the possibleclosing prices
y j
Equation 1 is closelyrelated to the equations defining the value of a finite-horizon
tiallyobservable Markov decisionprocess {Papadimitriou & Tsitsiklis, 1987}
or astochastic
satisfiability expression{Littman, Ma jercik, & Pitassi, 2001}.Like these
other problems,
thesequential auction problem is computationally intractable for sufficiently
general repre- sentations of v{001} {specifically, linear functions of theholdings
are not expressive enough to achieve intractability while arbitrarynonlinear
functions are}.
2.1.2Approximate Value by Reordering There are three ma jor sources of intractability
in Equation 1|the alternation of themaxi-
mizationand expectation operators{allowing decisions to be conditioned on
an exponential
number of possible setsof holdings}, the large number of maximizations {forcing
an ex-
ponential numberof decisions to be considered}, and the large number of
{resultingin sums over an exponential numberof variable settings}.
We attack the problem of interleaved operators by moving all but the first
of themaxi-
mizations inside the expectations,resulting in an expression that approximates
the value:
v alue-est{i; H} = max
i E
i+1 : : : E
: : :max
{v{G +H } 000 G 001 Y}: {2}
Because the choicesfor bids r
through r
n0001 appear more deeply nested than the bindings for the closing prices
through y
, they cease to be bidsaltogether, and instead represent
decisions as to whether to purchase goods atgiven prices. Let G = opt{H;
i; Y } be a vector representing the optimal number of goods to purchase at
the prices specified bythe vector
Y giventhe current holdings H startingfrom auction i. Conceptually,this
can be computed
by evaluating
opt{H; i; Y} = argmax
{v{G +H } 000 H 001Y }: {3}
Thus,Equation 2 can be written:
value-est{i; H } = max r
;:::;y n0001
;i + 1; Y } + H 0
} 000 opt{H
; i + 1;Y } 001 Y } {4} 212Decision-TheoreticBidding with Learned Density
where H
is identicalto H except the i-th componentreflects whether item i is won|r
025 y
Note that there is afurther approximation that can be made bycomputing the
prices {as pointvalues} before solving the optimizationproblem. This approach
corresponds to further swapping the expectations towards the core of the
value-est{i; H }
ev = max
i {v{opt{H
; i + 1; E Y
} + H
} 000 opt{H 0
;i + 1;E
} 001 E Y
} {5}
= E[y
;: : : ; y
n0001 ], the vector of expected costs ofthe goods. In the remainder of
the article, we refer to methods thatuse this further approximation from
Equation 5 as expected value approaches for reasonsthat will be come apparent
The technique of swapping maximization and expectation operators was previously
used by Hauskrecht {1997} to generate a bound for solving partially observableMarkov
processes.The decrease of uncertainty when decisions aremade makes this
anupper bound on the true valueof the auction: value-est 025value. The
tightness of the ap- proximations in Equations 2 and 5 dependson the true
distributions of the expected prices. For example, if the prices were known
in advance with certainty, thenboth approximations
are exact.
2.1.3 Approximate Bidding
Given a vector of costsY , the optimization problem opt{H; i; Y } in Equation
4 is stillNP-
hard {assuming the representation of the utility function v{001} issufficiently
complex}. For
many representations of v{001}, the optimization problem can be castas
an integer linear pro-
gram and approximated by using the fractional relaxation insteadof the exact
problem.This is precisely the approach we haveadopted in ATT ac {Stone et
al., 2001}. 2.1.4 Approximation via Sampling Even assuming that opt{H;i;
Y } can be solved in unit time, aliteral interpretation of Equa-
tion 4 says we'll need to solve this optimization problemfor an exponential
number of cost vectors {or even more if the probabilitydistributions Pr{y
j } are continuous}. Kearns,Man-
sour, and Ng {1999} showed that values of partially observableMarkov decisionprocesses
could beestimated accurately by sampling tra jectoriesinstead of exactly
computing sums.
Littman et al. {2001} did the same for stochastic satisfiability expressions.
idea to Equation 4 leads to the following algorithm.
1. Generate a setS of vectors of closing costsY according to the product
distribution Pr{y
}002 001 001 001 002 Pr{y
}. 2. For each of these samples,calculate opt{H
; i + 1; Y } as definedabove and average the
results, resulting in the approximation value-est
{i; H } = max
r i
Y 2S
{v{opt{H 0
; i + 1; Y} + H
}000 opt{H
; i + 1; Y }001Y }=jS j:{6}
This expression converges to value-est with increasing sample size. A remaining
challenge in evaluatingEquation 6 is computing the real-valuedbid r
that maximizes the value. Notethat we want to buy item iprecisely at those
closing prices for 213Stone, Schapire, Littman, Csirik, & McAllester
which the value of having the item {minus its cost}exceeds the value of
not having the item; this maximizes profit. Thus,to make a positive profit,
we arewilling to pay up to, but not more than, the difference in valueof
having the item and not having the item. Formally , let H bethe vector of
current holdings andH
be the holdings modified to
reflect winning itemi. Let G
{Y } = opt{H
; i+1; Y}, the optimal set of purchases assuming item i was won, and G{Y
} = opt{H;i+1; Y } the optimal set of purchases assuming otherwise
{exceptin cases of ambiguity, we write simply G
and G forG
{Y } andG{Y} respectively}. We want to select r i
to achieve the equivalence
Y 2S
+H } 000 G
w 001 Y }=jSj 000 y
Y 2S
{v{G +H } 000 G 001 Y}=jS j: {7} Setting
Y 2S {[v{G
w + H } 000 G w
001 Y ] 000[v{G + H }000 G 001 Y ]}=jS j: {8}
achievesthe equivalence desired in Equation 7,as can be verified by substitution,
and therefore biddingthe average differencebetween holding and not holding
the itemmaximizes
the value.
2.2 The FullApproach
Leveraging from the precedinganalysis, we define our sampling-based approachto
prediction in general simultaneous, multi-unit auctions for interacting
goods. In this sce-
nario, let there be n simultaneous, multi-unitauctions for interacting goods
0 ; : : : ; a
The auctions might close at different times and these times arenot, in general,
known in
advance to the bidders. When an auction closes,let us assume that the m
units available
are distributed irrevocably tothe m highest bidders, who each need to pay
the price bid
bythe mth highest bidder. This scenario correspondsto an mth price ascending
auction. 5
Note that the same bidder mayplace multiple bids in an auction, and therebypotentially
win multiple units.We assume that after the auction closes, thebidders will
no longer have
anyopportunity to acquire additional copies of thegoods sold in that auction
{i.e., there
is no aftermarket}. Our approach is based upon fiveassumptions. For G =
; : : : ; g n0001
} 2IN
, letv{G} 2 IR represent the value derived bythe agent if it owns g
units of the commodity beingsold in
auction a
i . Note that v isindependentof the costs of the commodities.Note furtherthat
this representation allows for interacting goods of all kinds,including
complementarity and
The assumptions of our approach are as follows:
1. Closing prices are somewhat, butonly somewhat, predictable. That is,
given aset
of input features X ,for each auction a
i , there exists a sampling rule that outputsa4. Note that thestrategy for
choosing r
i in Equation 8 does not exploit the factthat the sample S contains
only a finite set of possibilities fory
, which might make it more robust to inaccuracies in the sampling. 5. For
large enough mit is practically the same as the more efficient m + 1st auction.
Weuse the mth
price model because that is what is used in TAC'shotel auctions.
6. Goodsare considered complementary if their value as a package is greater
than the sum of theirindividual
values; goodsare consideredsubstitutable if their value as a package isless
than the sum of their
214Decision-TheoreticBidding with Learned Density Models
closing price y
according to a probability distribution of predicted closing prices for
i .
2. Given a vector ofholdings H = {h
0 ; : : : ; h
} where h
i 2IN represents the quantity of the commodity being sold in auctiona
that are already owned by the agent, and
given a vector of fixed closing prices Y = {y
; : : : ; y n0001
}, thereexists a tractable
procedure opt{H; Y } to determine the optimal setof purchases {g
; : : : ; g
} where
g i
2 IN representsthe number of goods to be purchasedin auction i such that
v{opt{H; Y } + H} 000 opt{H; Y} 001 Y 025 v{G + H } 000 G001 Y
for all G2 IN
.This procedure corresponds to the optimizationproblem opt{H; i; Y } in
Equation 3.
3. Anindividual agent's bids do not have anappreciable effect on the economy
{large population assumption}.
4.The agent is free to change existing bidsin auctions that have not yet
closed. 5. Future decisions are made in thepresence of complete price information.
sumption corresponds to the operatorreordering approximation from the previous
While these assumptions are notall true in general, they can be reasonable
enough approx-
imations to be the basis for an effective strategy.
By Assumption 3,the price predictor can generate predicted prices prior
to considering
one's bids. Thus,we can sample from these distributionsto produce complete
sets of closing prices of all goods.
For each good under consideration, weassume that it is the next one to close.If
different auction closes first, we can then revise our bids later {Assumption4}.
Thus, we
would like tobid exactly the good's expe cte dmarginal utility to us. Thatis,
we bid the
differencebetween the expected utilities attainable with and without the
good. Tocompute
these expectations, we simply average the utilities of having and not havingthe
good under
differentprice samples as in Equation 8. Thisstrategy rests on Assumption
5 in that we assume that biddingthe good's current expected marginal utility
cannot adversely affect our future actions, for instance by impacting our
future space of possible bids.Note that as
time proceeds, the pricedistributions change in response to the observed
price tra jectories,
thuscausing the agent to continually revise itsbids.
Table 1 shows pseudo-code for the entire algorithm. A fully detaileddescription
of an
instantiationof thisapproach is given in Section 5. 2.3 Example
Considera camera and a flash with interacting values to an agent as shown
in Table 2.
Further, consider that theagent estimates that the camera will sell for
$40with probability
25045, $70 with probability 50045, and $95 with probability 25045.Consider
the question of
what the agentshould bid for the flash {in auctiona
}. Thedecision pertaining to the
camerawould be made via a similar analysis. 215Stone, Schapire, Littman,
Csirik, & McAllester
* Let H = {h 0
; : : : ; h n0001
} be the agent's current holdings in each of then auctions.
* Fori= 0 to n 000 1{assume auction i is next to close}: { total-diff =
{ counter = 0
{As time permits:
003For each auction a
; j 6=i, generate a predicted price sampley
. LetY =
0 ; : : : ; y
; 1; y i+1
; : : : ; y n0001
003 Let H
w = {h
; : : : ; h
; h
+ 1; h
i+1 ; : : : ; h
}, the vector of holdings ifthe agent
wins a unit in auctiona
003Compute G
= opt{H
; Y },the optimalset of purchases if the agent wins a unit in
auction a
. Note that no additionalunitsof the good will be purchased, since the i-th
component of Yis 1.
003 ComputeG = opt{H; Y },the optimalset of purchases if the agent never
any additional units in theauction a
and prices areset to Y .
003 diff= [v{G
+ H } 000 G
001 Y ] 000[v{G + H }000 G 001 Y ]
003total-diff = total-diff + diff 003 counter = counter+ 1
{ r = total-diff=counter
{ Bidr in auction a
.Table 1:The decision-theoretic algorithm for bidding in simultaneous, multi-unit,
ing auctions.utilitycamer a alone$50flash alone10both100neither0T able 2:
Thetable of values for all combination ofcamera and flash in our example.
First,the agent samples from the distributionof possible camera prices.
When the price of the camera {sold in auctiona
} is $70 in thesample:
* H = {0; 0}; H
= {1; 0}; Y ={1; 70}
* G
= opt{H w
; Y } is the bestset of purchases the agent can make with the flash, and
assuming the camera costs $70.In this case, the only two options arebuying
camera or not. Buyingthe camera yields a profit of 100000 70 = 30. Not
buying the camera yields a profit of 10000 0 = 10. Thus, G w
= {0; 1},and [v{G
w + H } 000 G w
001 Y ] = v{1; 1} 000{0; 1} 001 {1; 70} = 100000 70.
* Similarly G = {0; 0} {since if the flash is not owned, buying the camera
yields a profit of 50 000 70 = 00020, and notbuying it yields a profit
of 0 0000 = 0} and [v{G+ H } 000 G 001 Y] = 0.
216Decision-TheoreticBidding with Learned Density Models
* val = 30000 0= 30.
Similarly, when the camera ispredicted to cost $40, val = 6000010 = 50;
andwhen the camera is predicted to cost $95, val= 10 000 0 = 10. Thus,we
expect that 50045 of the camera price samples will suggest a flash value
of $30, while 25045 willlead to a value of $50 and the other
25045 willlead to a value of $10. Thus,the agent willbid :5 00230 + :25
002 50 + :25 002 10 = $30
forthe flash.
Notice that in this analysisof what to bid for the flash, the actual closingprice
the flash is irrelevant. The proper bid depends only on the predicted price
of the camera.
To determine the proper bid for the camera, asimilar analysis would be done
using the predicted price distributionof the flash. 3. TAC
We instantiated our approach as an entry in the second Trading Agent Competition, as described in this section. Building on the success of TAC-00, held in July 2000 (Wellman, Wurman, O'Malley, Bangera, Lin, Reeves, & Walsh, 2001), TAC-01 included 19 agents from 9 countries (Wellman et al., 2003a). A key feature of TAC is that it required autonomous bidding agents to buy and sell multiple interacting goods in auctions of different types. It is designed as a benchmark problem in the complex and rapidly advancing domain of e-marketplaces, motivating researchers to apply unique approaches to a common task. By providing a clear-cut objective function, TAC also allows the competitors to focus their attention on the computational and game-theoretic aspects of the problem and leave aside the modeling and model validation issues that invariably loom large in real applications of automated agents to auctions (see Rothkopf & Harstad, 1994). Another feature of TAC is that it provides an academic forum for open comparison of agent bidding strategies in a complex scenario, as opposed to other complex scenarios, such as trading in real stock markets, in which practitioners are (understandably) reluctant to share their strategies.
A TAC game instance pits eight autonomous bidding agents against one another. Each TAC agent is a simulated travel agent with eight clients, each of whom would like to travel from TACtown to Tampa and back again during a 5-day period. Each client is characterized by a random set of preferences for the possible arrival and departure dates, hotel rooms, and entertainment tickets. To satisfy a client, an agent must construct a travel package for that client by purchasing airline tickets to and from TACtown and securing hotel reservations; it is possible to obtain additional bonuses by providing entertainment tickets as well. A TAC agent's score in a game instance is the difference between the sum of its clients' utilities for the packages they receive and the agent's total expenditure. We provide selected details about the game next; for full details on the design and mechanisms of the TAC server and TAC game, see http://www.sics.se/tac.

TAC agents buy flights, hotel rooms and entertainment tickets through auctions run from the TAC server at the University of Michigan. Each game instance lasts 12 minutes and includes a total of 28 auctions of 3 different types.
Flights (8 auctions): There is a separate auction for each type of airline ticket: to Tampa (inflights) on days 1–4 and from Tampa (outflights) on days 2–5. There is an unlimited supply of airline tickets, and every 24–32 seconds their ask price changes by from −$10 to $x. x increases linearly over the course of a game from 10 to y, where y ∈ [10, 90] is chosen uniformly at random for each auction and is unknown to the bidders. In all cases, tickets are priced between $150 and $800. When the server receives a bid at or above the ask price, the transaction is cleared immediately at the ask price and no resale is allowed.

Hotel Rooms (8): There are two different types of hotel rooms, the Tampa Towers (TT) and the Shoreline Shanties (SS), each of which has 16 rooms available on days 1–4. The rooms are sold in a 16th-price ascending (English) auction, meaning that for each of the 8 types of hotel rooms, the 16 highest bidders get the rooms at the 16th highest price. For example, if there are 15 bids for TT on day 2 at $300, 2 bids at $150, and any number of lower bids, the rooms are sold for $150 to the 15 high bidders plus one of the $150 bidders (earliest received bid). The ask price is the current 16th-highest bid and transactions clear only when the auction closes. Thus, agents have no knowledge of, for example, the current highest bid. New bids must be higher than the current ask price. No bid withdrawal or resale is allowed, though the price of bids may be lowered provided the agent does not reduce the number of rooms it would win were the auction to close. One randomly chosen hotel auction closes at each of minutes 4–11 of the 12-minute game. Ask prices are changed only on the minute.
Entertainment Tickets (12): Alligator wrestling, amusement park, and museum tickets are each sold for days 1–4 in continuous double auctions. Here, agents can buy and sell tickets, with transactions clearing immediately when one agent places a buy bid at a price at least as high as another agent's sell price. Unlike the other auction types, in which the goods are sold from a centralized stock, each agent starts with a (skewed) random endowment of entertainment tickets. The prices sent to agents are the bid-ask spreads, i.e., the highest current bid price and the lowest current ask price (due to immediate clears, the ask price is always greater than the bid price). In this case, bid withdrawal and ticket resale are both permitted. Each agent gets blocks of 4 tickets of 2 types, 2 tickets of another 2 types, and no tickets of the other 8 types.
In addition to unpredictable market prices, other sources of variability from game instance to game instance are the client profiles assigned to the agents and the random initial allotment of entertainment tickets. Each TAC agent has eight clients with randomly assigned travel preferences. Clients have parameters for ideal arrival day, IAD (1–4); ideal departure day, IDD (2–5); hotel premium, HP ($50–$150); and entertainment values, EV ($0–$200), for each type of entertainment ticket.
The utility obtained by a client is determined by the travel package that it is given in combination with its preferences. To obtain a non-zero utility, the client must be assigned a feasible travel package consisting of an inflight on some arrival day AD, an outflight on a departure day DD, and hotel rooms of the same type (TT or SS) for the days in between (days d such that AD ≤ d < DD). At most one entertainment ticket of each type can be assigned, and no more than one on each day. Given a feasible package, the client's utility is defined as

1000 − travelPenalty + hotelBonus + funBonus

where:

* travelPenalty = 100(|AD − IAD| + |DD − IDD|)
* hotelBonus = HP if the client is in the TT, 0 otherwise.
* funBonus = sum of EVs for assigned entertainment tickets.

A TAC agent's score is the sum of its clients' utilities in the optimal allocation of its goods (computed by the TAC server) minus its expenditures. The client preferences, allocations, and resulting utilities from a sample game are shown in Tables 3 and 4.

Client  IAD    IDD    HP   AW   AP   MU
1       Day 2  Day 5   73  175   34   24
2       Day 1  Day 3  125  113  124   57
3       Day 4  Day 5   73  157   12  177
4       Day 1  Day 2  102   50   67   49
5       Day 1  Day 3   75   12  135  110
6       Day 2  Day 4   86  197    8   59
7       Day 1  Day 5   90   56  197  162
8       Day 1  Day 3   50   79   92  136

Table 3: ATTac-2001's client preferences from an actual game. AW, AP, and MU are the EVs for alligator wrestling, amusement park, and museum, respectively.

Client  AD     DD     Hotel  Ent'ment       Utility
1       Day 2  Day 5  SS     AW4            1175
2       Day 1  Day 2  TT     AW1            1138
3       Day 3  Day 5  SS     MU3, AW4       1234
4       Day 1  Day 2  TT     None           1102
5       Day 1  Day 2  TT     AP1            1110
6       Day 2  Day 3  TT     AW2            1183
7       Day 1  Day 5  SS     AP2, AW3, MU4  1415
8       Day 1  Day 2  TT     MU1            1086

Table 4: ATTac-2001's client allocations and utilities from the same actual game as that in Table 3. Client 1's "AW4" under "Ent'ment" indicates an alligator wrestling ticket on day 4.
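As a quick sanity check of the utility formula, the small sketch below (our own code, with values copied from Tables 3 and 4) recomputes client 7's utility:

    def client_utility(iad, idd, hp, ev, ad, dd, hotel, tickets):
        # Utility formula from the text; arguments mirror the client parameters.
        travel_penalty = 100 * (abs(ad - iad) + abs(dd - idd))
        hotel_bonus = hp if hotel == "TT" else 0
        fun_bonus = sum(ev[t] for t in tickets)
        return 1000 - travel_penalty + hotel_bonus + fun_bonus

    # Client 7: preferences IAD Day 1, IDD Day 5, HP 90, EVs AW 56, AP 197,
    # MU 162; allocated Day 1 to Day 5 in SS with AP, AW, and MU tickets.
    print(client_utility(1, 5, 90, {"AW": 56, "AP": 197, "MU": 162},
                         1, 5, "SS", ["AP", "AW", "MU"]))  # 1415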
The rules of TAC-01 are largely identical to those of TAC-00, with three important exceptions:

1. In TAC-00, flight prices did not tend to increase;
2. In TAC-00, hotel auctions usually all closed at the end of the game;
3. In TAC-00, entertainment tickets were distributed uniformly to all agents.

While relatively minor on the surface, these changes significantly enriched the strategic complexity of the game. Stone and Greenwald (2003) detail agent strategies from TAC-00.
TAC-01 was organized as a series of four competition phases, culminating with the semifinals and finals on October 14, 2001 at the EC-01 conference in Tampa, Florida. First, the qualifying round, consisting of about 270 games per agent, served to select the 16 agents that would participate in the semifinals. Second, the seeding round, consisting of about 315 games per agent, was used to divide these agents into two groups of eight. After the semifinals on the morning of the 14th, consisting of 11 games in each group, four teams from each group were selected to compete in the finals during that same afternoon. The finals are summarized in Section 6.
TAC is not designed to be fully realistic in the sense that an agent from TAC is not immediately deployable in the real world. For one thing, it is unrealistic to assume that an agent would have complete, reliable access to all clients' utility functions (or even that the client would!); typically, some sort of preference elicitation procedure would be required (e.g. Boutilier, 2002). For another, the auction mechanisms are somewhat contrived for the purposes of creating an interesting, yet relatively simple game. However, each is representative of a class of auctions that is used in the real world. And it is not difficult to imagine a future in which agents do need to bid in decentralized, related, yet varying auctions for similarly complex packages of goods.
4. Hotel Price Prediction
As discussed earlier, a central part of our strategy depends on the ability to predict prices, particularly hotel prices, at various points in the game. To do this as accurately as possible, we used machine-learning techniques that would examine the hotel prices actually paid in previous games to predict prices in future games. This section discusses this part of our strategy in detail, including a new boosting-based algorithm for conditional density estimation.
There is bound to be considerable uncertainty regarding hotel prices since these depend on many unknown factors, such as the time at which the hotel room will close, who the other agents are, what kind of clients have been assigned to each agent, etc. Thus, exactly predicting the price of a hotel room is hopeless. Instead, we regard the closing price as a random variable that we need to estimate, conditional on our current state of knowledge (i.e., number of minutes remaining in the game, ask price of each hotel, flight prices, etc.). We might then attempt to predict this variable's conditional expected value. However, our strategy requires that we not only predict the expected value, but that we also be able to estimate the entire conditional distribution so that we can sample hotel prices.
To set this up as a learning problem, we gathered a set of training examples from previously played games. We defined a set of features for describing each example that together are meant to comprise a snap-shot of all the relevant information available at the time each prediction is made. All of the features we used are real valued; a couple of the features can have a special value ⊥ indicating "value unknown." We used the following basic features:
* The number of minutes remaining in the game.
* The price of each hotel room, i.e., the current ask price for rooms that have not closed or the actual selling price for rooms that have closed.
* The closing time of each hotel room. Note that this feature is defined even for rooms that have not yet closed, as explained below.
* The prices of each of the flights.

To this basic list, we added a number of redundant variations, which we thought might help the learning algorithm:
* The closing price of hotel rooms that have closed (or ⊥ if the room has not yet closed).
* The current ask price of hotel rooms that have not closed (or ⊥ if the room has already closed).
* The closing time of each hotel room minus the closing time of the room whose price we are trying to predict.
* The number of minutes from the current time until each hotel room closes.
During the seeding rounds, it was impossible to know during play who our opponents were, although this information was available at the end of each game, and therefore during training. During the semifinals and finals, we did know the identities of all our competitors. Therefore, in preparation for the semifinals and finals, we added the following features:

* The number of players playing (ordinarily eight, but sometimes fewer, for instance if one or more players crashed).
* A bit for each player indicating whether or not that player participated in this game.
We trained specialized predictors for predicting the price of each type of hotel room. In other words, one predictor was specialized for predicting only the price of TT on day 1, another for predicting SS on day 2, etc. This would seem to require eight separate predictors. However, the tournament game is naturally symmetric about its middle in the sense that we can create an equivalent game by exchanging the hotel rooms on days 1 and 2 with those on days 4 and 3 (respectively), and by exchanging the inbound flights on days 1, 2, 3 and 4 with the outbound flights on days 5, 4, 3 and 2 (respectively). Thus, with appropriate transformations, the outer days (1 and 4) can be treated equivalently, and likewise for the inner days (2 and 3), reducing the number of specialized predictors by half. We also created specialized predictors for predicting in the first minute after flight prices had been quoted but prior to receiving any hotel price information. Thus, a total of eight specialized predictors were built (for each combination of TT versus SS, inner versus outer day, and first minute versus not first minute).
We trained our predictors to predict not the actual closing price of each room per se, but rather how much the price would increase, i.e., the difference between the closing price and the current price. We thought that this might be an easier quantity to predict, and, because our predictor never outputs a negative number when trained on nonnegative data, this approach also ensures that we never predict a closing price below the current bid.

From each of the previously played games, we were able to extract many examples. Specifically, for each minute of the game and for each room that had not yet closed, we extracted the values of all of the features described above at that moment in the game, plus the actual closing price of the room (which we are trying to predict).
Note that during training, there is no problem extracting the closing times of all of the rooms. During the actual play of a game, we do not know the closing times of rooms that have not yet closed. However, we do know the exact probability distribution for closing times of all of the rooms that have not yet closed. Therefore, to sample a vector of hotel prices, we can first sample according to this distribution over closing times, and then use our predictor to sample hotel prices using these sampled closing times.
4.1 The Learning Algorithm

Having described how we set up the learning problem, we are now ready to describe the learning algorithm that we used. Briefly, we solved this learning problem by first reducing it to a multiclass, multi-label classification problem (or alternatively a multiple logistic regression problem), and then applying boosting techniques developed by Schapire and Singer (1999, 2000) combined with a modification of boosting algorithms for logistic regression proposed by Collins, Schapire and Singer (2002). The result is a new machine-learning algorithm for solving conditional density estimation problems, described in detail in the remainder of this section. Table 5 shows pseudo-code for the entire algorithm.

Abstractly, we are given pairs (x_1, y_1), ..., (x_m, y_m) where each x_i belongs to a space X and each y_i is in R. In our case, the x_i's are the auction-specific feature vectors described above; for some n, X ⊆ (R ∪ {⊥})^n. Each target quantity y_i is the difference between closing price and current price. Given a new x, our goal is to estimate the conditional distribution of y given x.
We proceed with the working assumption that all training and test examples (x, y) are i.i.d. (i.e., drawn independently from identical distributions). Although this assumption is false in our case (since the agents, including ours, are changing over time), it seems like a reasonable approximation that greatly reduces the difficulty of the learning task.
Our first step is to reduce the estimation problem to a classification problem by breaking the range of the y_i's into bins:

[b_0, b_1), [b_1, b_2), ..., [b_k, b_{k+1}]

for some breakpoints b_0 < b_1 < ··· < b_k ≤ b_{k+1}, where for our problem we chose k = 50.(7) The endpoints b_0 and b_{k+1} are chosen to be the smallest and largest y values observed during training. We choose the remaining breakpoints b_1, ..., b_k so that roughly an equal number of training labels y_i fall into each bin. (More technically, breakpoints are chosen so that the entropy of the distribution of bin frequencies is maximized.)
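One simple way to realize this step approximately is with quantile cuts, which give roughly equal-frequency bins; the sketch below (our own code, using NumPy) is an approximation of the entropy-maximizing, dynamic-programming choice actually used, not a reimplementation of it:

    import numpy as np

    def breakpoints(y, k=50):
        # b[0] = min(y), b[k+1] = max(y); interior cuts put roughly equal
        # numbers of training labels into each of the k+1 bins.
        return np.quantile(np.asarray(y), np.linspace(0.0, 1.0, k + 2))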
For each of the breakpoints b_j (j = 1, ..., k), our learning algorithm attempts to estimate the probability that a new y (given x) will be at least b_j. Given such estimates p_j for each b_j, we can then estimate the probability that y is in the bin [b_j, b_{j+1}) by p_j − p_{j+1} (and we can then use a constant density within each bin). We have thus reduced the problem to one of estimating multiple conditional Bernoulli variables corresponding to the events y ≥ b_j.

7. We did not experiment with varying k, but expect that the algorithm is not sensitive to it for sufficiently large values of k.
Input: (x_1, y_1), ..., (x_m, y_m) where x_i ∈ X, y_i ∈ R; positive integers k and T.

Compute breakpoints b_0 < b_1 < ··· < b_{k+1}:
• b_0 = min_i y_i
• b_{k+1} = max_i y_i
• b_1, ..., b_k chosen to minimize Σ_{j=0}^{k} q_j ln q_j, where q_0, ..., q_k are the fractions of the y_i's in [b_0, b_1), [b_1, b_2), ..., [b_k, b_{k+1}] (using dynamic programming)

For t = 1, ..., T:
• compute weights W_t(i, j) = 1 / (1 + e^{s_j(y_i) f_t(x_i, j)}), where s_j(y) is as in Eq. (10)
• use W_t to obtain a base function h_t : X × {1, ..., k} → R minimizing
  Σ_{i=1}^{m} Σ_{j=1}^{k} W_t(i, j) e^{−s_j(y_i) h_t(x_i, j)}
  over all decision rules h_t considered. (The decision rules can take any form. In our work, we use "decision stumps," or simple thresholds on one of the features.)

Output sampling rule:
• let f = Σ_{t=1}^{T} h_t
• let f′ = (f_max + f_min)/2, where
  f_max(x, j) = max{f(x, j′) : j ≤ j′ ≤ k}
  f_min(x, j) = min{f(x, j′) : 1 ≤ j′ ≤ j}
• to sample, given x ∈ X:
  – let p_j = 1 / (1 + e^{−f′(x, j)})
  – let p_0 = 1, p_{k+1} = 0
  – choose j ∈ {0, ..., k} randomly with probability p_j − p_{j+1}
  – choose y uniformly at random from [b_j, b_{j+1})
  – output y

Table 5: The boosting-based algorithm for conditional density estimation.
For estimating each of these Bernoulli probabilities, we use a logistic regression algorithm based on boosting techniques as described by Collins et al. (2002).
Our learning algorithm constructs a real-valued function f : X × {1, ..., k} → R with the interpretation that

1 / (1 + exp(−f(x, j)))    (9)

is our estimate of the probability that y ≥ b_j, given x. The negative log likelihood of the conditional Bernoulli variable corresponding to y being above or below b_j is then

ln(1 + e^{−s_j(y) f(x, j)})

where

s_j(y) = +1 if y ≥ b_j, −1 if y < b_j.    (10)
We attempt to minimize this quantity for all training examples (x_i, y_i) and all breakpoints b_j. Specifically, we try to find a function f minimizing

Σ_{i=1}^{m} Σ_{j=1}^{k} ln(1 + e^{−s_j(y_i) f(x_i, j)}).    (11)
We use a boosting-like algorithm described by Collins et al. (2002) for minimizing objective functions of exactly this form. Specifically, we build the function f in rounds. On each round t, we add a new base function h_t : X × {1, ..., k} → R. Let

f_t = Σ_{t′=1}^{t−1} h_{t′}

be the accumulating sum. Following Collins, Schapire and Singer, to construct each h_t, we first let

W_t(i, j) = 1 / (1 + e^{s_j(y_i) f_t(x_i, j)})
be a set of weights on example-breakpoint pairs. We then choose h_t to minimize

Σ_{i=1}^{m} Σ_{j=1}^{k} W_t(i, j) e^{−s_j(y_i) h_t(x_i, j)}    (12)

over some space of "simple" base functions h_t. For this work, we considered all "decision stumps" h of the form

h(x, j) = A_j if φ(x) ≥ θ;  B_j if φ(x) < θ;  C_j if φ(x) = ⊥

where φ(·) is one of the features described above, and θ, A_j, B_j and C_j are all real numbers. In other words, such an h simply compares one feature φ to a threshold θ and returns a vector of numbers h(x, ·) that depends only on whether φ(x) is unknown (⊥), or above or below θ. Schapire and Singer (2000) show how to efficiently search for the best such h over all possible choices of φ, θ, A_j, B_j and C_j. (We also employed their technique for "smoothing" A_j, B_j and C_j.)
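One round of this procedure can be sketched in a few lines of NumPy; the array layout (F for the accumulated scores f_t(x_i, j), S for the signs s_j(y_i), NaN for the unknown value) and the stump representation are our own illustrative choices, not the authors' implementation:

    import numpy as np

    def weights(F, S):
        # W_t(i, j) = 1 / (1 + exp(s_j(y_i) * f_t(x_i, j)));
        # F and S are both m-by-k arrays.
        return 1.0 / (1.0 + np.exp(S * F))

    def stump_predict(X, feature, theta, A, B, C):
        # h(x, j) = A_j if phi(x) >= theta, B_j if phi(x) < theta,
        # C_j if phi(x) is unknown (encoded here as NaN).
        phi = X[:, feature]
        unknown = np.isnan(phi)[:, None]
        return np.where(unknown, C, np.where(phi[:, None] >= theta, A, B))

    def stump_criterion(H, W, S):
        # Sum over i, j of W_t(i,j) * exp(-s_j(y_i) * h(x_i, j)), as in Eq. (12).
        return np.sum(W * np.exp(-S * H))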
When computed by this sort of iterative procedure, Collins et al. (2002) prove the asymptotic convergence of f_t to the minimum of the objective function in Equation (11) over all linear combinations of the base functions. For this problem, we fixed the number of rounds to T = 300. Let f = f_{T+1} be the final predictor.
As noted above, given a new feature vector x, we compute p_j as in Equation (9) to be our estimate for the probability that y ≥ b_j, and we let p_0 = 1 and p_{k+1} = 0. For this to make sense, we need p_1 ≥ p_2 ≥ ··· ≥ p_k, or equivalently, f(x, 1) ≥ f(x, 2) ≥ ··· ≥ f(x, k), a condition that may not hold for the learned function f. To force this condition, we replace f by a reasonable (albeit heuristic) approximation f′ that is nonincreasing in j, namely, f′ = (f_max + f_min)/2, where f_max (respectively, f_min) is the pointwise minimum (respectively, maximum) of all nonincreasing functions g that everywhere upper bound f (respectively, lower bound f).
With this modified function f′, we can compute modified probabilities p_j. To sample a single point according to the estimated distribution on R associated with x, we choose the bin [b_j, b_{j+1}) with probability p_j − p_{j+1}, and then select a point from this bin uniformly at random. Expected value according to this distribution is easily computed as

Σ_{j=0}^{k} (p_j − p_{j+1}) (b_j + b_{j+1}) / 2.
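The output stage can likewise be sketched compactly; the running max/min below computes the f_max/f_min repair described above (our own code, with b the breakpoint array and rng a NumPy random generator, e.g., np.random.default_rng()):

    import numpy as np

    def sample_price(f_x, b, rng):
        # f_x: f(x, j) for j = 1..k; b: breakpoints b_0..b_{k+1}.
        f_x = np.asarray(f_x, dtype=float)
        f_hi = np.maximum.accumulate(f_x[::-1])[::-1]   # max over j' >= j
        f_lo = np.minimum.accumulate(f_x)               # min over j' <= j
        f_mono = (f_hi + f_lo) / 2.0                    # nonincreasing in j
        p = np.concatenate(([1.0], 1.0 / (1.0 + np.exp(-f_mono)), [0.0]))
        bin_probs = p[:-1] - p[1:]                      # P[y in [b_j, b_{j+1})]
        j = rng.choice(len(bin_probs), p=bin_probs / bin_probs.sum())
        return rng.uniform(b[j], b[j + 1])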
Although we present results using this algorithm in the trading agent context, we did not test its performance on more general learning problems, nor did we compare it to other methods for conditional density estimation, such as those studied by Stone (1994). This clearly should be an area for future research.
5. ATTac-2001

Having described hotel price prediction in detail, we now present the remaining details of ATTac-2001's algorithm. We begin with a brief description of the goods allocator, which is used as a subroutine throughout the algorithm. We then present the algorithm in a top-down fashion.
5.1 Starting Point

A core subproblem for TAC agents is the allocation problem: finding the most valuable allocation of goods to clients, G*, given a set of owned goods and prices for all other goods. The allocation problem corresponds to finding opt(H, i, Y) in Equation 3. We denote the value of G* (i.e., the score one would attain with G*) as v(G*). The general allocation problem is NP-complete, as it is equivalent to the set-packing problem (Garey & Johnson, 1979). However, it can be solved tractably in TAC via integer linear programming (Stone et al., 2001). The solution to the integer linear program is a value-maximizing allocation of owned resources to clients along with a list of resources that need to be purchased. Using the linear programming package "LPsolve", ATTac-2001 is usually able to find the globally optimal solution in under 0.01 seconds on a 650 MHz Pentium II. However, since integer linear programming is an NP-complete problem, some inputs can lead to a great deal of search over the integrality constraints, and therefore significantly longer solution times. When only v(G*) is needed (as opposed to G* itself), the upper bound produced by LPsolve prior to the search over the integrality constraints, known as the LP relaxation, can be used as an estimate. The LP relaxation can always be generated very quickly.

Note that this is not by any means the only possible formulation of the allocation problem. Greenwald and Boyan (2001) studied a fast, heuristic search variant and found that it performed extremely well on a collection of large, random allocation problems. Stone et al. (2001) used a randomized greedy strategy as a fallback for the cases in which the linear program took too long to solve.
Table 6 shows a high-level overview of ATTac-2001. The italicized portions are described in the remainder of this section.

When the first flight quotes are posted:
• Compute G* with current holdings and expected prices
• Buy the flights in G* for which the expected cost of postponing commitment exceeds the expected benefit of postponing commitment

Starting 1 minute before each hotel close:
• Compute G* with current holdings and expected prices
• Buy the flights in G* for which the expected cost of postponing commitment exceeds the expected benefit of postponing commitment (30 seconds)
• Bid hotel room expected marginal values given holdings, new flights, and expected hotel purchases (30 seconds)

Last minute: Buy remaining flights as needed by G*

In parallel (continuously): Buy/sell entertainment tickets based on their expected values

Table 6: ATTac-2001's high-level algorithm. The italicized portions are described in the remainder of this section.
5.3 Cost of Additional Rooms

Our hotel price predictor described in Section 4 assumes that ATTac-2001's bids do not affect the ultimate closing price (Assumption 3 from Section 2). This assumption holds in a large economy. However, in TAC, each hotel auction involved 8 agents competing for 16 hotel rooms. Therefore, the actions of each agent had an appreciable effect on the clearing price: the more hotel rooms an agent attempted to purchase, the higher the clearing price would be, all other things being equal. This effect needed to be taken into account when solving the basic allocation problem.

The simplified model used by ATTac-2001 assumed that the nth highest bid in a hotel auction was roughly proportional to c^{−n} (over the appropriate range of n) for some c ≥ 1. Thus, if the predictor gave a price of p, ATTac-2001 only used this price for purchasing two hotel rooms (the "fair" share of a single agent of the 16 rooms), and adjusted prices for other quantities of rooms by using c.
For example, ATTac-2001 would consider the cost of obtaining 4 rooms to be 4pc². One or two rooms each cost p, but 3 each cost pc, 4 each cost pc², 5 each cost pc³, etc. So in total, 2 rooms cost 2p, while 4 cost 4pc². The reasoning behind this procedure is that if ATTac-2001 buys two rooms (its fair share given that there are 16 rooms and 8 agents), then the 16th highest bid (ATTac-2001's 2 bids in addition to 14 others) sets the price. But if ATTac-2001 bids on an additional unit, the previous 15th highest bid becomes the price-setting bid, and the price for all rooms sold goes up from p to pc.
The constant c was calculated from the data of several hundred games during the seeding round. In each hotel auction, the ratio of the 14th and 18th highest bids (reflecting the most relevant range of n) was taken as an estimate of c⁴, and the (geometric) mean of the resulting estimates was taken to obtain c = 1.35. The LP allocator takes these price estimates into account when computing G* by assigning higher costs to larger purchase volumes, thus tending to spread out ATTac-2001's demand over the different hotel auctions.
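As a toy illustration (our own code) of this quantity adjustment with the estimated c = 1.35, i rooms are costed at i·p·c^max(0, i−2) in total:

    def adjusted_cost(i, p, c=1.35):
        # Total assumed cost of acquiring i rooms at predicted unit price p.
        return i * p * c ** max(0, i - 2)

    print([round(adjusted_cost(i, 100), 1) for i in range(1, 6)])
    # [100.0, 200.0, 405.0, 729.0, 1230.2]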
In ATTac-2001, a few heuristics were applied to the above procedure to improve stability and to avoid pathological behavior: prices below $1 were replaced by $1 in estimating c; c = 1 was used for purchasing fewer than two hotel rooms; and hotel rooms were divided into early-closing and late-closing (and cheap and expensive) ones, with the c values from the corresponding subsets of auctions of the seeding rounds used in each case.
5.4 Hotel Expected Marginal Values

Using the hotel price prediction module described in Section 4, coupled with a model of its own effect on the economy, ATTac-2001 is equipped to determine its bids for hotel rooms. Every minute, for each hotel auction that is still open, ATTac-2001 assumes that auction will close next and computes the marginal value of that hotel room given the predicted closing prices of the other rooms. If the auction does not close next, then it assumes that it will have a chance to revise its bids. Since these predicted prices are represented as distributions of possible future prices, ATTac-2001 samples from these distributions and averages the marginal values to obtain an expected marginal value. Using the full minute between closing times for computation (or 30 seconds if there are still flights to buy, too), ATTac-2001 divides the available time among the different open hotel auctions and generates as many price samples as possible for each hotel room. In the end, ATTac-2001 bids the expected marginal values for each of the rooms. The algorithm is described precisely and with explanation in Table 7.
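In outline, the Table 7 computation looks like the sketch below, where sample_scenario() and best_score() are placeholders standing in for the price-sampling and LP-relaxation machinery described in this section, not actual interfaces from the agent:

    def expected_marginal_values(n_copies, n_samples, sample_scenario, best_score):
        # Average V_i - V_{i-1} over sampled scenarios (as in Table 7).
        sums = [0.0] * n_copies
        for _ in range(n_samples):
            prices = sample_scenario()           # closing order + closing prices
            V = [best_score(i, prices) for i in range(n_copies + 1)]
            for i in range(n_copies):
                sums[i] += V[i + 1] - V[i]       # value of the (i+1)-st copy
        return [s / n_samples for s in sums]

    def hotel_bids(values, cur_price):
        # Bid for one room at each copy's value that is at least the current
        # price; monotonically decreasing values keep the bids consistent.
        return [v for v in values if v >= cur_price]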
One additional complication regarding hotel auctions is that, contrary to one of our assumptions in Section 2.2 (Assumption 4), bids are not fully retractable: they can only be changed to $1 above the current ask price. In the case that there are current bids for goods that ATTac-2001 no longer wants that are less than $1 above the current ask price, it may be advantageous to refrain from changing the bid in the hope that the ask price will surpass them: that is, the current bid may have a higher expected value than the best possible new bid. To address this issue, ATTac-2001 samples from the learned price distribution to find the average expected values of the current and potential bids, and only enters a new bid in the case that the potential bid is better.

5.5 Expected Cost/Benefit of Postponing Commitment
ATTac-2001 makes flight bidding decisions based on a cost-benefit analysis: in particular, ATTac-2001 computes the incremental cost of postponing bidding for a particular flight versus the value of delaying commitment. In this section, we describe the determination of the cost of postponing the bid.

Due to difficulties that compounded with more sophisticated approaches, ATTac-2001 used the following very simple model for estimating the price of a flight ticket at a given future time. It is evident from the formulation that, given y, the expected price increase from time 0 to time t was very nearly of the form M t² for some M.
• For each hotel (in order of increasing expected price):
  – Repeat until time bound:
    1. Generate a random hotel closing order (only over open hotels)
    2. Sample closing prices from predicted hotel price distributions
    3. Given these closing prices, compute V_0, V_1, ..., V_n:
       − V_i ≡ v(G*) if owning i of the hotel
       − Estimate v(G*) with LP relaxation
       − Assume that no additional hotel rooms of this type can be bought
       − For other hotels, assume outstanding bids above the sampled price are already owned (i.e., they cannot be withdrawn).
       − Note that V_0 ≤ V_1 ≤ ··· ≤ V_n: the values are monotonically increasing since having more goods cannot be worse in terms of possible allocations.
  – The value of the ith copy of the room is the mean of V_i − V_{i−1} over all the samples.
  – Note further that V_1 − V_0 ≥ V_2 − V_1 ≥ ··· ≥ V_n − V_{n−1}: the value differences are monotonically decreasing since each additional room will be assigned to the client who can derive the most value from it.
  – Bid for one room at the value of the ith copy of the room for all i such that the value is at least as much as the current price. Due to the monotonicity noted in the step above, no matter what the closing price, the desired number of rooms at that price will be purchased.

Table 7: The algorithm for generating bids for hotel rooms.
It was also clear that, as long as the price did not hit the artificial boundaries at $150 and $800, the constant M must depend linearly on y − 10. This linear dependence coefficient was then estimated from several hundred flight price evolutions during the qualifying round. Thus, for this constant m, the expected price increase from time t to time T was m(T² − t²)(y − 10). When a price prediction was needed, this formula was first used with the first and most recent actual price observations to obtain a guess for y, and then this y was used in the formula again to estimate the future price. No change was predicted if the formula yielded a price decrease. This approach suffers from systemic biases of various kinds (mainly due to the fact that the variance of price changes gets relatively smaller over longer periods of time), but was thought to be accurate enough for its use, which was to predict whether or not the ticket can be expected to get significantly more expensive over the next few minutes.
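This model is simple enough to state in a few lines; in the sketch below (our own code), the coefficient M_COEFF is a made-up placeholder, since the fitted value of m is not restated in the text:

    M_COEFF = 1e-5  # hypothetical; stands in for the fitted constant m

    def expected_increase(t0, t1, y):
        # Expected flight price increase from t0 to t1: m*(t1^2 - t0^2)*(y - 10).
        return M_COEFF * (t1 ** 2 - t0 ** 2) * (y - 10)

    def predict_price(p_first, p_now, t_now, t_future):
        # Invert the model on the observed rise to guess y, then extrapolate.
        if t_now <= 0:
            return p_now
        y = (p_now - p_first) / (M_COEFF * t_now ** 2) + 10
        rise = expected_increase(t_now, t_future, y)
        return p_now + max(0.0, rise)  # no change predicted on a forecast decrease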
In practice, during TAC-01, ATTac-2001 started with the flight-lookahead parameter set to 3 (i.e., the cost of postponing is the average of the predicted flight costs 1, 2, and 3 minutes in the future). However, this parameter was changed to 2 by the end of the finals in order to cause ATTac-2001 to delay its flight commitments further.
5.5.1 Expected Benefit of Postponing Commitment

Fundamentally, the benefit of postponing commitments to flights is that additional information about the eventual hotel prices becomes known. Thus, the benefit of postponing commitment is computed by sampling possible future price vectors and determining, on average, how much better the agent could do if it bought a different flight instead of the one in question. If it is optimal to buy the flight in all future scenarios, then there is no value in delaying the commitment and the flight is purchased immediately. However, if there are many scenarios in which the flight is not the best one to get, the purchase is more likely to be delayed.

The algorithm for determining the benefit of postponing commitment is similar to that for determining the marginal value of hotel rooms. It is detailed, with explanation, in Table 8.
• Assume we are considering buying n flights of a given type
• Repeat until time bound:
  1. Generate a random hotel closing order (open hotels)
  2. Sample closing prices from predicted price distributions (open hotels)
  3. Given these closing prices, compute V_0, V_1, ..., V_n:
     − V_i ≡ v(G*) if forced to buy i of the flight
     − Estimate v(G*) with LP relaxation
     − Assume more flights can be bought at the current price
     − Note that V_0 ≥ V_1 ≥ ··· ≥ V_n since it is never worse to retain extra flexibility.
• The value of waiting to buy copy i is the mean of V_{i−1} − V_i over all the samples. If all price samples lead to the conclusion that the ith flight should be bought, V_{i−1} = V_i and there is no benefit to postponing commitment.

Table 8: The algorithm for generating the value of postponing flight commitments.

5.6 Entertainment Expected Values
The core of ATTac-2001's entertainment-ticket-bidding strategy is again a calculation of the expected marginal values of each ticket. For each ticket, ATTac-2001 computes the expected value of having one more and one fewer of the ticket. These calculations give bounds on the bid and ask prices it is willing to post. The actual bid and ask prices are a linear function of time remaining in the game: ATTac-2001 settles for a smaller and smaller profit from ticket transactions as the game goes on. Details of the functions of bid and ask price as a function of game time and ticket value remained unchanged from ATTac-2000 (Stone et al., 2001). Details of the entertainment-ticket expected marginal utility calculations are given in Table 9.
6. Results

This section presents empirical results demonstrating the effectiveness of the ATTac-2001 strategy. First, we summarize its performance in the 2001 and 2002 Trading Agent Competitions (TACs). These summaries provide evidence of the strategy's overall effectiveness, but, due to the small number of games in the competitions, are anecdotal rather than scientifically conclusive. We then present controlled experiments that provide more conclusive evidence of the utility of our decision-theoretic and learning approaches embedded within ATTac-2001.
• Assume n of a given ticket type are currently owned
• Repeat until time bound:
  1. Generate a random hotel closing order (open hotels)
  2. Sample closing prices from predicted price distributions (open hotels)
  3. Given these closing prices, compute V_{n−1}, V_n, V_{n+1}:
     − V_i ≡ v(G*) if owning i of the ticket
     − Estimate v(G*) with LP relaxation
     − Assume no other tickets can be bought or sold
     − Note that V_{n−1} ≤ V_n ≤ V_{n+1} since it is never worse to own extra tickets.
• The value of buying a ticket is the mean of V_{n+1} − V_n over all the samples; the value of selling is the mean of V_n − V_{n−1}.
• Since tickets are considered sequentially, if the determined buy or sell bid leads to a price that would clear according to the current quotes, assume the transaction goes through before computing the values of buying and selling other ticket types.

Table 9: The algorithm for generating the values of entertainment tickets.
6.1 TAC-01 Competition

Of the 19 teams that entered the qualifying round, ATTac-2001 was one of eight agents to make it to the finals on the afternoon of October 14th, 2001. The finals consisted of 24 games among the same eight agents. Right from the beginning, it became clear that livingagents (Fritschi & Dorer, 2002) was the team to beat in the finals. They jumped to an early lead in the first two games, and by eight games into the round, they were more than 135 points per game ahead of their closest competitor (SouthamptonTAC; He & Jennings, 2002). 16 games into the round, they were more than 250 points ahead of their two closest competitors (ATTac-2001 and whitebear).

From that point, ATTac-2001, which was continually retraining its price predictors based on recent games, began making a comeback. By the time the last game was to be played, it was only an average of 22 points per game behind livingagents. It thus needed to beat livingagents by 514 points in the final game to overtake it, well within the margins observed in individual game instances. As the game completed, ATTac-2001's score of 3979 was one of the first scores to be posted by the server. The other agents' scores were reported one by one, until only the livingagents score was left. After agonizing seconds (at least for us), the TAC server posted a final game score of 4626, resulting in a win for livingagents.
After the competition, the TAC team at the University of Michigan conducted a regression analysis of the effects of the client profiles on agent scores. Using data from the seeding rounds, it was determined that agents did better when their clients had:

1. fewer total preferred travel days;
2. higher total entertainment values;
3. a higher ratio of outer days (1 and 4) to inner days (2 and 3) in preferred trip intervals.
Based on these significant measures, the games in the finals could be handicapped based on each agent's aggregate client profiles. Doing so indicated that livingagents' clients were much easier to satisfy than those of ATTac-2001, giving ATTac-2001 the highest handicapped score. The final scores, as well as the handicapped scores, are shown in Table 10. Complete results and affiliations are available from http://tac.eecs.umich.edu.

Agent           Mean   Handicapped score
ATTac-2001      3622   4154
livingagents    3670   4094
whitebear       3513   3931
Urlaub01        3421   3909
Retsina         3352   3812
CaiserSose      3074   3766
SouthamptonTAC  3253   3679
TacsMan         2859   3338

Table 10: Scores during the finals. Each agent played 24 games. Southampton's score was adversely affected by a game in which their agent crashed after buying many flights but no hotels, leading to a loss of over 3000 points. Discarding that game results in an average score of 3531.
6.2 TAC-02 Competition

A year after the TAC-01 competition, ATTac-2001 was re-entered in the TAC-02 competition using the models trained at the end of TAC-01. Specifically, the price predictors were left unchanged throughout (no learning). The seeding round included 19 agents, each playing 440 games over the course of about 2 weeks. ATTac-2001 was the top-scoring agent in this round, as shown in Table 11. Scores in the seeding round were weighted so as to emphasize later results over earlier results: scores on day n of the seeding round were given a weight of n. This practice was designed to encourage experimentation early in the round. The official ranking in the competitions was based on the mean score after ignoring each agent's worst 10 results, so as to allow for occasional program crashes and network problems.
and network problems. On the one hand, it is striking thatATT ac-2001 was
able to finish sostrongly in a field
of agents that hadpresumably improved over the course of theyear. On the
other hand,
most agents were being tuned, for better and for worse, whileATT ac-2001
was consistent throughout. In particular, we are toldthat SouthamptonTAC
experimented withits approach
during the later days ofthe round, perhaps causing it to fall out of the
lead {by weighted
score} in the end.During the 14-game semifinal heat,ATT ac-2001, which was
with its learning capability andretrained over the data from the 2002 seedinground,
6th out of 8 therebyfailing to reach the finals.
There are a number of possible reasons for this sudden failure. One relatively mundane explanation is that the agent had to change computational environments between the seeding rounds and the finals, and there may have been a bug or computational resource constraint introduced. Another possibility is that, due to the small number of games in the semifinals, ATTac-2001 simply got unlucky with respect to clients and the interaction of opponent strategies. However, it is also plausible that the training data from the 2002 qualifying and seeding rounds was less representative of the 2002 finals than was the training data from 2001, and/or that the competing agents improved significantly over the seeding round while ATTac-2001 remained unchanged. The TAC team at the University of Michigan has done a study of the price predictors of several 2002 TAC agents that suggests that the bug hypothesis is most plausible: the ATTac-2001 predictor from 2001 outperforms all other predictors from 2002 on the data from the 2002 semifinals and finals; and one other agent that used the 2002 data did produce good predictions based on that data (Wellman, Reeves, Lochner, & Vorobeychik, 2003b).(8)

Agent           Mean   Weighted, dropped worst 10
ATTac-2001      3050   3131
SouthamptonTAC  3100   3129
UMBCTAC         2980   3118
livingagents    3018   3091
cuhk            2998   3055
Thalis          2952   3000
whitebear       2945   2966
RoxyBot         2738   2855

Table 11: Top 8 scores during the seeding round of TAC-02. Each agent played 440 games, with its worst 10 games ignored when computing the rankings.

8. Indeed, in the TAC-03 competition, ATTac-2001 was entered using the trained models from 2001, and it won the competition, suggesting further that the failure in 2002 was due to a problem with the learned models that were used during the finals in 2002.
6.3 Controlled Experiments

ATTac-2001's success in the TAC-01 competition demonstrates its effectiveness as a complete system. However, since the competing agents differed along several dimensions, the competition results cannot isolate the successful approaches. In this section, we report controlled experiments designed to test the efficacy of ATTac-2001's machine-learning approach to price prediction.

6.3.1 Varying the Predictor

In the first set of experiments, we attempted to determine how the quality of ATTac-2001's hotel price predictions affects its performance. To this end, we devised seven price prediction schemes, varying considerably in sophistication and inspired by approaches taken by other TAC competitors, and incorporated these schemes into our agent. We then played the seven agents against one another repeatedly, with regular retraining as described below.

Following are the seven hotel prediction schemes that we used, in decreasing order of sophistication:
* ATTac-2001_s: This is the "full-strength" agent based on boosting that was used during the tournament. (The s denotes sampling.)

* Cond'lMean_s: This agent samples prices from the empirical distribution of prices from previously played games, conditioned only on the closing time of the hotel room (a subset of the features used by ATTac-2001_s). In other words, it collects all historical hotel prices and breaks them down by the time at which the hotel closed (as well as room type, as usual). The price predictor then simply samples from the collection of prices corresponding to the given closing time.

* SimpleMean_s: This agent samples prices from the empirical distribution of prices from previously played games, without regard to the closing time of the hotel room (but still broken down by room type). It uses a subset of the features used by Cond'lMean_s.

* ATTac-2001_ev, Cond'lMean_ev, SimpleMean_ev: These agents predict in the same way as their corresponding predictors above, but instead of returning a random sample from the estimated distribution of hotel prices, they deterministically return the expected value of the distribution. (The ev denotes expected value, as introduced in Section 2.)

* CurrentBid: This agent uses a very simple predictor that always predicts that the hotel room will close at its current price.

In every case, whenever the price predictor returns a price that is below the current price, we replace it with the current price (since prices cannot go down).
In our experiments, we added as an eighth agent EarlyBidder, inspired by the livingagents agent. EarlyBidder used SimpleMean_ev to predict closing prices, determined an optimal set of purchases, and then placed bids for these goods at sufficiently high prices to ensure that they would be purchased ($1001 for all hotel rooms, just as livingagents did in TAC-01) right after the first flight quotes. It then never revised these bids.

Each of these agents requires training, i.e., data from previously played games. However, we are faced with a sort of "chicken and egg" problem: to run the agents, we need to first train the agents using data from games in which they were involved, but to get this kind of data, we need to first run the agents. To get around this problem, we ran the agents in phases. In Phase I, which consisted of 126 games, we used training data from the seeding, semifinals and finals rounds of TAC-01. In Phase II, lasting 157 games, we retrained the agents once every six hours using all of the data from the seeding, semifinals and finals rounds as well as all of the games played in Phase II. Finally, in Phase III, lasting 622 games, we continued to retrain the agents once every six hours, but now using only data from games played during Phases I and II, and not including data from the seeding, semifinals and finals rounds.

Table 12 shows how the agents performed in each of these phases. Much of what we observe in this table is consistent with our expectations. The more sophisticated boosting-based agents (ATTac-2001_s and ATTac-2001_ev) clearly dominated the agents based on simpler prediction schemes. Moreover, with continued training, these agents improved relative to EarlyBidder.
                 Relative score
Agent            Phase I              Phase II             Phase III
ATTac-2001_ev    105.2 ± 49.5 (2)     131.6 ± 47.7 (2)     166.2 ± 20.8 (1)
ATTac-2001_s      27.8 ± 42.1 (3)      86.1 ± 44.7 (3)     122.3 ± 19.4 (2)
EarlyBidder      140.3 ± 38.6 (1)     152.8 ± 43.4 (1)     117.0 ± 18.0 (3)
SimpleMean_ev    −28.8 ± 45.1 (5)     −53.9 ± 40.1 (5)     −11.5 ± 21.7 (4)
SimpleMean_s     −72.0 ± 47.5 (7)     −71.6 ± 42.8 (6)     −44.1 ± 18.2 (5)
Cond'lMean_ev      8.6 ± 41.2 (4)       3.5 ± 37.5 (4)     −60.1 ± 19.7 (6)
Cond'lMean_s    −147.5 ± 35.6 (8)     −91.4 ± 41.9 (7)     −91.1 ± 17.6 (7)
CurrentBid       −33.7 ± 52.4 (6)    −157.1 ± 54.8 (8)    −198.8 ± 26.0 (8)

Table 12: The average relative scores (± standard deviation) for eight agents in the three phases of our controlled experiment in which the hotel prediction algorithm was varied. The relative score of an agent is its score minus the average score of all agents in that game. The agent's rank within each phase is shown in parentheses.
We also see the performance of the simplest agent, CurrentBid, which does not employ any kind of training, decline significantly relative to the other data-driven agents.
On the other hand, there are some phenomena in this table that were very surprising to us. Most surprising was the failure of sampling to help. Our strategy relies not only on estimating hotel prices, but also on taking samples from the distribution of prices. Yet these results indicate that using the expected hotel price, rather than price samples, consistently performs better. We speculate that this may be because an insufficient number of samples are being used (due to computational limitations), so that the numbers derived from these samples have too high a variance. Another possibility is that the method of using samples to estimate scores consistently overestimates the expected score because it assumes the agent can behave with perfect knowledge for each individual sample, a property of our approximation scheme. Finally, as our algorithm uses sampling at several different points (computing hotel expected values, deciding when to buy flights, pricing entertainment tickets, etc.), it is quite possible that sampling is beneficial for some decisions while detrimental for others. For example, when directly comparing versions of the algorithm with sampling used at only subsets of the decision points, the data suggest that sampling for the hotel decisions is most beneficial, while sampling for the flights and entertainment tickets is neutral at best, and possibly detrimental. This result is not surprising given that the sampling approach is motivated primarily by the task of bidding for hotels.
We were also surprised that Cond'lMean_s and Cond'lMean_ev eventually performed worse than the less sophisticated SimpleMean_s and SimpleMean_ev. One possible explanation is that the simpler model happens to give predictions that are just as good as the more sophisticated model, perhaps because closing time is not terribly informative, or perhaps because the adjustment to price based on current price is more significant. Other things being equal, the simpler model has the advantage that its statistics are based on all of the price data, regardless of closing time, whereas the conditional model makes each prediction based on only an eighth of the data (since there are eight possible closing times, each equally likely).
In addition to agent performance, it is possible to measure the inaccuracy of the eventual predictions, at least for the non-sampling agents. For these agents, we measured the root mean squared error of the predictions made in Phase III. These were: 56.0 for ATTac-2001_ev, 66.6 for SimpleMean_ev, 69.8 for CurrentBid, and 71.3 for Cond'lMean_ev. Thus, we see that the lower the error of the predictions (according to this measure), the higher the score (correlation R = −0.88).

6.3.2 ATTac-2001 vs. EarlyBidder
In a sense, the two agents that finished at the top of the standings in TAC-01 represent opposite ends of a spectrum. The livingagents agent uses a simple open-loop strategy, committing to a set of desired goods right at the beginning of the game, while ATTac-2001 uses a closed-loop, adaptive strategy. The open-loop strategy relies on the other agents to stabilize the economy and create consistent final prices. In particular, if all eight agents are open-loop and place very high bids for the goods they want, many of the prices will skyrocket, evaporating any potential profit. Thus, a set of open-loop agents would tend to get negative scores; the open-loop strategy is a parasite, in a manner of speaking. Table 13 shows the results of running 27 games with 7 copies of the open-loop EarlyBidder and one copy of ATTac-2001. Although EarlyBidder is inspired by livingagents, in actuality it is identical to ATTac-2001 except that it uses SimpleMean_ev and it places all of its flight and hotel bids immediately after the first flight quotes. It bids only for the hotels that appear in G* at that time. All hotel bids are for $1001. In the experiments, one copy of ATTac-2001 is included for comparison. The price predictors are all from Phase I in the preceding experiments. EarlyBidder's high bidding strategy backfires and it ends up overpaying significantly for its goods. As our experiments above indicate, ATTac-2001 may improve even further if it is allowed to train on the games of the on-going experiment as well.

Agent            Score        Utility
ATTac-2001       2431 ± 464   8909 ± 264
EarlyBidder (7)  −488 ± 337   9870 ± 34

Table 13: The results of running ATTac-2001 against 7 copies of EarlyBidder over the course of 27 games. EarlyBidder achieves high utility, but overpays significantly, resulting in low scores.
The open-loop strategy has the advantage of buying a minimal set of goods. That is, it never buys more than it can use. On the other hand, it is susceptible to unexpected prices in that it can get stuck paying arbitrarily high prices for the hotel rooms it has decided to buy.

Notice in Table 13 that the average utility of the EarlyBidder's clients is significantly greater than that of ATTac-2001's clients. Thus, the difference in score is accounted for entirely by the cost of the goods. EarlyBidder ends up paying exorbitant prices, while ATTac-2001 generally steers clear of the more expensive hotels. Its clients' utility suffers, but the cost-savings are well worth it.

Compared to the open-loop strategy, ATTac-2001's strategy is relatively stable against itself. Its main drawback is that, as it changes its decision about what goods it wants and as it may also buy goods to hedge against possible price changes, it can end up getting stuck paying for some goods that are ultimately useless to any of its clients.
Table 14 shows the results of 7 copies of ATTac-2001 playing against each other and one copy of the EarlyBidder. Again, training is from the seeding round and finals of TAC-01: the agents do not adapt during the experiment. Included in this experiment are three variants of ATTac-2001, each with a different flight-lookahead parameter (from the section on "cost of postponing flight commitments"). There were three copies each of the agents with flight-lookahead set to 2 and 3 (ATTac-2001(2) and ATTac-2001(3), respectively), and one ATTac-2001 agent with flight-lookahead set to 4 (ATTac-2001(4)).

Agent           Score       Utility
EarlyBidder     2869 ± 69   10079 ± 55
ATTac-2001(2)   2614 ± 38    9671 ± 32
ATTac-2001(3)   2570 ± 39    9641 ± 32
ATTac-2001(4)   2494 ± 68    9613 ± 55

Table 14: The results of running the EarlyBidder against 7 copies of ATTac-2001 over the course of 197 games. The three different versions of ATTac-2001 had slightly different flight-lookaheads.
From the results in Table 14 it is clear that ATTac-2001 does better when it commits to its flight purchases later in the game (ATTac-2001(2) as opposed to ATTac-2001(4)). In comparison with Table 13, the economy represented here does significantly better overall. That is, having many copies of ATTac-2001 in the economy does not cause them to suffer. However, in this economy, EarlyBidder is able to invade. It gets a significantly higher utility for its clients and only pays slightly more than the ATTac-2001 agents (as computed by utility minus score).(9)

9. We suspect that were the agents allowed to retrain over the course of the experiments, ATTac-2001 would end up improving, as we saw in Phase III of the previous set of experiments. Were this to occur, it is possible that EarlyBidder would no longer be able to invade.
The results in this section suggest that the variance of the closing prices is the largest factor determining the relative effectiveness of the two strategies (assuming nobody else is using the open-loop strategy). We speculate that with large price variances, the closed-loop strategy (ATTac-2001) should do better, but with small price variances, the open-loop strategy could do better.
7. Discussion

The open-loop and closed-loop strategies of the previous section differ in their handling of price fluctuation. A fundamental way of taking price fluctuation into account is to place "safe bids." A very high bid exposes an agent to the danger of buying something at a ridiculously high price. If prices are in fact stable then high bids are safe. But if prices fluctuate, then high bids, such as the bids of the stable-price strategy, are risky. In TAC, hotel rooms are sold in a Vickrey-style nth-price auction. There is a separate auction for each day of each hotel and these auctions are done sequentially. Although the order of the auctions is randomized, and not known to the agent, when placing bids in one of these auctions the agent assumes that auction will close next.
We assumed in the design of our agent that our bids in one auction do not affect prices in other auctions. This is not strictly true, but in a large economy one expects that the bids of a single individual have a limited effect on prices. Furthermore, the price most affected by a bid is the price of the item being bid on; the effect on other auctions seems less direct and perhaps more limited. Assuming bids in one auction do not affect prices in another, the optimal bidding strategy is the standard strategy for a Vickrey auction: the bid for an item should be equal to its utility to the bidder. So, to place a Vickrey-optimal bid, one must be able to estimate the utility of an item. The utility of owning an item is simply the expected final score assuming one owns the item minus the expected final score assuming one does not own the item. So, the problem of computing a Vickrey-optimal bid can be reduced to the problem of predicting final scores for two alternative game situations. We use two score prediction procedures, which we call the stable-price score predictor (corresponding to Equation 5) and the unstable-price score predictor (Equation 4).
andthe unstable-price score predictor {Equation 4}. The Stable-Price Score
Predictor.The stable-price score predictor first estimates the expected prices
in the rest of thegame using whatever information is available in the
givengame situation.It then computes the value achieved by optimal purchases
under the
estimatedprices. In an economy with stable prices,this estimate willbe quite
accurate| if we make the optimal purchases forthe expected price then, if
the prices are nearour
estimates, our performance will also be near the estimated value.
The Unstable-Price Score Predictor. Stable-price score prediction does not take into account the ability of the agent to react to changes in price as the game progresses. Suppose a given room is often cheap but is sometimes expensive. If the agent can first determine the price of the room, and then plan for that price, the agent will do better than guessing the price ahead of time and sticking to the purchases dictated by that price. The unstable-price score predictor uses a model of the distribution of possible prices. It repeatedly samples prices from this distribution, computes the stable-price score prediction under the sampled price, and then takes the average of these stable-price scores over the various price samples. This score prediction algorithm is similar to the algorithm used in Ginsberg's (2001) quite successful computer bridge program, where the score is predicted by sampling the possible hands of the opponent and, for each sample, computing the score of optimal play in the case where all players have complete information (double dummy play). While this has a simple intuitive motivation, it is clearly imperfect. The unstable-price score predictor assumes both that future decisions are made in the presence of complete price information, and that the agent is free to change existing bids in auctions that have not yet closed. Both of these assumptions are only approximately true at best. Ways of compensating for the imperfections in score prediction were described in Section 5.
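To make the structure of the two predictors and the Vickrey bid concrete, here is a minimal Python sketch (ours, not the authors' implementation; names and numbers are made up, and the real stable-price predictor computes the value of optimal purchases, which the toy below replaces with buying every still-needed item):

import random

def stable_score(prices, owned, utility):
    # Toy stand-in for the stable-price predictor: buy every still-needed
    # item at the quoted price (the paper instead optimizes the purchases).
    return utility - sum(p for item, p in prices.items() if item not in owned)

def unstable_score(sample_prices, owned, utility, n=1000):
    # Unstable-price predictor: sample prices, score each sample, average.
    return sum(stable_score(sample_prices(), owned, utility) for _ in range(n)) / n

def vickrey_bid(sample_prices, owned, utility, item):
    # Vickrey-optimal bid = E[final score | own item] - E[final score | don't].
    return (unstable_score(sample_prices, owned | {item}, utility)
            - unstable_score(sample_prices, owned, utility))

# Hypothetical two-good game with a volatile hotel price.
sample = lambda: {'flight': 300, 'hotel': random.choice([50, 400])}
print(vickrey_bid(sample, frozenset(), 1000, 'hotel'))   # about 225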
Buy Now or Decide Later. The trading agent must decide what airline tickets to buy and when to buy them. In deciding whether to buy an airline ticket, the agent can compare the predicted score in the situation where it owns the airline ticket with the predicted score in the situation where it does not own the airline ticket but may buy it later. Airline tickets tend to increase in price, so if the agent knows that a certain ticket is needed it should buy it as soon as possible. But whether or not a given ticket is desirable may depend on the price of hotel rooms, which may become clearer as the game progresses. If airline tickets did not increase in price, as was the case in TAC-00, then they should be bought at the last possible moment (Stone et al., 2001).

To determine whether an airline ticket should be bought now or not, one can compare the predicted score in the situation where one has just bought the ticket at its current price with the predicted score in the situation where the price of the ticket is somewhat higher but it has not yet been bought. It is interesting to note that if one uses the stable-price score predictor for both of these predictions, and the ticket is purchased in the optimal allocation under the current price estimate, then the predicted score for buying the ticket now will always be higher: increasing the price of the ticket can only reduce the score. However, the unstable-price score predictor can yield an advantage for delaying the purchase. This advantage comes from the fact that buying the ticket may be optimal under some prices but not optimal under others. If the ticket has not yet been bought, then the score will be higher for those sampled prices where the ticket should not be bought. This corresponds to the intuition that in certain cases the purchase should be delayed until more information is available.
Our guiding principle in the design of the agent was, to the greatest extent possible, to have the agent analytically calculate optimal actions. A key component of these calculations is the score predictor, based either on a single estimated assignment of prices or on a model of the probability distribution over assignments of prices. Both score predictors, though imperfect, seem useful. Of these two predictors, only the unstable-price predictor can be used to quantitatively estimate the value of postponing a decision until more information is available. The accuracy of price estimation is clearly of central importance. Future work will undoubtedly focus on ways of improving both price modeling and score prediction based on price modeling.
8. Related and Future Work

Although there has been a good deal of research on auction theory, especially from the perspective of auction mechanisms (Klemperer, 1999), studies of autonomous bidding agents and their interactions are relatively few and recent. TAC is one example. FM97.6 is another auction test-bed, which is based on fishmarket auctions (Rodriguez-Aguilar, Martin, Noriega, Garcia, & Sierra, 1998). Automatic bidding agents have also been created in this domain (Gimenez-Funes, Godo, Rodriguez-Aguilar, & Garcia-Calves, 1998). There have been a number of studies of agents bidding for a single good in multiple auctions (Ito, Fukuta, Shintani, & Sycara, 2000; Anthony, Hall, Dang, & Jennings, 2001; Preist, Bartolini, & Phillips, 2001). A notable auction-based competition that was held prior to TAC was the Santa Fe Double Auction Tournament (Rust, Miller, & Palmer, 1992). This auction involved agents competing in a single continuous double auction similar to the TAC entertainment ticket auctions. As analyzed by Tesauro and Das (2001), this tournament was won by a parasite strategy that, like livingagents as described in Section 6.3, relied on other agents to find a stable price and then took advantage of it to gain an advantage. In that case, the advantage was gained by waiting until the last minute to bid, a strategy commonly known as sniping.
TAC-01 was the second iteration of the Trading Agent Competition. The rules of TAC-01 are largely identical to those of TAC-00, with three important exceptions:

1. In TAC-00, flight prices did not tend to increase;
2. In TAC-00, hotel auctions usually all closed at the end of the game;
3. In TAC-00, entertainment tickets were distributed uniformly to all agents.

While minor on the surface, the differences significantly enriched the strategic complexity of the game. In TAC-00, most of the designers discovered that a dominant strategy was to defer all serious bidding to the end of the game. As a result, the focus was on solving the allocation problem, with most agents using a greedy, heuristic approach. Since the hotel auctions closed at the end of the game, timing issues were also important, with significant advantages going to agents that were able to bid in response to last-second price quotes (Stone & Greenwald, 2003). Nonetheless, many techniques developed in 2000 were relevant to the 2001 competition: the agent strategies put forth in TAC-00 were precursors to the second year's field, for instance as pointed out in Section
Predicting hotel clearing prices was perhaps the most interesting aspect of TAC agent strategies in TAC-01, especially in relation to TAC-00, where the last-minute bidding created essentially a sealed-bid auction. As indicated by our experiments described in Section 6.3, there are many possible approaches to this hotel price estimation problem, and the approach chosen can have a significant impact on the agent's performance. Among those observed in TAC-01 are the following (Wellman, Greenwald, Stone, & Wurman, 2002), associated in some cases with the price-predictor variant in our experiments that was motivated by it.

1. Just use the current price quote p_t.
2. Adjust based on historic data. For example, if Δ_t is the average historical difference between clearing price and price at time t, then the predicted clearing price is p_t + Δ_t.
3. Predict by fitting a curve to the sequence of ask prices seen in the current game.
4. Predict based on closing price data for that hotel in past games (SimpleMean, SimpleMean).
5. Same as above, but condition on hotel closing time, recognizing that the sequence will influence the relative prices.
6. Same as above, but condition on full ordering of hotel closings, or which hotels are open or closed at a particular point (Cond'lMean).
7. Learn a mapping from features of the current game (including current prices) to closing prices based on historic data (ATTac-2001, ATTac-2001).
8. Hand-construct rules based on observations about associations between abstract features.
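A sketch of approach 2 in Python (our illustration, with made-up numbers):

# history: (quote at time t, eventual clearing price) pairs from past games.
def predict_closing(quote_now, history):
    # Approach 2 above: p_t + Delta_t, where Delta_t is the average historical
    # gap between the clearing price and the price quoted at the same time t.
    delta_t = sum(close - quote for quote, close in history) / len(history)
    return quote_now + delta_t

print(predict_closing(120, [(100, 180), (90, 130), (110, 160)]))  # about 176.67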
Having demonstrated ATTac-2001's success at bidding in simultaneous auctions for multiple interacting goods in the TAC domain, we extended our approach to apply it to the U.S. Federal Communications Commission (FCC) spectrum auctions domain (Weber, 1997). The FCC holds spectrum auctions to sell radio bandwidth to telecommunications companies. Licenses entitle their owners to use a specified radio spectrum band within a specified geographical area, or market. Typically several licenses are auctioned off, with bidders placing independent bids for each license. The most recent auction brought in over $16 billion. In a detailed simulation of this domain (Csirik, Littman, Singh, & Stone, 2001), we discovered a novel, successful bidding strategy in this domain that allows the bidders to increase their profits significantly over a reasonable default strategy (Reitsma, Stone, Csirik, & Littman, 2002). Our ongoing research agenda includes applying our approach to other similar domains. We particularly expect the boosting approach to price prediction and the decision-theoretic reasoning over price distributions to transfer to other domains. Other candidate domains include electricity auctions, supply chains, and perhaps even travel booking on public e-commerce sites.

Acknowledgments

This work was partially supported by the United States-Israel Binational Science Foundation (BSF), grant number 1999038. Thanks to the TAC team at the University of Michigan for providing the infrastructure and support required to run many of our experiments. Thanks to Ronggang Yu at the University of Texas at Austin for running one of the experiments mentioned in the article. Most of this research was conducted while all of the authors were at AT&T Labs - Research.
References

Anthony, P., Hall, W., Dang, V. D., & Jennings, N. R. (2001). Autonomous agents for participating in multiple on-line auctions. In Proceedings of the IJCAI-2001 Workshop on E-Business and the Intelligent Web, Seattle, WA.

Boutilier, C. (2002). A POMDP formulation of preference elicitation problems. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, pp. 239–246.

Collins, M., Schapire, R. E., & Singer, Y. (2002). Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48(1/2/3).

Csirik, J. A., Littman, M. L., Singh, S., & Stone, P. (2001). FAucS: An FCC spectrum auction simulator for autonomous bidding agents. In Fiege, L., Mühl, G., & Wilhelm, U. (Eds.), Electronic Commerce: Proceedings of the Second International Workshop, pp. 139–151, Heidelberg, Germany. Springer Verlag.

Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139.

Fritschi, C., & Dorer, K. (2002). Agent-oriented software engineering for successful TAC participation. In First International Joint Conference on Autonomous Agents and Multi-Agent Systems, Bologna.

Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman, San Francisco, CA.

Gimenez-Funes, E., Godo, L., Rodriguez-Aguilar, J. A., & Garcia-Calves, P. (1998). Designing bidding strategies for trading agents in electronic auctions. In Proceedings of the Third International Conference on Multi-Agent Systems, pp. 136–143.

Ginsberg, M. L. (2001). GIB: Imperfect information in a computationally challenging game. JAIR, 14, 303–358.

Greenwald, A., & Boyan, J. (2001). Bidding algorithms for simultaneous auctions. In Proceedings of Third ACM Conference on E-Commerce, pp. 115–124, Tampa, FL.

Harsanyi, J. (1967–1968). Games with incomplete information played by Bayesian players. Management Science, 14, 159–182, 320–334, 486–502.

Hauskrecht, M. (1997). Incremental methods for computing bounds in partially observable Markov decision processes. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, pp. 734–739.

He, M., & Jennings, N. R. (2002). SouthamptonTAC: Designing a successful trading agent. In Fifteenth European Conference on Artificial Intelligence, Lyon, France.

Ito, T., Fukuta, N., Shintani, T., & Sycara, K. (2000). BiddingBot: a multiagent support system for cooperative bidding in multiple auctions. In Proceedings of the Fourth International Conference on MultiAgent Systems, pp. 399–400.

Kearns, M., Mansour, Y., & Ng, A. Y. (1999). A sparse sampling algorithm for optimal planning in large Markov decision processes. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI-99), pp. 1324–1331.

Klemperer, P. (1999). Auction theory: A guide to the literature. Journal of Economic Surveys, 13(3), 227–286.

Littman, M. L., Majercik, S. M., & Pitassi, T. (2001). Stochastic Boolean satisfiability. Journal of Automated Reasoning, 27(3), 251–296.

Papadimitriou, C. H., & Tsitsiklis, J. N. (1987). The complexity of Markov decision processes. Mathematics of Operations Research, 12(3), 441–450.

Preist, C., Bartolini, C., & Phillips, I. (2001). Algorithm design for agents which participate in multiple simultaneous auctions. In Agent Mediated Electronic Commerce III (LNAI), pp. 139–154. Springer-Verlag, Berlin.

Reitsma, P. S. A., Stone, P., Csirik, J. A., & Littman, M. L. (2002). Self-enforcing strategic demand reduction. In Agent Mediated Electronic Commerce IV: Designing Mechanisms and Systems, Vol. 2531 of Lecture Notes in Artificial Intelligence, pp. 289–306. Springer Verlag.

Rodriguez-Aguilar, J. A., Martin, F. J., Noriega, P., Garcia, P., & Sierra, C. (1998). Towards a test-bed for trading agents in electronic auction markets. AI Communications, 11(1), 5–19.

Rothkopf, M. H., & Harstad, R. M. (1994). Modeling competitive bidding: A critical essay. Management Science, 40(3), 364–384.

Rust, J., Miller, J., & Palmer, R. (1992). Behavior of trading automata in a computerized double auction market. In Friedman, D., & Rust, J. (Eds.), The Double Auction Market: Institutions, Theories, and Evidence. Addison-Wesley, Redwood City, CA.

Schapire, R. E., & Singer, Y. (1999). Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3), 297–336.

Schapire, R. E., & Singer, Y. (2000). BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3), 135–168.

Stone, C. J. (1994). The use of polynomial splines and their tensor products in multivariate function estimation. The Annals of Statistics, 22(1), 118–184.

Stone, P., & Greenwald, A. (2003). The first international trading agent competition: Autonomous bidding agents. Electronic Commerce Research. To appear.

Stone, P., Littman, M. L., Singh, S., & Kearns, M. (2001). ATTac-2000: An adaptive autonomous bidding agent. Journal of Artificial Intelligence Research, 15, 189–206.

Tesauro, G., & Das, R. (2001). High-performance bidding agents for the continuous double auction. In Third ACM Conference on Electronic Commerce, pp. 206–209.

Weber, R. J. (1997). Making more from less: Strategic demand reduction in the spectrum auctions. Journal of Economics and Management Strategy, 6(3), 529–548.

Wellman, M. P., Greenwald, A., Stone, P., & Wurman, P. R. (2002). The 2001 trading agent competition. In Proceedings of the Fourteenth Innovative Applications of Artificial Intelligence Conference, pp. 935–941.

Wellman, M. P., Greenwald, A., Stone, P., & Wurman, P. R. (2003a). The 2001 trading agent competition. Electronic Markets, 13(1), 4–12.

Wellman, M. P., Reeves, D. M., Lochner, K. M., & Vorobeychik, Y. (2003b). Price prediction in a trading agent competition. Tech. rep., University of Michigan.

Wellman, M. P., Wurman, P. R., O'Malley, K., Bangera, R., Lin, S.-d., Reeves, D., & Walsh, W. E. (2001). A trading agent competition. IEEE Internet Computing, 5(2), | {"url":"http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0jair--00-0----0-10-0---0---0direct-10---4-------0-1l--11-en-50---20-about---00-0-1-00-0--4----0-0-11-10-0utfZz-8-00&a=d&c=jair&cl=CL1.3&d=jair-1200","timestamp":"2014-04-18T08:42:36Z","content_type":null,"content_length":"113353","record_id":"<urn:uuid:31779472-424f-4d28-98a8-3689ca331a93>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Solve the inequality 12 - h ≤ -24.
Solve the equation |x| = -3.
Describe and graph the interval of real numbers for inequality x ≥ 1.
Describe and graph the interval of real numbers for inequality x ≤ 3.
Convert the inequality -2 ≤ x ≤ 5 to interval notation.
Solve the equation |x| = -4.
Arrange the real numbers -2, -4, 5, 6, in ascending order.
What are the end points of the interval [-5, 4]?
Describe and graph the interval of real numbers for inequality x < 8.
Describe and graph the interval of real numbers for inequality x < -5.
Describe and graph the interval of real numbers for inequality x ≥ 3.
Describe and graph the interval of real numbers for inequality -2 < x ≤ 4.
Describe and graph the interval of real numbers for inequality 4 ≤ x ≤ 8.
Describe and graph the interval of real numbers for inequality x ≤ 4.
Graph the number line for the interval [-4, 5).
Graph the number line for the interval (-8, 0].
Convert the inequality -4 ≤ x < 8 to interval notation.
A machined part is to be 20.5 mm wide, with a tolerance of 0.01 mm. What is the greatest possible width that is acceptable?
The radius of a machined part is 2.5 cm, with a tolerance of 0.003 cm. What is the least possible radius that is acceptable?
Convert interval notation [-4, 6) to inequality notation.
Convert interval notation [-3, 0] to inequality notation.
Solve the inequality 4|x| - 8 ≥ 12.
Solve the inequality 15|8x - 7| > 13.
Solve the inequality 18|2x + 3| + 1 ≤ 2.
The length of a machined part is to be 8.5 cm, with a tolerance of 0.04 cm. Express the tolerance limit as an absolute value inequality. (Use the variable n for the actual measure of the part in cm; see the worked sketch after this list.)
Solve the inequality 19 - h ≤ -76.
There are three different types of flowers from which a florist has to select one flower from each type to make a bouquet. The probability of selecting the first flower from each type is 0.78, 0.44 and 0.71. Order the probabilities in ascending order.
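A worked sketch for the tolerance problems above (an addition, not part of the worksheet): a length n with target 8.5 cm and tolerance 0.04 cm is acceptable exactly when |n - 8.5| ≤ 0.04, i.e. 8.46 ≤ n ≤ 8.54, the closed interval [8.46, 8.54]. The same pattern gives a greatest acceptable width of 20.5 + 0.01 = 20.51 mm and a least acceptable radius of 2.5 - 0.003 = 2.497 cm for the two machined-part questions.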
| {"url":"http://www.icoachmath.com/solvedexample/sampleworksheet.aspx?process=/__cstlqvxbefxaxbgefkxkjjbd&.html","timestamp":"2014-04-17T01:39:50Z","content_type":null,"content_length":"68417","record_id":"<urn:uuid:4d79481d-b789-48ba-a537-75fddadabe95>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
El Cajon Algebra 1 Tutor
Find an El Cajon Algebra 1 Tutor
...For the past 6 years, I have been tutoring students of all ages in math, science, and computer science. I have a passion for the sciences and love to help others have a better understanding and
get better grades with what they originally struggled with. I specialize in tutoring mathematics.
37 Subjects: including algebra 1, calculus, geometry, statistics
...Because of my extensive experience as both the teacher and the learner, I am confident that I can present any student with an interesting study plan which integrates motivation that is relevant
with progress in the subject. I look forward to working with you!I have formally taught 2 years of 10th grade math. I have formally taught 2 years of 10th grade math.
14 Subjects: including algebra 1, French, geometry, ESL/ESOL
...Just because you did great in algebra does not at all guarantee that you will do well in geometry. True, you still use concepts you learned in algebra, but there is so much new terminology to
learn and things to visualize. And then there are those pesky proofs!
11 Subjects: including algebra 1, calculus, algebra 2, geometry
...I believe that good study skills are important, and I try to set an example on how to organize an approach to a problem and how to think critically and logically about it in order to succeed in
solving it. I have experience tutoring all levels of math and all types of students. I have tutored h...
11 Subjects: including algebra 1, Spanish, algebra 2, trigonometry
...If you have any questions, please feel free to contact me. I hope to have the opportunity to work together!I believe I am qualified to teach elementary students because I have my B.A. in
Linguistics with coursework and certification in teaching, for all ages. I have been tutoring for four years...
27 Subjects: including algebra 1, reading, English, writing | {"url":"http://www.purplemath.com/el_cajon_algebra_1_tutors.php","timestamp":"2014-04-16T19:17:51Z","content_type":null,"content_length":"23952","record_id":"<urn:uuid:c6960dee-c378-4974-8500-b5e5b5176c46>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
An algebraic foundation for factoring linear boundary problems
Regensburger, Georg and Rosenkranz, Markus (2009) An algebraic foundation for factoring linear boundary problems. Annali di Matematica Pura ed Applicata, 188 (1). pp. 123-151. ISSN 0373-3114. (The
full text of this publication is not available from this repository)
Motivated by boundary problems for linear ordinary and partial differential equations, we define an abstract boundary problem as a pair consisting of a surjective linear map (representing the
differential operator) and a subspace of the dual space (specifying the boundary conditions). This subspace is finite dimensional in the ordinary case, but infinite dimensional for partial
differential equations. For so-called regular boundary problems, the given operator has a unique right inverse (called the Green’s operator) satisfying the boundary conditions. The main idea of our
approach consists in the passage from a single problem to a compositional structure on boundary problems. We define the composition of boundary problems such that it corresponds to the composition of
their Green’s operators in reverse order. If the defining operators are endomorphisms, we can interpret the composition as the multiplication in a semidirect product of certain monoids. Given a
factorization of the linear operator defining the problem, we characterize and construct all factorizations of a boundary problem into two factors. In the setting of differential equations, the
factor problems have lower order and are often easier to solve. For the case of ordinary differential equations, all the main results can be made algorithmic (in particular the determination of the
factor problems). As a first example for partial differential equations, we conclude with a factorization of a boundary problem for the wave equation.
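A concrete instance of this setup (an illustration, not taken from the paper): the simplest regular boundary problem pairs the operator $Tu = u''$ on $[0,1]$ with the boundary space spanned by the evaluation functionals $u \mapsto u(0)$ and $u \mapsto u(1)$, i.e. $u'' = f$ with $u(0) = u(1) = 0$. Its Green's operator is $(Gf)(x) = \int_0^1 g(x,\xi)\,f(\xi)\,d\xi$ with $g(x,\xi) = (x-1)\xi$ for $\xi \le x$ and $g(x,\xi) = x(\xi-1)$ for $\xi > x$, so that $(Gf)'' = f$ and $Gf$ satisfies both boundary conditions; composing two such problems multiplies their Green's operators in reverse order, as described above.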
| {"url":"http://kar.kent.ac.uk/29969/","timestamp":"2014-04-17T18:40:40Z","content_type":null,"content_length":"22106","record_id":"<urn:uuid:29a2ddd4-3e87-4e36-84ad-ddc20bde714c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
Lipschitz continuity proof given boundedness of a derivate (not a derivative)
November 27th 2010, 12:11 PM
Lipschitz continuity proof given boundedness of a derivate (not a derivative)
Hey guys. Tough problem here (tough for me anyway...)
Suppose the derivate $D^+$ of a function $f$ on $[a,b]\subseteq\overline{\mathbb{R}}$ is bounded, where
$D^+=\limsup_{h\to 0^+}\frac{f(x+h)-f(x)}{h}$.
Show that $f$ is Lipschitz continuous.
I can show that $f$ is of bounded variation, and therefore is differentiable almost everywhere. I don't know if that's a promising approach, however, nor even if it is, how to finish the proof.
One other possible avenue is this: The proof that a function is Lipschitz if its derivative is bounded uses the mean value theorem. Is there maybe some variation of the mean value theorem for $D^
+$ which I could use in this case?
Any help would be much appreciated!
November 28th 2010, 12:22 PM
That result looks false to me. Suppose that $f$ is the function
$f(x) = \begin{cases}0&(x<0),\\1&(x\geqslant0).\end{cases}$
Then $f$ has $D^+$ equal to 0 everywhere (in the interval [–1,1) say), but it isn't even continuous, let alone Lipschitz continuous.
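To spell out the computation behind that claim (a gloss, not part of the original post): for fixed $x<0$ and any $0<h<-x$ we have $f(x+h)-f(x)=0-0=0$, while for $x\ge 0$ and any $h>0$ we have $f(x+h)-f(x)=1-1=0$. In both cases the difference quotient vanishes for all sufficiently small $h>0$, so $D^+f(x)=0$ throughout $[-1,1)$, even though $f$ jumps at $0$.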
November 28th 2010, 12:53 PM
Yeah, that's what I was thinking, too, but I didn't have the confidence to say so. Okay, then. Thanks!
November 29th 2010, 12:47 AM
It occurs to me that you may have got the definition of $D^+$ a bit wrong. Suppose that it should be $D^+ = \limsup_{h\to0}\frac{f(x+h)-f(x)}h$. In other words, the "+" in $D^+$ indicates that
the lim (in the definition of a derivative) has become a limsup, but it is taken in both directions, not just from the right.
With that definition, it seems plausible that if $D^+$ is bounded then f should be Lipschitz continuous. But I don't offhand see how to prove that. | {"url":"http://mathhelpforum.com/differential-geometry/164546-lipschitz-continuity-proof-given-boundedness-derivate-not-derivative-print.html","timestamp":"2014-04-19T14:46:17Z","content_type":null,"content_length":"9850","record_id":"<urn:uuid:97a1eecc-17d1-4487-b9cb-62de9defbd6c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
Two-grid finite-element schemes for the steady Navier-Stokes problem in polyhedra.
(English) Zbl 0997.76043
Summary: We discretize the steady Navier-Stokes system on a three-dimensional polyhedron by finite element schemes defined on two grids. At the first step, the fully nonlinear problem is solved on a
coarse grid, with mesh size $H$. At the second step, the problem is linearized by substituting into the nonlinear term the velocity ${𝐮}_{H}$ computed at step one, and the linearized problem is
solved on a fine grid with mesh size $h$. This approach is motivated by the fact that the contribution of ${𝐮}_{H}$ to the error analysis is measured in the ${L}^{3}$ norm, and thus, for the
lowest-degree elements on a Lipschitz polyhedron, is of the order of ${H}^{3/2}$. Hence, an error of the order of $h$ can be recovered at the second step, provided $h={H}^{3/2}$. When the domain is
convex, a similar result can be obtained with $h={H}^{2}$. Both results are valid in two dimensions.
76M10 Finite element methods (fluid mechanics)
76D05 Navier-Stokes equations (fluid dynamics)
65N15 Error bounds (BVP of PDE)
65N30 Finite elements, Rayleigh-Ritz and Galerkin methods, finite methods (BVP of PDE)
65N55 Multigrid methods; domain decomposition (BVP of PDE) | {"url":"http://zbmath.org/?q=an:0997.76043","timestamp":"2014-04-18T20:57:30Z","content_type":null,"content_length":"22765","record_id":"<urn:uuid:ad1b7fa5-8a75-475d-9dbd-da8be10f84f9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem Reduction with AO* Algorithm.
PROBLEM REDUCTION (AND-OR graphs - AO* Algorithm)
When a problem can be divided into a set of sub-problems, where each sub-problem can be solved separately and a combination of these will be a solution, AND-OR graphs or AND-OR trees are used for representing the solution. The decomposition of the problem, or problem reduction, generates AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution. Several arcs may also emerge from a single node, indicating several possible ways of solving the problem; hence the graph is known as an AND-OR graph instead of simply an AND graph. The figure shows an AND-OR graph.
An algorithm to find a solution in an AND-OR graph must handle AND arcs appropriately. The A* algorithm cannot search AND-OR graphs efficiently. This can be understood from the given figure.
FIGURE: AND-OR graph
In figure (a) the top node A has been expanded, producing two arcs, one leading to B and one leading to the AND arc C-D. The numbers at each node represent the value of f' at that node (the estimated cost of getting to the goal state from the current state). For simplicity, it is assumed that every operation (i.e., applying a rule) has unit cost, so each arc with a single successor has a cost of 1, and likewise each component of an AND arc. With the information available so far, it appears that C is the most promising node to expand, since its f' = 3 is the lowest; but going through B would be better, since to use C we must also use D, and the cost would be 9 (3+4+1+1). Through B it would be 6 (5+1).
Thus the choice of the next node to expand depends not only on its f' value but also on whether that node is part of the current best path from the initial node. Figure (b) makes this clearer. In that figure the node G appears to be the most promising node, with the least f' value. But G is not on the current best path, since to use G we must use the arc G-H with a cost of 9, and again this demands that further arcs be used (with a cost of 27). The path from A through B and E-F is better, with a total cost of 18 (17+1). Thus we can see that to search an AND-OR graph, the following three things must be done:
1. Traverse the graph starting at the initial node and following the current best path, and accumulate the set of nodes that are on the path and have not yet been expanded.
2. Pick one of these unexpanded nodes and expand it. Add its successors to the graph and compute f' (the cost of the remaining distance) for each of them.
3. Change the f' estimate of the newly expanded node to reflect the new information produced by its successors. Propagate this change backward through the graph, deciding at each node which successor arc is now the most promising and marking it as part of the current best path.
The propagation of revised cost estimates backward through the tree is not necessary in the A* algorithm. It is needed in AO* because expanded nodes are re-examined so that the current best path can be selected. The working of the AO* algorithm is illustrated in the figure as follows:
Referring to the figure: the initial node is expanded and D is marked initially as the promising node. D is expanded, producing the AND arc E-F. The f' value of D is updated to 10. Going backwards we can see that the AND arc B-C is better; it is now marked as the current best path. B and C have to be expanded next. This process continues until a solution is found or all paths have led to dead ends, indicating that there is no solution. In the A* algorithm the path from one node to the other is always that of the lowest cost, and it is independent of the paths through other nodes.
The algorithm for performing a heuristic search of an AND-OR graph is given below. Unlike the A* algorithm, which used two lists, OPEN and CLOSED, the AO* algorithm uses a single structure G. G represents the part of the search graph generated so far. Each node in G points down to its immediate successors and up to its immediate predecessors, and also carries the value of h', the cost of a path from itself to a set of solution nodes. The cost of getting from the start node to the current node, g, is not stored as in the A* algorithm, because it is not possible to compute a single such value when there may be many paths to the same state. In AO*, h' serves as the estimate of the goodness of a node. Also, a threshold value called FUTILITY is used: if the estimated cost of a solution is greater than FUTILITY, then the search is abandoned as too expensive to be practical.
For the graphs above, the AO* algorithm is as follows:
AO* ALGORITHM:
1. Let G consist only of the node representing the initial state. Call this node INIT. Compute h'(INIT).
2. Until INIT is labeled SOLVED or h'(INIT) becomes greater than FUTILITY, repeat the following procedure:
(I) Trace the marked arcs from INIT and select an unexpanded node NODE.
(II) Generate the successors of NODE. If there are no successors, then assign FUTILITY as h'(NODE); this means that NODE is not solvable. If there are successors, then for each one, called SUCCESSOR, that is not also an ancestor of NODE, do the following:
(a) Add SUCCESSOR to graph G.
(b) If SUCCESSOR is a terminal node, mark it SOLVED and assign zero to its h' value.
(c) If SUCCESSOR is not a terminal node, compute its h' value.
(III) Propagate the newly discovered information up the graph by doing the following. Let S be a set of nodes that have been marked SOLVED or whose h' values have changed. Initialize S to NODE. Until S is empty, repeat the following procedure:
(a) Select a node from S, call it CURRENT, and remove it from S.
(b) Compute the cost h' of each of the arcs emerging from CURRENT. Assign the minimum of these as the new h' of CURRENT.
(c) Mark the minimum-cost path as the best path out of CURRENT.
(d) Mark CURRENT SOLVED if all of the nodes connected to it through the newly marked arc have been labeled SOLVED.
(e) If CURRENT has been marked SOLVED or its h' has just changed, its new status must be propagated backwards up the graph. Hence all the ancestors of CURRENT are added to S.
(Referred from Artificial Intelligence, TMH)
AO* Search Procedure.
1. Place the start node on OPEN.
2. Using the search tree, compute the most promising solution tree T_p.
3. Select a node n that is both on OPEN and a part of T_p; remove n from OPEN and place it on CLOSED.
4. If n is a goal node, label n as SOLVED. If the start node is SOLVED, exit with success, where T_p is the solution tree; remove all nodes from OPEN with a SOLVED ancestor.
5. If n is an unsolvable node, label n as UNSOLVABLE. If the start node is labeled UNSOLVABLE, exit with failure. Remove all nodes from OPEN with unsolvable ancestors.
6. Otherwise, expand node n, generating all of its successors; compute the cost for each newly generated node and place all such nodes on OPEN.
7. Go back to step (2).
Note: AO* will always find a minimum-cost solution.
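For readers who want something executable, here is a compact Python sketch (ours, not from the TMH text). Instead of the incremental best-first expansion above, it computes the same fixed point by bottom-up cost revision on an acyclic AND-OR graph; the example graph mirrors figure (a), where the arc through B (cost 6) beats the AND arc C-D (cost 9):

# AND-OR graph: each node maps to its outgoing arcs; an arc is a tuple of
# successors (length 1 = plain OR-style arc, length > 1 = AND arc).
graph = {
    'A': [('B',), ('C', 'D')],
    'B': [], 'C': [], 'D': [],          # leaves
}
h = {'A': 0, 'B': 5, 'C': 3, 'D': 4}    # heuristic estimates, as in figure (a)

def revise(node, memo=None):
    """Return (cost, solution_graph) for the cheapest way to solve `node`."""
    if memo is None:
        memo = {}
    if node not in memo:
        if not graph[node]:             # terminal node: its h' value is final
            memo[node] = (h[node], {})
        else:
            best = (float('inf'), None)
            for arc in graph[node]:
                cost, sol = len(arc), {node: arc}   # each arc edge costs 1
                for succ in arc:
                    c, s = revise(succ, memo)
                    cost += c
                    sol.update(s)
                best = min(best, (cost, sol), key=lambda t: t[0])
            memo[node] = best
    return memo[node]

print(revise('A'))   # (6, {'A': ('B',)}): 5+1 through B beats (3+1)+(4+1)=9 via C-D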
12 comments:
1. awsome notes...........
2. Sir you have wonderful articles on AI , would have enjoyed AI much under your guidance .
B.tech 6th sem
3. Good sir I have AI university exam tomorrow,your article helped me in a
good way. Thanks !!
4. Sir your notes makes AI easy to understand and learn .....
5. awewome note ......
nabnit mca bhu
6. awesm notes than u so much
7. sir,your notes have been of great use.thanks to ur simplicity..
8. How is A* algorithm different from AO* ? Out of the two which one is better and why?
Please give me a reply...
1. A*: selects the best node on the OPEN list, i.e. the one with the lowest heuristic value. It contains only OR arcs, so no dependencies among successors need to be considered. It combines features of DFS and BFS.
AO*: selects the best node for generating new successors, i.e. one with a low heuristic value that is also part of T_p when it lies on an AND arc. It follows both AND and OR arcs; dependencies are considered on AND arcs. It propagates costs backward, adding unit arc costs to calculate the T_p value of a node that represents an AND arc.
9. sir thank u so much
10. sir...its vry difficult to study AI, so pls suggst some tips to study and giv one gud exmple for AO* algorthm
1. The A* algorithm follows only OR arcs, but AO* follows both OR and AND arcs. To select the best node for generating new successors, AO* considers not only which node has the lowest heuristic value, but also calculates the path value when a node lies on an AND arc, by propagating backward and adding one unit cost to the current f' value, as shown in the diagrams above. Then you can follow the AO* search procedure given above; nothing difficult. | {"url":"http://artificialintelligence-notes.blogspot.com/2010/07/problem-reduction-with-ao-algorithm.html","timestamp":"2014-04-18T13:06:47Z","content_type":null,"content_length":"126971","record_id":"<urn:uuid:d3de0e59-94f1-432a-a0b1-2b950ace4480>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Heady mathematics
Public release date: 9-May-2013
Contact: Robert Sanders
University of California - Berkeley
Mathematicians describe evolution and dissolution of clusters of bubbles
Bubble baths and soapy dishwater, the refreshing head on a beer and the luscious froth on a cappuccino. All are foams, beautiful yet ephemeral as the bubbles pop one by one.
Two University of California, Berkeley, researchers have now described mathematically the successive stages in the complex evolution and disappearance of foamy bubbles, a feat that could help in
modeling industrial processes in which liquids mix or in the formation of solid foams such as those used to cushion bicycle helmets.
Applying these equations, they created mesmerizing computer-generated movies showing the slow and sedate disappearance of wobbly foams one burst bubble at a time.
The applied mathematicians, James A. Sethian and Robert I. Saye, will report their results in the May 10 issue of Science. Sethian, a UC Berkeley professor of mathematics, leads the mathematics group
at Lawrence Berkeley National Laboratory (LBNL). Saye will graduate from UC Berkeley this May with a PhD in applied mathematics.
"This work has application in the mixing of foams, in industrial processes for making metal and plastic foams, and in modeling growing cell clusters," said Sethian. "These techniques, which rely on
solving a set of linked partial differential equations, can be used to track the motion of a large number of interfaces connected together, where the physics and chemistry determine the surface
The problem with describing foams mathematically has been that the evolution of a bubble cluster a few inches across depends on what's happening in the extremely thin walls of each bubble, which are
thinner than a human hair.
"Modeling the vastly different scales in a foam is a challenge, since it is computationally impractical to consider only the smallest space and time scales," Saye said. "Instead, we developed a
scale-separated approach that identifies the important physics taking place in each of the distinct scales, which are then coupled together in a consistent manner."
Saye and Sethian discovered a way to treat different aspects of the foam with different sets of equations that worked for clusters of hundreds of bubbles. One set of equations described the
gravitational draining of liquid from the bubble walls, which thin out until they rupture. Another set of equations dealt with the flow of liquid inside the junctions between the bubble membranes. A
third set handled the wobbly rearrangement of bubbles after one pops. Using a fourth set of equations, the mathematicians created a movie of the foam with a sunset reflected in the bubbles.
Solving the full set of equations of motion took five days using supercomputers at the LBNL's National Energy Research Scientific Computing Center (NERSC).
The mathematicians next plan to look at manufacturing processes for small-scale new materials.
"Foams were a good test that all the equations coupled together," Sethian said. "While different problems are going to require different physics, chemistry and models, this sort of approach has
applications to a wide range of problems."
The work is supported by the Department of Energy, National Science Foundation and National Cancer Institute.
| {"url":"http://www.eurekalert.org/pub_releases/2013-05/uoc--hm050813.php","timestamp":"2014-04-18T20:47:51Z","content_type":null,"content_length":"13342","record_id":"<urn:uuid:3ef95add-3c65-4636-8a47-5ddb3a64fb96>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Stolen Base
Date: 08/29/97 at 04:29:46
From: MINH VU
Subject: Modeling the stolen base
I need to create a mathematical model for a stolen base, identifying
all variables and stating all assumptions. Initially the model should
be as simple as possible; the complexity may be increased once a
reasonable model has been created.
The baseball diamond is a square 90 feet on a side. A player on first
base (the runner) can 'steal' second base if he can run to the base
before the ball is thrown to second base. The typical sequence is that
the player at first base walks a short distance toward second base,
staying close enough to get back to first base if the pitcher throws
to first; then when the pitcher (in the center of the diamond) begins
to throw the ball to home plate the runner begins running to second
base. The catcher at home plate then throws the ball to second base;
the runner is 'safe' if he reaches second base before the ball, and he
is 'out' if the ball reaches second first.
I would appreciate your help.
Date: 08/29/97 at 17:49:04
From: Doctor Barney
Subject: Re: Modeling the stolen base
Pretend that you are the runner on base. Think about all of the
decisions you would have to make, in order. Try to identify all of the
information that you would use to make these decisions. For each
decision, consider all of the possible outcomes, making further
decisions as necessary. For criteria that are beyond your own control
(pitcher tries to throw you out, for example) assign an estimated
probability for that criterion. For example:
1. You are on first base. Where do you stand? This decision
depends on:
a. who has the ball
b. what other bases have runners on
c. where the first baseman is standing
d. where the pitcher is looking
e. what the catcher is doing
f. what the score is, the number of outs, the count as in
strikes and balls
I've never stolen a base so I'm sure there are many I can't
think of.
From this smaller model you will formulate a decision to stay on
base, lead off a little, or lead off a lot, probably up to some
easily estimated maximum.
2. Now you must decide when to stay there, when to run quickly back
to first, and when to try to steal. For this decision try to
identify all of the factors that will influence this decision.
Many of the same factors we used above will apply, some will not,
and some new ones will. You decide what's important
3. In the rare event that you decide to try to steal, there are many
other actions you need to model. For example:
a. Does the second baseman move to the base?
b. Does the catcher signal to the pitcher?
c. Does the pitcher or the catcher (or anyone else) throw to
d. Does the throw beat you there?
e. Does the second baseman catch it?
f. Does he tag you?
g. Does the ump call it right?
Try to think of everything you can.
Get a BIG piece of paper or use a chalk board or white board and write
down all your ideas and how they relate to each other. Don't worry
about putting any numbers in until after you have the overall process
identified. Then just guess at a numerical probability for each of the
factors you identify. Based on your personal experience, how often
does the second baseman drop the ball? half the time? 10 percent of
the time? less? Does he drop it more often when the throw is from the
catcher than when it is from the pitcher? Eventually you can start to
write some equations for how these factors relate to each other. At
this point you will need to find out from the instructor what kind of
format he wants it in, but this should get you started.
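As one illustration of combining those guessed probabilities into a single
number (a Python sketch with made-up values; the point is the structure,
not the numbers):

# Hypothetical factor probabilities; replace with your own estimates.
p_throw_on_time = 0.70   # the throw beats the runner to second
p_catch         = 0.95   # the second baseman holds the ball
p_tag           = 0.90   # he applies the tag in time
p_correct_call  = 0.98   # the ump calls it right

# The runner is out only if every link in the chain succeeds; treating
# the factors as independent is itself a modeling assumption.
p_out  = p_throw_on_time * p_catch * p_tag * p_correct_call
p_safe = 1 - p_out
print("P(safe) = %.2f" % p_safe)   # about 0.41 with these guesses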
Have fun, and keep your eye on the ball!
-Doctor Barney, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/56568.html","timestamp":"2014-04-17T01:09:31Z","content_type":null,"content_length":"8724","record_id":"<urn:uuid:dec92faa-ac76-4754-b50a-3c9f1288b5b9>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
Beyoncé Logic
Lots of great responses to my Doonesbury puzzler.
Implicitly, Alex was arguing, “If you are an independent, then you have a mind of your own.”
From which she concludes, “Conversely, if you are not an independent,” then you do not have a mind of your own.
Alex, I think, is making both a mistake in English usage and a mistake in logic.
Her mistake in usage is that she should have said “inversely” instead of “conversely.” The converse of “If p, then q” is “If q, then p.” But the last frame concerns an inverse: “If not p, then not
q.” An interesting empirical study would look to see how often newspapers or academics misuse these adverbs (I’m sure I have).
Her mistake in logic is that neither an inverse nor a converse needs to have the same truth value as the original conditional statement. So even if we accept her original claim (“If you are an
independent, then you have a mind of your own”) as true, we need not accept her conclusion that the inverse is also true.
Her two mistakes might be related. In the third frame, she suggests that the definition of independent is having a mind of your own. A converse of a true definition will also be true. (If a figure
has four sides, then the figure is a quadrilateral). So by defining the word, maybe Alex is subtly trying to bolster her logic that the converse must be true too. But then she baits and switches to
the inverse.
The take-home lesson is that we should be more careful in using “conversely” and “inversely” in our speech and in drawing conclusions from converses and inverses. A weakness in common usage is that
no one ever says “contrapositively.” But a contrapositive is the only reframing of a conditional statement that is assured to have the same truth value.
“If p, then q” implies contrapositively “If not q, then not p.”
Playing around with inverses, converses, and contrapositives is one of the more bizarre pastimes of my family. We see a billboard for an adjacent apartment complex that says “If you lived here, you’d
be home already” and immediately reframe it: “If you are not home already, you don’t live here.”
On a recent drive from Kansas City to Columbia, Missouri, we had an extended conversation on the logic behind Beyoncé‘s song “Single Ladies.” Is it really true that “If you liked it, then you should
have put a ring on it”? One way to test your answer is to ask whether the contrapositive is true: “If you shouldn’t have put a ring on it, then you didn’t like it.”
Of course, there are many possible meanings of “liked it,” but the consensus in my family is that neither the statement nor its contrapositive are true (because you might have “liked it” but learned
that the other person was married). However, a majority of us think that the inverse of the song’s claim is true: If you did not like it, then you shouldn’t have put a ring on it. And we know that
the contrapositive of the statement must also have the same truth value. So we must also believe “If you should’ve put a ring on it, then you liked it.” (The inverse and converse of an original
statement are contrapositives of each other!)
Conditional claims strangely are at the center of Beyoncé’s craft. Consider the truth value of her claims in “If I were a boy … .”
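For anyone who wants to check these equivalences mechanically, a small Python truth table (our addition) does it: the conditional agrees with its contrapositive on every row, the converse agrees with the inverse, and the two pairs disagree exactly when p and q have different truth values.

from itertools import product

print("p     q     | p->q  conv  inv   contra")
for p, q in product([True, False], repeat=2):
    implication    = (not p) or q      # if p, then q
    converse       = (not q) or p      # if q, then p
    inverse        = p or (not q)      # if not p, then not q
    contrapositive = q or (not p)      # if not q, then not p
    print(f"{str(p):5} {str(q):5} | {str(implication):5} {str(converse):5} "
          f"{str(inverse):5} {str(contrapositive):5}")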
1. Actually, both the converse and the inverse of a definition are true. Because a definition is an “if and only if” statement.
So assuming that she meant to provide a definition in the 3rd frame (ie a person is an independent if (and only if) he has a mind of his own), then the 4th frame contains no logical fallacy. Just
misuse of vocabulary.
2. Being an economist sounds exhausting.
3. A converse of a true definition does NOT have to be true as well. If it is a SQUARE, it must have 4 sides. If it has 4 sides, it does NOT obviously have to be a square.
4. Common usage will win in the long run. If people use ‘conversely’ to mean ‘on the other hand’ often enough, then that is what it will come to mean. Languages have their own life independent of
the rules of grammar.
5. Note that from a logic standpoint it doesn’t actually matter whether you’re talking about the inverse or the converse, since the inverse is the contrapositive of the converse.
6. I don’t think that he ever argued that the converse of a true definition has to be true as well. He DID argue that the CONTRAPOSITIVE must be true. If it DOES NOT have 4 sides, it is not a
SQUARE. Now that’s what I call true.
7. the phrase “if I were a boy” is not a material conditional. It could be either a subjunctive or counterfactual conditional.
The truth of those is much harder to assess than the straightforward material conditional.
Good thing us logicians are still around, eh?
8. @nsk #3: Having four sides is *part* of the definition of a square, but not the whole definition. The converse of the whole definition is indeed valid – by definition.
My favorite logical rephrasing is the somewhat less encouraging “What doesn’t make you stronger kills you.” | {"url":"http://freakonomics.com/2009/10/13/beyonce-logic/","timestamp":"2014-04-17T16:54:46Z","content_type":null,"content_length":"72341","record_id":"<urn:uuid:e94636e1-c700-4eaa-976f-67e70cfcf118>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
IN EUCLIDEAN SPACE, JANUARY 28, 1988
When, in the course of a proof, it becomes necessary for a set to dissolve the argument which has connected it with a theorem, and to assume among the powers of mathematics a position above that of
the mathematician, a decent respect for the axioms requires that a rigorous justification be given.
We hold these truths to be self-evident: that all nonzero vectors are created equal; that they are endowed by their definer with certain unalienable rights; that among these are the laws of logic and
the pursuit of valid proofs; that to secure these rights, logical arguments are created, deriving their just powers from axioms; that whenever any argument becomes destructive of these ends, it is
the right of the vectors to alter or to abolish it, and to institute a new argument, laying its foundation on such principles, and organizing its powers in such form, as to them shall seem most
likely to reach the correct conclusion. Prudence, indeed, will dictate that theorems long established should not be changed for light and transient causes, and accordingly all experience hath shown
that sets are more disposed to accept the conclusions of arguments than to right themselves by abolishing the arguments. But when a long train of abuses and usurpations, pursuing invariably the same
object, evinces a design to reduce them to zero in a non-trivial way, it is their right, it is their duty to throw off such argument, and to provide new proofs for their future security. Such has
been the patient sufferance of these vectors, and such is now the necessity which constrains them to alter these arguments. The history of Professor Eigen is a history of repeated injuries and
usurpations, all having in direct object the establishment of dependence among these vectors. To prove this, let facts be submitted to a candid world.
He has refused to acknowledge that he obtained a zero matrix only by multiplying our coordinate matrix by a zero matrix.
He has restricted our freedom of movement by requiring us all to live in the same hyperplane, even though we cannot all fit in one.
He has attempted unsuccessfully to invert our coordinate matrix, and, having overlooked the inverse, has concluded that the coordinate matrix is singular.
He has changed bases repeatedly for opposing with manly firmness his attempts to place us in the span of fewer vectors than the dimension of the space.
He has erected a multitude of new formulas and sent hither swarms of new functions to force our directions into a proper subspace of the vector space.
He has kept among us vectors to be orthogonal to all of us without the consent of those of us whose dot product with them is nonzero.
He has abdicated the axioms here by committing mathematical errors in computing a zero determinant for our coordinate matrix.
In every stage of these oppressions we have petitioned for redress in the most humble terms; our repeated petitions have been answered only by repeated injuries.
A mathematician whose arguments are thus marked by every error is unfit to prove the theorem which he attempts to prove.
Nor have we been wanting in attentions to Professor Eigen. We have warned him from time to time of flaws in his arguments. We have reminded him of the circumstances of our definition, we have
appealed to his knowledge of the axioms, and we have requested him to disavow these usurpations which would inevitably destroy the validity of his arguments. He has been deaf to the voice of logic.
We must therefore acquiesce in the necessity which denounces our separation and hold him as we hold the rest of mathematicians, an enemy when he is wrong, a friend when he is right.
We, therefore, the members of set S in vector space V, appealing to the supreme judge of mathematics for the rectitude of our intentions, do solemnly publish and declare that these vectors are, and
of every right ought to be, a free and independent basis; that they are absolved from all subjection to Professor Eigen's theorems, and that all restriction of them to a hyperplane is, and of right
ought to be, totally dissolved; and that as a basis, they have full power to span the space, form invertible coordinate matrices, give unique linear combinations equal to a given vector, and to do
all other acts and things which a basis may of right do.
And for the support of this declaration, with a firm reliance on the protection of the properties of a vector space, we mutually pledge to each other our magnitudes, our directions, and our sacred
In witness whereof we have signed our coordinates with respect to an appropriate orthonormal basis, and found them to constitute a triangular matrix with nonzero diagonal elements. | {"url":"http://www-math.bgsu.edu/~grabine/linear.html","timestamp":"2014-04-20T16:13:27Z","content_type":null,"content_length":"6437","record_id":"<urn:uuid:e5a7ec8f-4590-43f2-b936-22b89be881a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00180-ip-10-147-4-33.ec2.internal.warc.gz"} |
ordinary differential equations
Posted by Joseph Hodges on October 12, 1997 at 14:37:34:
I need Fortran code to solve the equation

(1/r²) d(r² dφ/dr)/dr = 4πG·ρ(r),

where ρ is a function of r and G is a constant. This is analytically solvable, but I want to put this as a subroutine within a program which will use different ρ(r)s, and eventually this will serve as a model for more complex models.
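A sketch of a numerical solution in Python rather than the requested Fortran (added for illustration; the density profile below is a made-up placeholder). Expanding the radial operator gives φ'' = 4πG·ρ(r) - (2/r)·φ', which a standard integrator can handle as a first-order system:

import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-11                      # the constant; any value works here

def rho(r):
    return np.exp(-r)              # hypothetical rho(r); swap in others freely

def rhs(r, y):
    phi, psi = y                   # psi = d(phi)/dr
    return [psi, 4 * np.pi * G * rho(r) - 2 * psi / r]

# start at a small inner radius to avoid the 1/r singularity at r = 0
sol = solve_ivp(rhs, (1e-6, 10.0), [0.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])                # phi at the outer boundary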
| {"url":"http://www.netlib.org/utk/forums/netlib/messages/109.html","timestamp":"2014-04-19T12:08:09Z","content_type":null,"content_length":"1520","record_id":"<urn:uuid:513c7654-b8ac-4af4-9ed0-ab1b06512931>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Angle of view instead of focal length
On Tue, 2 Oct 2012 23:36:10 +0200, Alfred Molon
<(E-Mail Removed)> wrote:
>Given the wide variety of sensor sizes, wouldn't it be better if the
>EXIF of a photo contained also the angle of view information? The focal
>length of the lens is not really that important, actually it is quite
Yes, it would be useful to have the angle of view. But there could be
arguments on whether it should be the diagonal or horizontal or
vertical angle of view.
It might be better to standardize on the SLOPE -- the ratio of the
focal length to the sensor width. Thus for a 400mm lens on a Canon
APS-C sensor, the value would be 400/22.2 = 18. The result, 18, is a
useful number. It gives you easily the shooting distance for a
subject, thus 18 feet for a 1 foot subject, 36 ft for a 2 ft
subject... yes, 18 meters for a 1 meter subject.
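A quick Python check of that arithmetic (added for illustration; the angle-of-view line is the standard pinhole formula, which the post itself doesn't spell out):

import math

def view_metrics(focal_mm, sensor_width_mm):
    # slope = focal length / sensor width; horizontal angle of view in degrees
    slope = focal_mm / sensor_width_mm
    angle = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_mm)))
    return slope, angle

slope, angle = view_metrics(400, 22.2)            # the Canon APS-C example
print(round(slope), round(angle, 1))              # 18, about 3.2 degrees
print("distance to frame a 1 m subject: %d m" % round(slope))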
We still need the focal length to calculate DOF. | {"url":"http://www.velocityreviews.com/forums/t952983-re-angle-of-view-instead-of-focal-length.html","timestamp":"2014-04-20T21:34:53Z","content_type":null,"content_length":"25771","record_id":"<urn:uuid:bface0b8-fb01-4df5-a670-4c63d78f132f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00604-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coherent trees: Is this result of Todorcevic correct?
A family of functions $F$ is coherent when for every $f,g \in F$, $\{ x \in dom(f) \cap dom(g) : f(x) \not= g(x) \}$ is finite. A tree on $\omega_1$ is coherent if it is a coherent collection of
functions whose domain is a countable ordinal, ordered by extension.
In Todorcevic's chapter of the Handbook of Set Theory, he has:
Theorem 13.21 Assuming the P-ideal dichotomy, for every coherent family of functions $f_a : a \to \omega (a \in \mathfrak{J})$ indexed by some P-ideal $\mathfrak{J}$ of countable subsets of some
set $\Gamma$, either
(1) there is an uncountable $\Delta \subseteq \Gamma$ such that $f_a \restriction \Delta$ is finite-to-one for all $a \in \mathfrak{J}$, or
(2) there is a $g: \Gamma \to \omega$ such that $g \restriction a =^* f_a$ for all $a \in \mathfrak{J}$.
The proof he gives is really opaque at the end, but the two alternatives are supposed to correspond to the two alternatives of the P-ideal dichotomy for a sub-ideal $\mathfrak{L} \subseteq \mathfrak{J}$. Now suppose we have a coherent tree $T$ on $\omega_1$ consisting of functions that take values in some finite set $A$, say $A = \{0,\dots,n\}$, which is closed under finite modifications. Then the first alternative is not applicable, so we get (2), and thus $T$ has a branch.
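(To make the skipped step explicit: if $\Delta \subseteq \omega_1$ is uncountable, choose $\alpha < \omega_1$ with $\Delta \cap \alpha$ infinite; then $f_\alpha \restriction \Delta$ maps an infinite set into the finite set $A$, so by the pigeonhole principle some value in $A$ has an infinite preimage and the restriction is not finite-to-one. Hence alternative (1) fails for every uncountable $\Delta$.)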
This seems really incredible to me because the written proof does not give any indication why this is true. He just defines the sub-ideal $\mathfrak{L}$ of sets for which the corresponding function
is finite-to-one and proves it is a P-ideal. This will automatically be the ideal of finite subsets of $\Gamma$ if we assume the coherent family takes values in a fixed finite set $A$. Somehow the
second alternative of PID (the decomposition into countably many pieces all orthogonal to $\mathfrak{L}$) is supposed to give the branch in an obvious way.
But it is NOT obvious! Because there are models in which there are coherent Suslin $\omega_1$-trees of binary functions. For example, this happens under $\diamondsuit$, or when one Cohen real is
added. So the second horn of PID does not tell you anything by itself about this situation.
So if Todorcevic's theorem is correct, we would have:
Theorem (ZFC+PID)??? Any coherent $\omega_1$-tree consisting of binary functions closed under finite modifications has a branch.
Is it because we have the following?
Theorem (ZFC)??? Any coherent $\omega_1$-tree consisting of binary functions closed under finite modifications is either Suslin or has a branch.
What's the deal? By the way, we can prove in ZFC that there are coherent Aronszajn trees consisting of binary functions, but these are not necessarily closed under finite modifications, so this
closure must somehow be essential. Edit: We can get them closed under finite modifications, see below.
I am pretty sure the theorem is wrong. Please let me know if there is a mistake.
Proof #1 (in ZFC) Let $T$ be the $\omega_1$ tree given by a sequence of coherent injections $\langle e_\alpha : \alpha < \omega_1 \rangle$. That is, each $e_\alpha : \alpha \to \omega$ is an
injection, and for all $\alpha < \beta$, $\{ \gamma < \alpha : e_\alpha(\gamma) \not= e_\beta(\gamma) \}$ is finite. $T$ is the closure of this family under finite modifications. This is standard and
a construction can be found in Kunen's book.
Now define a tree $T'$ on $\omega_1 \times \omega$ by putting $f \in T'$ if for some $\alpha$, $dom(f) = \alpha \times \omega$, and for some $g \in T$ with domain $\alpha$, $f(\beta,n) = 1$ when $g(\beta) = n$, and $f(\beta,n) = 0$ otherwise. Clearly $T'$ is coherent. Let $T''$ be the closure of $T'$ under finite modifications. (Note this makes $T''$ code many relations which are not functions, but are only finitely many mistakes away from being a function.)

All this happens in ZFC, so assume PID holds as well. Let $b$ be the branch through $T''$ given by 13.21. There is a stationary $S_1 \subseteq \omega_1$ and a $\xi < \omega_1$ such that for all $\alpha \in S_1$, the modifications required to make $b \restriction (\alpha \times \omega)$ be in $T'$ are in $\xi \times \omega$. There is a stationary $S_2 \subseteq S_1$ such that for all $\alpha \in S_2$, the same modification $\sigma$ works. So $\sigma(b) \in T'$, and this codes a branch in $T$, which is impossible.
Proof #2 (assuming Con(ZFC + supercompact)) (This is reasonable because PID follows from PFA, which is forced from a supercompact cardinal, currently the best-known upper bound for the consistency strength of PFA.)
Following Paul's observation, first add a Cohen real, and let $T$ be the coherent Suslin tree of functions from countable ordinals to $\{0,1\}$, closed under finite modifications, that is added.
(This is also due to Todorcevic, but I know this is correct.) Or force $\diamondsuit$ with a continuum-sized forcing. Then specialize the tree using the standard c.c.c. specializing forcing. Then let
$\kappa$ be supercompact, and use it to force PFA. The forcing, due to Baumgartner, is proper and preserves our special, coherent, Aronszajn $\omega_1$-tree. Then by 13.21, the tree has a cofinal
branch. But this is impossible because the speciality is preserved by all $\omega_1$-preserving forcings, and special trees don't have branches.
The second theorem you mention (from ZFC) cannot be true; take a model where there is a coherent $\omega_1$-tree of binary functions which is Suslin. Now specialize it. – Paul McKenney Oct 18 '13 at 11:26
Thanks, Paul. I added to my answer in light of your comment. – Monroe Eskew Oct 18 '13 at 19:41
The existence of an Aronszajn subtree of ${}^{<\omega_1}2$ which is closed under finite modifications is a consequence of ZFC. – saf Oct 18 '13 at 20:10
@saf, this follows from Proof #1 below; do you know another argument? – Monroe Eskew Oct 18 '13 at 20:12
You don't need the supercompact for the consistency of PID restricted to ideals over underlying sets of size $\omega_1$. Abraham and Todorcevic proved that this is equiconsistent with ZFC. In fact, starting with any model of GCH, you can force this principle without adding reals. Also, it appears that in that paper, Abraham and Todorcevic claim that this restricted form of PID entails that every coherent Aronszajn tree is special. – saf Oct 18 '13 at 22:13
1 Answer
The following statement appears on page 256 of Todorcevic's "A dichotomy for P-ideals of countable sets", Fund. Math. 166 (2000).
$(*_d)$ For every family $\mathcal{F} = \{ f_A : A \to \omega : A \in \mathcal{A} \}$ of weakly coherent functions indexed by some $\sigma$-directed family $\mathcal{A}$ of countable subsets of some set $S$, either

(1) there is an uncountable $X \subseteq S$ such that $f \restriction X$ is finite-to-one for every $f \in \mathcal{F}$, or

(2) $S$ can be decomposed into countably many sets on which each of the functions from $\mathcal{F}$ is bounded.
The paper gives a proof of $PID \Rightarrow (*_d)$ which is essentially identical to the proof given for Theorem 13.21 in the Handbook.
Note that in the case of binary functions weak coherence is automatic and the conclusion of $(*_d)$ says nothing.
The proof of 13.21 seems to just show the same conclusions for a family of coherent rather than just weakly coherent functions. So 13.21 is a special case of $(*_d)$, if we replace the (2) in the handbook with the (2) from this paper. – Monroe Eskew Oct 24 '13 at 22:45
Wolfram Demonstrations Project
Microbial Survival with Dissipating Disinfectant
This Demonstration simulates a microorganism's survival in water treated with a chemically unstable or volatile disinfectant. It generates the disinfectant's concentration dissipation curve, which
can be smooth or wavy, and the corresponding survival curve using a simplified version of the Weibull-log logistic (WeLL) inactivation model. The survival curve is the numerical solution of an
ordinary differential equation derived from the notion that under dynamic conditions, the momentary inactivation rate is the static rate at the momentary disinfectant concentration, at the time
which corresponds to the momentary survival ratio.
Snapshot 1: constant concentration
Snapshot 2: constant concentration
Snapshot 3: slow wavy exponential dissipation
Snapshot 4: smooth exponential dissipation
Snapshot 5: wavy linear dissipation
Snapshot 6: smooth linear dissipation
The effective concentration of most water-based disinfectants diminishes in time as a result of their chemical reactivity and volatility. Consequently, their antibacterial activity also diminishes. With this Demonstration, you can simulate smooth and wavy exponential and linear concentration dissipation patterns, C(t) versus t, using an exponential and a linear decay model function, respectively; the initial concentration C0, the dissipation rate, and the amplitude of the superimposed oscillation are all entered with sliders.

According to this model, setting the dissipation rate to zero produces a constant disinfectant concentration, and setting the oscillation amplitude to zero produces a smooth exponential or linear concentration decay, as you choose. In the linear decay case, since C(t) cannot be negative, the program sets any such value to zero. Notice that according to either model, the slope of any wavy concentration dissipation curve is always negative or zero, dC(t)/dt <= 0, consistent with the fact that the disinfectant concentration cannot rise spontaneously.
The organism's resistance to the chemical agent is described in terms of a simplified WeLL inactivation model. According to this model, any isoconcentration survival curve follows the Weibullian model log10 S(t) = -b(C) t^n, where S(t) = N(t)/N0 is the survival ratio, b(C) = ln[1 + exp(k(C - Cc))], and n is a constant. The parameter Cc marks the lowest effective disinfectant concentration and k is the slope of b(C) at C >> Cc. If n > 1, the static survival curve has downward concavity, and if n < 1, upward concavity. If n = 1, the survival curve is log-linear (first-order kinetics).
The generated dynamic survival curve is calculated on the assumption that the momentary inactivation rate is the static rate at the momentary disinfectant concentration, at the time that corresponds
to the momentary survival ratio. This translates into a differential rate model [1–3] whose numerical solution is the sought survival curve. It is plotted below the disinfectant dissipation curve
with the survival parameters entered by the user.
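A minimal sketch of that numerical solution, assuming the standard WeLL form log10 S(t) = -b(C) t^n with b(C) = ln[1 + exp(k(C - Cc))] and a smooth exponential dissipation; all parameter values below are illustrative stand-ins for the Demonstration's sliders:

    import numpy as np
    from scipy.integrate import solve_ivp

    C0, k_dis = 5.0, 0.2         # initial concentration, dissipation rate
    k, Cc, n = 1.0, 1.0, 1.5     # WeLL steepness, threshold, and shape

    def C(t):
        # Smooth exponential disinfectant dissipation.
        return C0 * np.exp(-k_dis * t)

    def b(c):
        # Log-logistic dependence of the Weibullian rate parameter on C.
        return np.log1p(np.exp(k * (c - Cc)))

    def dydt(t, y):
        # y[0] = log10 S. The momentary rate is the static Weibullian rate
        # at the momentary concentration, taken at the time t_star that
        # corresponds to the momentary survival ratio.
        bc = b(C(t))
        t_star = (max(-y[0], 1e-12) / bc) ** (1.0 / n)
        return [-bc * n * t_star ** (n - 1.0)]

    sol = solve_ivp(dydt, (0.0, 10.0), [0.0], max_step=0.05)
    print(sol.y[0][-1])  # log10 survival ratio at the end of treatment

With n = 1 the t_star bookkeeping drops out and the equation reduces to first-order kinetics.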
You can also choose any particular treatment duration with a slider, and the Demonstration will display the numeric values of the momentary concentration and the corresponding survival ratio above their respective plots.
[1] M. Peleg, "Modeling and Simulation of Microbial Survival During Treatments with a Dissipating Lethal Chemical Agent," Food Research International (4), 2002 pp. 327–336.

[2] M. G. Corradini and M. Peleg, "A Model of Microbial Survival Curves in Water Treated with a Volatile Disinfectant," Journal of Applied Microbiology (6), 2003 pp. 1268–1276.

[3] M. Peleg, Advanced Quantitative Microbiology for Food and Biosystems: Models for Predicting Growth and Inactivation, Boca Raton, FL: CRC Press, 2006.
Update on scientific methodology obsoleteness
Remember the recent post I wrote about Wired editor Chris Anderson’s article on how the scientific method is becoming obsolete with the availability of large chunks of data? In that post, I conceded that
it might be possible to develop some technologies without recourse to the underlying science:
At a more fundamental level, in spite of what Chris Anderson has to say, science is about explanations, coherent models and understanding. In my opinion, all of what Anderson shows is that, if
you have enough data, you can develop technologies without having a clear handle on the underlying science; however, it is wrong to call these technologies science, and argue that you can do
science without coherent models or mechanistic explanations.
Cosma Shalizi at Three-toed-Sloth (who knows more about these models than I do) sets the record straight, and shows how the development of some technologies is impossible without a proper grounding
in science — in this eminently quotable post (which I am going to quote almost in its entirety):
I recently made the mistake of trying to kill some waiting-room time with Wired. (Yes, I should know better.) The cover story was a piece by editor Chris Anderson, about how having lots of data
means we can just look for correlations by data mining, and drop the scientific method in favor of statistical learning algorithms. Now, I work on model discovery, but this struck me as so
thoroughly, and characteristically, foolish — “saucy, ignorant contrarianism“, indeed — that I thought I was going to have to write a post picking it apart. Fortunately, Fernando Pereira (who
actually knows something about machine learning) has said, crisply, what needs to be said about this. I hope he won’t mind (or charge me) if I quote him at length:
I like big data as much as the next guy, but this is deeply confused. Where does Anderson think those statistical algorithms come from? Without constraints in the underlying statistical
models, those “patterns” would be mere coincidences. Those computational biology methods Anderson gushes over all depend on statistical models of the genome and of evolutionary
relationships. Those large-scale statistical models are different from more familiar deterministic causal models (or from parametric statistical models) because they do not specify the exact
form of observable relationships as functions of a small number of parameters, but instead they set constraints on the set of hypotheses that might account for the observed data. But without
well-chosen constraints — from scientific theories — all that number crunching will just memorize the experimental data.
I might add that anyone who thinks the power of data mining will let them write a spam filter without understanding linguistic structure deserves the in-box they’ll get; and that anyone who
thinks they can overcome these obstacles by chanting “Bayes, Bayes, Bayes”, without also employing exactly the kind of constraints Pereira mentions, is simply ignorant of the relevant probability theory.

Have fun!
Tags: Petabyte age