Integers and Exponents
November 24th 2006, 12:17 PM
Integers and Exponents
Please tell me the steps to get through these questions:
1) (8.9x10 to the exponent 5) to the exponent 4
2) (4x10 to the exponent negative 5) to the exponent negative 6
3) (6x10 to the exponent negative 5) to the exponent 3
All help is greatly appreciated.
November 24th 2006, 12:33 PM
Ranger SVO
1. 890,000 and 89,000
See a pattern? Simply move the decimal to the right 5 places for the first one or 4 places for the second one.
2. 0.00004 and 0.000004
Now we move the decimal point to the left 5 places for the first one and 6 places for the second one.
negative exponent move left
positive exponent move right
Bet you can get the next one.
November 24th 2006, 12:38 PM
You need to know this property: $\left(x^a\right)^b=x^{(ab)}$
So: $\left(8.9\times10^5\right)^4=8.9\times10^{(5\times 4)}=8.9\times10^{20}$
EDIT: Wow, I completely forgot to distribute the exponent :(
November 24th 2006, 03:11 PM
Hello, Kitty_Kat!
You didn't give us any instructions.
I will assume the answers are to be in Scientific Notation.
$1)\;\;\left(8.9 \times 10^5\right)^4$
$\left(8.9 \times 10^5\right)^4\;=\;8.9^4 \times \left(10^5\right)^4 \;=\;6,274.2241 \times 10^{20} \;=\;6.2742241 \times 10^{23}$
$2)\;\;\left(4 \times 10^{-5}\right)^{-6}$
$\left(4\times10^{-5}\right)^{-6}\;=\;4^{-6}\times\left(10^{-5}\right)^{-6}\;=\;0.000244140625 \times 10^{30}\;=\;2.44140625 \times 10^{26}$
$3)\;\;\left(6 \times 10^{-5}\right)^3$
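For completeness, the same method gives
$\left(6 \times 10^{-5}\right)^3 \;=\; 6^3 \times \left(10^{-5}\right)^3 \;=\; 216 \times 10^{-15} \;=\; 2.16 \times 10^{-13}$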
Topic: Deltas and Epsilons
Replies: 6 Last Post: Jun 13, 1997 5:06 PM
Deltas and Epsilons
Posted: Jun 10, 1997 4:25 PM
In a message dated 97-06-10 11:29:59 EDT, Lou Talman wrote, in a very
different context
<< But the students I have to teach at my
institution would be completely overwhelmed by the epsilon-delta definition. >>
As are students everywhere. My question is: are they overwhelmed by the concept in general, or are they overwhelmed by the complicated, tricky, and often obscure algebraic manipulation required to find the proper delta-epsilon relationship for all but the simplest limits? (The same question applies to proofs by induction.) I have found that they spend so much time doing the algebra that, once they get a correct relationship (that's part of the problem -- there is no "the" correct relationship), they have lost sight of the big picture.
What do you think?
Lin McMullin
Ballston Spa, NY
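A minimal instance of the delta-finding manipulation in question (an illustrative aside): to show $\lim_{x\to 2} x^2 = 4$, one workable -- and far from unique -- choice is $\delta = \min(1, \varepsilon/5)$, since $|x-2| < \delta \le 1$ forces $|x+2| < 5$, and hence $|x^2-4| = |x-2|\,|x+2| < 5\delta \le \varepsilon$.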
Radiosity Units & Computation
CS 481/681 2007 Lecture, Dr. Lawlor
So in reality, everything's a light source. Yes, the sun is a light source. But the sky is also a light source, filling in areas not illuminated by direct sunlight with a soft blue glow. Your pants
are a light source, lighting up the ground in front of you with pants-colored light.
Does this matter? Well, does this look like OpenGL?
This is an image computed using "radiosity".
Radiosity Units
The terms we'll be using to talk about light power are:
• Power and Energy carried by light (e.g., power received at a 100% efficient photocell)
□ It's not energy, it's energy per unit time.
□ Measured in Watts
☆ Watt=1 Joule/second, a small unit of power
☆ =1 Newton * meter/second (a quarter-pound weight lifted at one yard per second)
☆ =1 Volt * Ampere (or 0.01 amps at 100 volts)
☆ =1/746 horsepower
☆ =1/4.186 calories per second (a calorie heats 1 cc of water by one degree C; food is measured in *kilocalories*, but they leave off the "kilo")
□ Also measured in "lumens"-- power as perceived by the human eye. At a wavelength of 555nm, one lumen is 1/680 watt.
□ AKA flux, radiant flux
• Irradiance: light power per unit surface area leaving or arriving at a surface
□ Measured in watts per square meter [of receiver or transmitter area].
□ So multiply by the receiver's area to get watts. (Assuming everything's facing dead-on.)
□ E.g., the Sun delivers an irradiance of 700W/m^2 to the surface of the Earth.
☆ So a 10 square meter solar panel facing the sun would receive 7000W of power.
☆ So a 1.0e-10 square meter CCD pixel open for 1.0e-3 seconds would receive 7.0e-11 joules of light energy, or a couple hundred million photons (visible light photons have energy of about
4e-19 Joules).
□ Irradiance as perceived by the human eye can be measured in "lux" (lumens per square meter).
• Radiance: "brightness" as seen by a camera or your eye.
□ Measured in (get this) watts per square meter [of receiver area] per steradian [of source coverage, as measured from the receiver], or just W/m^2/sr. (See below for details on steradians)
□ E.g., from the Earth the Sun is about 1 degree across, or about 1/60 radian, so assuming it's a tiny square it would cover 1/3600 steradians. The radiance of the sun is thus about 700W/m^2
per 1/3600 steradians, or 2.3 megawatts per square meter per steradian (2.3 MW/m^2/sr).
☆ This means if a lens sets it up so from one spot you see the sun at a coverage of 1 steradian, that spot receives an irradiance of 2.3MW/m^2 of power!
□ Radiance is unchanged through free space.
☆ That is, Radiance does *not* depend directly on distance: if the source coverage is unchanged, the radiance is unchanged. That is, the pixel doesn't get brighter just because the geometry
is right in front of it!
☆ Surprisingly, Radiance is even unchanged when passing through a *lens*. Sunlight focused through a lens has the same radiance, but because the lens changes the Sun's *angle* (and hence
solid angle coverage) of the light, the received irradiance is bigger.
• Solid angle: "bigness" of light source as seen by a receiver.
□ Measured in "Steradians" (abbreviated "sr").
□ A light source's solid angle in steradians is defined as the area of the light source projected onto the surface of a sphere centered on the receiver, divided by the sphere's radius squared; equivalently, it is the projected area on a unit-radius sphere.
□ Examples:
☆ The maximum coverage is the whole sphere, which has area 4 pi * radius squared, so the maximum coverage is 4 pi steradians.
☆ A hemisphere is half the sphere, so the coverage is 2 pi steradians.
☆ A small square that measures T radians across covers approximately T^2 steradians. This expression is exact for an infinitesimal square.
□ A flat receiver surface only receives part of the light from a light source low on its horizon--this is Lambert's cosine illumination law. Hence for a flat receiver, we've got to weight the
incoming solid angle by a factor of "cosine(angle to normal)".
☆ For a source that lies directly along the receiver's normal, this doesn't affect the solid angle--it's a scaling by 1.0.
☆ For a source that lies right at the receiver's horizon, this factor totally eliminates the source's contribution--it's a scaling by 0.0.
☆ For a hemisphere, this cosine factor varies across the surface, but it integrates out to a weighted solid angle of just 1 pi steradians (the unweighted solid angle of a hemisphere is 2 pi steradians).
□ You only want the solid angle for computing illumination from a polygon to a point. For illumination between two polygons, you actually want to compute the "form factor" between the polygons,
which you can either approximate using the solid angle, or compute exactly using Schröder and Hanrahan's 1990 paper.
See the Light Measurement Handbook for many more units and nice figures.
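As a quick numeric check of the Sun example above (a small Python calculation using the lecture's rough one-degree angular size; the real Sun is closer to half a degree across):

# Back out the Sun's radiance from its irradiance and angular coverage.
E = 700.0          # W/m^2: solar irradiance at the Earth's surface
angle = 1.0/60.0   # radians: roughly 1 degree
omega = angle**2   # steradians: solid angle, treating the Sun as a tiny square
print(E/omega)     # 2.52e+06 W/m^2/sr -- the "2.3 MW/m^2/sr" ballpark above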
To do global illumination, we start with the radiance of each light source (in W/m^2/sr). For a given receiver patch, we compute the cosine-weighted solid angle of that light source, in steradians.
The incoming radiance times the solid angle gives the incoming irradiance (W/m^2). Most surfaces reflect some of the light that falls on them, so some fraction of the incoming irradiance leaves. If
the outgoing light is divided equally among all possible directions (i.e., the surface is a diffuse or Lambertian reflector), for a flat surface it's divided over a solid angle of pi (cosine
weighted) steradians. This means the outgoing radiance for a flat diffuse surface is just M/pi (W/m^2/sr), if the outgoing irradiance (or radiant exitance) is M (W/m^2).
To do global illumination for diffuse surfaces, then, for each "patch" of geometry we:
1. Compute how much light comes in. For each light source:
1. Compute the cosine-weighted solid angle for that light source.
☆ This depends on the light source's shape and position.
☆ It also depends on what's in the way: what shapes occlude/shadow the light.
2. Compute the radiance of the light source in the patch's direction.
☆ This might just be a fixed brightness, or might depend on view angle or location.
3. Multiply the two: incoming irradiance = radiance times weighted solid angle.
2. Compute how much light goes out. For a diffuse patch, outgoing radiance is just the total incoming irradiance (from all light sources) divided by pi.
For a general surface, the outgoing radiance in a particular direction is the integral, over all incoming directions, of the product of the incoming radiance and the surface's BRDF (Bidirectional
Reflectance Distribution Function). For diffuse surfaces, where the surface's radiance-to-radiance BRDF is just the cosine-weighted solid angle over pi, this is computed exactly as above.
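Putting those two steps together, here is a minimal sketch of the diffuse update loop for a made-up three-patch scene (Python rather than GLSL, and the numbers are invented); omega[i][j] plays the role of the cosine-weighted solid angle of patch j as seen from patch i:

import numpy as np

omega = np.array([[0.0, 0.3, 0.2],   # omega[i][j]: cosine-weighted solid angle (sr)
                  [0.3, 0.0, 0.4],   # of patch j as seen from patch i
                  [0.2, 0.4, 0.0]])
rho  = np.array([0.7, 0.5, 0.8])     # diffuse reflectances
emit = np.array([5.0, 0.0, 0.0])     # emitted radiance; patch 0 is the light

L = emit.copy()
for _ in range(50):                  # relax toward the equilibrium solution
    irradiance = omega @ L           # incoming W/m^2: radiance times solid angle
    L = emit + rho * irradiance / np.pi  # diffuse: outgoing radiance = M/pi
print(L)                             # equilibrium radiances (W/m^2/sr)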
Computing Solid Angles
So one important trick in doing this is being able to compute cosine-weighted solid angles of light sources. Luckily, there are several funny tricks for this.
The oldest, best-known, and least accurate way to compute a solid angle is to assume the light source is small and/or far away. If either of these are true, then the light source projects to a little
dot on the hemisphere, and we just need to figure out how the area (and hence solid angle) of that dot scales with distance. Well, the height of the dot scales like 1/z, and the width of the dot
scales like 1/z, so together the area (width times height) scales like 1/z^2. That's the old, familiar inverse-square "law" you've heard since middle-school physics class. Unfortunately, it's
actually a lie.
For example, say the light source is an isotropic 1-meter radius disk, sitting z meters away and directly facing you. What's the falloff with z? It turns out to be 1/(z^2+1), not 1/z^2. These are
similar for large z, but at small z the disk isn't as bright as an equivalent point source. That is, inverse-square breaks down when you get too close--and it's a good thing too, because you'd reach
infinite brightness at z=0!
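A quick numeric comparison makes the breakdown visible (the exact cosine-weighted solid angle of a directly facing unit-radius disk is pi/(z^2+1), versus the point-source estimate pi/z^2):

import math

for z in (0.1, 0.5, 1.0, 2.0, 10.0):
    exact  = math.pi / (z*z + 1.0)   # cosine-weighted solid angle of the disk
    approx = math.pi / (z*z)         # inverse-square / point-source estimate
    print(f"z={z:5.1f}  exact={exact:8.4f}  1/z^2 estimate={approx:10.4f}")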
Solid Angle from Point To Polygon Light Source
Luckily, there's a fairly simple, if bizarre, algorithm for computing the solid angle of a polygon transmitter from any receiver point. This algorithm was presented by Hottel and Sarofim in 1967 (for
heat transfer engineering, not computer graphics!), and again in a 1992 Graphics Gem by Filippo Tampieri.
vec3 light_vector=vec3(0.0);
int n=light.length(); // number of vertices in the light-source polygon
for (int i=0;i<n;i++) { // Loop over vertices of light source
  vec3 Ri=normalize(receiver-light[i]); // Points from light source vertex to receiver
  vec3 Rp=normalize(receiver-light[(i+1)%n]); // Points from light source's *next* vertex (wrapping around) to receiver
  light_vector += acos(dot(Ri,Rp)) * normalize(cross(Ri,Rp));
}
float solid_angle = 0.5*dot(receiver_normal,light_vector); // the 1/2 is Lambert's normalization: without it, a full hemisphere sums to 2*pi rather than pi
You can actually just dump this whole thing directly into GLSL, and it will run. Two years ago, the "acos" kicked it out into software rendering, which was 1000x slower. Today, on the *same*
hardware, the *same* GLSL program runs in graphics hardware, and is pretty dang fast too--a sincere thank you goes out to the ATI GLSL driver writers!
You can speed it up a bit (and mess it up too at close range!) by first noting that
light_vector += acos(dot(Ri,Rp)) * normalize(cross(Ri,Rp))
is an angle-weighted, normalized cross product.
In theory we could compute the angle weighting just as well with
light_vector += asin(length(cross(Ri,Rp))) * normalize(cross(Ri,Rp))
This isn't actually useful in practice, because it breaks down for nearby receivers--the angle between Ri and Rp exceeds 90 degrees, and asin starts giving the wrong answer, while acos keeps working
all the way out to 180.
However, for distant receivers, which have small angles between Ri and Rp, we're close to having asin x == x. Thus the above is very nearly equal to:
light_vector += length(cross(Ri,Rp)) * normalize(cross(Ri,Rp))
Now, the length of a vector times the direction of the vector gives the original vector, so we can actually just sum up the cross products:
light_vector += cross(Ri,Rp);
In practice, this is very difficult to distinguish visually from the real equation, except at extreme close range where this approximation comes out a bit darker than the real thing.
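A NumPy translation of the polygon formula makes this easy to verify numerically (a sketch; vertices are ordered counterclockwise as seen from the receiver, and the 1/2 is the Lambert normalization noted in the code above):

import numpy as np

def weighted_solid_angle(verts, receiver, normal):
    # Angle-weighted edge sum over the polygon's vertices.
    n = len(verts)
    total = np.zeros(3)
    for i in range(n):
        Ri = receiver - verts[i]
        Rp = receiver - verts[(i + 1) % n]
        Ri /= np.linalg.norm(Ri)
        Rp /= np.linalg.norm(Rp)
        c = np.cross(Ri, Rp)
        total += np.arccos(np.clip(np.dot(Ri, Rp), -1.0, 1.0)) * c / np.linalg.norm(c)
    return 0.5 * np.dot(normal, total)

receiver, up = np.zeros(3), np.array([0.0, 0.0, 1.0])
s, d = 0.01, 10.0
square = np.array([[ s/2,  s/2, d], [-s/2,  s/2, d],
                   [-s/2, -s/2, d], [ s/2, -s/2, d]])
print(weighted_solid_angle(square, receiver, up))   # ~1e-6, i.e. area/d^2
big = square * np.array([1e6, 1e6, 1.0])            # hemisphere limit
print(weighted_solid_angle(big, receiver, up))      # ~pi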
Solid Angle's Achilles Heel--Occlusion
The above equation works great if the receiver can see the whole light source. If something gets in the way of that light source, you still compute solid angle, but only where the light source
actually shines on the receiver--a much more complicated computation to perform.
It is actually possible to clip away the occluded parts of the light source, and then compute the solid angle of the non-occluded pieces. I've done this in software. It's actually tolerably fast as
long as you're willing to limit yourself to a few dozen polygons per scene. But with any realistic number of polygons, computing the exact solid angle becomes very painfully slow. This is where
approximations come in, which we'll cover next class.
Pseudoprimes on elliptic curves
2001
"... Let L(x) denote the counting function for Lucas pseudoprimes, and E(x) denote the elliptic pseudoprime counting function. We prove that, for large x, L(x) ≤ x L(x) −1/2 and E(x) ≤ x L(x) −1/3,
where L(x) = exp(log xlog log log x / log log x). ..."
Cited by 9 (1 self)
Let L(x) denote the counting function for Lucas pseudoprimes, and E(x) denote the elliptic pseudoprime counting function. We prove that, for large x, L(x) ≤ x L(x)^(−1/2) and E(x) ≤ x L(x)^(−1/3), where L(x) = exp(log x · log log log x / log log x).
"... Abstract. We describe some primality tests based on quadratic rings and discuss the absolute pseudoprimes for these tests. 1. ..."
Add to MetaCart
Abstract. We describe some primality tests based on quadratic rings and discuss the absolute pseudoprimes for these tests.
Cross product
From Encyclopedia of Mathematics
crossed product, of a group $G$ and a ring $K$
An associative ring defined as follows. Suppose one is given a mapping $\sigma$ of a group $G$ into the isomorphism group of an associative ring $K$ with identity, and a family
$$ \rho = \{ \rho_{g,h} | g,h \in G\} $$
of invertible elements of $K$, satisfying the conditions
$$ \rho_{g_1,g_2}\rho_{g_1g_2,g_3} = \rho^{\sigma(g_1)}_{g_2,g_3}\rho_{g_1,g_2g_3}, $$
$$ \alpha^{\sigma(g_2)\sigma(g_1)} = \rho_{g_1,g_2}\alpha^{\sigma(g_1g_2)}\rho^{-1}_{g_1,g_2} $$
for all $\alpha\in K$ and $g_1,g_2,g_3\in G$. The family $\rho$ is called a factor system. Then the cross product of $G$ and $K$ with respect to the factor system $\rho$ and the mapping $\sigma$ is
the set of all formal finite sums of the form
$$ \sum_{g\in G} \alpha_g t_g $$
where $\alpha_g \in K$ and the $t_g$ are symbols uniquely assigned to every element $g\in G$, with binary operations defined by
$$ \sum_{g\in G} \alpha_g t_g + \sum_{g\in G} \beta_g t_g = \sum_{g\in G} (\alpha_g+\beta_g)t_g, $$
$$ \left(\sum_{g\in G}\alpha_g t_g\right) \left(\sum_{g\in G}\beta_g t_g\right) = \sum_{g\in G} \left(\sum_{h_1h_2=g}\alpha_{h_1}\beta^{\sigma(h_1)}_{h_2}\rho_{h_1,h_2}\right) t_g $$
This ring is denoted by $K(G, \rho, \sigma)$; the elements $t_g$ form a $K$-basis for the ring.
If $\sigma$ maps $G$ onto the identity automorphism of $K$, then $K(G, \rho)$ is called a twisted or crossed group ring, and if, in addition, $\rho_{g,h}=1$ for all $g,h\in G$, then $K(G,\rho,\sigma)$ is the group ring of $G$ over $K$ (see Group algebra).
Let $K$ be a field and $\sigma$ a monomorphism of $G$ into the automorphism group of $K$. Then $K(G,\rho,\sigma)$ is a simple ring: the cross product of the field with its Galois group.
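A concrete illustration (added here; not part of the original entry): Hamilton's quaternions arise as such a cross product. Take $K = \mathbb{C}$, $G = \operatorname{Gal}(\mathbb{C}/\mathbb{R}) = \{1, c\}$ with $\sigma(c)$ complex conjugation, and factor system $\rho_{c,c} = -1$ (all other $\rho_{g,h} = 1$). Then in $\mathbb{C}(G,\rho,\sigma)$ one has $t_c\alpha = \bar{\alpha}\,t_c$ and $t_c^2 = \rho_{c,c}t_1 = -1$, so setting $j = t_c$ recovers the defining relations $j^2 = -1$ and $ji = -ij$ of the quaternions $\mathbb{H}$.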
Up to Brauer equivalence every central simple algebra is a cross product, but not every division algebra is isomorphic to a cross product.
Londonderry, NH Science Tutor
Find a Londonderry, NH Science Tutor
...I worked with students near the top of their class, students out of school for a variety of reasons, and everyone in between. Once I earned my Ph.D., I worked for 13 years as a pharmaceutical
chemist making oncology drugs. Well, the teaching bug has not left and I would like to get back into working with students.
11 Subjects: including organic chemistry, physics, physical science, chemistry
...I've written enough papers to help you decide on a topic and find evidence in a piece of literature, or to discuss what you might compare between two pieces. If you let me know beforehand, I
might even be able to read the entire text before we meet. Just give me a bit longer if it's a novel or harder to find.
20 Subjects: including biology, English, Spanish, reading
...I look forward to helping your student be successful! I currently teach Biology at the high school level and I hold my Bachelor's degree in biology. I took a full-year Anatomy and Physiology course in college and was a Teacher's Aide for the next year, including running a lab section independently.
5 Subjects: including biology, chemistry, anatomy, physical science
I have been teaching public school for 18 years and have been honored to receive the Milken National Educator Award (NH 2003). I am able to work with a variety of student learning styles and
believe that every student learns differently. I have taught Adult Education, English to Speakers of Other Languages...
9 Subjects: including anthropology, archaeology, reading, English
...My approach is to teach you the current subject along with whatever you might have missed in past studies. I am patient in explaining. I also ask simple questions to ensure your full
understanding of a particular subject.
12 Subjects: including chemistry, geometry, physical science, algebra 1
• Taught by Feynman Prize winner Professor Yaser Abu-Mostafa.
• The fundamental concepts and techniques are explained in detail. The focus of the lectures is real understanding, not just "knowing."
• Lectures use incremental viewgraphs (2853 in total) to simulate the pace of blackboard teaching.
• The 18 lectures (below) are available on different platforms:
The lectures are available as a YouTube playlist and through the iTunes U course app.
The Learning Problem - Introduction; supervised, unsupervised, and reinforcement learning. Components of the learning problem.
Is Learning Feasible? - Can we generalize from a limited sample to the entire space? Relationship between in-sample and out-of-sample.
The Linear Model I - Linear classification and linear regression. Extending linear models through nonlinear transforms.
Error and Noise - The principled choice of error measures. What happens when the target we want to learn is noisy.
Training versus Testing - The difference between training and testing in mathematical terms. What makes a learning model able to generalize?
Theory of Generalization - How an infinite model can learn from a finite sample. The most important theoretical result in machine learning.
The VC Dimension - A measure of what it takes a model to learn. Relationship to the number of parameters and degrees of freedom.
Bias-Variance Tradeoff - Breaking down the learning performance into competing quantities. The learning curves.
The Linear Model II - More about linear models. Logistic regression, maximum likelihood, and gradient descent.
Neural Networks - A biologically inspired model. The efficient backpropagation learning algorithm. Hidden layers.
Overfitting - Fitting the data too well; fitting the noise. Deterministic noise versus stochastic noise.
Regularization - Putting the brakes on fitting the noise. Hard and soft constraints. Augmented error and weight decay.
Validation - Taking a peek out of sample. Model selection and data contamination. Cross validation.
Support Vector Machines - One of the most successful learning algorithms; getting a complex model at the price of a simple one.
Kernel Methods - Extending SVM to infinite-dimensional spaces using the kernel trick, and to non-separable data using soft margins.
Radial Basis Functions - An important learning model that connects several machine learning models and techniques.
Three Learning Principles - Major pitfalls for machine learning practitioners; Occam's razor, sampling bias, and data snooping.
Epilogue - The map of machine learning. Brief views of Bayesian learning and aggregation methods.
Newest 'convention soft-question' Questions
This is a boring, technical question that I stumbled upon while making a contribution to Sage. I would still like to hear a constructive answer so hopefully the question does not get closed. The ...
I rarely find modern research papers (on mathematics) that are less than 5 pages long. However, recently I came across a couple of mathematical research papers from the 1960/1970's that were very ...
It strikes me that there is no widely accepted symbol to denote the set of usual prime numbers in $\mathbb{N}$. Look: $$\zeta(s)=\prod_{p\in \mathrm{?}}\frac{1}{(1-p^{-s})}$$ Wouldn't it be nicer ...
It seems in writing math papers collaborators put their names in the alphabetical order of their last names. Is this a universally accepted norm? I could not find a place putting this down formally.
I describe recent work with Stefan Hollands that establishes a new criterion for the dynamical stability of black holes in $D \geq 4$ spacetime dimensions in general relativity with respect to
axisymmetric perturbations: Dynamic stability is equivalent to the positivity of the canonical energy, $\mathcal E$, on a subspace of linearized solutions that have vanishing linearized ADM mass,
momentum, and angular momentum at infinity and satisfy certain gauge conditions at the horizon. We further show that $\mathcal E$ is related to the second order variations of mass, angular momentum, and horizon area.
Why does positive curvature imply a finite universe?
But look at my previous post: my toy Universe has finite diameter, but it is not spatially finite!!!
Sorry, but the statement that it is a sphere and yet geodesics don't close is a contradiction. A sphere is defined by topology and geometry, not coordinate conventions. A geometric sphere has, as a
property, that all geodesics are closed curves.
OK, so you propose that there is a 'funny 2-surface' with constant positive curvature everywhere, that is not closed. Well, that contradicts Myers' theorem. Forget embedding, this means that if you
go through the steps to actually set it up as manifold with the proposed properties, you will fail. I suspect where you must fail is that in defining the open sets for your magical surface, you get a
contradiction - some point must be near and not near some other point, at the same time.
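For reference, the result invoked here is the Bonnet-Myers theorem: if $(M,g)$ is a complete Riemannian $n$-manifold with $\operatorname{Ric} \ge (n-1)k\,g$ for some constant $k > 0$, then $\operatorname{diam}(M) \le \pi/\sqrt{k}$, and in particular $M$ is compact. A complete surface of constant positive curvature therefore cannot be spatially infinite.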
[Numpy-discussion] all elements equal
John Hunter jdh2358@gmail....
Mon Mar 5 13:32:40 CST 2012
On Mon, Mar 5, 2012 at 1:29 PM, Keith Goodman <kwgoodman@gmail.com> wrote:
> I[8] np.allclose(a, a[0])
> O[8] False
> I[9] a = np.ones(100000)
> I[10] np.allclose(a, a[0])
> O[10] True
One disadvantage of using a[0] as a proxy is that the result depends on the
ordering of a
(a.max() - a.min()) < epsilon
is an alternative that avoids this. Another good use case for a minmax
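A quick illustration of that ordering dependence (made-up values, using the default rtol=1e-5 and atol=1e-8):

import numpy as np

a1 = np.array([0.0, 9e-9, 1.5e-8])
a2 = np.array([9e-9, 0.0, 1.5e-8])   # same values, different order

# allclose(a, a[0]) uses atol + rtol*|a[0]| as its tolerance, so the verdict
# can flip depending on which element happens to come first.
print(np.allclose(a1, a1[0]))        # False (tolerance anchored at 0.0)
print(np.allclose(a2, a2[0]))        # True  (tolerance anchored at 9e-9)

# max - min is permutation-invariant.
print((a1.max() - a1.min()) < 1e-7)  # True
print((a2.max() - a2.min()) < 1e-7)  # True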
To get a basic idea of how parallel processing works in fragment shaders and GPU memory
09-08-2012, 01:59 AM
To get a basic idea of how parallel processing works in fragment shaders and GPU memory
I am new to GLSL and GPGPU. I want to know how a fragment shader performs matrix multiplication or histogram computation as a parallel algorithm. I only know that it runs once per pixel.
If I have a 5x3 matrix A and a 3x6 matrix B, I have to compute the 5x6 matrix C.
If I represent A as a texture, what should its size be, and how do I represent it as a texture? If C is also a texture, does the fragment program run on each pixel of C, and how is that related to the total number of pixels? How are registers and memory used? How can I get an idea of how such an algorithm flows in GLSL, and what memory constraints should I be aware of when developing GPGPU applications with GLSL?
I have read that computing a histogram with more than 80 bins is not possible on a GPGPU with ordinary graphics cards. My laptop has an NVIDIA GT425M with 1 GB of RAM; is it possible to develop a GPGPU program on it to compute a histogram?
I am looking for links for learning GLSL and answers to the questions above.
Thanks & Regards
Dileep S.
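One way to picture the data layout in question (an illustrative sketch, not an answer from the thread): store A and B as textures with one texel per matrix entry, make the render target the size of C (6 wide by 5 tall), and let each fragment compute one dot product. Python stands in for GLSL here; fragment() is what a single shader invocation would do for its pixel.

import numpy as np

A = np.arange(15, dtype=np.float32).reshape(5, 3)   # 5x3 matrix as a "texture"
B = np.arange(18, dtype=np.float32).reshape(3, 6)   # 3x6 matrix as a "texture"

def fragment(i, j):
    # One fragment-shader invocation for output pixel (i, j): walk the
    # shared dimension, sampling one texel from each input texture.
    return sum(A[i, k] * B[k, j] for k in range(A.shape[1]))

# The GPU would run all 5*6 = 30 fragments independently, in parallel;
# here we simply loop over them.
C = np.array([[fragment(i, j) for j in range(6)] for i in range(5)],
             dtype=np.float32)
assert np.allclose(C, A @ B)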
Genome Biol. 2009; 10(6): R69.
iBsu1103: a new genome-scale metabolic model of Bacillus subtilis based on SEED annotations
Bacillus subtilis is an organism of interest because of its extensive industrial applications, its similarity to pathogenic organisms, and its role as the model organism for Gram-positive,
sporulating bacteria. In this work, we introduce a new genome-scale metabolic model of B. subtilis 168 called iBsu1103. This new model is based on the annotated B. subtilis 168 genome generated by
the SEED, one of the most up-to-date and accurate annotations of B. subtilis 168 available.
The iBsu1103 model includes 1,437 reactions associated with 1,103 genes, making it the most complete model of B. subtilis available. The model also includes Gibbs free energy change (Δ[r]G'°) values
for 1,403 (97%) of the model reactions estimated by using the group contribution method. These data were used with an improved reaction reversibility prediction method to identify 653 (45%)
irreversible reactions in the model. The model was validated against an experimental dataset consisting of 1,500 distinct conditions and was optimized by using an improved model optimization method
to increase model accuracy from 89.7% to 93.1%.
Basing the iBsu1103 model on the annotations generated by the SEED significantly improved the model completeness and accuracy compared with the most recent previously published model. The enhanced
accuracy of the iBsu1103 model also demonstrates the efficacy of the improved reaction directionality prediction method in accurately identifying irreversible reactions in the B. subtilis metabolism.
The proposed improved model optimization methodology was also demonstrated to be effective in minimally adjusting model content to improve model accuracy.
Bacillus subtilis is a naturally competent, Gram-positive, sporulating bacterium often used in industry as a producer of high-quality enzymes and proteins [1]. As the most thoroughly studied of
Gram-positive and sporulating bacteria, B. subtilis serves as a model cell for understanding the Gram-positive cell wall and the process of sporulation. With its similarity to the pathogens Bacillus
anthracis and Staphylococcus aureus, B. subtilis is also important as a platform for exploring novel medical treatments for these pathogens. Moreover, the natural competence of B. subtilis opens the
way for simple and rapid genetic modification by homologous recombination [2].
For all these reasons, B. subtilis has been the subject of extensive experimental study. Every gene essential for growth on rich media is known [3]; 60 gene intervals covering 49% of the genes in the
genome have been knocked out and the resulting phenotypes analyzed [4]; ¹³C experiments have been run to explore the cell response to mutations in the central carbon pathways [5]; and Biolog
phenotyping experiments [6] have been performed to study the ability of B. subtilis to metabolize 271 different nutrient compounds [7].
As genome-scale experimental datasets begin to emerge for B. subtilis, genome-scale models of B. subtilis are required for the analysis and interpretation of these datasets. Genome-scale metabolic
models may be used to rapidly and accurately predict the cellular response to gene knockout [8,9], media conditions [10], and environmental changes [11]. Recently, genome-scale models of the
metabolism and regulation of B. subtilis have been published by Oh et al. [7] and Goelzer et al. [12], respectively. However, both of these models have drawbacks and limitations. While the Goelzer et
al. model provides regulatory constraints for B. subtilis on a large scale, the metabolic portion of this model is limited to the central metabolic pathways of B. subtilis. As a result, this model
captures fewer of the metabolic genes in B. subtilis, thereby restricting the ability of the model to predict the outcome of large-scale genetic modifications. While the Oh et al. metabolic model
covers a larger portion of the metabolic pathways and genes in B. subtilis, many of the annotations that this model is based upon are out of date. Additionally, both models lack thermodynamic data
for the reactions included in the models. Without these data, the directionality and reversibility of the reactions reported in these models are based entirely on databases of biochemistry such as the
Kyoto Encyclopedia of Genes and Genomes (KEGG) [13,14]. Hence, directionality is often over-constrained, with a large number of reactions listed as irreversible (59% of the reactions in the Goelzer
et al. model and 65% of the reactions in the Oh et al. model).
In this work, we introduce a new genome-scale model of B. subtilis based on the annotations generated by the SEED Project [15-17]. The SEED is an attractive source for genome annotations because it
provides continuously updated annotations with a high level of accuracy, consistency, and completeness. The exceptional consistency and completeness of the SEED annotations are primarily a result of
the subsystems-based strategy employed by the SEED, where each individual cellular subsystem (for example, glycolysis) is annotated and curated across many genomes simultaneously. This approach
enables annotators to exploit comparative genomics approaches to rapidly and accurately propagate biological knowledge.
During the reconstruction process for the new model, we applied a group contribution method [18] to estimate the standard Gibbs free energy change of reaction (Δ[r]G'°) for each reaction included in
the model. We then developed new extensions to an existing methodology [19-21] that uses these estimated Δ[r]G'° values along with the reaction stoichiometry to predict the reversibility and
directionality of every reaction in the model. The Δ[r]G'° values reported for the reactions in the model may also be of use in applying numerous forms of thermodynamic analysis now emerging [22-24]
to study the B. subtilis metabolism on a genome scale.
Once the reconstruction process was complete, we applied a significantly modified version of the GrowMatch algorithm developed by Kumar and Maranas [25] to fit our model to the available experimental
data. In the GrowMatch methodology, an optimization problem is solved for each experimental condition that is incorrectly predicted by the original model, in order to identify the minimal number of
reactions that must be added or removed from the model to correct the prediction. As a result, many equivalent solutions are generated for correcting each erroneous model prediction. We propose new
solution reconciliation steps for the GrowMatch procedure to identify the optimal combination of GrowMatch solutions that results in an optimized model. We also propose significant alterations to the
objective function of the GrowMatch optimization to improve the quality of the solutions generated by GrowMatch.
Reconstruction of the Core iBsu1103 model
We started the model reconstruction by obtaining the annotated B. subtilis 168 genome from the SEED. This annotated genome consists of 2,691 distinct functional roles associated with 3,257 (79%) of
the 4,114 genes identified in the B. subtilis 168 chromosome. Of the functional roles included in the annotation, 50% are organized into SEED subsystems, each of which represents a single biological
pathway such as histidine biosynthesis. The functional roles within subsystems are the focus of the cross-genome curation efforts performed by the SEED annotators, resulting in greater accuracy and
consistency in the assignment of these functional roles to genes. Reactions were mapped to the functional roles in the B. subtilis 168 genome based on three criteria: match of the Enzyme Commission
numbers associated with the reaction and the functional role; match of the metabolic activities associated with the reaction and the functional role; and match of the substrates and products
associated with the reaction and functional role [26]. In total, 1,263 distinct reactions were associated with 1,032 functional roles and 1,104 genes. Of these reactions, 88% were assigned to
functional roles included in the highly curated SEED subsystems, giving us a high level of confidence in the annotations that form the basis of the B. subtilis model.
Often genes produce protein products that function cooperatively as a multi-enzyme complex to perform a single reaction. To accurately capture the dependency of such reactions on all the genes
encoding components of the multi-enzyme complex, we grouped these genes together before mapping them to the reaction. We identified 111 such gene groups and mapped them to 199 distinct reactions in
the B. subtilis model. Reactions were mapped to these gene groups instead of individual genes if: the functional roles assigned to the genes indicated that they formed a complex; multiple consecutive
non-homologous genes were assigned to the same functional role; or the reaction represented the lumped functions of multiple functional roles associated with multiple genes.
The metabolism of B. subtilis is known to involve some metabolic functions that are not associated with any genes in the B. subtilis genome. During the reconstruction of the B. subtilis model, 71
such reactions were identified. While 19 of these reactions take place spontaneously, the genes associated with the remaining reactions are unknown. These reactions were added to the model as open
problem reactions, indicating that the genes associated with these reactions have yet to be identified (Table S3 in Additional data files 1 and 2).
Data from Biolog phenotyping arrays were also used in reconstructing the B. subtilis model. The ability of B. subtilis to metabolize 153 carbon sources, 53 nitrogen sources, 47 phosphate sources, and
18 sulfate sources was tested by using Biolog phenotyping arrays [7]. Of the tested nutrients, B. subtilis was observed to be capable of metabolizing 95 carbon, 42 nitrogen, 45 phosphate, and 2
sulfate sources. Transport reactions are associated with genes in the B. subtilis 168 genome for only 94 (51%) of these proven nutrients. Therefore, 73 open problem transport reactions were added to
the model to allow for transport of the remaining Biolog nutrients that exist in our biochemistry database (Table S3 in Additional data files 1 and 2).
In total, the unoptimized SEED-based B. subtilis model consists of 1,405 reactions and 1,104 genes (Table 1). We call this model the Core iBsu1103, where the i stands for in silico, the Bsu
stands for B. subtilis, and the 1,103 stands for the number of genes captured by the model (one gene is lost during the model optimization process described later). In keeping with the modeling
practices first proposed by Reed et al. [27], protons are properly balanced in the model by representing all model compounds and reactions in their charge-balanced and mass-balanced form in aqueous
solution at neutral pH [28].
Construction of a biomass objective function
In order to use the reconstructed iBsu1103 model to predict cellular response to media conditions and gene knockout, a biomass objective function (BOF) was constructed. This BOF was based primarily
on the BOF developed for the Oh et al. genome-scale model of B. subtilis [7]. The 61 small molecules that make up the Oh et al. BOF can be divided into seven categories representing the fundamental
building blocks of biomass: DNA, RNA, lipids, lipoteichoic acid, cell wall, protein, and cofactors and ions. In the Oh et al. BOF, all of these components are lumped together as reactants in a single
biomass synthesis reaction, which is not associated with any genes involved in macromolecule biosynthesis. In the iBsu1103 model, we decomposed biomass production into seven synthesis reactions: DNA
synthesis; RNA synthesis; protein synthesis; lipid content; lipoteichoic acid synthesis; cell wall synthesis; and biomass synthesis. These abstract species produced by these seven synthesis reactions
are subsequently consumed as reactants along with 22 cofactors and ionic species in the biomass synthesis reaction. This process reduces the complexity of the biomass synthesis reaction and makes the
reason for the inclusion of each species in the reaction more transparent. Additionally, this allows the macromolecule synthesis reactions to be mapped to macromolecule biosynthesis genes in B.
subtilis. For example, genes responsible for encoding components of the ribosome and genes responsible for tRNA loading reactions were all assigned together as a complex associated with the protein
synthesis reaction.
Some of the species acting as biomass precursor compounds in the Oh et al. BOF were also altered in the adaptation of the BOF to the iBsu1103 model. In the Oh et al. model, the BOF involves 11 lumped
lipid and teichoic acid species, which represent the averaged combination of numerous lipid compounds with varying carbon chain lengths. In the development of the fatty acid and cell wall
biosynthesis pathways for the iBsu1103 model, we represented every distinct fatty acid and teichoic acid species explicitly rather than using lumped reactions and compounds. As a result, lumped
species that serve as biomass components in the Oh et al. model were replaced by 99 explicit species in the iBsu1103 BOF. Of these species, 63 serve as reactants in the lipid content reaction, while
the remaining species serve as reactants in the teichoic acid synthesis reaction.
Two new biomass precursor compounds were added to the biomass synthesis reaction of the iBsu1103 model to improve the accuracy of the gene essentiality predictions: coenzyme A (CoA) and
acyl-carrier-protein (ACP). Both of these species are used extensively as carrier compounds in the metabolism of B. subtilis, making the continuous production of these compounds essential. The
biosynthesis pathways for both compounds already existed in the iBsu1103, and two of the steps in these pathways are associated with essential genes in B. subtilis: ytaG (peg.2909) and acpS
(peg.462). If these species are not included in the BOF, these pathways become non-functional, and the essential genes associated with these pathways are incorrectly predicted to be nonessential.
The coefficients in the Oh et al. BOF are derived from numerous analyses of the chemical content of B. subtilis biomass [29-33]. We similarly derived the coefficients for the iBsu1103 model from
these sources. While no data were available on the percentage of B. subtilis biomass represented by our two additional biomass components CoA and ACP, we assume these components to be 0.5% of the net
mass of cofactors and ions represented in the BOF.
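To make the role of the biomass reaction concrete, here is a toy flux balance calculation (an illustrative three-reaction sketch, not the iBsu1103 network): the biomass reaction is just another flux, and a growth prediction is the maximum biomass flux subject to steady-state mass balance and uptake bounds.

import numpy as np
from scipy.optimize import linprog

# Toy network: uptake of A, conversion A -> B, and a "biomass" drain on B
# standing in for the biomass objective function.
S = np.array([[ 1, -1,  0],    # metabolite A: produced by uptake, consumed by A -> B
              [ 0,  1, -1]])   # metabolite B: produced by A -> B, consumed by biomass
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake of A capped at 10 units
c = np.array([0, 0, -1])                   # linprog minimizes, so maximize biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x)     # optimal fluxes: [10. 10. 10.]
print(-res.fun)  # maximal biomass production: 10.0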
Results of automated assignment of reaction reversibility
The group contribution method [18] was used to estimate standard Gibbs free energies of formation (Δ[f]G'°) for 948 (83.3%) of the metabolites and Δ[r]G'° for 1,372 (97.4%) of the reactions in the
unoptimized iBsu1103 model. Estimated Δ[r]G'° values were used in combination with a set of heuristic rules (see Materials and methods) to predict the reversibility and directionality of each
reaction in the model under physiological conditions (Figure 1). Based on these reversibility rules, 635 (45%) of the reactions in the model were found to be
the directionality of the irreversible reactions was set according to our reversibility criteria, the model no longer predicted growth on LB or glucose-minimal media. This result indicates that the
direction of flux required for growth under these media conditions contradicted the predicted directionality for some of the irreversible reactions in the model. Six reactions were identified in the
model that met these criteria (Table 2). In every case, these reactions were predicted to be irreversible in the reverse direction because the minimum estimated Gibbs free energy change was greater than zero [18]. Thus, in combination with the strong experimental evidence for the activity of these reactions in the direction shown in Table 2, we assumed that the Δ[r]G'° values of these reactions were overestimated by the
group contribution method and that these reactions are, in fact, reversible.
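The flavor of this kind of directionality call can be sketched in a few lines (the threshold and uncertainty values here are illustrative placeholders, not the published parameters):

# Classify a reaction from its estimated standard Gibbs free energy change.
def reversibility(dG, uncertainty=2.0):
    dG_min, dG_max = dG - uncertainty, dG + uncertainty
    if dG_min > 0:
        return "irreversible (reverse direction only)"  # forward flux infeasible
    if dG_max < 0:
        return "irreversible (forward direction only)"  # reverse flux infeasible
    return "reversible"

for dG in (-12.0, -1.0, 9.5):
    print(dG, "->", reversibility(dG))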
Reactions required to violate the automated reversibility rules
Distribution of reactions conforming to reversibility rules. (a) The distribution of reactions in the iBsu1103 model conforming to every possible state in the proposed set of rules for assigning
reaction directionality and reversibility is shown. This ...
Results of the model optimization procedure
The unoptimized model was validated against a dataset consisting of 1,500 distinct experimental conditions, including gene essentiality data [3], Biolog phenotyping data [7], and gene interval
knockout data [4] (Table 3). Initially, 85 errors arose in the gene essentiality predictions, including 58 false positives (an essential gene being predicted to be nonessential) and 27
false negatives (a nonessential gene being predicted to be essential). The annotations of all erroneously predicted essential and nonessential genes were manually reviewed to identify cases where the
prediction error was a result of an incorrect gene annotation. Of the essential genes that were predicted to be nonessential, 30 were mapped to essential metabolic functions in the model. However,
these essential genes all had homologs in the B. subtilis genome that were mapped to the same essential metabolic functions (Table S4 in Additional data files 1 and 2). Three explanations exist for
the apparent inactivity of these gene homologs: they are similar to the essential genes but actually perform a different function; they are nonfunctional homologs; or the regulatory network in the
cell deactivates these genes, making them incapable of taking over the functions of the essential genes when they are knocked out. In order to correct the essentiality predictions in the model, these
30 homologous genes were disassociated from the essential metabolic functions.
Accuracy of model predictions before and after optimization
We then applied our modified GrowMatch model optimization procedure (see Materials and methods) in an attempt to fix the 116 remaining false negative predictions and 39 remaining false positive
predictions (Figure 2). First, the gap filling algorithm was applied to identify existing irreversible reactions that could be made reversible or new reactions that could be added to
correct each false negative prediction. This step produced 686 solutions correcting 78 of the false negative predictions. The gap filling reconciliation algorithm was used to combine the gap filling
solutions into a single solution that corrected 45 false negative predictions and introduced five new false positive predictions. Next, the gap generation algorithm was applied to identify reactions
that could be removed or made irreversible to correct each false positive prediction. The gap generation algorithm produced 144 solutions correcting 32 of the false positive predictions. The gap
generation reconciliation algorithm combined these solutions into a single solution that corrected 11 false positive predictions without introducing any new false negative predictions. Overall, two
irreversible reactions were made reversible, 35 new reactions were added to the model, 21 reversible reactions were made irreversible, and 3 reactions were removed entirely from the model (Table S5
in Additional data files 1 and 2). As a result of these changes, the model accuracy increased from 89.7% to 93.1%.
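The combinatorial idea behind the gap-filling step can be sketched as follows (a toy stand-in for the actual MILP; grows() is a hypothetical feasibility check playing the role of an FBA growth test):

from itertools import combinations

def grows(reactions):
    # Hypothetical growth test: biomass needs the connected path A -> B -> C.
    return {"A->B", "B->C"} <= reactions

model = {"A->B"}                       # broken model: the B -> C step is missing
candidates = ["B->C", "A->C", "C->A"]  # reaction database to draw additions from

# Search for the smallest set of database reactions that restores growth.
for k in range(1, len(candidates) + 1):
    fixes = [set(s) for s in combinations(candidates, k) if grows(model | set(s))]
    if fixes:
        print("minimal gap-filling solutions:", fixes)
        break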
Model optimization procedure results. The results are shown from the application of each step of the model optimization procedure to fit the iBsu1103 model to the 1,500 available experimental
data-points. KO, knock out.
Model overview
The final optimized version of the iBsu1103 model consists of 1,437 reactions, 1,138 metabolites, and 1,103 genes (Table 1). Based on the reversibility rules and the estimated thermodynamic
data, 653 (45.0%) of the model reactions were determined to be irreversible. All data relevant to the model are provided in the Additional data files, including metabolite structures (Additional data
file 3), metabolite data (Table S1 in Additional data files 1 and 2), reaction data (Table S2 in Additional data files 1 and 2), estimated thermodynamic data (Table S2 in Additional data files 1 and
2), model stoichiometry in SBML format (Additional data file 4), and mappings of model compound and reaction IDs to IDs in the KEGG and other genome-scale models (Tables S1 and S2 in Additional data
files 1 and 2).
The reactions included in the optimized model were categorized into ten regions of B. subtilis metabolism (Figure 3a; Table S2 in Additional data files 1 and 2). The largest category of
model reactions is 'fatty acid and lipid biosynthesis'. This is due to the explicit representation of the biosynthesis of every significant lipid species observed in B. subtilis biomass as opposed to
the lumped reactions used in other models. The explicit representation of these pathways has numerous advantages: Δ[f]G'° and Δ[r]G'° may be estimated for every species and reaction; every species
has a distinct structure, mass, and formula; and the stoichiometric coefficients in the reactions better reflect the actually biochemistry taking place. The other most significantly represented
categories of model reactions are carbohydrate metabolism, amino acid biosynthesis and metabolism, and membrane transport. These categories are expected to be well represented because they represent
pathways in the cell that deal with a highly diverse set of substrates: 20 amino acids, more than 95 metabolized carbon sources, and 244 transportable compounds.
Classification of model reactions by function and behavior. (a) Reactions in the optimized iBsu1103 model are categorized into ten regions of the B. subtilis metabolism. Regions of metabolism
involving a diverse set of substrates typically involve the ...
Reactions in the model were also categorized according to their behavior during growth on Luria-Bertani (LB) media (Figure 3b; Table S2 in Additional data files 1 and 2). Of the model
reactions, 300 (21%) were essential for minimal growth on LB media. These are the reactions fulfilling essential metabolic functions for B. subtilis where no other pathways exist, and they form an
always-active core of the B. subtilis metabolism. Another 697 (49%) of the model reactions were nonessential but capable of carrying flux during growth on LB media. While these reactions are not
individually essential, growth is lost if all of these reactions are simultaneously knocked out. The reason is that some of these reactions represent competing pathways for performing an essential
metabolic function. Another 229 (16%) of the reactions cannot carry flux during growth on LB media. These reactions are on the periphery of the B. subtilis metabolism involved in the transport and
catabolism of metabolites not included in our in silico representation of LB media. Moreover, 210 (14%) of the model reactions are disconnected from the network, indicating that these reactions
either lead up to or are exclusively derived from a dead end in the metabolic network. The presence of these reactions indicates mis-annotation or overly generic annotation of the gene associated with
the reaction, or a gap in the metabolic network. Thus, these reactions represent areas of the metabolic chemistry where more experimental study and curation of annotations must occur.
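The flux-based classification above can be illustrated with a small flux variability sketch (again a toy network, not the real model): a reaction whose minimum and maximum steady-state fluxes are both zero is blocked.

import numpy as np
from scipy.optimize import linprog

# Four reactions: uptake of A, A -> B, a biomass drain on B, and a reaction
# consuming a dead-end metabolite D that nothing in the network produces.
S = np.array([[1, -1,  0,  0],    # metabolite A
              [0,  1, -1,  1],    # metabolite B
              [0,  0,  0, -1]])   # metabolite D (dead end, so reaction 3 is blocked)
bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000)]

for j in range(4):
    c = np.zeros(4); c[j] = 1.0
    lo =  linprog( c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs").fun
    hi = -linprog(-c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs").fun
    tag = " (blocked)" if abs(lo) < 1e-9 and abs(hi) < 1e-9 else ""
    print(f"reaction {j}: flux range [{lo:.1f}, {hi:.1f}]{tag}")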
Comparison with previously published models of B. subtilis
We performed a detailed comparison of the Oh et al. and iBsu1103 models to identify differences in content and elucidate the conflicts in the functional annotation of genes (Table 1). Our
comparison encompassed the reactions involved in the models, the genes involved in the models, the mappings between genes and reactions in the models, and the gene complexes captured by the models
(Figure 4). Our comparison revealed significant overlap in the content of the two models. Of the 1,020 total reactions in the Oh et al. model, 810 (79%) were also contained in the iBsu1103
model. The remaining 210 Oh et al. reactions were excluded from the iBsu1103 model primarily because of a disagreement between the Oh et al. and SEED annotations or because they were lumped reactions
that were represented in un-lumped form in the iBsu1103 model (Table S6 in Additional data files 1 and 2).
Comparison of iBsu1103 model to the Oh et al. model. (a) A detailed comparison of the iBsu1103 model and the Oh et al. model was performed to determine overlap of reactions, genes, annotations, and
gene complexes between the two models. In the annotation ...
Significant agreement was also found in the mapping of genes to reactions in the Oh et al. and iBsu1103 models. Of the 1,550 distinct gene-reaction mappings that involved the 810 reactions found in
both models, 997 (64%) were identical. Of the 357 mappings that were exclusive to the iBsu1103 model, 20 involved reactions that were included in the Oh et al. model without any gene association. The
remaining 337 exclusive iBsu1103 mappings involved paralogs or gene complexes not captured in the Oh et al. annotation. The 175 mappings exclusive to the Oh et al. model all represent conflicts
between the functional annotations in the Oh et al. model and the functional annotations generated by the SEED. Although some of these Oh et al. exclusive mappings involved eight reactions with no
associated gene in the iBsu1103 model, these mappings were rejected because they conflicted with the SEED annotation.
In addition to containing most of the reaction and annotation content of the Oh et al. model, the iBsu1103 model also includes 628 reactions and 354 genes that are not in the Oh et al. model (Figure 4; Table S2 in Additional data files 1 and 2). Of the additional reactions in the iBsu1103 model, 173 are associated with the 354 genes that are exclusive to the iBsu1103 model. These
additional reactions are a direct result of the improved coverage of the B. subtilis genome by the SEED functional annotation. The remaining 455 reactions that are exclusive to the iBsu1103 model
take part in a variety of functional categories spread throughout the B. subtilis metabolism, although nearly half of these reactions participate in the fatty acid and lipid biosynthesis (Figure 4b). These reactions are primarily a result of the replacement of lumped fatty acid and lipid reactions in the Oh et al. model with unlumped reactions in the iBsu1103 model.
A comparison of the gene complexes encoded in both models reveals little overlap in this portion of the models. Of the 111 distinct gene complexes encoded in the iBsu1103 model, only 21 overlapped with the Oh et al. model, whereas the Oh et al. model contained only 8 gene complexes not encoded in the iBsu1103 model (Figure 3). This indicates a significantly more complete handling of
complexes in the iBsu1103 model.
All of the additional content in the iBsu1103 model translates into a significant improvement in the accuracy of the gene knockout predictions, the Biolog media growth predictions, and the gene
interval knockout predictions (Table 3). Even before optimization, the iBsu1103 model is 0.7% more accurate than the Oh et al. model. After optimization, the iBsu1103 model is 4.1% more
accurate. In addition to the improvement in accuracy, the improved coverage of the genome by the iBsu1103 model also allows for the simulation of 337 additional experimental conditions by the model.
We note that while the annotations used in the iBsu1103 model were derived primarily from the SEED, the Oh et al. model proved invaluable in reconstructing the iBsu1103 model. The work of Oh et al.
was the source of Biolog phenotyping data and analysis; and the Oh et al. model itself was a valuable source of reaction stoichiometry, metabolite descriptions, and data on biomass composition, all
of which were used in the reconstruction of the iBsu1103 model.
Conclusions
As one of the first genome-scale metabolic models constructed based on an annotated genome from the SEED framework, the iBsu1103 model demonstrates the exceptional completeness and accuracy of the
annotations generated by the SEED. The iBsu1103 model covers 259 more genes than the Oh et al. model; it can simulate 337 more experimental conditions; and it simulates conditions with greater
accuracy. In fact, of the seven new assignments of functions to genes proposed in the Oh et al. work based on manual gene orthology searches, two were already completely captured by the SEED
annotation for B. subtilis 168 prior to the publication of the Oh et al. manuscript. Another two of these proposed annotations were partially captured by the SEED annotation.
In this work we also demonstrate new extended reversibility criteria for consistently and automatically assigning directionality to the biochemical reactions in genome-scale metabolic models. The
extended criteria enabled us to identify 306 additional irreversible reactions that are missed when using existing methodologies alone [19-21]. However, we also found that even with the extended
criteria, the predicted reversibility was not correct for every reaction in the model. In order for model predictions to fit available experimental observations, the predicted reversibility had to be
adjusted for 29 (2%) of the model reactions. Some possible explanations for these exceptions to the reversibility criteria include: the estimated Δ[r]G'° may be too high or too low; the reactant or
product concentrations may be tightly regulated to levels that prohibit reactions from functioning in certain directions; or the reactions involve additional/alternative cofactors not accounted for
in current reversibility calculations. These exceptions to the reversibility rules emphasize the importance of using a model correction method to adjust predicted reversibility based on experimental
data. While these rules were very effective with the iBsu1103 model, they still need to be validated with a wider set of organisms and models. The extended version of GrowMatch presented in this work
was also demonstrated to be a highly effective means of identifying and correcting potential errors in the metabolic network that cause errors in model predictions. This method is driven entirely by
the available experimental data, requiring manual input only in selecting the best of the equivalent solutions generated by the solution reconciliation steps of the method. The reconciliation steps
we introduced to the GrowMatch method also proved to be effective for identifying the minimal changes to the model required to produce the optimal fit to the available experimental data. The
reconciliation reduced 830 distinct solutions involving hundreds of changes to the model to a single solution that combined 62 model modifications to fix 51 (33%) of the 155 incorrect model predictions.
Overall, we demonstrate the iBsu1103 model to be the most complete and accurate model of B. subtilis published to date. The identification and encoding of gene complexes, the removal of lumped
reactions and compounds, and the refinements of the biomass objective function make this model especially applicable to thermodynamic analysis and gene knockout prediction. This model will be a
valuable tool in the ongoing efforts to genetically engineer a minimal strain of B. subtilis for numerous engineering applications [2,4]. The thermodynamic data published with this model will be
invaluable in the application of the model to numerous emerging forms of thermodynamic analysis [22-24]. Additionally, the new extensions that we have proposed for methods of automatically predicting
reaction reversibility and automatically correcting model errors are valuable steps towards the goal of automating the genome-scale model reconstruction process [34,35].
Materials and methods
Validation of the B. subtilis model using flux balance analysis
Flux balance analysis (FBA) was used to simulate all experimental conditions to validate the iBsu1103 model. FBA defines the limits on the metabolic capabilities of a model organism under
steady-state flux conditions by constraining the net production rate of every metabolite in the system to zero [36-39]. This quasi-steady-state constraint on the metabolic fluxes is described
mathematically in Equation 1:

$$N \cdot v = 0 \quad (1)$$

In Equation 1, N is the m × r matrix of the stoichiometric coefficients for the r reactions and m metabolites in the model, and v is the r × 1 vector of the steady-state fluxes through the r reactions in the model. Bounds are placed on the reaction fluxes depending on the reversibility of the reactions:

$$0 \le v_i \le v_{max,i} \ \text{(irreversible reactions)} \quad (2)$$

$$-v_{max,i} \le v_i \le v_{max,i} \ \text{(reversible reactions)} \quad (3)$$

where the maximum flux $v_{max,i}$ is expressed in mmol·gCDW^-1·h^-1 (CDW = cell dry weight). When simulating a gene knockout, the bounds on the flux through all reactions associated exclusively with the gene being knocked out (or associated exclusively with a protein complex partially encoded by the gene being knocked out) were reset to zero. When simulating media conditions, only nutrients present in the media were allowed to have a net uptake by the cell. All other transportable nutrients were allowed only to be excreted by the cell. Details on conditions for all FBA simulations performed are provided in Table S8 in Additional data files 1 and 2.
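For illustration only, here is a minimal sketch of the FBA linear program described above, solved with scipy; the three-metabolite, four-reaction network, the flux cap, and the biomass index are invented and are not taken from the iBsu1103 model.

```python
# Minimal FBA sketch: maximize biomass flux subject to N*v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix N (metabolites x reactions): columns are
# R0: -> A (uptake), R1: A -> B, R2: B -> (drain)
N = np.array([
    [ 1, -1,  0],   # metabolite A
    [ 0,  1, -1],   # metabolite B
])

v_max = 100.0                       # generic flux cap, mmol/gCDW/h
bounds = [(0, v_max)] * 3           # all three reactions irreversible here
c = np.array([0.0, 0.0, -1.0])      # linprog minimizes, so negate biomass

res = linprog(c, A_eq=N, b_eq=np.zeros(N.shape[0]), bounds=bounds)
print("optimal biomass flux:", -res.fun, "fluxes:", res.x)

# A gene knockout is simulated by forcing the affected reaction's bounds
# to (0, 0); a media condition is simulated by zeroing the uptake bounds
# for nutrients absent from the media.
```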
Prediction of reaction reversibility based on thermodynamics
The reversibility and directionality of the reactions in the iBsu1103 model were determined by using a combination of thermodynamic analysis and a set of heuristic rules based on knowledge of
metabolism and biochemistry. In the thermodynamic analysis of the model reactions, Δ[r]G'° was estimated for each reaction in the model by using the group contribution method [40-42]. The estimated Δ
[r]G'° values were then used to determine the minimum and maximum possible values for the absolute Gibbs free energy change of reaction (Δ[r]G^') using Equations 4 and 5, respectively:
In these equations, x[min ]is the minimal metabolite activity, assumed to be 0.01 mM; x[max ]is the maximum metabolite activity, assumed to be 20 mM; R is the universal gas constant; T is the
temperature; n[i ]is the stoichiometric coefficient for species i in the reaction; U[r ]is the uncertainty in the estimated Δ[r]G'°; and ΔG[Transport ]is the energy involved in transport of ions
across the cell membrane. Any reaction with a negative maximum Gibbs free energy change of reaction (Δ[r]G^'[max] < 0) was classified as irreversible in the forward direction, and any reaction with a positive minimum Gibbs free energy change of reaction (Δ[r]G^'[min] > 0) was classified as irreversible in the reverse direction, consistent with existing methodologies [19-21].
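For readability, a plausible explicit form of Equations 4 and 5 can be written out from the variable definitions just given; note that the placement and signs of the uncertainty term $U_r$ and the transport term $\Delta G_{Transport}$ below are assumptions rather than the published expressions:

$$\Delta_r G'_{min} = \Delta_r G'^{\circ} - U_r + RT\Big(\sum_{n_i > 0} n_i \ln x_{min} + \sum_{n_i < 0} n_i \ln x_{max}\Big) + \Delta G_{Transport} \quad (4)$$

$$\Delta_r G'_{max} = \Delta_r G'^{\circ} + U_r + RT\Big(\sum_{n_i > 0} n_i \ln x_{max} + \sum_{n_i < 0} n_i \ln x_{min}\Big) + \Delta G_{Transport} \quad (5)$$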
However, in our work with the iBsu1103 model we found that thermodynamic estimates alone did not identify every irreversible reaction. To improve the reversibility predictions for the iBsu1103 model, we developed and applied a set of three heuristic rules based on common categories of biochemical reactions that are known to be irreversible: carboxylation reactions, phosphorylation reactions, CoA and ACP ligases, ABC transporters, and reactions utilizing ATP hydrolysis to drive an otherwise unfavorable action. We applied our new heuristic rules to identify any irreversible reactions that were missed by previous methods based only on thermodynamics.
The first reversibility rule is that all ABC transporters are irreversible. As a result of the application of this rule, ATP synthase is the only transporter in the iBsu1103 model capable of
producing ATP directly. The second reversibility rule is that any reaction with a milli-molar Gibbs free energy change (Δ[r]G^'m) that is less than 2 kcal/mol and greater than -2 kcal/mol is
reversible. The Δ[r]G^'m is calculated by using Equation 6:

$$\Delta_r G'^{m} = \Delta_r G'^{\circ} + RT \sum_i n_i \ln(0.001) \quad (6)$$
Δ[r]G^'m is preferred over Δ[r]G'° when assessing reaction feasibility under physiological conditions because the 1-mM reference state of Δ[r]G^'m better reflects the intracellular metabolite
concentration levels than does the 1-M reference state of Δ[r]G'°.
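As a concrete illustration, the 1-mM reference-state correction in Equation 6 can be computed in a few lines; the example reaction below is hypothetical.

```python
# Shift a standard Gibbs energy of reaction (1 M reference) to the
# milli-molar reference state of Equation 6.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def delta_g_mM(delta_g_standard, stoich):
    """stoich: list of stoichiometric coefficients n_i
    (negative for reactants, positive for products)."""
    return delta_g_standard + R * T * sum(n * math.log(1e-3) for n in stoich)

# Hypothetical reaction A + B -> C: the net sum of coefficients is -1,
# so the mM correction raises the Gibbs energy by ~4.1 kcal/mol.
print(delta_g_mM(-5.0, [-1, -1, 1]))
```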
The final reversibility rule uses a reversibility score, S[rev], calculated as follows:
In this equation, n[x ]is the number of molecules of type x involved in the reaction, Pi represents phosphate, Ppi represents pyrophosphate, and λ[i ]is a binary parameter equal to 1 when i is a
low-energy substrate and equal to zero otherwise. Lower-energy substrates in this calculation include CO[2], HCO[3]^-, CoA, ACP, phosphate, and pyrophosphate. According to the final reversibility
rule, if the absolute value of the product of S[rev] and Δ[r]G^'m is >2 and Δ[r]G^'m is <0, the reaction is irreversible in the forward direction; if the absolute value of the product of S[rev] and Δ[r]G^'m is >2 and Δ[r]G^'m is >0, the reaction is irreversible in the reverse direction. All remaining reactions that fail to meet any of the reversibility rule criteria are considered to be reversible.
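Below is a sketch of the three heuristic rules as one decision procedure; since Equation 7 for S[rev] is not reproduced here, the score is taken as a precomputed input, and the function name and signature are illustrative.

```python
# Sketch of the three reaction-reversibility heuristics described above.
# s_rev (Equation 7) is assumed to be computed elsewhere.

def classify_reversibility(is_abc_transporter, delta_g_mM, s_rev):
    # Rule 1: all ABC transporters are irreversible.
    if is_abc_transporter:
        return "irreversible (forward)"
    # Rule 2: |dG'm| < 2 kcal/mol => reversible.
    if -2.0 < delta_g_mM < 2.0:
        return "reversible"
    # Rule 3: a large score-weighted driving force fixes the direction.
    if abs(s_rev * delta_g_mM) > 2.0:
        return ("irreversible (forward)" if delta_g_mM < 0
                else "irreversible (reverse)")
    # Default: reactions matching no rule are treated as reversible.
    return "reversible"

print(classify_reversibility(False, -6.3, 1.0))  # -> irreversible (forward)
```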
Model optimization procedure overview
We applied an extended version of the GrowMatch procedure developed by Kumar et al. [25] to identify changes in the stoichiometry of the iBsu1103 model that would eliminate erroneous model
predictions. The procedure consists of four steps applied consecutively (Figure 2): step 1, gap filling to identify and fill gaps in the original model that cause false negative
predictions (predictions of zero growth where growth is known to occur); step 2, gap filling reconciliation to combine many gap filling solutions to maximize correction of false negative predictions
while minimizing model modifications; step 3, gap generation to identify extra or under-constrained reactions in the gap-filled model that cause false positive predictions (predictions of growth
where growth is known not to occur); and step 4, gap generation reconciliation to combine the gap generation solutions to maximize correction of false positive predictions with a minimum of model
modifications. While the gap filling and gap generation steps are based entirely on the existing GrowMatch procedure (with some changes to the objective function), the reconciliation steps described
here are new.
Model optimization step one: gap filling
The gap filling step of the model optimization process, originally proposed by Kumar et al. [43], attempts to correct false negative predictions in the original model by either relaxing the
reversibility constraints on existing reactions or by adding new reactions to the model. For each simulated experimental condition with a false negative prediction, the following optimization was
performed on a superset of reactions consisting of every balanced reaction in the KEGG or in any one of ten published genome-scale models [7,12,20,27,44-49]:
Minimize:

$$\sum_{i=1}^{r_{gapfilling}} \lambda_{gapfill,i}\, z_i \quad (8)$$

Subject to:

$$N_{super} \cdot v = 0 \quad (9)$$

$$0 \le v_i \le v_{max,i}\, z_i, \quad z_i \in \{0,1\} \quad (10)$$

$$v_{bio} > 0 \quad (11)$$
The objective of the gap filling procedure (Equation 8) is to minimize the number of reactions that are not in the original model but must be added in order for biomass to be produced under the
simulated experimental conditions. Because the gap filling is run only for conditions with a false negative prediction by the original model, at least one reaction will always need to be added.
In the gap filling formulation, all reactions are treated as reversible, and every reversible reaction is decomposed into separate forward and reverse component reactions. This decomposition of
reversible reactions allows for the independent addition of each direction of a reaction by the gap filling, which is necessary for gaps to be filled by the relaxation of the reversibility
constraints on existing reactions. As a result of this decomposition, the reactions represented in the gap filling formulation are the forward and backward components of the reactions in the original
KEGG/model superset. In the objective of the gap filling formulation, r[gapfilling ]represents the total number of component reactions in the superset; z[i ]is a binary use variable equal to 1 if the
flux through component reaction i is nonzero; and λ[gapfill, i ]is a constant representing the cost associated with the addition of component reaction i to the model. If component reaction i is
already present in the model, λ[gapfill, i ]is equal to zero. Otherwise, λ[gapfill, i]is calculated by using Equation 12:
Each of the P variables in Equation 12 is a binary constant representing a type of penalty applied for the addition of various component reactions to the model. These constants are equal to 1 if the
penalty applies to a particular reaction and equal to zero otherwise. P[KEGG, i ]penalizes the addition of component reactions that are not in the KEGG database. Reactions in the KEGG database are
favored because they are up to date and typically do not involve any lumping of metabolites. P[structure, i ]penalizes the addition of component reactions that involve metabolites with unknown
structures. P[known-ΔG, i ]penalizes the addition of component reactions for which Δ[r]G'° cannot be estimated. P[unfavorable, i ]penalizes the addition of component reactions operating in an
unfavorable direction as predicted by our reaction directionality prediction method. Inclusion of these penalty terms in the λ[gapfill, i ]objective coefficients significantly improves the quality of
the solutions produced by the gap filling method.
Equation 9 represents the mass balance constraints that enforce the quasi-steady-state assumption of FBA. In this equation, N[super ]is the stoichiometric matrix for the decomposed superset of KEGG/
model reactions, and v is the vector of fluxes through the forward and reverse components of our superset reactions.
Equation 10 enforces the bounds on the component reaction fluxes (v[i]), and the values of the component reaction use variables (z[i]). This equation ensures that each component reaction flux, v[i],
must be zero unless the use variable associated with the component reaction, z[i], is equal to 1. The v[max, i ]term in Equation 10 is the key to the simulation of experimental conditions in FBA. If
v[max, i ]corresponds to a reaction associated with a knocked-out gene in the simulated experiment, this v[max, i ]is set to zero. If v[max, i ]corresponds to the uptake of a nutrient not found in
the media conditions being simulated, this v[max, i ]is also set to zero. Equation 11 constrains the flux through the biomass reaction in the model, v[bio], to a nonzero value, which is necessary to
identify sets of component reactions that must be added to the model in order for growth to be predicted under the conditions being simulated.
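To show the shape of the optimization in Equations 8 to 11, here is a toy mixed-integer program written with the PuLP library; the candidate network, the costs, and the biomass index are all invented for illustration.

```python
# Toy gap-filling MILP in the spirit of Equations 8-11 (PuLP).
import pulp

n_rxn = 4
N_super = [[1, -1, 0, 0],     # metabolite A: R0 makes it, R1 consumes it
           [0, 1, -1, -1]]    # metabolite B: R1 makes it, R2/R3 consume it
cost = [0, 0, 1, 2]           # lambda_gapfill: 0 => already in the model
bio = 3                        # index of the biomass reaction
v_max, v_bio_min = 100, 0.01

prob = pulp.LpProblem("gapfill", pulp.LpMinimize)
v = [pulp.LpVariable(f"v{i}", 0, v_max) for i in range(n_rxn)]
z = [pulp.LpVariable(f"z{i}", cat="Binary") for i in range(n_rxn)]

prob += pulp.lpSum(cost[i] * z[i] for i in range(n_rxn))        # Eq 8
for row in N_super:                                             # Eq 9
    prob += pulp.lpSum(row[i] * v[i] for i in range(n_rxn)) == 0
for i in range(n_rxn):                                          # Eq 10
    prob += v[i] <= v_max * z[i]
prob += v[bio] >= v_bio_min                                     # Eq 11

prob.solve()
print([pulp.value(x) for x in z])   # which candidate reactions to add
```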
Each solution produced by the gap filling optimization defines a list of irreversible reactions within the original model that should be made reversible and a set of reactions not in the original
model that should be added in order to fix a single false negative prediction. Recursive mixed integer linear programming (MILP) [50] was applied to identify the multiple gap filling solutions that
may exist to correct each false negative prediction. Each solution identified by recursive MILP was implemented in a test model and validated against the complete set of experimental conditions. All
incorrect predictions by a test model associated with each gap filling solution were tabulated into an error matrix for use in the next step of the model optimization process: gap filling
Model optimization step two: gap filling reconciliation
The gap filling step in the model optimization algorithm produces multiple equally optimal solutions to correct each false negative prediction in the unoptimized model. While all of these solutions
repair at least one false negative prediction, they often do so at the cost of introducing new false positive predictions. To identify the cross-section of gap filling solutions that results in an
optimal fit to the available experimental data with minimal modifications to the original model, we apply the gap filling reconciliation step of the model optimization procedure. In this step, we
perform the following integer optimization that maximizes the correction of false negative errors, minimizes the introduction of new false positive errors, and minimizes the net changes made to the
Subject to:
In the objective of the gap filling reconciliation formulation (Equation 13), n[obs ]and r[sol ]are constants representing the total number of experimental observations and the number of unique
component reactions involved in the gap filling solutions, respectively; λ[gapfill, i ]and z[i ]carry the same definitions as in the gap filling formulation; and o[k ]is a binary variable equal to
zero if observation k is expected to be correctly predicted given the values of z[i ]and equal to 1 otherwise.
The values of the o[k ]variables are controlled by the constraints defined in Equations 14 and 15. Equation 14 is written for any experimental condition with a false negative prediction by the
original model. This constraint states that at least one gap filling solution that corrects this false negative prediction must be implemented in order for this prediction error to be corrected in
the gap-filled model. Equation 15 is written for any experimental condition where the original model correctly predicts that zero growth will occur. This constraint states that implementation of any
gap filling solution that causes a new false positive prediction for this condition will result in an incorrect prediction by the gap-filled model. In these constraints, n[sol ]is the total number of
gap filling solutions; ε[j, k ]is a binary constant equal to 1 if condition k is correctly predicted by solution j and equal to zero otherwise; s[j ]is a binary variable equal to 1 if gap filling
solution j should be implemented in the gap-filled model and equal to zero otherwise.
The final set of constraints for this formulation (Equation 16) enforce the condition that a gap filling solution (represented by the use variable s[j]) is not implemented in the gap-filled model
unless all of the reaction additions and modifications (represented by the use variable z[i]) that constitute the solution have been implemented in the model. In these constraints, γ[i, j ]is a
constant equal to 1 if reaction i is involved in solution j and equal to zero otherwise.
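The reconciliation optimization behaves like a weighted covering problem over candidate solutions. The toy sketch below (PuLP again) illustrates Equations 14 to 16; the objective weighting in Equation 13 is not spelled out in the text above, so the 100:1 weighting used here is an assumption.

```python
# Toy gap-filling reconciliation (Equations 13-16 in spirit), PuLP.
import pulp

solutions = {"s0": ["r1", "r2"], "s1": ["r2", "r3"]}   # gamma_{i,j}
cost = {"r1": 1, "r2": 1, "r3": 2}                      # lambda_gapfill
# eps[j][k] = 1 if solution j predicts condition k correctly
eps = {"s0": {"fn0": 1, "ok0": 1}, "s1": {"fn0": 1, "ok0": 0}}
fn_conditions, ok_conditions = ["fn0"], ["ok0"]

prob = pulp.LpProblem("reconcile", pulp.LpMinimize)
z = {r: pulp.LpVariable(f"z_{r}", cat="Binary") for r in cost}
s = {j: pulp.LpVariable(f"s_{j}", cat="Binary") for j in solutions}
o = {k: pulp.LpVariable(f"o_{k}", cat="Binary")
     for k in fn_conditions + ok_conditions}

# One plausible weighting of Eq 13: wrong observations dominate,
# with a small tie-breaking cost on model changes.
prob += 100 * pulp.lpSum(o.values()) + pulp.lpSum(cost[r] * z[r] for r in cost)

for k in fn_conditions:   # Eq 14: some correcting solution must be used
    prob += pulp.lpSum(eps[j][k] * s[j] for j in solutions) >= 1 - o[k]
for k in ok_conditions:   # Eq 15: a solution breaking k flags k as wrong
    for j in solutions:
        prob += o[k] >= (1 - eps[j][k]) * s[j]
for j, rxns in solutions.items():   # Eq 16: use all of a solution's edits
    for r in rxns:
        prob += s[j] <= z[r]

prob.solve()
print({j: pulp.value(var) for j, var in s.items()})  # expect s0 chosen
```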
Once again, recursive MILP was applied to identify multiple equivalently optimal solutions to the gap filling reconciliation problem, and each solution was validated against the complete set of
experimental data to ensure that the combination of multiple gap filling solutions did not give rise to additional false positive predictions. The solutions that resulted in the most accurate
prediction of growth in all experimental conditions were manually curated to identify the most physiologically relevant solution. This solution was then implemented in the original model to produce
the gap-filled model.
Model optimization step three: gap generation
The gap-filled model produced by the gap filling reconciliation step not only will retain all of the false positive predictions generated by the original model but also will generate a small number
of new false positive predictions that arise as a result of additions and modifications made during the gap filling process. In the gap generation step of the model optimization procedure we attempt
to correct these false positive predictions either by removing irreversible reactions or by converting reversible reactions into irreversible reactions. For each simulated experimental condition with
a false positive prediction by the gap-filled model, the following optimization was performed:
Subject to:
The objective of the gap generation procedure (Equation 17) is to minimize the number of component reactions that must be removed from the model in order to eliminate biomass production under
conditions where the organism is known not to produce biomass. As in the gap filling optimization, all reversible reactions are decomposed into separate forward and backward component reactions. This
process enables the independent removal of each direction of operation of the reactions in the model. As a result, r[gapgen ]in Equation 17 is equal to the number of irreversible reactions plus twice
the number of reversible reactions in the gap-filled model; z[i ]is a binary use variable equal to 1 if the flux through component reaction i is greater than zero and equal to zero otherwise; λ[
gapfill, i ]is a constant representing the cost of removal of component reaction i from the model. λ[gapfill, i ]is calculated using Equation 28:
The P[irreversible, i ]term in Equation 28 is a binary constant equal to 1 if reaction i is irreversible and associated with at least one gene in the model. This term exists to penalize the complete
removal of reactions from the model (as is done when removing one component of an irreversible reaction) over the adjustment of the reversibility of a reaction in the model (as is done when removing
one component of a reversible reaction).
Equations 18 and 19 represent the mass balance constraints and flux bounds that simulate the experimental conditions with false positive predictions. N[gapfilled ]is the stoichiometric matrix for the
gap-filled model with the decomposed reversible reactions; v[no-growth ]is the vector of fluxes through the reactions under the false positive experimental conditions; and v[max, no-growth, i ]is the
upper-bound on the flux through reaction i set to simulate the false positive experimental conditions.
Equations 20 and 21 define the dual constraints associated with each flux in the primal FBA formulation. In these constraints, σ[i, j ]is the stoichiometric coefficient for metabolite j in reaction i
; m[j ]is the dual variable associated with the mass balance constraint for metabolite j in the primal FBA formulation; μ[i ]is the dual variable associated with the upper-bound constraint on the
flux through reaction i in the primal FBA formulation; and K is a large constant selected such that the Equation 20 and 21 constraints are always feasible when z[i ]is equal to zero. Equation 22 sets
the dual slack variable associated with reaction i, μ[i], to zero when the use variable associated with component reaction i, z[i], is equal to zero. Equation 22 and the term involving K in Equations
20 and 21 exist to eliminate all dual constraints and variables associated with component reaction i when component reaction i is flagged to be removed by the gap generation optimization.
Equation 23 is the constraint that sets the original primal FBA objective (maximization of biomass production) equal to the dual FBA objective (minimization of flux slack). This constraint ensures
that every set of v[no-growth ]fluxes that satisfies the constraints in Equations 20 to 23 represents an optimal solution to the original FBA problem that maximizes biomass production. Therefore, if
the biomass flux is set to zero, as is done in Equation 24, this is equivalent to stating that the binary use variables z[i ]must be set in such a way that the maximum production of biomass when
simulating the false positive experimental conditions must be zero.
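Schematically, the role of Equations 20 to 24 can be restated via linear programming strong duality (a compressed paraphrase, not the exact published constraint set): with the mass-balance right-hand sides equal to zero, any flux vector $v$ together with dual variables $(m, \mu)$ that satisfies primal feasibility, dual feasibility, and

$$v_{bio} = \sum_i v_{max,i}\,\mu_i$$

is an optimal solution of the biomass-maximization FBA problem, so imposing $v_{bio} = 0$ (Equation 24) asserts that the maximum attainable biomass flux under the chosen $z_i$ is zero.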
With no additional constraints, the gap generation optimization would produce solutions recommending the knockout of component reactions that cause the loss of biomass production under every
experimental condition instead of just the false positive conditions. Constraints are required to ensure that only solutions that eliminate biomass production under the false positive conditions
while preserving biomass production in all other conditions will be feasible. These constraints are defined by Equations 25, 26, and 27, which represent the FBA constraints simulating an experimental
condition where the organism being modeled is known to grow. When the false positive condition being simulated by the v[max, no-growth, i ]values is the knockout of an essential gene or interval, the
v[max, growth, i ]values in Equation 26 simulate the same media conditions with no reactions knocked out. When the false positive condition being simulated is an unviable media, the v[max, growth, i
]values simulate a viable media. Because the binary z[i ]variables are shared by the 'no growth' and 'growth' FBA constraints, z[i ]will be set to zero only for those reactions that are not essential
or coessential under the 'growth' conditions but are essential or coessential under the 'no growth' conditions. To further reduce the probability that a gap generation solution will cause new false
negative predictions, we identified the component reactions in the gap-filled model that were essential for the correct prediction of growth in at least three of the experimental conditions prior to
running the gap generation optimization. The z[i ]variables associated with these essential component reactions were fixed at one to prevent their removal in the gap generation optimization.
As done in previous steps, recursive MILP was used to identify up to ten equally optimal solutions that correct each false positive prediction error in the gap-filled model. Each solution was
implemented and validated against the complete set of experimental data, and the accuracy of each solution was tabulated into a matrix for use in the final step of the model optimization procedure:
gap generation reconciliation.
Model optimization step four: gap generation reconciliation
Like the gap filling step, the gap generation step of the model optimization process produces multiple equally optimal solutions to correct each false positive prediction in the gap-filled model, and
many of these solutions introduce new false negative prediction errors. To identify the cross-section of gap generation solutions that results in the maximum correction of false positive predictions
with the minimum addition of false negative predictions, we perform one final optimization step: gap generation reconciliation. The optimization problem solved in the gap generation reconciliation
step is identical to the gap filling reconciliation optimization except that the constraints defined by Equations 14 and 15 are replaced by the constraints defined by Equations 29 and 30:
Equation 29 is written for any experimental condition with a false positive prediction by the gap-filled model. This constraint states that at least one gap generation solution that corrects the
false positive prediction must be implemented for the condition to be correctly predicted by the optimized model. Equation 30 is written for any experimental condition where the gap-filled model correctly predicts that growth will occur. This constraint states that implementation of any gap generation solution that causes a new false negative prediction will result in a new incorrect prediction by the optimized model. All of the variables and constants used in Equations 29 and 30 have the same meaning as in Equations 14 and 15.
Although the objective, remaining constraints, and remaining variables in the gap generation reconciliation are mathematically identical to the gap filling reconciliation, some variables take on a
different physiological meaning. Because gap generation solutions involve the removal (not the addition) of reactions from the gap-filled model, the reaction use variable z[i ]is now equal to 1 if a
reaction is to be removed from the gap-filled model and equal to zero otherwise.
The gap generation reconciliation was solved repeatedly by using recursive MILP to identify multiple solutions to the gap generation reconciliation optimization, and each solution was implemented in
a test model and validated against the complete set of experimental data. The solutions associated with the most accurate test models were manually examined to identify the most physiologically
relevant solution. The selected solution was then implemented in the gap-filled model to produce the optimized iBsu1103 model.
Abbreviations
Δ[f]G'°: standard Gibbs free energy change of formation; Δ[r]G'°: standard Gibbs free energy change of reaction; Δ[r]G^'m: milli-molar Gibbs free energy change of reaction; CDW: cell dry weight; FBA: flux balance analysis; MILP: mixed integer linear programming.
Authors' contributions
CH, JZ, MC, and RS all participated in the reconstruction, curation, and analysis of the iBsu1103 model. CH developed the reaction reversibility prediction and model optimization methods with
direction and advice provided by RS. RS conceived of the project and coordinated all research. CH wrote the paper with revisions by JZ, MC, and RS. All authors read and approved the final manuscript.
Additional data files
The following additional data are available with the online version of this paper: an Excel file containing all supplementary data associated with the iBsu1103 model as Tables S1 to S11 (Additional
data file 1); a zip archive containing tab-delimited text files for all of the supplementary tables included in Additional data file 1 (Additional data file 2); a zip archive containing data on the
structure of every molecule in the model in molfile format (Additional data file 3); an SBML version of the model, which may be used with the published COBRA toolbox [18] to run FBA on the model
(Additional data file 4).
Supplementary Material
Additional data file 1:
Tables S1 and S2 contain all compound and reaction data associated with the model, respectively; Table S3 lists all of the open problem reactions in the model; Table S4 lists all of the essential
genes that have nonessential homologs in the B. subtilis genome; Table S5 lists all of the changes made to the model during the model optimization process; Table S6 lists the reactions in the Oh et
al. model that are not in the iBsu1103 model; Table S7 shows simulation results for all 1,500 experimental conditions; Table S8 provides the details on the media formulations used for each FBA
simulation; and Tables S9, S10, and S11 show all data on the genes, functional roles, and subsystems in the B. subtilis SEED annotation.
Additional data file 2:
The text files may be copied into any spreadsheet program of choice to visualize the data for the iBsu1103 model.
Additional data file 3:
These molfiles reflect the structure of the predominant ionic form of the compounds at neutral pH as predicted using the MarvinBeans software [28]. These structures were used with the group
contribution method [40-42] to estimate the Δ[f]G'° and Δ[r]G'° for the compounds and reactions in the model.
Additional data file 4:
This may be used with the published COBRA toolbox [18] to run FBA on the model.
Acknowledgements
This work was supported in part by the US Department of Energy under contract DE-AC02-06CH11357. We thank Professor DeJongh for assistance in curating the reaction-to-functional role mapping. We
thank Professor Noirot for expert advice on B. subtilis behavior. We thank Dr Overbeek and the entire SEED development team for advice and assistance in using the SEED annotation system. We also
thank Mindy Shi and Fangfang Xia for technical support.
References
• Zweers JC, Barak I, Becher D, Driessen AJ, Hecker M, Kontinen VP, Saller MJ, Vavrova L, van Dijl JM. Towards the development of Bacillus subtilis as a cell factory for membrane proteins and protein complexes. Microb Cell Fact. 2008;7:10. doi: 10.1186/1475-2859-7-10.
• Fabret C, Ehrlich SD, Noirot P. A new mutation delivery system for genome-scale approaches in Bacillus subtilis. Mol Microbiol. 2002;46:25–36. doi: 10.1046/j.1365-2958.2002.03140.x.
• Kobayashi K, Ehrlich SD, Albertini A, Amati G, Andersen KK, Arnaud M, Asai K, Ashikaga S, Aymerich S, Bessieres P, Boland F, Brignell SC, Bron S, Bunai K, Chapuis J, Christiansen LC, Danchin A, Debarbouille M, Dervyn E, Deuerling E, Devine K, Devine SK, Dreesen O, Errington J, Fillinger S, Foster SJ, Fujita Y, Galizzi A, Gardan R, Eschevins C, et al. Essential Bacillus subtilis genes. Proc Natl Acad Sci USA. 2003;100:4678–4683. doi: 10.1073/pnas.0730515100.
• Morimoto T, Kadoya R, Endo K, Tohata M, Sawada K, Liu S, Ozawa T, Kodama T, Kakeshita H, Kageyama Y, Manabe K, Kanaya S, Ara K, Ozaki K, Ogasawara N. Enhanced recombinant protein productivity by genome reduction in Bacillus subtilis. DNA Res. 2008;15:73–81. doi: 10.1093/dnares/dsn002.
• Fischer E, Sauer U. Large-scale in vivo flux analysis shows rigidity and suboptimal performance of Bacillus subtilis metabolism. Nat Genet. 2005;37:636–640. doi: 10.1038/ng1555.
• Bochner BR. Global phenotypic characterization of bacteria. FEMS Microbiol Rev. 2009;33:191–205. doi: 10.1111/j.1574-6976.2008.00149.x.
• Oh YK, Palsson BO, Park SM, Schilling CH, Mahadevan R. Genome-scale reconstruction of metabolic network in Bacillus subtilis based on high-throughput phenotyping and gene essentiality data. J Biol Chem. 2007;282:28791–28799. doi: 10.1074/jbc.M703759200.
• Burgard AP, Maranas CD. Probing the performance limits of the Escherichia coli metabolic network subject to gene additions or deletions. Biotechnol Bioeng. 2001;74:364–375. doi: 10.1002/bit.1127.
• Edwards JS, Palsson BO. Robustness analysis of the Escherichia coli metabolic network. Biotechnol Prog. 2000;16:927–939. doi: 10.1021/bp0000712.
• Edwards JS, Ibarra RU, Palsson BO. In silico predictions of Escherichia coli metabolic capabilities are consistent with experimental data. Nat Biotechnol. 2001;19:125–130. doi: 10.1038/84379.
• Mahadevan R, Edwards JS, Doyle FJ 3rd. Dynamic flux balance analysis of diauxic growth in Escherichia coli. Biophys J. 2002;83:1331–1340. doi: 10.1016/S0006-3495(02)73903-9.
• Goelzer A, Bekkal Brikci F, Martin-Verstraete I, Noirot P, Bessieres P, Aymerich S, Fromion V. Reconstruction and analysis of the genetic and metabolic regulatory networks of the central metabolism of Bacillus subtilis. BMC Syst Biol. 2008;2:20. doi: 10.1186/1752-0509-2-20.
• Kanehisa M, Goto S. KEGG: Kyoto encyclopedia of genes and genomes. Nucleic Acids Res. 2000;28:27–30. doi: 10.1093/nar/28.1.27.
• Kanehisa M, Goto S, Kawashima S, Nakaya A. The KEGG databases at GenomeNet. Nucleic Acids Res. 2002;30:42–46. doi: 10.1093/nar/30.1.42.
• Overbeek R, Disz T, Stevens R. The SEED: a peer-to-peer environment for genome annotation. Commun ACM. 2004;47:46–51. doi: 10.1145/1029496.1029525.
• Aziz RK, Bartels D, Best AA, DeJongh M, Disz T, Edwards RA, Formsma K, Gerdes S, Glass EM, Kubal M, Meyer F, Olsen GJ, Olson R, Osterman AL, Overbeek RA, McNeil LK, Paarmann D, Paczian T, Parrello B, Pusch GD, Reich C, Stevens R, Vassieva O, Vonstein V, Wilke A, Zagnitko O. The RAST Server: rapid annotations using subsystems technology. BMC Genomics. 2008;9:75. doi: 10.1186/1471-2164-9-75.
• Seed Viewer http://seed-viewer.theseed.org/
• Jankowski MD, Henry CS, Broadbelt LJ, Hatzimanikatis V. Group contribution method for thermodynamic analysis of complex metabolic networks. Biophys J. 2008;95:1487–1499. doi: 10.1529/biophysj.107.124784.
• Henry CS, Jankowski MD, Broadbelt LJ, Hatzimanikatis V. Genome-scale thermodynamic analysis of Escherichia coli metabolism. Biophys J. 2006;90:1453–1461. doi: 10.1529/biophysj.105.071720.
• Feist AM, Henry CS, Reed JL, Krummenacker M, Joyce AR, Karp PD, Broadbelt LJ, Hatzimanikatis V, Palsson BØ. A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1261 ORFs and thermodynamic information. Mol Syst Biol. 2007;3:121. doi: 10.1038/msb4100155.
• Kummel A, Panke S, Heinemann M. Systematic assignment of thermodynamic constraints in metabolic network models. BMC Bioinformatics. 2006;7:512. doi: 10.1186/1471-2105-7-512.
• Kummel A, Panke S, Heinemann M. Putative regulatory sites unraveled by network-embedded thermodynamic analysis of metabolome data. Mol Syst Biol. 2006;2:2006.0034.
• Henry CS, Broadbelt LJ, Hatzimanikatis V. Thermodynamics-based metabolic flux analysis. Biophys J. 2007;92:1792–1805. doi: 10.1529/biophysj.106.093138.
• Beard DA, Qian H. Thermodynamic-based computational profiling of cellular regulatory control in hepatocyte metabolism. Am J Physiol Endocrinol Metab. 2005;288:E633–E644. doi: 10.1152/ajpendo.00239.2004.
• Kumar VS, Maranas CD. GrowMatch: an automated method for reconciling in silico/in vivo growth predictions. PLoS Comput Biol. 2009;5:e1000308. doi: 10.1371/journal.pcbi.1000308.
• DeJongh M, Formsma K, Boillot P, Gould J, Rycenga M, Best A. Toward the automated generation of genome-scale metabolic networks in the SEED. BMC Bioinformatics. 2007;8:139. doi: 10.1186/1471-2105-8-139.
• Reed JL, Vo TD, Schilling CH, Palsson BO. An expanded genome-scale model of Escherichia coli K-12 (iJR904 GSM/GPR). Genome Biol. 2003;4:R54. doi: 10.1186/gb-2003-4-9-r54.
• The Marvin Family http://www.chemaxon.com/product/marvin_land.html
• Dauner M, Sauer U. Stoichiometric growth model for riboflavin-producing Bacillus subtilis. Biotechnol Bioeng. 2001;76:132–143. doi: 10.1002/bit.1153.
• Sauer U, Hatzimanikatis V, Hohmann HP, Manneberg M, van Loon AP, Bailey JE. Physiology and metabolic fluxes of wild-type and riboflavin-producing Bacillus subtilis. Appl Environ Microbiol. 1996;62:3687–3696.
• Matsumoto K, Okada M, Horikoshi Y, Matsuzaki H, Kishi T, Itaya M, Shibuya I. Cloning, sequencing, and disruption of the Bacillus subtilis psd gene coding for phosphatidylserine decarboxylase. J Bacteriol. 1998;180:100–106.
• Sonenshein AL, Hoch JA, Losick R. Bacillus subtilis and its Closest Relatives: from Genes to Cells. Washington, DC: ASM Press; 2002.
• Soga T, Ohashi Y, Ueno Y, Naraoka H, Tomita M, Nishioka T. Quantitative metabolome analysis using capillary electrophoresis mass spectrometry. J Proteome Res. 2003;2:488–494. doi: 10.1021/pr034020m.
• Suthers PF, Dasika MS, Kumar VS, Denisov G, Glass JI, Maranas CD. A genome-scale metabolic reconstruction of Mycoplasma genitalium, iPS189. PLoS Comput Biol. 2009;5:e1000285. doi: 10.1371/journal.pcbi.1000285.
• DeJongh M, Formsma K, Boillot P, Gould J, Rycenga M, Best A. Toward the automated generation of genome-scale metabolic networks in the SEED. BMC Bioinformatics. 2007;8:139. doi: 10.1186/1471-2105-8-139.
• Papoutsakis ET, Meyer CL. Equations and calculations of product yields and preferred pathways for butanediol and mixed-acid fermentations. Biotechnol Bioeng. 1985;27:50–66. doi: 10.1002/bit.260270108.
• Jin YS, Jeffries TW. Stoichiometric network constraints on xylose metabolism by recombinant Saccharomyces cerevisiae. Metab Eng. 2004;6:229–238. doi: 10.1016/j.ymben.2003.11.006.
• Varma A, Palsson BO. Stoichiometric flux balance models quantitatively predict growth and metabolic by-product secretion in wild-type Escherichia coli W3110. Appl Environ Microbiol. 1994;60:3724–3731.
• Varma A, Palsson BO. Metabolic capabilities of Escherichia coli. 2. Optimal-growth patterns. J Theor Biol. 1993;165:503–522. doi: 10.1006/jtbi.1993.1203.
• Jankowski MD, Henry CS, Broadbelt LJ, Hatzimanikatis V. Group contribution method for thermodynamic analysis on a genome-scale. Biophys J. 2008;95:1487–1499. doi: 10.1529/biophysj.107.124784.
• Mavrovouniotis ML. Group contributions for estimating standard Gibbs energies of formation of biochemical compounds in aqueous solution. Biotechnol Bioeng. 1990;36:1070–1082. doi: 10.1002/bit.260361013.
• Mavrovouniotis ML. Estimation of standard Gibbs energy changes of biotransformations. J Biol Chem. 1991;266:14440–14445.
• Satish Kumar V, Dasika MS, Maranas CD. Optimization based automated curation of metabolic reconstructions. BMC Bioinformatics. 2007;8:212. doi: 10.1186/1471-2105-8-212.
• Feist AM, Scholten JC, Palsson BO, Brockman FJ, Ideker T. Modeling methanogenesis with a genome-scale metabolic reconstruction of Methanosarcina barkeri. Mol Syst Biol. 2006;2:2006.0004.
• Oliveira AP, Nielsen J, Forster J. Modeling Lactococcus lactis using a genome-scale flux model. BMC Microbiol. 2005;5:39. doi: 10.1186/1471-2180-5-39.
• Duarte NC, Herrgard MJ, Palsson BO. Reconstruction and validation of Saccharomyces cerevisiae iND750, a fully compartmentalized genome-scale metabolic model. Genome Res. 2004;14:1298–1309. doi: 10.1101/gr.2250904.
• Becker SA, Palsson BO. Genome-scale reconstruction of the metabolic network in Staphylococcus aureus N315: an initial draft to the two-dimensional annotation. BMC Microbiol. 2005;5:8. doi: 10.1186/1471-2180-5-8.
• Jamshidi N, Palsson BO. Investigating the metabolic capabilities of Mycobacterium tuberculosis H37Rv using the in silico strain iNJ661 and proposing alternative drug targets. BMC Syst Biol. 2007;1:26. doi: 10.1186/1752-0509-1-26.
• Thiele I, Vo TD, Price ND, Palsson BO. Expanded metabolic reconstruction of Helicobacter pylori (iIT341 GSM/GPR): an in silico genome-scale characterization of single- and double-deletion mutants. J Bacteriol. 2005;187:5818–5830. doi: 10.1128/JB.187.16.5818-5830.2005.
• Lee S, Phalakornkule C, Domach MM, Grossmann IE. Recursive MILP model for finding all the alternate optima in LP models for metabolic networks. Computers Chem Eng. 2000;24:711–716. doi: 10.1016/S0098-1354(00)00323-9.
Throttling a Gas Cylinder
Your question needs more detail. It is not apparent which quantities are being held fixed here. If it is adiabatic expansion then the volume V is obviously changing. There is no thermodynamic process
(except diffusion) where P,V, and T all remain constant.
I'll assume you have an insulated gas expanding due to some valve being opened:
The relation you need to use is [itex]P_1{V_1}^\gamma=P_2{V_2}^\gamma[/itex], assuming the gas is ideal. The derivation of this relation can be found at | {"url":"http://www.physicsforums.com/showthread.php?t=648378","timestamp":"2014-04-17T21:33:38Z","content_type":null,"content_length":"38008","record_id":"<urn:uuid:c0ae68bd-f3ae-4294-bd62-18ec8fdb5054>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
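As a quick numerical check of this relation, here is a short sketch; the numbers and the choice of gamma = 1.4 (diatomic ideal gas) are just example values.

```python
# Adiabatic relation P1*V1**gamma == P2*V2**gamma for an ideal gas.
gamma = 1.4            # heat-capacity ratio for a diatomic ideal gas
P1, V1 = 200e3, 0.010  # initial pressure (Pa) and volume (m^3), example
V2 = 0.020             # gas expands to twice the volume

P2 = P1 * (V1 / V2) ** gamma
print(f"P2 = {P2:.0f} Pa")  # ~75.8 kPa: pressure drops faster than 1/V
```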
exponents problem
January 31st 2007, 01:10 PM #1
I have to find two expressions that are mathematically equal to $e^{-2x}$.
How do I go about this? Any help would be appreciated!
Hello, clockingly!
i have to find two expressions that are mathematically equal to $e^{-2x}$
Are there any suggestions of what is expected?
If not, both $e^{-2x} + 0$ and $e^{-2x} \times 1$ certainly work.
There are many ways of writing it.
Take your pick
Positive eigenvalues implies positive definite
January 16th 2012, 07:16 PM #1
Can someone just tell me if this proof works? As a quick word of explanation since most of my posts will probably be like this - I'm studying math on my own and am without a professor or anyone
else to check things over. It would be nice to think that I'd know when a proof doesn't work, but I know I miss things sometimes.
"Let $T$ be a self-adjoint linear operator on an n-dimensional vector space $V$, and let $A=[T]_\beta$, where $\beta$ is an orthonormal basis for $V$. Prove: $T$ is positive definite if and only
if all of its eigenvalues are positive."
I know I got the forward direction, so here's the other direction.
Since $T$ is self-adjoint, there exists an orthonormal basis (call it $\beta$) for $V$ consisting of eigenvectors of $T$, so $D = [T]_\beta$ is a diagonal matrix whose diagonal entries $d_{ii}$ are the eigenvalues of $T$, which are positive by hypothesis. Letting $x_i$ be the $i$-th coordinate of a nonzero $x$ relative to $\beta$, the orthonormality of $\beta$ gives $\langle T(x), x \rangle = \displaystyle\sum_i d_{ii}x_i^2 > 0$, and so $T$ is positive definite.
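For reference, here is the key computation spelled out (for a real inner product space; over $\mathbb{C}$ replace $x_i^2$ by $|x_i|^2$). Writing $\beta = \{v_1, \dots, v_n\}$ and $x = \sum_i x_i v_i$:

$$\langle T(x), x \rangle = \Big\langle \sum_i x_i d_{ii} v_i, \ \sum_j x_j v_j \Big\rangle = \sum_{i,j} d_{ii} x_i x_j \langle v_i, v_j \rangle = \sum_i d_{ii} x_i^2 > 0,$$

since $\langle v_i, v_j \rangle = \delta_{ij}$, every $d_{ii} > 0$, and at least one $x_i \neq 0$.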
Earth and space science
This is an online set of information about astronomical alignments of ancient structures and buildings. Learners will read background information about the alignments to the Sun in such structures as the Great Pyramid, Chichen Itza, and others. Next, the site contains 10 short problem sets that involve a variety of math skills, including determining the scale of a photo, measuring and drawing angles, plotting data on a graph, and creating an equation to match a set of data. Each set of problems is contained on one page and all of the sets utilize real-world problems relating to astronomical alignments of ancient structures. Each problem set is flexible and can be used on its own, together with other sets, or together with related lessons and materials selected by the educator. This was originally included as a folder insert for the 2010 Sun-Earth Day.

This is an activity about detecting elements by using light. Learners will develop and apply methods to identify and interpret patterns to the identification of fingerprints. They look at fingerprints of their classmates, snowflakes, and finally “spectral fingerprints” of elements. They learn to identify each image as unique, yet part of a group containing recognizable similarities. The activity is part of Project Spectra, a science and engineering program for middle-high school students, focusing on how light is used to explore the Solar System.

This is a math-science integrated unit about spectrographs. Learners will find and calculate the angle that light is transmitted through a holographic diffraction grating using trigonometry. After finding this angle, the students will build their own spectrographs in groups and research and design a ground or space-based mission using their creation. After the project is complete, student groups will present to the class on their trials, tribulations, and findings during this process. The activity is part of Project Spectra, a science and engineering program for middle-high school students, focusing on how light is used to explore the Solar System.

This is a book containing over 200 problems spanning over 70 specific topic areas covered in a typical Algebra II course. Learners can encounter a selection of application problems featuring astronomy, earth science and space exploration, often with more than one example in a specific category. Learners will use mathematics to explore science topics related to a wide variety of NASA science and space exploration endeavors. Each problem or problem set is introduced with a brief paragraph about the underlying science, written in simplified, non-technical language where possible. Problems are often presented as multi-step or multi-part activities. This book can be found on the Space Math@NASA website.

In this problem set, learners will consider the temperature in Kelvin of various places in the universe and use equations to convert measures from the three temperature scales to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.

This is a booklet containing 37 space science mathematical problems, several of which use authentic science data. The problems involve math skills such as unit conversions, geometry, trigonometry, algebra, graph analysis, vectors, scientific notation, and many others. Learners will use mathematics to explore science topics related to Earth's magnetic field, space weather, the Sun, and other related concepts. This booklet can be found on the Space Math@NASA website.

In this problem set, learners will refer to the tabulated data used to create the Keeling Curve of atmospheric carbon dioxide to create a mathematical function that accounts for both periodic and long-term changes. They will use this function to answer a series of questions, including predictions of atmospheric concentration in the future. A link to the data, which is in an Excel file, as well as the answer key are provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.

In this problem set, learners will create and use a differential equation for the rate of change of atmospheric carbon dioxide. They will refer to the "Keeling Curve" graph and information on the sources and sinks of carbon on Earth to create the equation and apply it to answer a series of questions. Answer key is provided. This is part of Earth Math: A Brief Mathematical Guide to Earth Science and Climate Change.

This is a booklet containing 31 problem sets that involve a variety of math skills, including scientific notation, simple algebra, and calculus. Each set of problems is contained on one page. Learners will use mathematics to explore varied space science topics including black holes, ice on Mercury, a mathematical model of the Sun's interior, sunspots, the heliopause, and coronal mass ejections, among many others.

This is a booklet containing 87 problem sets that involve a variety of math skills, including scale, geometry, graph analysis, fractions, unit conversions, scientific notation, simple algebra, and calculus. Each set of problems is contained on one page. Learners will use mathematics to explore varied space science topics in the areas of Earth science, planetary science, and astrophysics, among many others. This booklet can be found on the Space Math@NASA website.
Homogeneity of Variances
Certain tests (e.g. ANOVA) require that the variances of different populations are equal. This can be determined by the following approaches:
• Comparison of graphs (esp. box plots)
• Comparison of variance, standard deviation and IQR statistics
• Statistical tests
The F test presented in Two Sample Hypothesis Testing of Variances can be used to determine whether the variances of two populations are equal. For three or more variables the following statistical
tests for homogeneity of variances are commonly used:
• Levene’s test
• Bartlett’s test
Using the terminology from Definition 1 of Basic Concepts for ANOVA, the following null and alternative hypotheses are used for either of these tests:
H[0]: $\sigma_1^2$ = $\sigma_2^2$ = ⋯ = $\sigma_k^2$
H[1]: Not all variances are equal (i.e. $\sigma_i^2$ ≠ $\sigma_j^2$ for some i, j)
Levene’s Test
For Levene’s test, the residuals $e_{ij}$ of the cell values from the group means are calculated as follows:

$$e_{ij} = x_{ij} - \bar{x}_j$$

where $x_{ij}$ is the $i$th observation in group $j$ and $\bar{x}_j$ is the mean of group $j$. An ANOVA is then conducted on the absolute value of the residuals. If the group variances are equal, then the average size of the residual should be the same across all groups.
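For readers working outside Excel, the same test is available in scipy; the sample data below are made up.

```python
# Levene's test for equal variances across groups (scipy).
from scipy import stats

g1 = [51, 87, 50, 48, 79, 61, 53, 54]
g2 = [82, 91, 92, 80, 52, 79, 73, 74]
g3 = [79, 84, 74, 98, 63, 83, 85, 58]  # made-up samples

# center='mean' | 'median' | 'trimmed' selects the Levene variant
stat, p = stats.levene(g1, g2, g3, center="median")
print(p)  # p > alpha => no evidence the variances differ
```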
Example 1: Use Levene’s test to determine whether the 4 samples in Example 2 of Basic Concepts for ANOVA have significantly different population variances.
Since p-value = .90357 > .05 = α (Figure 1), we cannot reject the null hypothesis, and conclude there is no significant difference between the 4 group variances, and so the ANOVA test conducted previously for Example 2 of Basic Concepts for ANOVA satisfies the homogeneity of variances assumption.
There are three versions of Levene’s test:
• Use of mean (as in the explanation above)
• Use of median (replace mean by median above)
• Use of 10% trimmed mean (replace mean by 10% trimmed mean above)
The three choices determine the robustness and power of Levene’s test. By robustness, we mean the ability of the test to not falsely detect unequal variances when the underlying data are not normally
distributed and the variances are in fact equal. By power, we mean the ability of the test to detect unequal variances when the variances are in fact unequal.
Levene’s original paper only proposed using the mean. Brown and Forsythe extended Levene’s test to use either the median or the trimmed mean. They performed Monte Carlo studies that indicated that
using the trimmed mean performed best when the underlying data had a heavy-tailed distribution and the median performed best when the underlying data had a skewed distribution. Using the mean
provided the best power for symmetric, moderate-tailed, distributions.
Although the optimal choice depends on the underlying distribution, the definition based on the median is recommended as the choice that provides good robustness against many types of non-normal data
while retaining good power. Another choice may be better based on knowledge of the underlying distribution of the data.
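For readers working outside Excel, all three variants of the test are available via SciPy's stats.levene through its center argument; the sketch below is illustrative only (the four samples are invented, not the data of Example 1):

import numpy as np
from scipy import stats

# Hypothetical samples standing in for four groups (not the Example 1 data).
g1 = np.array([51, 87, 50, 48, 79, 61, 53, 54], dtype=float)
g2 = np.array([82, 91, 92, 80, 52, 79, 73, 74], dtype=float)
g3 = np.array([79, 84, 74, 98, 63, 83, 85, 58], dtype=float)
g4 = np.array([85, 80, 65, 71, 67, 51, 63, 93], dtype=float)

# Variant based on group means, and the Brown-Forsythe variant (medians).
for center in ("mean", "median"):
    stat, p = stats.levene(g1, g2, g3, g4, center=center)
    print(center, round(stat, 4), round(p, 4))

# Variant based on the 10% trimmed mean.
stat, p = stats.levene(g1, g2, g3, g4, center="trimmed", proportiontocut=0.1)
print("trimmed", round(stat, 4), round(p, 4))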
Some cautions about Levene’s test: You need to assume that the absolute values of the residuals satisfy the assumptions of ANOVA. Also, a more liberal cut off value when testing homogeneity of
variances is often used due to the poor power of these tests.
Real Statistics Function: The following supplemental functions contained in the Real Statistics Resource Pack compute the p-value for Levene’s test.
LEVENE(R1, type) = p-value of for Levene’s test for the data in range R1. If type = 0 then group means are used; if type > 0 then group medians are used; if type < 0 then 10% trimmed group means are
used. If the second argument is omitted it defaults to 0.
This function ignores any empty or non-numeric cells.
For example, for the data in Example 1, LEVENE(B6:E13) = LEVENE(B6:E13, 0) = 0.90357 (referring to Figure 1). Note that, for the same data, LEVENE(B6:E13, 1) = 0.97971 and LEVENE(B6:E13, 2) =
Real Statistics Data Analysis Tool: A Levene’s Test option is included in the Single Factor Anova data analysis tool. This option displays the results of all three versions of Levene’s test.
To use this tool for Example 1, enter Ctrl-m and select Single Factor Anova from the menu. A dialog box similar to that shown in Figure 1 of Confidence Interval for ANOVA appears. Enter B5:E13 in
the Input Range, check Column headings included with data, select the Levene’s Test option and click on the OK button.
Bartlett’s Test
We now show another test for homogeneity of variances using Bartlett’s test statistic B, which is approximately chi-square:
$B = \dfrac{df_W \ln s^2 - \sum_{j=1}^{k} df_j \ln s_j^2}{1 + \dfrac{1}{3(k-1)}\left(\sum_{j=1}^{k} \dfrac{1}{df_j} - \dfrac{1}{df_W}\right)}$
where $s^2$ is the pooled variance, which as we have seen is MS[W], and $df_j = n_j - 1$, $df_W = \sum_{j=1}^{k} df_j$. Since $s^2 = MS_W = \sum_{j=1}^{k} df_j s_j^2 / df_W$, B can also be written directly in terms of MS[W].
Here MS[W] is the pooled variance across all groups. Thus the null hypothesis that all the group variances are equal is rejected if p-value < α where p-value = CHIDIST(B, k–1). B is only
approximately chi-square, but the approximation should be good enough if there are at least 3 observations in each sample.
Bartlett’s test is very sensitive to departures from normality. If the samples come from non-normal distributions, then Bartlett’s test may simply be testing for non-normality. Levene’s test is less
sensitive to departures from normality.
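As a sanity check on the definition of B, the sketch below computes the statistic directly from the formula above and compares it with SciPy's built-in implementation (the group data are again invented for illustration):

import numpy as np
from scipy import stats

# Hypothetical samples standing in for the k = 4 groups.
groups = [np.array([51, 87, 50, 48, 79, 61, 53, 54], dtype=float),
          np.array([82, 91, 92, 80, 52, 79, 73, 74], dtype=float),
          np.array([79, 84, 74, 98, 63, 83, 85, 58], dtype=float),
          np.array([85, 80, 65, 71, 67, 51, 63, 93], dtype=float)]

k = len(groups)
df = np.array([len(g) - 1 for g in groups])      # df_j = n_j - 1
s2 = np.array([g.var(ddof=1) for g in groups])   # sample variances s_j^2
df_w = df.sum()                                  # df_W
ms_w = (df * s2).sum() / df_w                    # pooled variance MS_W

num = df_w * np.log(ms_w) - (df * np.log(s2)).sum()
den = 1.0 + ((1.0 / df).sum() - 1.0 / df_w) / (3.0 * (k - 1))
B = num / den
p_value = stats.chi2.sf(B, k - 1)                # same as CHIDIST(B, k-1)

# Cross-check against SciPy; the two should agree.
B_scipy, p_scipy = stats.bartlett(*groups)
print(B, p_value, B_scipy, p_scipy)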
Example 2: Use Bartlett’s test to determine whether the 4 samples in Example 2 of Basic Concepts for ANOVA have significantly different population variances.
We obtain Bartlett’s test statistic B (cell I6 of Figure 2) by calculating the numerator and denominator of B as described above (cells I4 and I5). To do this we first calculate the values df[j], 1
⁄ df[j], $s_j^2$ and ln $s_j^2$ (cells in the range B13:E16). We also calculate df[W], 1 ⁄ df[W], MS[W] and ln MS[W] (cells in range F13:F16). Note that MS[W] = SUMPRODUCT(B13:E13,B15:E15)/F13.
Since p-value = CHIDIST(B, k–1) = CHIDIST(1.88, 3) = .979 > .05 = α, we don’t reject the null hypothesis, and so conclude that there is no significant difference between the variances of the four groups.
Note that if we change the first sample for Method 4 to 185 (instead of 85) and repeat the analysis we would find that there would be a significant difference in the variances (B = 17.23, p-value =
.001 < .05 = α). This would be due to this one outlier. That it was an outlier would show up easily in any graphic representation. We would then need to decide whether this item was simply an error
in measurement or a true measurement (see Outliers in ANOVA).
Dealing with heterogeneity of variances
We present four ways of dealing with models where the variances are not sufficiently homogeneous. In the rest of this section we will look at one of them: transformations that can address homogeneity of variance. In particular, we look at square root and log transformations. For transformations that address normality, see Transformations to Create Symmetry.
Log transformation for homogeneity of variances: A log transformation can be effective when the standard deviations of the group samples are proportional to the group means. Here a log to any base
can be used, although log base 10 and the natural log (i.e. log base e) are the common choices. Since you can’t take the log of a negative number, it may be necessary to use the transformation f(x) =
log(x+a) where a is a constant sufficiently large to make sure that all the x + a are positive.
Example 3: In an experiment the data in Figure 3 were collected. Check that the variances are homogeneous before proceeding with other tests.
The sample variances in Figure 3 seem quite different. When we perform Levene’s test (Figure 4), we confirm that there is a significant difference between the variances (p-value = 0.024 < .05 = α).
We note there is a correlation between the group means and group standard deviations (r = .88), which leads us to try a log transformation (here we use base 10) to achieve homogeneity of variances (table on the left of Figure 5).
We can see that the variances in the transformed data are more similar. This time Levene’s test (the table on the right of Figure 5) shows that there is no significant difference between the
variances (p-value =.20 > .05).
Square root transformation for homogeneity of variances: When the group means are proportional to the group variances, often a square root transformation $f(x) = \sqrt{x}$ is useful. Since you can’t
take the square root of a negative number, it may be necessary to use a transformation of form $f(x) = \sqrt{x + a}$, where a is a constant chosen to make sure that all values of x + a are positive.
If the values of x are small (e.g. |x| < 10), it might be better to use the transformation $f(x) = \sqrt{x + .5}$ or $f(x) = \sqrt{x}$ + $\sqrt{x + 1}$.
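A short illustration of both transformations (the data below are synthetic, generated so that the group standard deviations grow with the group means; only the pattern matters, not the numbers):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
means = [10.0, 40.0, 160.0]
# Standard deviations proportional to the means, the case where a
# log transform is expected to help.
groups = [rng.normal(m, 0.3 * m, size=20) for m in means]

print("raw  :", stats.levene(*groups, center="median"))

# Log transform; shift by a constant first if any value could be <= 0.
print("log10:", stats.levene(*[np.log10(g) for g in groups], center="median"))

# Square-root transform, for the case of variances proportional to means.
print("sqrt :", stats.levene(*[np.sqrt(g) for g in groups], center="median"))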
6 Responses to Homogeneity of Variances
1. Many thanks for this… The easy to follow guide to Levene’s and Bartlett’s included in your download is just what I needed to sort out a tricky analytical problem…
□ Ned,
I am very pleased that the site has been useful for you. I hope that you will use it again in the future.
2. Hi,
Can you please let me know what transformation method I should be using if both standard deviation to means and means to variances are not proportional? There is no strong correlation for both?
□ Sriya,
There is no easy answer to your question. It all depends on your data. There are an unlimited number of transformations as well (1/x, x^2, etc.). It also may turn out that a particular
transformation creates more problems than it solves.
3. Sir
Will you add a Real Statistics function for “Bartlett’s Test”?
□ Colin,
Bartlett’s Test is also called Box’s Test. This is already included in the Real Statistics Resource Pack (see multivariate statistics portion of the website). | {"url":"http://www.real-statistics.com/one-way-analysis-of-variance-anova/homogeneity-variances/","timestamp":"2014-04-16T17:39:07Z","content_type":null,"content_length":"55494","record_id":"<urn:uuid:0f999863-1ed3-4172-904a-c6d32f9c8cb1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
10.2.2 Operations and Elements
Finite-element approximations to functions form a finite-dimensional vector space, and as such may be multiplied by a scalar and added. Functions are provided to do these operations. If the function
is expressed as Lagrangian elements it may also be differentiated, which changes the order of representation: For example, differentiating a quadratic element produces a linear element.
At present, DIMEFEM provides two kinds of elements, Lagrangian and Gaussian, although strictly speaking the latter is not a finite element because it possesses no interpolation functions. The
Gaussian element is simply a collection of function values at points within each triangle and a set of weights, so that integrals may be done by summing the function values multiplied by the weights.
As with one-dimensional Gaussian integration, integrals are exact to some polynomial order. We cannot differentiate Gaussian FEFs, but can apply pointwise operators such as multiplication and
function evaluation that cannot be done in the Lagrangian representation.
Consider a nonlinear operator L that integrates the exponential of the derivative of u over the domain. The most accurate way to evaluate this is to start with u in Lagrangian form, differentiate, convert to Gaussian representation, exponentiate, then multiply by the weights and sum. This can be done explicitly with DIMEFEM, but in the future we hope to create an environment which ``knows'' about representations, linearity, and so on, and can parse an expression such as the above and evaluate it.
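As a rough one-dimensional illustration of this pipeline (DIMEFEM itself works on triangles, and the operator here is only a stand-in for the one described above), the sketch below differentiates a piecewise-linear u element by element, applies the pointwise exponential at the Gauss points, and sums against the weights:

import numpy as np

nodes = np.linspace(0.0, 1.0, 11)       # 1D mesh nodes
u = np.sin(np.pi * nodes)               # nodal values of a linear-element u

# 2-point Gauss rule on the reference element [-1, 1].
gauss_pts = np.array([-1.0, 1.0]) / np.sqrt(3.0)
gauss_wts = np.array([1.0, 1.0])

total = 0.0
for e in range(len(nodes) - 1):
    h = nodes[e + 1] - nodes[e]
    du = (u[e + 1] - u[e]) / h          # derivative: constant per element
    # "Gaussian representation" of exp(u') on this element, then quadrature.
    vals = np.exp(du) * np.ones_like(gauss_pts)
    total += np.dot(gauss_wts, vals) * h / 2.0

print(total)                            # approximates the integral of exp(u')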
The computational kernel of any finite-element software is the linear solver. We have implemented this with preconditioned conjugate gradient, so that the user supplies a linear operator L, an elliptic bilinear operator a, a scalar product S (a strongly elliptic symmetric bilinear operator which satisfies the triangle inequality), and an initial guess for the solution. The conjugate-gradient solver replaces the guess by the solution u of the standard variational equation $a(u, v) = L(v)$ for all test functions v.
Guy Robinson
Wed Mar 1 10:19:35 EST 1995 | {"url":"http://www.netlib.org/utk/lsi/pcwLSI/text/node243.html","timestamp":"2014-04-19T06:53:49Z","content_type":null,"content_length":"4697","record_id":"<urn:uuid:5ea0df89-9d46-4e37-b739-e0534a624247>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00012-ip-10-147-4-33.ec2.internal.warc.gz"} |
At one time or another, most of us will likely have to write code performing some amount of numerical computation beyond simple integer arithmetic. As many of us are neither mathematicians nor
intimately familiar with the bit gymnastics our machines must perform in order to manipulate numbers, we can get ourselves into trouble if we're not careful. Luckily, "Java Number Cruncher" comes to
the rescue.
This book is an introduction to numerical computing using Java providing "non-theoretical explanations of practical numerical algorithms." While this sounds like heady stuff, freshman level calculus
should be sufficient to get the most out of this text.
The first three chapters are amazingly useful, and worth the price of admission alone. Mak does a fine job explaining in simple terms the pitfalls of even routine integer and floating-point
calculations, and how to mitigate these problems. Along the way the reader learns the details of how Java represents numbers and why good math goes bad. The remainder of the book covers iterative
computations, matrix operations, and several "fun" topics, including fractals and random number generation.
The author conveys his excitement for the subject in an easy-to-read, easy-to-understand manner. Examples in Java clearly demonstrate the topics covered. Some may not like that the complete source is
in-line with the text, but this is subjective. Overall, I found this book educational, interesting, and quite enjoyable to read.
(Jason Menard - Bartender, May 2003) | {"url":"http://www.javaranch.com/journal/2003/06/BookReviewOfTheMonth.htm","timestamp":"2014-04-16T05:14:34Z","content_type":null,"content_length":"3828","record_id":"<urn:uuid:c9798943-3b2f-4f12-8a04-b14518cbbe00>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
V.: Definitions in answer set programming
Results 1 - 10 of 14
- In Proceedings of International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR , 2005
"... Abstract. Equilibrium logic, introduced by David Pearce, extends the concept of an answer set from logic programs to arbitrary sets of formulas. Logic programs correspond to the special case in
which every formula is a “rule ” — an implication that has no implications in the antecedent (body) and c ..."
Cited by 68 (8 self)
Abstract. Equilibrium logic, introduced by David Pearce, extends the concept of an answer set from logic programs to arbitrary sets of formulas. Logic programs correspond to the special case in which
every formula is a “rule ” — an implication that has no implications in the antecedent (body) and consequent (head). The semantics of equilibrium logic looks very different from the usual definitions
of an answer set in logic programming, as it is based on Kripke models. In this paper we propose a new definition of equilibrium logic which uses the concept of a reduct, as in the standard
definition of an answer set. Second, we apply the generalized concept of an answer set to the problem of defining the semantics of aggregates in answer set programming. We propose, in particular, a
semantics for weight constraints that covers the problematic case of negative weights. Our semantics of aggregates is an extension of the approach due to Faber, Leone, and Pfeifer to a language with
choice rules and, more generally, arbitrary rules with nested expressions. 1
- Artificial Intelligence , 2002
"... We propose a new definition of abduction in logic programming, and contrast it with that of Kakas and Mancarella's. We then introduce a rewriting system for answering queries and generating
explanations, and show that it is both sound and complete under the partial stable model semantics and so ..."
Cited by 20 (5 self)
We propose a new definition of abduction in logic programming, and contrast it with that of Kakas and Mancarella's. We then introduce a rewriting system for answering queries and generating
explanations, and show that it is both sound and complete under the partial stable model semantics and sound and complete under the answer set semantics when the underlying program is so-called
odd-loop free. We discuss an application of the work to a problem in reasoning about actions and provide some experimental results. 1 Abduction in logic programming In general, given a background
theory T , and an observation q to explain, an abduction of q w.r.t. T is a theory \Pi such that \Pi [ T j= q. Normally, we want to put some additional conditions on \Pi, such as that it is
consistent with T and contains only those propositions called abducibles. For instance, in propositional logic, given a background theory T , a set A of assumptions or abducibles, and a proposition
q, an explanation S...
- IN PADL’04, LECTURE NOTES IN ARTIFICIAL INTELLIGENCE (LNCS , 2004
"... In this paper we show how CR-Prolog, a recent extension of A-Prolog, was used in the successor of USA-Advisor (USA-Smart) in order to improve the quality of the plans returned. The general
problem that we address is that of improving the quality of plans by taking in consideration statements that ..."
Cited by 13 (4 self)
In this paper we show how CR-Prolog, a recent extension of A-Prolog, was used in the successor of USA-Advisor (USA-Smart) in order to improve the quality of the plans returned. The general problem
that we address is that of improving the quality of plans by taking in consideration statements that describe "most desirable" plans. We believe that USA-Smart proves that CR-Prolog provides a
simple, elegant, and flexible solution to this problem, and can be easily applied to any planning domain. We also discuss how alternative extensions of A-Prolog can be used to obtain similar results.
- Ceur-WS , 2003
"... The paper is an epistemological analysis of logic programming and shows an epistemological ambiguity. Many different logic programming formalisms and semantics have been proposed. Hence, logic
programming can be seen as a family of formal logics, each induced by a pair of a syntax and a semantics ..."
Cited by 8 (3 self)
The paper is an epistemological analysis of logic programming and shows an epistemological ambiguity. Many different logic programming formalisms and semantics have been proposed. Hence, logic
programming can be seen as a family of formal logics, each induced by a pair of a syntax and a semantics, and each having a different declarative reading. However, we may expect that (a) if a program
belongs to different logics of this family and has the same formal semantics in these logics, then the declarative meaning attributed to this program in the different logics is equivalent, and (b)
that one and the same logic in this family has not been associated with distinct declarative readings.
- In Proc. 15th Canadian Conference on AI, LNCS , 2002
"... Logic programming with the stable model semantics has been proposed as a constraint programming paradigm for solving constraint satisfaction and other combinatorial problems. In such a language
one writes function-free logic programs with negation. Such a program is instantiated to a ground program ..."
Cited by 7 (3 self)
Logic programming with the stable model semantics has been proposed as a constraint programming paradigm for solving constraint satisfaction and other combinatorial problems. In such a language one
writes function-free logic programs with negation. Such a program is instantiated to a ground program from which the stable models are computed. In this paper, we identify a class of logic programs
for which the current techniques in solving SAT problems can be adopted for the computation of stable models efficiently. These logic programs are called 2-literal programs where each rule or
constraint consists of at most two literals. Many logic programming encodings of graph-theoretic, combinatorial problems given in the literature fall into the class of 2-literal programs. We show
that a 2-literal program can be translated to a SAT instance without using extra variables. We report and compare experimental results on solving a number of benchmarks by a stable model generator
and by a SAT solver. 1
- In Proceedings of International Workshop on Non-Monotonic Reasoning , 2004
"... It is well known that it is possible to split certain autoepistemic theories under the semantics of expansions, i.e. to divide such a theory into a number of different “levels”, such that the
models of the entire theory can be constructed by incrementally constructing models for each level. Similar ..."
Cited by 6 (3 self)
It is well known that it is possible to split certain autoepistemic theories under the semantics of expansions, i.e. to divide such a theory into a number of different “levels”, such that the models
of the entire theory can be constructed by incrementally constructing models for each level. Similar results exist for other non-monotonic formalisms, such as logic programming and default logic. In
this work, we present a general, algebraic theory of splitting under a fixpoint semantics. Together with the framework of approximation theory, a general fixpoint theory for arbitrary operators, this
gives us a uniform and powerful way of deriving splitting results for each logic with a fixpoint semantics. We demonstrate the usefulness of this approach by applying our results to auto-epistemic logic.
, 2008
"... Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many
applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and i ..."
Cited by 3 (0 self)
Answer set programming (ASP) is a logic programming paradigm that can be used to solve complex combinatorial search problems. Aggregates are an ASP construct that plays an important role in many
applications. Defining a satisfactory semantics of aggregates turned out to be a difficult problem, and in this paper we propose a new approach, based on an analogy between aggregates and
propositional connectives. First, we extend the definition of an answer set/stable model to cover arbitrary propositional theories; then we define aggregates on top of them both as primitive
constructs and as abbreviations for formulas. Our definition of an aggregate combines expressiveness and simplicity, and it inherits many theorems about programs with nested expressions, such as
theorems about strong equivalence and splitting. 1
"... Abstract. A recent framework of relativized hyperequivalence of programs offers a unifying generalization of strong and uniform equivalence. It seems to be especially well suited for
applications in program optimization and modular programming due to its flexibility that allows us to restrict, indep ..."
Cited by 3 (2 self)
Abstract. A recent framework of relativized hyperequivalence of programs offers a unifying generalization of strong and uniform equivalence. It seems to be especially well suited for applications in
program optimization and modular programming due to its flexibility that allows us to restrict, independently of each other, the head and body alphabets in context programs. We study relativized
hyperequivalence for the three semantics of logic programs given by stable, supported and supported minimal models. For each semantics, we identify four types of contexts, depending on whether the
head and body alphabets are given directly or as the complement of a given set. Hyperequivalence relative to contexts where the head and body alphabets are specified directly has been studied before.
In this paper, we establish the complexity of deciding relativized hyperequivalence wrt the three other types of context programs. 1
"... This paper studies a semantics of multiple logic programs, and synthesizes a program having such a collective semantics. More precisely, the following two problems are considered: given two
logic programs P1 and P2, which have the collections of answer sets AS(P1) andAS(P2), respectively; (i) find a ..."
Cited by 2 (0 self)
This paper studies a semantics of multiple logic programs, and synthesizes a program having such a collective semantics. More precisely, the following two problems are considered: given two logic programs P1 and P2, which have the collections of answer sets AS(P1) and AS(P2), respectively; (i) find a program Q which has the set of answer sets such that AS(Q) = AS(P1) ∪ AS(P2); (ii) find a program R which has the set of answer sets such that AS(R) = AS(P1) ∩ AS(P2). A program Q satisfying the condition (i) is called generous coordination of P1 and P2; and R satisfying (ii) is called rigorous coordination of P1 and P2. Generous coordination retains all of the answer sets of each program, but permits the introduction of additional answer sets of the other program. By contrast, rigorous coordination forces each program to give up some answer sets, but the result remains within the original answer sets for each program. Coordination provides a program that reflects the meaning of two or more programs. We provide methods for constructing these two types of coordination and address its application to logic-based multi-agent systems.
Complex Analysis - Integration
Use a theorem to show that: $\int_{-\infty}^{\infty}\frac{3\sin 2t}{t(t^2+9)}\,dt = \frac{\pi i}{3}\left(1-\frac{1}{e^6}\right)$
How can the integral that you gave, which is real valued possibly have $i$ in it? Check your problem again.
I was moving threads and found this one. Let me try to find this integral. As noted above, the $i$ in the stated answer must be a typo; we will see that the correct value is $\frac{\pi}{3}\left(1-e^{-6}\right)$. Here we need to use two theorems.

Jordan's Lemma: Let $f(z)$ be a meromorphic function in the upper half-plane with finitely many poles $b_1,...,b_n$ in the upper half-plane, and suppose $f(z)\to 0$ as $|z|\to\infty$ in the upper half-plane. Then for each positive number $a$ we have $\int_{-\infty}^{\infty} e^{iax} f(x)\, dx = 2\pi i \sum_{k=1}^n \mbox{res}(g,b_k)$, where $g(z) = e^{iaz}f(z)$.

Lemma: Let $f(z)$ have a simple pole at $c$ with residue $\rho$, and let $\gamma$ be the curve $\gamma(t) = c + re^{it}$ for $a\leq t\leq b$. Then $\lim_{r\to 0^+} \int_{\gamma}f(z)\, dz = i\rho(b-a)$.

Now we can evaluate $\int_{-\infty}^{\infty}\frac{3\sin 2x}{x(x^2+9)}\, dx$. Consider the meromorphic function $f(z) = \frac{3}{z(z^2+9)}$. The bad thing is that we cannot use Jordan's Lemma directly, because the poles of this function are $z=0$, $z=\pm 3i$. We disregard $z=-3i$ since we never work in the lower half-plane, but $z=0$ lies on the real axis, so the hypotheses of Jordan's Lemma are not fulfilled. To get around this we draw a small circular contour around the "bad" point. Look at the picture below. The bottom of the contour is divided into three parts: $\gamma_1$, the line segment from $-\infty$ to $-r$; $\gamma_2$, the small semicircular contour around the origin (notice, this is negatively oriented); and $\gamma_3$, the segment from $r$ to $\infty$.

Now the only pole in the upper half-plane of $g(z) = \frac{3e^{2iz}}{z(z^2+9)}$ is $z=3i$, and $\mbox{res}(g,3i)=\lim_{z\to 3i} (z-3i)\cdot \frac{3e^{2iz}}{z(z-3i)(z+3i)} = \frac{3e^{-6}}{(3i)(6i)} = -\frac{e^{-6}}{6}$. (Be careful to keep the factor $z$ in the denominator here.)

Now by this generalization of Jordan's Lemma we have (notice the negative sign on $\gamma_2$): $\int_{-\infty}^{-r}\frac{3e^{2ix}}{x(x^2+9)}\, dx - \int_{\gamma_2} \frac{3e^{2iz}}{z(z^2+9)}\, dz + \int_r^{\infty} \frac{3e^{2ix}}{x(x^2+9)}\, dx = 2\pi i \left(-\frac{e^{-6}}{6}\right) = -\frac{\pi i e^{-6}}{3}$.

Since $\mbox{res}(g,0) = \lim_{z\to 0} z\cdot \frac{3e^{2iz}}{z(z^2+9)} = \frac{3}{9} = \frac{1}{3}$, taking the limit as $r\to 0^+$ and applying the second Lemma (with $b-a=\pi$) gives: $\int_{-\infty}^{\infty} \frac{3e^{2ix}}{x(x^2+9)}\, dx = -\frac{\pi i e^{-6}}{3} + \frac{\pi i}{3} = \frac{\pi i}{3}\left(1 - e^{-6}\right)$.

Thus, $\int_{-\infty}^{\infty} \frac{3\cos 2x}{x(x^2+9)}\, dx + i\int_{-\infty}^{\infty} \frac{3\sin 2x}{x(x^2+9)}\, dx = \frac{\pi i}{3}\left(1 - e^{-6}\right)$. Comparing real and imaginary parts: $\int_{-\infty}^{\infty} \frac{3\cos 2x}{x(x^2+9)}\, dx = 0$ (which makes sense, since that integrand is odd), and $\int_{-\infty}^{\infty} \frac{3\sin 2x}{x(x^2+9)}\, dx = \frac{\pi}{3}\left(1 - e^{-6}\right)$. So the answer in the original post is correct once the stray $i$ is removed.
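For anyone who wants a numerical sanity check, here is a quick SciPy script (my own, not from the thread); truncating the infinite range to [-200, 200] is an approximation, since the tail contributes only on the order of 1/200^2:

import numpy as np
from scipy.integrate import quad

def integrand(t):
    # 3*sin(2t) / (t*(t^2 + 9)); the singularity at t = 0 is removable,
    # with limit 6/9 = 2/3, so guard the origin explicitly.
    if t == 0.0:
        return 2.0 / 3.0
    return 3.0 * np.sin(2.0 * t) / (t * (t**2 + 9.0))

value, err = quad(integrand, -200.0, 200.0, limit=1000)
expected = (np.pi / 3.0) * (1.0 - np.exp(-6.0))
print(value, expected)   # both approximately 1.0446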
Suppose we are given set A = {1, 2, 3, 4, 6, 12} and a relation, R, from A x A. The relation is defined as follows: R = {(a, b) | a divides b} where (a, b) are elements of A x A. a) List all the
ordered pairs (a, b) that are elements of the relation. b) Use the results from part a to construct the corresponding zero-one matrix.
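Here is a minimal Python sketch for both parts (the variable names are mine, not from the problem); it enumerates the ordered pairs and builds the zero-one matrix, with rows and columns ordered as in A:

A = [1, 2, 3, 4, 6, 12]

# Part a: all ordered pairs (a, b) in A x A with a dividing b.
pairs = [(a, b) for a in A for b in A if b % a == 0]
print(pairs)
# [(1, 1), (1, 2), (1, 3), (1, 4), (1, 6), (1, 12),
#  (2, 2), (2, 4), (2, 6), (2, 12), (3, 3), (3, 6), (3, 12),
#  (4, 4), (4, 12), (6, 6), (6, 12), (12, 12)]

# Part b: zero-one matrix, M[i][j] = 1 iff A[i] divides A[j].
M = [[1 if b % a == 0 else 0 for b in A] for a in A]
for row in M:
    print(row)
# [1, 1, 1, 1, 1, 1]
# [0, 1, 0, 1, 1, 1]
# [0, 0, 1, 0, 1, 1]
# [0, 0, 0, 1, 0, 1]
# [0, 0, 0, 0, 1, 1]
# [0, 0, 0, 0, 0, 1]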
500 meters is how many miles
A Framework for Binding Operators
Sun Yong
Abstract: Binding appears in logic, programming and concurrency; for example, the lambda-abstraction in Lambda Calculus. However, the binding in Lambda Calculus is unary. We can generalize the idea of unary binding to an arbitrary finite number of bindings. Algebraically, we can extend the framework of Universal Algebra [23,28,56] and take arbitrary finite bindings as primitives. Operations in the new extended signature must therefore be of second order instead of first order, and we name them Binding Operators. The resulting framework is called a Framework for Binding Operators, which coincides with Shapiro's diminished second order language [138].
With a modification of Aczel's Frege Structure [4], we derive the algebras for Binding Operators, i.e. eBAs. The usual first order algebras, Plotkin's Pω-model of Lambda Calculus, and Girard's qualitative domains [47] turn out to be special cases of eBAs. Also, eBAs turn out to be (i) a generalization of Kechris and Moschovakis' suitable class of functionals in Recursion in Higher Types [83] and (ii) a generalization of Volken's lambda-family [7, p.127]. Following Birkhoff [16], we would like to equationally characterize Binding Operators. Kechris and Moschovakis' Enumeration Theorem [83] suggests that such an algebraic characterization might be possible.
Unfortunately, eBAs and the usual satisfaction |=eBA of Binding Equations over these eBAs, in Birkhoff's approach, do not work. Therefore, we have to find either a remedy for it or a new semantic
model for Binding Operators. We will present two solutions, one for each.
(a) For a remedy, we discover a condition for Birkhoff's approach to work. This condition is necessary and sufficient, and we call it an admissible condition, which turns out weaker than Plotkin's
Logical Relations [121] in the sense that ``logical'' implies ``admissible''. An admissible equational calculus |-eBA for Binding Equations is obtained, which is sound and complete with respect to
admissible satisfaction |=eBA. The relationship between Completeness and Admissible Completeness (or between satisfaction |=eBA and admissible satisfaction |=eBA) is discussed, although it is not
completely clear. Other problems remain open as well, say the closedness of direct products and the admissible variety problem.
(b) For a new semantic model, we will give a new binding algebra, i.e. iBA, which is intensional in contrast to the previous (extensional) one. Actually, an iBA is a generalization of Friedman's
Prestructures [46]. A sound and complete equational calculus |-iBA (in iBAs) is established. However, the derivability of |-iBA is weaker than the one of |-eBA. In other words, to share a same proof
power with |-eBA, |-iBA has to use axiomatic schemas instead of pure axioms. Also, the relations between extensional satisfaction |-eBA and intensional satisfaction |-iBA, and between admissible
calculus |-eBA and calculus |-iBA are discussed.
Examples of applications with present Framework for Binding Operators and |-iBA are given. They are equationalizations of (i) First Order Logic [140,139], (ii) Lambda Calculus and Combinatory Logic
[32,33,7]. and (iii) Milner's Calculus of Communicating Systems [108,109] with data-dependency. These demonstrate that the Framework for Binding Operators provides a unified algebraic framework for
all of logic, computation and parallel computation.
PhD Thesis - Price £10.00
LFCS report ECS-LFCS-92-207 (also published as CST-91-92) | {"url":"http://www.lfcs.inf.ed.ac.uk/reports/92/ECS-LFCS-92-207/","timestamp":"2014-04-18T13:53:13Z","content_type":null,"content_length":"7789","record_id":"<urn:uuid:c6643902-d4fa-4f75-bebc-3dbb986b16cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00218-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent application title: EXCITATION VECTOR GENERATOR, SPEECH CODER AND SPEECH DECODER
A noise estimating apparatus estimates two types of noise spectra for removing a noise component using the two types of noise spectra. The noise estimating apparatus includes an A/D converter that
converts an input speech signal to a digital signal, and a Fourier transformer that performs a discrete Fourier transform on the digital signal having a predetermined time length to obtain an input
spectrum and a complex spectrum. The noise estimating apparatus also includes a noise spectrum storage device that stores the two types of noise spectra, including a mean noise spectrum and a
compensation noise spectrum, and a noise estimator that estimates a new compensation noise spectrum and a new mean noise spectrum as new two types of noise spectra.
A noise estimating apparatus for estimating two types of noise spectra for removing a noise component using the two types of noise spectra, the apparatus comprising: an A/D converter that converts an
input speech signal to a digital signal; a Fourier transformer that performs a discrete Fourier transform on the digital signal having a predetermined time length obtained by the A/D converter to
obtain an input spectrum and a complex spectrum; a noise spectrum storage device that stores the two types of noise spectra including a mean noise spectrum for use in noise cancellation processing
and a compensation noise spectrum for compensating for a spectrum of an over-reduced frequency in the noise cancellation processing; a noise estimator that estimates a new compensation noise spectrum
and a new mean noise spectrum as new two types of noise spectra, the new compensation noise spectrum being obtained by comparing the input spectrum obtained by the Fourier transformer with the
compensation noise spectrum stored in the noise spectrum storage device, the new mean noise spectrum being obtained by using the input spectrum obtained by the Fourier transformer in a learning
algorithm, and stores the new two types of spectra in the noise spectrum storage device.
A noise estimating apparatus according to claim 1, wherein the noise estimator determines if it is a noise segment in advance; compares the input spectrum obtained by the Fourier transformer with the
noise spectrum for compensation for each frequency when it is determined as noise; sets the noise spectrum for compensation of an associated frequency as a new input spectrum thereby estimating the
compensation noise spectrum, when the input spectrum is smaller than the new noise spectrum for compensation; separately from the estimation of the compensation noise spectrum, estimates the new mean
noise spectrum by the learning algorithm adding the input spectrum at a given ratio; and further stores the obtained new compensation noise spectrum and the new mean noise spectrum in the noise
spectrum storage device.
A noise estimating method for estimating two types of noise spectra for removing a noise component using the two types of noise spectra, the method comprising: A/D converting an input speech signal
to a digital signal; performing a discrete Fourier transform on the digital signal having a predetermined time length obtained by the A/D converting to obtain an input spectrum and a complex
spectrum; storing, in a noise spectrum storage device, the two types of noise spectra including a mean noise spectrum for use in noise cancellation processing and a compensation noise spectrum for
compensating for a spectrum of an over-reduced frequency in the noise cancellation processing; and estimating a new compensation noise spectrum and a new mean noise spectrum as new two types of noise
spectra, the new compensation noise spectrum being obtained by comparing the input spectrum obtained by the Fourier transformation step with the compensation noise spectrum stored in the noise
spectrum storage device, the new mean noise spectrum being obtained by using the input spectrum obtained by the Fourier transformation step in a learning algorithm, and stores the new two types of
noise spectra in the noise spectrum storage device.
This is a continuation of pending U.S. patent application Ser. No. 12/870,122 filed Aug. 27, 2010, which is a continuation of U.S. patent application Ser. No. 12/134,256 which issued into U.S. Pat.
No. 7,809,557 on Oct. 5, 2010, which is a continuation of U.S. patent application Ser. No. 11/421,932, which issued into U.S. Pat. No. 7,398,205 on Jul. 8, 2008, which is a continuation of U.S.
patent application Ser. No. 09/849,398 which issued into U.S. Pat. No. 7,289,952 on Oct. 30, 2007, which is a divisional of U.S. patent application Ser. No. 09/101,186, which issued into U.S. Pat.
No. 6,453,288 on Sep. 17, 2002, which was the National Stage of International Application No. PCT/JP97/04033, filed Nov. 6, 1997 the contents of which are each expressly incorporated by reference
herein in their entireties. The International Application was not published in English.
TECHNICAL FIELD [0002]
The present invention relates to an excitation vector generator capable of obtaining a high-quality synthesized speech, and a speech coder and a speech decoder which can code and decode a
high-quality speech signal at a low bit rate.
BACKGROUND ART [0003]
A CELP (Code Excited Linear Prediction) type speech coder executes linear prediction for each of frames obtained by segmenting a speech at a given time, and codes predictive residuals (excitation
signals) resulting from the frame-by-frame linear prediction, using an adaptive codebook having old excitation vectors stored therein and a random codebook which has a plurality of random code
vectors stored therein. For instance, "Code-Excited Linear Prediction (CELP): High-Quality Speech at Very Low Bit Rate," M. R. Schroeder, Proc. ICASSP '85, pp. 937-940 discloses a CELP type speech
FIG. 1 illustrates the schematic structure of a CELP type speech coder. The CELP type speech coder separates vocal information into excitation information and vocal tract information and codes them.
With regard to the vocal tract information, an input speech signal 10 is input to a filter coefficients analysis section 11 for linear prediction and linear predictive coefficients (LPCs) are coded
by a filter coefficients quantization section 12. Supplying the linear predictive coefficients to a synthesis filter 13 allows vocal tract information to be added to excitation information in the
synthesis filter 13. With regard to the excitation information, excitation vector search in an adaptive codebook 14 and a random codebook 15 is carried out for each segment obtained by further
segmenting a frame (called subframe). The search in the adaptive codebook 14 and the search in the random codebook 15 are processes of determining the code number and gain (pitch gain) of an adaptive
code vector, which minimizes coding distortion in an equation 1, and the code number and gain (random code gain) of a random code vector.
$$\|v - (g_a Hp + g_c Hc)\|^2 \quad (1)$$
where
v: speech signal (vector)
H: impulse response convolution matrix of the synthesis filter,
$$H = \begin{bmatrix} h(0) & 0 & \cdots & 0 \\ h(1) & h(0) & \ddots & \vdots \\ h(2) & h(1) & \ddots & 0 \\ \vdots & & \ddots & \\ h(L-1) & \cdots & h(1) & h(0) \end{bmatrix}$$
h: impulse response (vector) of the synthesis filter
L: frame length
p: adaptive code vector
c: random code vector
ga: adaptive code gain (pitch gain)
gc: random code gain
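As a concrete illustration of the matrix H above, the following sketch (with a toy impulse response, not taken from the patent) builds the lower-triangular Toeplitz convolution matrix so that H @ c convolves a code vector c with h, truncated to the frame length L:

import numpy as np

def convolution_matrix(h):
    # Lower-triangular Toeplitz matrix built from the impulse response
    # h(0), ..., h(L-1): row i, column j holds h(i - j) for j <= i.
    L = len(h)
    H = np.zeros((L, L))
    for i in range(L):
        for j in range(i + 1):
            H[i, j] = h[i - j]
    return H

h = np.array([1.0, 0.7, 0.3, 0.1])   # toy impulse response, L = 4
print(convolution_matrix(h))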
Because a closed loop search of the code that minimizes the equation 1 involves a vast amount of computation for the code search, however, an ordinary CELP type speech coder first performs adaptive
codebook search to specify the code number of an adaptive code vector, and then executes random codebook search based on the searching result to specify the code number of a random code vector.
The speech coder search by the CELP type speech coder will now be explained with reference to FIGS. 2A through 2C. In the figures, a code x is a target vector for the random codebook search obtained
by an equation 2. It is assumed that the adaptive codebook search has already been accomplished.
$x = v - g_a Hp \quad (2)$
where [0015]
x: target (vector) for the random codebook search
v: speech signal (vector)
H: impulse response convolution matrix H of the synthesis filter
p: adaptive code vector
ga: adaptive code gain (pitch gain)
The random codebook search is a process of specifying a random code vector c which minimizes the coding distortion defined by equation 3 in a distortion calculator 16, as shown in FIG. 2A:
$$\|x - g_c Hc\|^2 \quad (3)$$
where [0021]
x: target (vector) for the random codebook search
H: impulse response convolution matrix of the synthesis filter
c: random code vector
gc: random code gain.
The distortion calculator 16 controls a control switch 21 to switch a random code vector to be read from the random codebook 15 until the random code vector c is specified.
An actual CELP type speech coder has a structure in FIG. 2B to reduce the computational complexities, and a distortion calculator 16' carries out a process of specifying a code number which maximizes
a distortion measure in an equation 4.
$$\frac{(x^t Hc)^2}{\|Hc\|^2} = \frac{((x^t H)c)^2}{\|Hc\|^2} = \frac{((x')^t c)^2}{\|Hc\|^2} = \frac{((x')^t c)^2}{c^t H^t Hc} \quad (4)$$
where [0027]
x: target (vector) for the random codebook search
H: impulse response convolution matrix of the synthesis filter
$H^t$: transposed matrix of H
$x'$: time-reversed synthesis of x using H ($x' = H^t x$)
c: random code vector.
Specifically, the random codebook control switch 21 is connected to one terminal of the random codebook 15 and the random code vector c is read from an address corresponding to that terminal. The read random code vector c is synthesized with vocal tract information by the synthesis filter 13, producing a synthesized vector Hc. Then, the distortion calculator 16' computes the distortion measure in equation 4 using a vector x' obtained by time-reversing the target x, the vector Hc resulting from synthesis of the random code vector in the synthesis filter, and the random code vector c. As the random codebook control switch 21 is switched, computation of the distortion measure is performed for every random code vector in the random codebook.
Finally, the number of the random codebook control switch 21 that had been connected when the distortion measure in the equation 4 became maximum is sent to a code output section 17 as the code
number of the random code vector.
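A minimal sketch of this search loop is given below. The frame length, impulse response and codebook are invented for illustration; as discussed later for the algebraic excitation, a practical coder would precompute x' = H^t x and the autocorrelation matrix H^t H rather than form them inside the loop:

import numpy as np

def search_codebook(x, H, codebook):
    # Maximize the distortion measure of equation 4 over all entries.
    x_rev = H.T @ x                    # x' : time-reversed synthesis of x
    phi = H.T @ H                      # autocorrelation matrix H'H
    best_n, best_measure = -1, -np.inf
    for n, c in enumerate(codebook):
        measure = (x_rev @ c) ** 2 / (c @ phi @ c)   # (x'.c)^2 / c'H'Hc
        if measure > best_measure:
            best_n, best_measure = n, measure
    return best_n

rng = np.random.default_rng(1)
L, N = 40, 16                          # toy frame length and codebook size
h = 0.8 ** np.arange(L)                # toy impulse response
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(L)] for i in range(L)])
codebook = rng.normal(size=(N, L))
x = rng.normal(size=L)
print(search_codebook(x, H, codebook))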
FIG. 2C shows a partial structure of a speech decoder. The switching of the random codebook control switch 21 is controlled in such a way as to read out the random code vector that has a transmitted
code number. After a transmitted random code gain gc and filter coefficient are set in an amplifier 23 and a synthesis filter 24, a random code vector is read out to restore a synthesized speech.
In the above-described speech coder/speech decoder, the greater the number of random code vectors stored as excitation information in the random codebook 15 is, the more possible it is to search a
random code vector close to the excitation vector of an actual speech. As the capacity of the random codebook (ROM) is limited, however, it is not possible to store countless random code vectors
corresponding to all the excitation vectors in the random codebook. This restricts improvement on the quality of speeches.
An algebraic excitation has also been proposed which can significantly reduce the computational complexity of computing coding distortion in a distortion calculator and can eliminate a random codebook (ROM)
(described in "8 KBIT/S ACELP CODING OF SPEECH WITH 10 MS SPEECH-FRAME: A CANDIDATE FOR CCITT STANDARDIZATION": R. Salami, C. Laflamme, J-P. Adoul, ICASSP '94, pp. II-97 to II-100, 1994).
The algebraic excitation considerably reduces the complexities of computation of coding distortion by previously computing the results of convolution of the impulse response of a synthesis filter and
a time-reversed target and the autocorrelation of the synthesis filter and developing them in a memory. Further, a ROM in which random code vectors have been stored is eliminated by algebraically
generating random code vectors. A CS-ACELP and ACELP which use the algebraic excitation have been recommended respectively as G. 729 and G. 723.1 from the ITU-T.
In the CELP type speech coder/speech decoder equipped with the above-described algebraic excitation in a random codebook section, however, a target for a random codebook search is always coded with a
pulse sequence vector, which puts a limit to improvement on speech quality.
DISCLOSURE OF INVENTION [0039]
It is therefore a primary object of the present invention to provide an excitation vector generator, a speech coder and a speech decoder, which can significantly reduce the required memory capacity as compared with a case where random code vectors are stored directly in a random codebook, and can improve the speech quality.
It is a secondary object of this invention to provide an excitation vector generator, a speech coder and a speech decoder, which can generate complicated random code vectors as compared with a case
where an algebraic excitation is provided in a random codebook section and a target for a random codebook search is coded with a pulse sequence vector, and can improve the speech quality.
In this invention, the fixed code vector reading section and fixed codebook of a conventional CELP type speech coder/decoder are respectively replaced with an oscillator, which outputs different
vector sequences in accordance with the values of input seeds, and a seed storage section which stores a plurality of seeds (seeds of the oscillator). This eliminates the need for fixed code vectors
to be stored directly in a fixed codebook (ROM) and can thus reduce the memory capacity significantly.
Further, according to this invention, the random code vector reading section and random codebook of the conventional CELP type speech coder/decoder are respectively replaced with an oscillator and a
seed storage section. This eliminates the need for random code vectors to be stored directly in a random codebook (ROM) and can thus reduce the memory capacity significantly.
The invention is an excitation vector generator which is so designed as to store a plurality of fixed waveforms, arrange the individual fixed waveforms at respective start positions based on start
position candidate information and add those fixed waveforms to generate an excitation vector. This can permit an excitation vector close to an actual speech to be generated.
Further, the invention is a CELP type speech coder/decoder constructed by using the above excitation vector generator as a random codebook. A fixed waveform arranging section may algebraically
generate start position candidate information of fixed waveforms.
Furthermore, the invention is a CELP type speech coder/decoder, which stores a plurality of fixed waveforms, generates an impulse with respect to start position candidate information of each fixed
waveform, convolutes the impulse response of a synthesis filter and each fixed waveform to generate an impulse response for each fixed waveform, computes the autocorrelations and correlations of
impulse responses of the individual fixed waveforms and develop them in a correlation matrix. This can provide a speech coder/decoder which improves the quality of a synthesized speech at about the
same computation cost as needed in a case of using an algebraic excitation as a random codebook.
Moreover, this invention is a CELP type speech coder/decoder equipped with a plurality of random codebooks and switch means for selecting one of the random codebooks. At least one random codebook may
be the aforementioned excitation vector generator, or at least one random codebook may be a vector storage section having a plurality of random number sequences stored therein or a pulse sequences
storage section having a plurality of random number sequences stored therein, or at least two random codebooks each having the aforementioned excitation vector generator may be provided with the
number of fixed waveforms to be stored differing from one random codebook to another, and the switch means selects one of the random codebooks so as to minimize coding distortion at the time of
searching a random codebook or adaptively selects one random codebook according to the result of analysis of speech segments.
BRIEF DESCRIPTION OF DRAWINGS [0047]
FIG. 1 is a schematic diagram of a conventional CELP type speech coder;
FIG. 2A is a block diagram of an excitation vector generating section in the speech coder in FIG. 1;
FIG. 2B is a block diagram of a modification of the excitation vector generating section which is designed to reduce the computation cost;
FIG. 2C is a block diagram of an excitation vector generating section in a speech decoder which is used as a pair with the speech coder in FIG. 1;
FIG. 3 is a block diagram of the essential portions of a speech coder according to a first mode;
FIG. 4 is a block diagram of an excitation vector generator equipped in the speech coder of the first mode;
FIG. 5 is a block diagram of the essential portions of a speech coder according to a second mode;
FIG. 6 is a block diagram of an excitation vector generator equipped in the speech coder of the second mode;
FIG. 7 is a block diagram of the essential portions of a speech coder according to third and fourth modes;
FIG. 8 is a block diagram of an excitation vector generator equipped in the speech coder of the third mode;
FIG. 9 is a block diagram of a non-linear digital filter equipped in the speech coder of the fourth mode;
FIG. 10 is a diagram of the adder characteristic of the non-linear digital filter shown in FIG. 9;
FIG. 11 is a block diagram of the essential portions of a speech coder according to a fifth mode;
FIG. 12 is a block diagram of the essential portions of a speech coder according to a sixth mode;
FIG. 13A is a block diagram of the essential portions of a speech coder according to a seventh mode;
FIG. 13B is a block diagram of the essential portions of the speech coder according to the seventh mode;
FIG. 14 is a block diagram of the essential portions of a speech decoder according to an eighth mode;
FIG. 15 is a block diagram of the essential portions of a speech coder according to a ninth mode;
FIG. 16 is a block diagram of a quantization target LSP adding section equipped in the speech coder according to the ninth mode;
FIG. 17 is a block diagram of an LSP quantizing/decoding section equipped in the speech coder according to the ninth mode;
FIG. 18 is a block diagram of the essential portions of a speech coder according to a tenth mode;
FIG. 19A is a block diagram of the essential portions of a speech coder according to an eleventh mode;
FIG. 19B is a block diagram of the essential portions of a speech decoder according to the eleventh mode;
FIG. 20 is a block diagram of the essential portions of a speech coder according to a twelfth mode;
FIG. 21 is a block diagram of the essential portions of a speech coder according to a thirteenth mode;
FIG. 22 is a block diagram of the essential portions of a speech coder according to a fourteenth mode;
FIG. 23 is a block diagram of the essential portions of a speech coder according to a fifteenth mode;
FIG. 24 is a block diagram of the essential portions of a speech coder according to a sixteenth mode;
FIG. 25 is a block diagram of a vector quantizing section in the sixteenth mode;
FIG. 26 is a block diagram of a parameter coding section of a speech coder according to a seventeenth mode; and
FIG. 27 is a block diagram of a noise canceler according to an eighteenth mode.
BEST MODES FOR CARRYING OUT THE INVENTION [0078]
Preferred modes of the present invention will now be described specifically with reference to the accompanying drawings.
(First Mode)
FIG. 3 is a block diagram of the essential portions of a speech coder according to this mode. This speech coder comprises an excitation vector generator 30, which has a seed storage section 31 and an
oscillator 32, and an LPC synthesis filter 33.
Seeds (oscillation seeds) 34 output from the seed storage section 31 are input to the oscillator 32. The oscillator 32 outputs different vector sequences according to the values of the input seeds.
The oscillator 32 oscillates with the content according to the value of the seed (oscillation seed) 34 and outputs an excitation vector 35 as a vector sequence. The LPC synthesis filter 33 is
supplied with vocal tract information in the form of the impulse response convolution matrix of the synthesis filter, and performs convolution on the excitation vector 35 with the impulse response,
yielding a synthesized speech 36. The impulse response convolution of the excitation vector 35 is called LPC synthesis.
FIG. 4 shows the specific structure of the excitation vector generator 30. A seed to be read from the seed storage section 31 is switched by a control switch 41 for the seed storage section in
accordance with a control signal given from a distortion calculator.
Simple storing of a plurality of seeds for outputting different vector sequences from the oscillator 32 in the seed storage section 31 can allow more random code vectors to be generated with less
capacity as compared with a case where complicated random code vectors are directly stored in a random codebook.
Although this mode has been described as a speech coder, the excitation vector generator 30 can be adapted to a speech decoder. In this case, the speech decoder has a seed storage section with the
same contents as those of the seed storage section 31 of the speech coder and the control switch 41 for the seed storage section is supplied with a seed number selected at the time of coding.
(Second Mode)
FIG. 5 is a block diagram of the essential portions of a speech coder according to this mode. This speech coder comprises an excitation vector generator 50, which has a seed storage section 51 and a
non-linear oscillator 52, and an LPC synthesis filter 53.
Seeds (oscillation seeds) 54 output from the seed storage section 51 are input to the non-linear oscillator 52. An excitation vector 55 as a vector sequence output from the non-linear oscillator 52
is input to the LPC synthesis filter 53. The output of the LPC synthesis filter 53 is a synthesized speech 56.
The non-linear oscillator 52 outputs different vector sequences according to the values of the input seeds 54, and the LPC synthesis filter 53 performs LPC synthesis on the input excitation vector 55
to output the synthesized speech 56.
FIG. 6 shows the functional blocks of the excitation vector generator 50. A seed to be read from the seed storage section 51 is switched by a control switch 41 for the seed storage section in
accordance with a control signal given from a distortion calculator.
The use of the non-linear oscillator 52 as an oscillator in the excitation vector generator 50 can suppress divergence with oscillation according to the non-linear characteristic, and can provide practical
excitation vectors.
Although this mode has been described as a speech coder, the excitation vector generator 50 can be adapted to a speech decoder. In this case, the speech decoder has a seed storage section with the
same contents as those of the seed storage section 51 of the speech coder and the control switch 41 for the seed storage section is supplied with a seed number selected at the time of coding.
(Third Mode)
FIG. 7 is a block diagram of the essential portions of a speech coder according to this mode. This speech coder comprises an excitation vector generator 70, which has a seed storage section 71 and a
non-linear digital filter 72, and an LPC synthesis filter 73. In the diagram, numeral "74" denotes a seed (oscillation seed) which is output from the seed storage section 71 and input to the
non-linear digital filter 72, numeral "75" is an excitation vector as a vector sequence output from the non-linear digital filter 72, and numeral "76" is a synthesized speech output from the LPC
synthesis filter 73.
The excitation vector generator 70 has a control switch 41 for the seed storage section which switches a seed to be read from the seed storage section 71 in accordance with a control signal given
from a distortion calculator, as shown in FIG. 8.
The non-linear digital filter 72 outputs different vector sequences according to the values of the input seeds, and the LPC synthesis filter 73 performs LPC synthesis on the input excitation vector
75 to output the synthesized speech 76.
The use of the non-linear digital filter 72 as an oscillator in the excitation vector generator 70 can suppress divergence through oscillation according to the non-linear characteristic, and can provide practical
excitation vectors. Although this mode has been described as a speech coder, the excitation vector generator 70 can be adapted to a speech decoder. In this case, the speech decoder has a seed storage
section with the same contents as those of the seed storage section 71 of the speech coder and the control switch 41 for the seed storage section is supplied with a seed number selected at the time
of coding.
(Fourth Mode)
A speech coder according to this mode comprises an excitation vector generator 70, which has a seed storage section 71 and a non-linear digital filter 72, and an LPC synthesis filter 73, as shown in
FIG. 7.
Particularly, the non-linear digital filter 72 has a structure as depicted in FIG. 9. This non-linear digital filter 72 includes an adder 91 having a non-linear adder characteristic as shown in FIG.
10, filter state holding sections 92 to 93 capable of retaining the states (the values of y(k-1) to y(k-N)) of the digital filter, and multipliers 94 to 95, which are connected in parallel to the
outputs of the respective filter state holding sections 92-93, multiply filter-states by gains and output the results to the adder 91. The initial values of the filter states are set in the filter
state holding sections 92-93 by seeds read from the seed storage section 71. The values of the gains of the multipliers 94-95 are fixed so that the poles of the digital filter lie outside the unit circle on the Z plane.
FIG. 10 is a conceptual diagram of the non-linear adder characteristic of the adder 91 equipped in the non-linear digital filter 72, and shows the input/output relation of the adder 91 which has a
2's complement characteristic. The adder 91 first acquires the sum of adder inputs or the sum of the input values to the adder 91, and then uses the non-linear characteristic illustrated in FIG. 10
to compute an adder output corresponding to the input sum.
In particular, the non-linear digital filter 72 is a second-order all-pole model so that the two filter state holding sections 92 and 93 are connected in series, and the multipliers 94 and 95 are
connected to the outputs of the filter state holding sections 92 and 93. Further, the digital filter in which the non-linear adder characteristic of the adder 91 is a 2's complement characteristic is
used. Furthermore, the seed storage section 71 retains seed vectors of 32 words as particularly described in Table 1.
TABLE 1  Seed vectors for generating random code vectors

   i   Sy(n-1)[i]   Sy(n-2)[i]
   1    0.250000     0.250000
   2   -0.564643    -0.104927
   3    0.173879    -0.978792
   4    0.632652     0.951133
   5    0.920360    -0.113881
   6    0.864873    -0.860368
   7    0.732227     0.497037
   8    0.917543    -0.035103
   9    0.109521    -0.761210
  10   -0.202115     0.198718
  11   -0.095041     0.863849
  12   -0.634213     0.424549
  13    0.948225    -0.184861
  14   -0.958269     0.969458
  15    0.233709    -0.057248
  16   -0.852085    -0.564948
In the thus constituted speech coder, seed vectors read from the seed storage section 71 are given as initial values to the filter state holding sections 92 and 93 of the non-linear digital filter
72. Every time zero is input to the adder 91 from an input vector (zero sequences), the non-linear digital filter 72 outputs one sample (y(k)) at a time which is sequentially transferred as a filter
state to the filter state holding sections 92 and 93. At this time, the multipliers 94 and 95 multiply the filter states output from the filter state holding sections 92 and 93 by gains a1 and a2
respectively. The adder 91 adds the outputs of the multipliers 94 and 95 to acquire the sum of the adder inputs, and generates an adder output which is suppressed between +1 to -1 based on the
characteristic in FIG. 10. This adder output (y(k+1)) is output as an excitation vector sample and is sequentially transferred to the filter state holding sections 92 and 93 to produce the next sample (y(k+2)).
Since the gains 1 to N of the multipliers 94-95 are fixed in this mode so that the poles of the non-linear digital filter lie outside the unit circle on the Z plane, and the adder 91 is given a non-linear adder characteristic, the divergence of the output can be suppressed even when the input to the non-linear digital filter 72 becomes large, and excitation vectors good for practical use can be kept generated. Further, the randomness of excitation vectors to be generated can be secured.
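As an illustration of this mode, the following C sketch realizes a second-order all-pole non-linear digital filter whose adder folds its sum back into [-1, 1); the wraparound rule and the gain values are assumptions made for the sketch (the gains are merely chosen so that the linear poles lie outside the unit circle), not values taken from this mode:

#include <stdio.h>

#define NS 52

/* wraparound ("2's-complement-like") adder characteristic: the sum is
   folded back into [-1, 1) instead of saturating, as suggested by FIG. 10 */
static double wrap_add(double x)
{
    while (x >= 1.0) x -= 2.0;
    while (x < -1.0) x += 2.0;
    return x;
}

/* second-order all-pole non-linear digital filter driven by a zero input;
   (y1, y2) are the seed initial states (e.g. a row of Table 1) and
   (a1, a2) are gains chosen so the linear poles lie outside the unit circle */
static void gen_excitation(double y1, double y2, double a1, double a2,
                           double exc[NS])
{
    for (int k = 0; k < NS; k++) {
        double y = wrap_add(a1 * y1 + a2 * y2);  /* zero input sample */
        exc[k] = y;
        y2 = y1;                                  /* shift filter states */
        y1 = y;
    }
}

int main(void)
{
    double exc[NS];
    gen_excitation(0.25, 0.25, 1.6, -1.2, exc);   /* illustrative seed and gains */
    for (int k = 0; k < NS; k++)
        printf("%f\n", exc[k]);
    return 0;
}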
Although this mode has been described as a speech coder, the excitation vector generator 70 can be adapted to a speech decoder. In this case, the speech decoder has a seed storage section with the
same contents as those of the seed storage section 71 of the speech coder and the control switch 41 for the seed storage section is supplied with a seed number selected at the time of coding.
(Fifth Mode)
FIG. 11 is a block diagram of the essential portions of a speech coder according to this mode. This speech coder comprises an excitation vector generator 110, which has an excitation vector storage
section 111 and an added-excitation-vector generator 112, and an LPC synthesis filter 113.
The excitation vector storage section 111 retains old excitation vectors which are read by a control switch upon reception of a control signal from an unillustrated distortion calculator.
The added-excitation-vector generator 112 performs a predetermined process, indicated by an added-excitation-vector number, on an old excitation vector read from the excitation vector storage section
111 to produce a new excitation vector. The added-excitation-vector generator 112 has a function of switching the process content for an old excitation vector in accordance with the
added-excitation-vector number.
According to the thus constituted speech coder, an added-excitation-vector number is given from the distortion calculator which is executing, for example, an excitation vector search. The
added-excitation-vector generator 112 executes different processes on old excitation vectors depending on the value of the input added-excitation-vector number to generate different added excitation
vectors, and the LPC synthesis filter 113 performs LPC synthesis on the input excitation vector to output a synthesized speech.
According to this mode, random excitation vectors can be generated simply by storing fewer old excitation vectors in the excitation vector storage section 111 and switching the process contents by
means of the added-excitation-vector generator 112, and it is unnecessary to store random code vectors directly in a random codebook (ROM). This can significantly reduce the memory capacity.
Although this mode has been described as a speech coder, the excitation vector generator 110 can be adapted to a speech decoder. In this case, the speech decoder has an excitation vector storage
section with the same contents as those of the excitation vector storage section 111 of the speech coder and an added-excitation-vector number selected at the time of coding is given to the
added-excitation-vector generator 112.
(Sixth Mode)
FIG. 12 shows the functional blocks of an excitation vector generator according to this mode. This excitation vector generator comprises an added-excitation-vector generator 120 and an excitation
vector storage section 121 where a plurality of element vectors 1 to N are stored.
The added-excitation-vector generator 120 includes a reading section 122 which performs a process of reading a plurality of element vectors of different lengths from different positions in the
excitation vector storage section 121, a reversing section 123 which performs a process of sorting the read element vectors in the reverse order, a multiplying section 124 which performs a process of
multiplying a plurality of vectors after the reverse process by different gains respectively, a decimating section 125 which performs a process of shortening the vector lengths of a plurality of
vectors after the multiplication, an interpolating section 126 which performs a process of lengthening the vector lengths of the thinned vectors, an adding section 127 which performs a process of
adding the interpolated vectors, and a process determining/instructing section 128 which has a function of determining a specific processing scheme according to the value of the input
added-excitation-vector number and instructing the individual sections and a function of holding a conversion map (Table 2) between numbers and processes which is referred to at the time of
determining the specific process contents.
TABLE 2  Conversion map between numbers and processes
(7-bit added-excitation-vector number; bit 6 = MSB, bit 0 = LSB)

  Bit position                      6   5   4   3   2   1   0
  V1 reading position (16 kinds)    -   -   -   3   2   1   0
  V2 reading position (32 kinds)    2   1   0   -   -   4   3
  V3 reading position (32 kinds)    4   3   2   1   0   -   -
  Reverse process (2 kinds)         -   -   -   -   -   -   0
  Multiplication (4 kinds)          1   0   -   -   -   -   -
  Decimating process (4 kinds)      -   -   1   0   -   -   -
  Interpolation (2 kinds)           -   -   -   -   0   -   -
The added-excitation-vector generator 120 will now be described more specifically. The added-excitation-vector generator 120 determines specific processing schemes for the reading section 122, the
reversing section 123, the multiplying section 124, the decimating section 125, the interpolating section 126 and the adding section 127 by comparing the input added-excitation-vector number (which
is a sequence of 7 bits taking any integer value from 0 to 127) with the conversion map between numbers and processes (Table 2), and reports the specific processing schemes to the respective
The reading section 122 first extracts an element vector 1 (V1) of a length of 100 from one end of the excitation vector storage section 121 to the position of n1, paying attention to a sequence of the lower four bits of the input added-excitation-vector number (n1: an integer value from 0 to 15). Then, the reading section 122 extracts an element vector 2 (V2) of a length of 78 from the end of the excitation vector storage section 121 to the position of n2+14 (an integer value from 14 to 45), paying attention to a sequence of five bits (n2: an integer value from 0 to 31) having the lower two bits and the upper three bits of the input added-excitation-vector number linked together. Further, the reading section 122 performs a process of extracting an element vector 3 (V3) of a length of Ns (=52) from one end of the excitation vector storage section 121 to the position of n3+46 (an integer value from 46 to 77), paying attention to a sequence of the upper five bits of the input added-excitation-vector number (n3: an integer value from 0 to 31), and sends V1, V2 and V3 to the reversing section 123.
The reversing section 123 performs a process of sending a vector having V1, V2 and V3 rearranged in the reverse order to the multiplying section 124 as new V1, V2 and V3 when the least significant
bit of the added-excitation-vector number is "0" and sending V1, V2 and V3 as they are to the multiplying section 124 when the least significant bit is "1."
Paying attention to a sequence of two bits having the upper seventh and sixth bits of the added-excitation-vector number linked, the multiplying section 124 multiplies the amplitude of V2 by -2 when
the bit sequence is "00," multiplies the amplitude of V3 by -2 when the bit sequence is "01," multiplies the amplitude of V1 by -2 when the bit sequence is "10" or multiplies the amplitude of V2 by 2
when the bit sequence is "11," and sends the results as new V1, V2 and V3 to the decimating section 125.
Paying attention to a sequence of two bits having the upper fourth and third bits of the added-excitation-vector number linked, the decimating section 125
(a) sends vectors of 26 samples extracted every other sample from V1, V2 and V3 as new V1, V2 and V3 to the interpolating section 126 when the bit sequence is "00," (b) sends vectors of 26 samples
extracted every other sample from V1 and V3 and every third sample from V2 as new V1, V3 and V2 to the interpolating section 126 when the bit sequence is "01," (c) sends vectors of 26 samples
extracted every fourth sample from V1 and every other sample from V2 and V3 as new V1, V2 and V3 to the interpolating section 126 when the bit sequence is "10," and (d) sends vectors of 26 samples
extracted every fourth sample from V1, every third sample from V2 and every other sample from V3 as new V1, V2 and V3 to the interpolating section 126 when the bit sequence is "11."
Paying attention to the upper third bit of the added-excitation-vector number, the interpolating section 126
(a) sends vectors which have V1, V2 and V3 respectively substituted in even samples of zero vectors of a length Ns (=52) as new V1, V2 and V3 to the adding section 127 when the value of the third bit
is "0" and (b) sends vectors which have V1, V2 and V3 respectively substituted in odd samples of zero vectors of a length Ns (=52) as new V1, V2 and V3 to the adding section 127 when the value of the
third bit is "1."
The adding section 127 adds the three vectors (V1, V2 and V3) produced by the interpolating section 126 to generate an added excitation vector.
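The pipeline of this mode can be sketched in C as follows; the exact bit packing of the fields, the field order and every helper name are one plausible reading of Table 2 and the text above, i.e. assumptions of the sketch rather than a verified layout:

#include <string.h>

#define NS 52

struct ops { int n1, n2, n3, rev, mul, dec, itp; };

/* split the added-excitation-vector number num (0..127) into fields */
static struct ops decode_number(int num)
{
    struct ops o;
    o.n1  = num & 0x0F;                        /* bits 3..0: V1 position */
    o.n2  = ((num & 0x03) << 3) | (num >> 4);  /* bits 1,0 + 6,5,4: V2   */
    o.n3  = (num >> 2) & 0x1F;                 /* bits 6..2: V3 position */
    o.rev = num & 0x01;                        /* bit 0: reverse or not  */
    o.mul = (num >> 5) & 0x03;                 /* bits 6,5: gain pattern */
    o.dec = (num >> 3) & 0x03;                 /* bits 4,3: decimation   */
    o.itp = (num >> 2) & 0x01;                 /* bit 2: even/odd interp */
    return o;
}

/* reverse an element vector in place */
static void reverse(double *v, int len)
{
    for (int i = 0, j = len - 1; i < j; i++, j--) {
        double t = v[i]; v[i] = v[j]; v[j] = t;
    }
}

/* take every `step`-th sample of v (26 picks) and spread them over the
   even (odd = 0) or odd (odd = 1) samples of a zero vector of length NS */
static void decimate_interpolate(const double *v, int len, int step, int odd,
                                 double out[NS])
{
    memset(out, 0, sizeof(double) * NS);
    for (int i = 0; i < 26; i++)
        if (i * step < len)
            out[2 * i + odd] = v[i * step];
}

/* add the three processed element vectors into one added excitation vector */
static void add3(const double a[NS], const double b[NS], const double c[NS],
                 double out[NS])
{
    for (int k = 0; k < NS; k++)
        out[k] = a[k] + b[k] + c[k];
}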
According to this mode, as apparent from the above, a plurality of processes are combined at random in accordance with the added-excitation-vector number to produce random excitation vectors, so that
it is unnecessary to store random code vectors as they are in a random codebook (ROM), ensuring a significant reduction in memory capacity.
Note that the use of the excitation vector generator of this mode in the speech coder of the fifth mode can allow complicated and random excitation vectors to be generated without using a
large-capacity random codebook.
(Seventh Mode)
A description will now be given of a seventh mode in which the excitation vector generator of any one of the above-described first to sixth modes is used in a CELP type speech coder that is based on
the PSI-CELP, the standard speech coding/decoding system for PDC digital portable telephones in Japan.
FIG. 13A presents a block diagram of a speech coder according to the seventh mode. In this speech coder, digital input speech data 1300 is supplied to a buffer 1301 frame by frame (frame length Nf
=104). At this time, old data in the buffer 1301 is updated with new data supplied. A frame power quantizing/decoding section 1302 first reads a processing frame s(i) (0≦i≦Nf-1) of a length Nf (=104)
from the buffer 1301 and acquires mean power amp of samples in that processing frame from an equation 5.
amp = \sqrt{ \frac{ \sum_{i=0}^{Nf-1} s^2(i) }{ Nf } }   (5)

where
amp: mean power of samples in a processing frame
i: element number (0≦i≦Nf-1) in the processing frame
s(i): samples in the processing frame
Nf: processing frame length (=104).
The acquired mean power amp of samples in the processing frame is converted to a logarithmically converted value amplog from an equation 6.
amplog = \frac{ \log_{10}(255 \times amp + 1) }{ \log_{10}(255 + 1) }   (6)
where
amplog: logarithmically converted value of the mean power of samples in the processing frame
amp: mean power of samples in the processing frame.
The acquired amplog is subjected to scalar quantization using a scalar-quantization table Cpow of 16 words as shown in Table 3 stored in a power quantization table storage section 1303 to acquire an index of power Ipow of four bits, decoded frame power spow is obtained from the acquired index of power Ipow, and the index of power Ipow and the decoded frame power spow are supplied to a parameter coding section 1331. The power quantization table storage section 1303 holds the power scalar-quantization table (Table 3) of 16 words, which is referred to when the frame power quantizing/decoding section 1302 carries out scalar quantization of the logarithmically converted value of the mean power of the samples in the processing frame.
TABLE 3  Power scalar-quantization table

   i   Cpow(i)        i   Cpow(i)
   1   0.00675        9   0.39247
   2   0.06217       10   0.42920
   3   0.10877       11   0.46252
   4   0.16637       12   0.49503
   5   0.21876       13   0.52784
   6   0.26123       14   0.56484
   7   0.30799       15   0.61125
   8   0.35228       16   0.67498
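A minimal C sketch of this quantization path, assuming the reconstructed forms of the equations 5 and 6 and nearest-neighbour matching in the companded domain (whether spow is decoded in the companded or the linear domain is an assumption here):

#include <math.h>

#define NF 104                /* processing frame length */

/* mean amplitude of the processing frame (equation 5) */
static double frame_amp(const double s[NF])
{
    double p = 0.0;
    for (int i = 0; i < NF; i++)
        p += s[i] * s[i];
    return sqrt(p / NF);
}

/* logarithmic companding (equation 6) and nearest-neighbour scalar
   quantization against the 16-word table Cpow[]; returns the 4-bit
   index of power Ipow and writes the decoded value through spow */
static int quantize_power(double amp, const double Cpow[16], double *spow)
{
    double amplog = log10(255.0 * amp + 1.0) / log10(256.0);
    int best = 0;
    for (int i = 1; i < 16; i++)
        if (fabs(Cpow[i] - amplog) < fabs(Cpow[best] - amplog))
            best = i;
    *spow = Cpow[best];
    return best;
}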
An LPC analyzing section 1304 first reads analysis segment data of an analysis segment length Nw (=256) from the buffer 1301, multiplies the read analysis segment data by a Hamming window of a window length Nw (=256) to yield Hamming-windowed analysis data, and acquires the autocorrelation function of the obtained Hamming-windowed analysis data up to a prediction order Np (=10). The obtained autocorrelation function is multiplied by a lag window table (Table 4) of 10 words stored in a lag window storage section 1305 to acquire a lag-windowed autocorrelation function; the LPC analyzing section performs linear predictive analysis on the obtained lag-windowed autocorrelation function to compute an LPC parameter α(i) (1≦i≦Np) and outputs the parameter to a pitch pre-selector 1308.
TABLE 4  Lag window table

   i   Wlag(i)
   0   0.9994438
   1   0.9977772
   2   0.9950056
   3   0.9911382
   4   0.9861880
   5   0.9801714
   6   0.9731081
   7   0.9650213
   8   0.9559375
   9   0.9458861
Next, the obtained LPC parameter α(i) is converted to an LSP (Line Spectrum Pair) ω(i) (1≦i≦Np) which is in turn output to an LSP quantizing/decoding section 1306. The lag window storage section 1305 holds the lag window table to which the LPC analyzing section 1304 refers.
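The front end of the LPC analysis can be sketched in C as below; the Hamming window formula is the common textbook one, and the Levinson-Durbin recursion that converts r[] into the LPC parameters is omitted, so this is a sketch under those assumptions:

#include <math.h>

#define NW 256                /* analysis segment length */
#define NP 10                 /* prediction order */

static void lag_windowed_autocorr(const double x[NW],
                                  const double Wlag[NP],
                                  double r[NP + 1])
{
    double w[NW];
    const double pi = 3.14159265358979323846;

    for (int n = 0; n < NW; n++)           /* Hamming window */
        w[n] = x[n] * (0.54 - 0.46 * cos(2.0 * pi * n / (NW - 1)));

    for (int k = 0; k <= NP; k++) {        /* autocorrelation */
        r[k] = 0.0;
        for (int n = k; n < NW; n++)
            r[k] += w[n] * w[n - k];
    }

    for (int k = 1; k <= NP; k++)          /* lag windowing (Table 4) */
        r[k] *= Wlag[k - 1];
}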
The LSP quantizing/decoding section 1306 first refers to a vector quantization table of an LSP stored in an LSP quantization table storage section 1307 to perform vector quantization on the LSP received from the LPC analyzing section 1304, thereby selecting an optimal index, and sends the selected index as an LSP code Ilsp to the parameter coding section 1331. Then, a centroid corresponding to the LSP code is read as a decoded LSP ωq(i) (1≦i≦Np) from the LSP quantization table storage section 1307, and the read decoded LSP is sent to an LSP interpolation section 1311. Further, the decoded LSP is converted to an LPC to acquire a decoded LPC αq(i) (1≦i≦Np), which is in turn sent to a spectral weighting filter coefficients calculator 1312 and a perceptual weighted LPC synthesis filter coefficients calculator 1314. The LSP quantization table storage section 1307 holds an LSP vector quantization table to which the LSP quantizing/decoding section 1306 refers when performing vector quantization on an LSP.
The pitch pre-selector 1308 first subjects the processing frame data s(i) (0≦i≦Nf-1) read from the buffer 1301 to inverse filtering using the LPC α(i) (1≦i≦Np) received from the LPC analyzing section 1304 to obtain a linear predictive residual signal res(i) (0≦i≦Nf-1), computes the power of the obtained linear predictive residual signal res(i), acquires a normalized predictive residual power resid resulting from normalization of the power of the computed residual signal with the power of speech samples of a processing subframe, and sends the normalized predictive residual power to the parameter coding section 1331. Next, the linear predictive residual signal res(i) is multiplied by a Hamming window of a length Nw (=256) to produce a Hamming-windowed linear predictive residual
signal resw(i) (0≦i≦Nw-1), and an autocorrelation function φint(i) of the produced resw(i) is obtained over a range of Lmin-2≦i≦Lmax+2 (where Lmin (=16) is the shortest analysis segment of a long predictive coefficient and Lmax (=128) is the longest analysis segment). The polyphase filter coefficients Cppf (Table 5) of 28 words stored in a polyphase coefficients
storage section 1309 are convoluted on the obtained autocorrelation function φint(i) to acquire an autocorrelation function φdq(i) at a fractional position shifted by -1/4 from an integer lag int, an
autocorrelation function φaq(i) at a fractional position shifted by +1/4 from the integer lag int, and an autocorrelation function φah(i) at a fractional position shifted by +1/2 from the integer lag
TABLE 5  Polyphase filter coefficients Cppf

   i   Cppf(i)        i   Cppf(i)        i   Cppf(i)        i   Cppf(i)
   0   0.100035       7   0.000000      14  -0.128617      21  -0.212207
   1  -0.180063       8   0.000000      15   0.300105      22   0.636620
   2   0.900316       9   1.000000      16   0.900316      23   0.636620
   3   0.300105      10   0.000000      17  -0.180063      24  -0.212207
   4  -0.128617      11   0.000000      18   0.100035      25   0.127324
   5   0.081847      12   0.000000      19  -0.069255      26  -0.090946
   6  -0.060021      13   0.000000      20   0.052960      27   0.070736
Further, for each argument i in a range of Lmin≦i≦Lmax, a process of an equation 7 of substituting the largest one of φint(i), φdq(i), φaq(i) and φah(i) in φmax(i) is performed to acquire (Lmax-Lmin+1) pieces of φmax(i).

φmax(i) = max{φint(i), φdq(i), φaq(i), φah(i)}   (7)
where
φmax(i): the maximum value among φint(i), φdq(i), φaq(i), φah(i)
i: analysis segment of a long predictive coefficient (Lmin≦i≦Lmax)
Lmin: shortest analysis segment (=16) of the long predictive coefficient
Lmax: longest analysis segment (=128) of the long predictive coefficient
φint(i): autocorrelation function of an integer lag (int) of the predictive residual signal
φdq(i): autocorrelation function of a fractional lag (int-1/4) of the predictive residual signal
φaq(i): autocorrelation function of a fractional lag (int+1/4) of the predictive residual signal
φah(i): autocorrelation function of a fractional lag (int+1/2) of the predictive residual signal.
The top six values among the acquired (Lmax-Lmin+1) pieces of φmax(i) are selected in descending order and saved as pitch candidates psel(i) (0≦i≦5); the linear predictive residual signal res(i) and the first pitch candidate psel(0) are sent to a pitch weighting filter calculator 1310, and psel(i) (0≦i≦5) is sent to an adaptive code vector generator 1319.
The polyphase coefficients storage section 1309 holds the polyphase filter coefficients that are referred to when the pitch pre-selector 1308 acquires the autocorrelation of the linear predictive residual signal to a fractional lag precision and when the adaptive code vector generator 1319 produces adaptive code vectors to a fractional precision.
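The top-six selection that yields psel(i) can be sketched in C as follows, assuming φmax(i) of the equation 7 has already been computed for every lag; the insertion scheme is illustrative:

#define LMIN 16
#define LMAX 128

static void preselect_pitch(const double phi_max[LMAX + 1], int psel[6])
{
    double best[6];
    for (int j = 0; j < 6; j++) { best[j] = -1.0e30; psel[j] = LMIN; }

    for (int lag = LMIN; lag <= LMAX; lag++) {
        double v = phi_max[lag];
        for (int j = 0; j < 6; j++) {
            if (v > best[j]) {             /* insert; shift smaller entries down */
                for (int m = 5; m > j; m--) {
                    best[m] = best[m - 1];
                    psel[m] = psel[m - 1];
                }
                best[j] = v;
                psel[j] = lag;
                break;
            }
        }
    }
}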
The pitch weighting filter calculator 1310 acquires pitch predictive coefficients cov(i) (0≦i≦2) of a third order from the linear predictive residuals res(i) and the first pitch candidate psel(0)
obtained by the pitch pre-selector 1308. The impulse response of a pitch weighting filter Q(z) is obtained from an equation 8 which uses the acquired pitch predictive coefficients cov(i) (0≦i≦2), and
is sent to the spectral weighting filter coefficients calculator 1312 and a perceptual weighting filter coefficients calculator 1313.
Q(z) = 1 + \sum_{i=0}^{2} cov(i) \times λpi \times z^{-(psel(0)+i-1)}   (8)

where
Q(z): transfer function of the pitch weighting filter
cov(i): pitch predictive coefficients (0≦i≦2)
λpi: pitch weighting constant (=0.4)
psel(0): first pitch candidate.
The LSP interpolation section 1311 first acquires a decoded interpolated LSP ωintp(n,i) (1≦i≦Np) subframe by subframe from an equation 9 which uses a decoded LSP ωq(i) for the current processing frame, obtained by the LSP quantizing/decoding section 1306, and a decoded LSP ωqp(i) for a previous processing frame which has been acquired and saved earlier.
ωintp(n,i) = \begin{cases} 0.4 \times ωq(i) + 0.6 \times ωqp(i) & (n = 1) \\ ωq(i) & (n = 2) \end{cases}   (9)

where
ωintp(n,i): interpolated LSP of the n-th subframe
n: subframe number (=1,2)
ωq(i): decoded LSP of a processing frame
ωqp(i): decoded LSP of a previous processing frame.
A decoded interpolated LPC αq(n,i) (1≦i≦Np) is obtained by converting the acquired ωintp(n,i) to an LPC, and the acquired decoded interpolated LPC αq(n,i) (1≦i≦Np) is sent to the spectral weighting filter coefficients calculator 1312 and the perceptual weighted LPC synthesis filter coefficients calculator 1314.
The spectral weighting filter coefficients calculator 1312, which constitutes an MA type spectral weighting filter I(z) in an equation 10, sends its impulse response to the perceptual weighting
filter coefficients calculator 1313.
I(z) = \sum_{i=1}^{Nfir} αfir(i) \times z^{-i}   (10)

where
I(z): transfer function of the MA type spectral weighting filter
Nfir: filter order (=11) of I(z)
αfir(i): filter coefficients (1≦i≦Nfir) of I(z).
Note that the impulse response αfir(i) (1≦i≦Nfir) in the equation 10 is the impulse response of an ARMA type spectral weighting filter G(z), given by an equation 11, truncated after Nfir (=11) terms.
G(z) = \frac{ 1 + \sum_{i=1}^{Np} α(n,i) \times λma^{i} \times z^{-i} }{ 1 + \sum_{i=1}^{Np} α(n,i) \times λar^{i} \times z^{-i} }   (11)

where
G(z): transfer function of the spectral weighting filter
n: subframe number (=1,2)
Np: LPC analysis order (=10)
α(n,i): decoded interpolated LPC of the n-th subframe
λma: numerator constant (=0.9) of G(z)
λar: denominator constant (=0.4) of G(z).
The perceptual weighting filter coefficients calculator 1313 first constitutes a perceptual weighting filter W(z) which has as an impulse response the result of convolution of the impulse response of
the spectral weighting filter I(z) received from the spectral weighting filter coefficients calculator 1312 and the impulse response of the pitch weighting filter Q(z) received from the pitch
weighting filter calculator 1310, and sends the impulse response of the constituted perceptual weighting filter W(z) to the perceptual weighted LPC synthesis filter coefficients calculator 1314 and a
perceptual weighting section 1315.
The perceptual weighted LPC synthesis filter coefficients calculator 1314 constitutes a perceptual weighted LPC synthesis filter H(z) from an equation 12 based on the decoded interpolated LPC αq(n,i)
received from the LSP interpolation section 1311 and the perceptual weighting filter W(z) received from the perceptual weighting filter coefficients calculator 1313.
H(z) = \frac{1}{1 + \sum_{i=1}^{Np} αq(n,i) \times z^{-i}} \times W(z)   (12)

where
H(z): transfer function of the perceptual weighted synthesis filter
Np: LPC analysis order
αq(n,i): decoded interpolated LPC of the n-th subframe
n: subframe number (=1,2)
W(z): transfer function of the perceptual weighting filter (I(z) and Q(z) cascade-connected).
The coefficient of the constituted perceptual weighted LPC synthesis filter H(z) is sent to a target vector generator A 1316, a perceptual weighted LPC reverse synthesis filter A 1317, a perceptual
weighted LPC synthesis filter A 1321, a perceptual weighted LPC reverse synthesis filter B 1326 and a perceptual weighted LPC synthesis filter B 1329.
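Since W(z) is I(z) and Q(z) cascade-connected, its truncated impulse response can be formed by convolving the two individual impulse responses, as in the following C sketch (the lengths na and nb are illustrative):

static void convolve(const double *a, int na,
                     const double *b, int nb, double *c)
{
    for (int n = 0; n < na + nb - 1; n++)
        c[n] = 0.0;
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            c[i + j] += a[i] * b[j];       /* c = a * b */
}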
The perceptual weighting section 1315 inputs a subframe signal read from the buffer 1301 to the perceptual weighted LPC synthesis filter H(z) in a zero state, and sends its outputs as perceptual
weighted residuals spw(i) (0≦i≦Ns-1) to the target vector generator A 1316.
The target vector generator A 1316 subtracts a zero input response Zres(i) (0≦i≦Ns-1), which is an output when a zero sequence is input to the perceptual weighted LPC synthesis filter H(z) obtained
by the perceptual weighted LPC synthesis filter coefficients calculator 1314, from the perceptual weighted residuals spw(i) (0≦i≦Ns-1) obtained by the perceptual weighting section 1315, and sends the
subtraction result to the perceptual weighted LPC reverse synthesis filter A 1317 and a target vector generator B 1325 as a target vector r(i) (0≦i≦Ns-1) for selecting an excitation vector.
The perceptual weighted LPC reverse synthesis filter A 1317 sorts the target vectors r(i) (0≦i≦Ns-1) received from the target vector generator A 1316 in a time reverse order, inputs the resulting vectors to the perceptual weighted LPC synthesis filter H(z) with the initial state of zero, and sorts its outputs again in a time reverse order to obtain a time reverse synthesis rh(k) (0≦k≦Ns-1) of the target vector, which is sent to a comparator A 1322.
Stored in an adaptive codebook 1318 are old excitation vectors which are referred to when the adaptive code vector generator 1319 generates adaptive code vectors. The adaptive code vector generator 1319 generates Nac pieces of adaptive code vectors Pacb(i,k) (0≦i≦Nac-1, 0≦k≦Ns-1, 6≦Nac≦24) based on the six pitch candidates psel(j) (0≦j≦5) received from the pitch pre-selector 1308, and sends the vectors to an adaptive/fixed selector 1320. Specifically, as shown in Table 6, adaptive code vectors are generated for four kinds of fractional lag positions per single integer lag position when 16≦psel(j)≦44, for two kinds of fractional lag positions per single integer lag position when 45≦psel(j)≦64, and for integer lag positions only when 65≦psel(j)≦128. Accordingly, depending on the values of psel(j) (0≦j≦5), the number of adaptive code vector candidates Nac is 6 at a minimum and 24 at a maximum.
TABLE 6  Total number of adaptive code vectors and fixed code vectors

  Total number of vectors                255
  Number of adaptive code vectors        222
    16 ≦ psel(i) ≦ 44    116  (29 × four kinds of fractional lags)
    45 ≦ psel(i) ≦ 64     42  (21 × two kinds of fractional lags)
    65 ≦ psel(i) ≦ 128    64  (64 × one kind of fractional lag)
  Number of fixed code vectors            32  (16 × two kinds of codes)
Adaptive code vectors to a fractional precision are generated through an interpolation which convolutes the coefficients of the polyphase filter stored in the polyphase coefficients storage section 1309. Interpolation corresponding to the value of lagf(i) means interpolation corresponding to an integer lag position when lagf(i)=0, interpolation corresponding to a fractional lag position shifted by -1
/2 from an integer lag position when lagf(i)=1, interpolation corresponding to a fractional lag position shifted by +1/4 from an integer lag position when lagf(i)=2, and interpolation corresponding
to a fractional lag position shifted by -1/4 from an integer lag position when lagf(i)=3.
The adaptive/fixed selector 1320 first receives adaptive code vectors of the Nac (6 to 24) candidates generated by the adaptive code vector generator 1319 and sends the vectors to the perceptual
weighted LPC synthesis filter A 1321 and the comparator A 1322.
To pre-select the adaptive code vectors Pacb(i,k) (0≦i≦Nac-1, 0≦k≦Ns-1, 6≦Nac≦24) generated by the adaptive code vector generator 1319 to Nacb (=4) candidates from Nac (6 to 24) candidates, the
comparator A 1322 first acquires the inner products prac(i) of the time reverse synthesized vectors rh(k) (0≦k≦Ns-1) of the target vector, received from the perceptual weighted LPC reverse synthesis
filter A 1317, and the adaptive code vectors Pacb(i,k) from an equation 13.
prac(i) = \sum_{k=0}^{Ns-1} Pacb(i,k) \times rh(k)   (13)

where
prac(i): reference value for pre-selection of adaptive code vectors
Nac: the number of adaptive code vector candidates (=6 to 24)
i: number of an adaptive code vector (0≦i≦Nac-1)
Pacb(i,k): adaptive code vector
rh(k): time reverse synthesis of the target vector r(k).
By comparing the obtained inner products prac(i), the top Nacb (=4) indices in descending order of the values and the inner products with those indices used as arguments are selected and are respectively saved as indices of adaptive code vectors after pre-selection apsel(j) (0≦j≦Nacb-1) and reference values after pre-selection of adaptive code vectors prac(apsel(j)), and the indices of adaptive code vectors after pre-selection apsel(j) (0≦j≦Nacb-1) are output to the adaptive/fixed selector 1320.
The perceptual weighted LPC synthesis filter A 1321 performs perceptual weighted LPC synthesis on the adaptive code vectors after pre-selection Pacb(apsel(j),k), which have been generated by the adaptive code vector generator 1319 and have passed the adaptive/fixed selector 1320, to generate synthesized adaptive code vectors SYNacb(apsel(j),k) which are in turn sent to the comparator A 1322. Then, the comparator A 1322 acquires reference values for final-selection of an adaptive code vector sacbr(j) from an equation 14 for final-selection on the Nacb (=4) adaptive code vectors after pre-selection Pacb(apsel(j),k), pre-selected by the comparator A 1322 itself.
sacbr(j) = \frac{ prac^2(apsel(j)) }{ \sum_{k=0}^{Ns-1} SYNacb^2(j,k) }   (14)

where
sacbr(j): reference value for final-selection of an adaptive code vector
prac( ): reference values after pre-selection of adaptive code vectors
apsel(j): indices of adaptive code vectors after pre-selection
k: element number of a vector (0≦k≦Ns-1)
j: number of the index of a pre-selected adaptive code vector (0≦j≦Nacb-1)
Ns: subframe length (=52)
Nacb: the number of pre-selected adaptive code vectors (=4)
SYNacb(j,k): synthesized adaptive code vectors.
The index that maximizes the value of the equation 14 and the value of the equation 14 with that index used as an argument are sent to the adaptive/fixed selector 1320 respectively as an index of adaptive code vector after final-selection ASEL and a reference value after final-selection of an adaptive code vector sacbr(ASEL).
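The two-stage selection of the equations 13 and 14 can be sketched in C as below; the point is that pre-selection needs only an inner product with the time-reverse-synthesized target rh(k), so no per-candidate filtering is required at that stage:

#define NS 52

/* equation 13: inner product of a candidate with rh */
static double presel_ref(const double p[NS], const double rh[NS])
{
    double s = 0.0;
    for (int k = 0; k < NS; k++)
        s += p[k] * rh[k];
    return s;
}

/* equation 14: prac^2 / ||SYNacb||^2 for a pre-selected candidate */
static double final_ref(double prac, const double syn[NS])
{
    double e = 0.0;
    for (int k = 0; k < NS; k++)
        e += syn[k] * syn[k];
    return (prac * prac) / e;   /* larger means smaller weighted error */
}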
A fixed codebook 1323 holds Nfc (=16) candidates of vectors to be read by a fixed code vector reading section 1324. To pre-select fixed code vectors Pfcb(i,k) (0≦i≦Nfc-1, 0≦k≦Ns-1) read by the fixed
code vector reading section 1324 to Nfcb (=2) candidates from Nfc (=16) candidates, the comparator A 1322 acquires the absolute values |prfc(i)| of the inner products of the time reverse synthesized
vectors rh(k) (0≦k≦Ns-1) of the target vector, received from the perceptual weighted LPC reverse synthesis filter A 1317, and the fixed code vectors Pfcb(i,k) from an equation 15.
prfc(i) = \sum_{k=0}^{Ns-1} Pfcb(i,k) \times rh(k)   (15)

where
|prfc(i)|: reference values for pre-selection of fixed code vectors
k: element number of a vector (0≦k≦Ns-1)
i: number of a fixed code vector (0≦i≦Nfc-1)
Nfc: the number of fixed code vectors (=16)
Pfcb(i,k): fixed code vectors
rh(k): time reverse synthesized vector of the target vector r(k).
By comparing the values |prfc(i)| of the equation 15, the top Nfcb (=2) indices in descending order of the values and the absolute values of the inner products with those indices used as arguments are selected and are respectively saved as indices of fixed code vectors after pre-selection fpsel(j) (0≦j≦Nfcb-1) and reference values for fixed code vectors after pre-selection |prfc(fpsel(j))|, and the indices of fixed code vectors after pre-selection fpsel(j) (0≦j≦Nfcb-1) are output to the adaptive/fixed selector 1320.
The perceptual weighted LPC synthesis filter A 1321 performs perceptual weighted LPC synthesis on fixed code vectors after pre-selection Pfcb(fpsel(j),k) which have been read from the fixed code
vector reading section 1324 and have passed the adaptive/fixed selector 1320, to generate synthesized fixed code vectors SYNfcb(fpsel(j),k) which are in turn sent to the comparator A 1322.
The comparator A 1322 further acquires a reference value for final-selection of a fixed code vector sfcbr(j) from an equation 16 to finally select an optimal fixed code vector from the Nfcb (=2)
fixed code vectors after pre-selection Pfcb(fpsel(j),k), pre-selected by the comparator A 1322 itself.
sfcbr(j) = \frac{ prfc^2(fpsel(j)) }{ \sum_{k=0}^{Ns-1} SYNfcb^2(j,k) }   (16)

where
sfcbr(j): reference value for final-selection of a fixed code vector
|prfc( )|: reference values after pre-selection of fixed code vectors
fpsel(j): indices of fixed code vectors after pre-selection (0≦j≦Nfcb-1)
k: element number of a vector (0≦k≦Ns-1)
j: number of a pre-selected fixed code vector (0≦j≦Nfcb-1)
Ns: subframe length (=52)
Nfcb: the number of pre-selected fixed code vectors (=2)
SYNfcb(j,k): synthesized fixed code vectors.
The index that maximizes the value of the equation 16 and the value of the equation 16 with that index used as an argument are sent to the adaptive/fixed selector 1320 respectively as an index of fixed code vector after final-selection FSEL and a reference value after final-selection of a fixed code vector sfcbr(FSEL).
The adaptive/fixed selector 1320 selects either the adaptive code vector after final-selection or the fixed code vector after final-selection as an adaptive/fixed code vector AF(k) (0≦k≦Ns-1) in
accordance with the size relation and the polarity relation among prac(ASEL), sacbr(ASEL), |prfc(FSEL)| and sfcbr(FSEL) (described in an equation 17) received from the comparator A 1322.
AF(k) = \begin{cases} Pacb(ASEL,k) & sacbr(ASEL) ≧ sfcbr(FSEL),\ prac(ASEL) > 0 \\ 0 & sacbr(ASEL) ≧ sfcbr(FSEL),\ prac(ASEL) ≦ 0 \\ Pfcb(FSEL,k) & sacbr(ASEL) < sfcbr(FSEL),\ prfc(FSEL) ≧ 0 \\ -Pfcb(FSEL,k) & sacbr(ASEL) < sfcbr(FSEL),\ prfc(FSEL) < 0 \end{cases}   (17)

where
AF(k): adaptive/fixed code vector
ASEL: index of adaptive code vector after final-selection
FSEL: index of fixed code vector after final-selection
k: element number of a vector
Pacb(ASEL,k): adaptive code vector after final-selection
Pfcb(FSEL,k): fixed code vector after final-selection
sacbr(ASEL): reference value after final-selection of an adaptive code vector
sfcbr(FSEL): reference value after final-selection of a fixed code vector
prac(ASEL): reference values after pre-selection of adaptive code vectors
prfc(FSEL): reference value after pre-selection of fixed code vectors.
The selected adaptive/fixed code vector AF(k) is sent to the perceptual weighted LPC synthesis filter A 1321 and an index representing the number that has generated the selected adaptive/fixed code
vector AF(k) is sent as an adaptive/fixed index AFSEL to the parameter coding section 1331. As the total number of adaptive code vectors and fixed code vectors is designed to be 255 (see Table 6),
the adaptive/fixed index AFSEL is a code of 8 bits.
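The decision rule of the equation 17 can be sketched in C as follows (the names are illustrative):

#define NS 52

/* the branch with the larger final reference wins; an adaptive winner
   with a non-positive correlation is zeroed, and a fixed winner is
   sign-flipped when its correlation is negative */
static void select_af(const double pacb[NS], const double pfcb[NS],
                      double sacbr, double sfcbr,
                      double prac, double prfc, double AF[NS])
{
    for (int k = 0; k < NS; k++) {
        if (sacbr >= sfcbr)
            AF[k] = (prac > 0.0) ? pacb[k] : 0.0;
        else
            AF[k] = (prfc >= 0.0) ? pfcb[k] : -pfcb[k];
    }
}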
The perceptual weighted LPC synthesis filter A 1321 performs perceptual weighted LPC synthesis on the adaptive/fixed code vector AF(k), selected by the adaptive/fixed selector 1320, to generate a
synthesized adaptive/fixed code vector SYNaf(k) (0≦k≦Ns-1) and sends it to the comparator A 1322.
The comparator A 1322 first obtains the power powp of the synthesized adaptive/fixed code vector SYNaf(k) (0≦k≦Ns-1) received from the perceptual weighted LPC synthesis filter A 1321 using an
equation 18.
powp = \sum_{k=0}^{Ns-1} SYNaf^2(k)   (18)

where
powp: power of the synthesized adaptive/fixed code vector SYNaf(k)
k: element number of a vector (0≦k≦Ns-1)
Ns: subframe length (=52)
SYNaf(k): adaptive/fixed code vector.
Then, the inner product pr of the target vector received from the target vector generator A 1316 and the synthesized adaptive/fixed code vector SYNaf(k) is acquired from an equation 19.
pr = \sum_{k=0}^{Ns-1} SYNaf(k) \times r(k)   (19)

where
pr: inner product of SYNaf(k) and r(k)
Ns: subframe length (=52)
SYNaf(k): adaptive/fixed code vector
r(k): target vector
k: element number of a vector (0≦k≦Ns-1).
Further, the adaptive/fixed code vector AF(k) received from the adaptive/fixed selector 1320 is sent to an adaptive codebook updating section 1333, the power POWaf of AF(k) is computed, the synthesized adaptive/fixed code vector SYNaf(k) and POWaf are sent to the parameter coding section 1331, and powp, pr, r(k) and rh(k) are sent to a comparator B 1330.
The target vector generator B 1325 subtracts the synthesized adaptive/fixed code vector SYNaf(k), received from the comparator A 1322, from the target vector r(i) (0≦i≦Ns-1) received from the target vector generator A 1316, to generate a new target vector, and sends the new target vector to the perceptual weighted LPC reverse synthesis filter B 1326.
The perceptual weighted LPC reverse synthesis filter B 1326 sorts the new target vectors, generated by the target vector generator B 1325, in a time reverse order, inputs the sorted vectors to the perceptual weighted LPC synthesis filter in a zero state, and sorts the output vectors again in a time reverse order to generate time-reversed synthesized vectors ph(k) (0≦k≦Ns-1), which are in turn sent to the comparator B 1330.
An excitation vector generator 1337 in use is the same as, for example, the excitation vector generator 70 which has been described in the section of the third mode. The excitation vector generator
70 generates a random code vector as the first seed is read from the seed storage section 71 and input to the non-linear digital filter 72. The random code vector generated by the excitation vector
generator 70 is sent to the perceptual weighted LPC synthesis filter B 1329 and the comparator B 1330. Then, as the second seed is read from the seed storage section 71 and input to the non-linear
digital filter 72, a random code vector is generated and output to the filter B 1329 and the comparator B 1330.
To pre-select the random code vectors generated based on the first seed to Nstb (=6) candidates from Nst (=64) candidates, the comparator B 1330 acquires reference values cr(i1) (0≦i1≦Nst-1) for pre-selection of first random code vectors from an equation 20.
cr(i1) = \sum_{j=0}^{Ns-1} Pstb1(i1,j) \times rh(j) - \frac{pr}{powp} \sum_{j=0}^{Ns-1} Pstb1(i1,j) \times ph(j)   (20)

where
cr(i1): reference values for pre-selection of first random code vectors
Ns: subframe length (=52)
rh(j): time reverse synthesized vector of a target vector (r(j))
powp: power of an adaptive/fixed vector (SYNaf(k))
pr: inner product of SYNaf(k) and r(k)
Pstb1(i1,j): first random code vector
ph(j): time reverse synthesized vector of SYNaf(k)
i1: number of the first random code vector (0≦i1≦Nst-1)
j: element number of a vector.
By comparing the obtained values cr(i1), the top Nstb (=6) indices in descending order of the values and the corresponding vectors are selected and are respectively saved as indices of first random code vectors after pre-selection s1psel(j1) (0≦j1≦Nstb-1) and first random code vectors after pre-selection Pstb1(s1psel(j1),k) (0≦j1≦Nstb-1, 0≦k≦Ns-1). Then, the same process as done for the first random code vectors is performed for the second random code vectors, and indices and vectors are respectively saved as indices of second random code vectors after pre-selection s2psel(j2) (0≦j2≦Nstb-1) and second random code vectors after pre-selection Pstb2(s2psel(j2),k) (0≦j2≦Nstb-1, 0≦k≦Ns-1).
The perceptual weighted LPC synthesis filter B 1329 performs perceptual weighted LPC synthesis on the first random code vectors after pre-selection Pstb1(s1psel(j1),k) to generate synthesized first
random code vectors SYNstb1(s1psel(j1),k) which are in turn sent to the comparator B 1330. Then, perceptual weighted LPC synthesis is performed on the second random code vectors after pre-selection
Pstb2(s2psel(j2),k) to generate synthesized second random code vectors SYNstb2(s2psel(j2),k) which are in turn sent to the comparator B 1330.
To implement final-selection on the first random code vectors after pre-selection Pstb1(s1psel(j1),k) and the second random code vectors after pre-selection Pstb2(s2psel(j2),k), pre-selected by the
comparator B 1330 itself, the comparator B 1330 carries out the computation of an equation 21 on the synthesized first random code vectors SYNstb1(s1psel(j1),k) computed in the perceptual weighted
LPC synthesis filter B 1329.
SYNOstb1(s1psel(j1),k) = SYNstb1(s1psel(j1),k) - \frac{SYNaf(k)}{powp} \sum_{j=0}^{Ns-1} Pstb1(s1psel(j1),j) \times ph(j)   (21)

where
SYNOstb1(s1psel(j1),k): orthogonally synthesized first random code vector
SYNstb1(s1psel(j1),k): synthesized first random code vector
Pstb1(s1psel(j1),k): first random code vector after pre-selection
SYNaf(k): synthesized adaptive/fixed code vector
powp: power of the adaptive/fixed code vector (SYNaf(k))
Ns: subframe length (=52)
ph(k): time reverse synthesized vector of SYNaf(k)
j1: number of first random code vector after pre-selection
k: element number of a vector (0≦k≦Ns-1).
Orthogonally synthesized first random code vectors SYNOstb1(s1psel(j1),k) are obtained, and a similar computation is performed on the synthesized second random code vectors SYNstb2(s2psel(j2),k) to acquire orthogonally synthesized second random code vectors SYNOstb2(s2psel(j2),k); then a reference value after final-selection of a first random code vector scr1 and a reference value after final-selection of a second random code vector scr2 are computed in a closed loop respectively using equations 22 and 23 for all the combinations (36 combinations) of (s1psel(j1), s2psel(j2)).
scr1 = \frac{cscr1^2}{\sum_{k=0}^{Ns-1} [ SYNOstb1(s1psel(j1),k) + SYNOstb2(s2psel(j2),k) ]^2}   (22)

where
scr1: reference value after final-selection of a first random code vector
cscr1: constant previously computed from an equation 24
SYNOstb1(s1psel(j1),k): orthogonally synthesized first random code vectors
SYNOstb2(s2psel(j2),k): orthogonally synthesized second random code vectors
r(k): target vector
s1psel(j1): index of first random code vector after pre-selection
s2psel(j2): index of second random code vector after pre-selection
Ns: subframe length (=52)
k: element number of a vector.
scr2 = \frac{cscr2^2}{\sum_{k=0}^{Ns-1} [ SYNOstb1(s1psel(j1),k) - SYNOstb2(s2psel(j2),k) ]^2}   (23)

where
scr2: reference value after final-selection of a second random code vector
cscr2: constant previously computed from an equation 25
SYNOstb1(s1psel(j1),k): orthogonally synthesized first random code vectors
SYNOstb2(s2psel(j2),k): orthogonally synthesized second random code vectors
r(k): target vector
s1psel(j1): index of first random code vector after pre-selection
s2psel(j2): index of second random code vector after pre-selection
Ns: subframe length (=52)
k: element number of a vector.
Note that cscr1 in the equation 22 and cscr2 in the equation 23 are constants which have been calculated previously using the equations 24 and 25, respectively.
cscr1 = \sum_{k=0}^{Ns-1} SYNOstb1(s1psel(j1),k) \times r(k) + \sum_{k=0}^{Ns-1} SYNOstb2(s2psel(j2),k) \times r(k)   (24)

where
cscr1: constant for the equation 22
SYNOstb1(s1psel(j1),k): orthogonally synthesized first random code vectors
SYNOstb2(s2psel(j2),k): orthogonally synthesized second random code vectors
r(k): target vector
s1psel(j1): index of first random code vector after pre-selection
s2psel(j2): index of second random code vector after pre-selection
Ns: subframe length (=52)
k: element number of a vector.
cscr2 = \sum_{k=0}^{Ns-1} SYNOstb1(s1psel(j1),k) \times r(k) - \sum_{k=0}^{Ns-1} SYNOstb2(s2psel(j2),k) \times r(k)   (25)

where
cscr2: constant for the equation 23
SYNOstb1(s1psel(j1),k): orthogonally synthesized first random code vectors
SYNOstb2(s2psel(j2),k): orthogonally synthesized second random code vectors
r(k): target vector
s1psel(j1): index of first random code vector after pre-selection
s2psel(j2): index of second random code vector after pre-selection
Ns: subframe length (=52)
k: element number of a vector.
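The closed-loop search over the 36 pre-selected pairs can be sketched in C as below; reading the equation 24 as the sum of the two correlations (and the equation 25 as their difference) is an assumption of the sketch:

#define NS 52
#define NSTB 6

static void joint_select(const double o1[NSTB][NS],
                         const double o2[NSTB][NS],
                         const double r[NS],
                         int *j1best, int *j2best)
{
    double best = -1.0e30;
    for (int j1 = 0; j1 < NSTB; j1++) {
        for (int j2 = 0; j2 < NSTB; j2++) {
            double c1 = 0.0, c2 = 0.0, e1 = 0.0, e2 = 0.0;
            for (int k = 0; k < NS; k++) {
                double a = o1[j1][k], b = o2[j2][k];
                c1 += (a + b) * r[k];  e1 += (a + b) * (a + b);  /* eqs 24/22 */
                c2 += (a - b) * r[k];  e2 += (a - b) * (a - b);  /* eqs 25/23 */
            }
            double s1 = (e1 > 0.0) ? c1 * c1 / e1 : 0.0;  /* scr1 */
            double s2 = (e2 > 0.0) ? c2 * c2 / e2 : 0.0;  /* scr2 */
            double s  = (s1 > s2) ? s1 : s2;
            if (s > best) { best = s; *j1best = j1; *j2best = j2; }
        }
    }
}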
The comparator B 1330 substitutes the maximum value of scr1 in MAXscr1, substitutes the maximum value of scr2 in MAXscr2, sets MAXscr1 or MAXscr2, whichever is larger, as scr, and sends the value of s1psel(j1), which had been referred to when scr was obtained, to the parameter coding section 1331 as an index of a first random code vector after final-selection SSEL1. The random code vector that corresponds to SSEL1 is saved as a first random code vector after final-selection Pstb1(SSEL1,k), and a synthesized first random code vector after final-selection SYNstb1(SSEL1,k) (0≦k≦Ns-1) corresponding to Pstb1(SSEL1,k) is acquired.
Likewise, the value of s2psel(j2), which had been referred to when scr was obtained, is sent to the parameter coding section 1331 as an index of a second random code vector after final-selection SSEL2. The random code vector that corresponds to SSEL2 is saved as a second random code vector after final-selection Pstb2(SSEL2,k), and a synthesized second random code vector after final-selection SYNstb2(SSEL2,k) (0≦k≦Ns-1) corresponding to Pstb2(SSEL2,k) is acquired.
The comparator B 1330 further acquires codes S1 and S2, by which Pstb1(SSEL1,k) and Pstb2(SSEL2,k) are respectively multiplied, from an equation 26, and sends the polarity information of the obtained S1 and S2 to the parameter coding section 1331 as a gain polarity index Isls2 (2-bit information).
(S1, S2) = \begin{cases} (+1, +1) & scr1 ≧ scr2,\ cscr1 ≧ 0 \\ (-1, -1) & scr1 ≧ scr2,\ cscr1 < 0 \\ (+1, -1) & scr1 < scr2,\ cscr2 ≧ 0 \\ (-1, +1) & scr1 < scr2,\ cscr2 < 0 \end{cases}   (26)

where
S1: code of the first random code vector after final-selection
S2: code of the second random code vector after final-selection
scr1: output of the equation 22
scr2: output of the equation 23
cscr1: output of the equation 24.
cscr2: output of the equation 25.
A random code vector ST(k) (0≦k≦Ns-1) is generated by an equation 27 and output to the adaptive codebook updating section 1333, and its power POWst is acquired and output to the parameter coding section 1331.
ST(k) = S1 × Pstb1(SSEL1,k) + S2 × Pstb2(SSEL2,k)   (27)

where
ST(k): random code vector
S1: code of the first random code vector after final-selection
S2: code of the second random code vector after final-selection
Pstb1(SSEL1,k): first random code vector after final-selection
Pstb2(SSEL2,k): second random code vector after final-selection
SSEL1: index of the first random code vector after final-selection
SSEL2: index of the second random code vector after final-selection
k: element number of a vector (0≦k≦Ns-1).
A synthesized random code vector SYNst(k) (0≦k≦Ns-1) is generated by an equation 28 and output to the parameter coding section 1331.

SYNst(k) = S1 × SYNstb1(SSEL1,k) + S2 × SYNstb2(SSEL2,k)   (28)

where
SYNst(k): synthesized random code vector
S1: code of the first random code vector after final-selection
S2: code of the second random code vector after final-selection
SYNstb1(SSEL1,k): synthesized first random code vector after final-selection
SYNstb2(SSEL2,k): synthesized second random code vector after final-selection
k: element number of a vector (0≦k≦Ns-1).
The parameter coding section 1331 first acquires a residual power estimation for each subframe rs from an equation 29 using the decoded frame power spow, which has been obtained by the frame power quantizing/decoding section 1302, and the normalized predictive residual power resid, which has been obtained by the pitch pre-selector 1308.
rs = Ns × spow × resid   (29)

where
rs: residual power estimation for each subframe
Ns: subframe length (=52)
spow: decoded frame power
resid: normalized predictive residual power.
A reference value for quantization gain selection STDg is acquired from an equation 30 by using the acquired residual power estimation for each subframe rs, the power of the adaptive/fixed code vector POWaf computed in the comparator A 1322, the power of the random code vector POWst computed in the comparator B 1330, and a gain quantization table (CGaf[i], CGst[i]) (0≦i≦127) of 256 words stored in a gain quantization table storage section 1332.
TABLE 7  Gain quantization table

     i   CGaf(i)   CGst(i)
     1   0.38590   0.23477
     2   0.42380   0.50453
     3   0.23416   0.24761
   ...       ...       ...
   126   0.35382   1.68987
   127   0.10689   1.02035
   128   3.09711   1.75430

STDg = \sum_{k=0}^{Ns-1} \left( \sqrt{\frac{rs}{POWaf}} \cdot CGaf(i) \times SYNaf(k) + \sqrt{\frac{rs}{POWst}} \cdot CGst(i) \times SYNst(k) - r(k) \right)^2   (30)

where
STDg: reference value for quantization gain selection
rs: residual power estimation for each subframe
POWaf: power of the adaptive/fixed code vector
POWst: power of the random code vector
i: index of the gain quantization table (0≦i≦127)
CGaf(i): component on the adaptive/fixed code vector side in the gain quantization table
CGst(i): component on the random code vector side in the gain quantization table
SYNaf(k): synthesized adaptive/fixed code vector
SYNst(k): synthesized random code vector
r(k): target vector
Ns: subframe length (=52)
k: element number of a vector (0≦k≦Ns-1).
The index for which the acquired reference value for quantization gain selection STDg becomes minimum is selected as a gain quantization index Ig, and a final gain on the adaptive/fixed code vector side Gaf to be actually applied to AF(k) and a final gain on the random code vector side Gst to be actually applied to ST(k) are obtained from an equation 31 using a gain after selection of the adaptive/fixed code vector CGaf(Ig) and a gain after selection of the random code vector CGst(Ig), both read from the gain quantization table based on the selected gain quantization index Ig, and are sent to the adaptive codebook updating section 1333.
(Gaf, Gst) = \left( \sqrt{\frac{rs}{POWaf}} \cdot CGaf(Ig),\ \sqrt{\frac{rs}{POWst}} \cdot CGst(Ig) \right)   (31)

where
Gaf: final gain on the adaptive/fixed code vector side
Gst: final gain on the random code vector side
rs: residual power estimation for each subframe
POWaf: power of the adaptive/fixed code vector
POWst: power of the random code vector
CGaf(Ig): gain after selection on the adaptive/fixed code vector side
CGst(Ig): gain after selection on the random code vector side
Ig: gain quantization index.
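The gain search of the equations 30 and 31 can be sketched in C as follows (the names are illustrative):

#include <math.h>

#define NS 52

/* score every entry of the 128-entry gain table by the weighted squared
   error of equation 30; the winner fixes both applied gains (equation 31) */
static int search_gain(const double CGaf[128], const double CGst[128],
                       double rs, double POWaf, double POWst,
                       const double SYNaf[NS], const double SYNst[NS],
                       const double r[NS], double *Gaf, double *Gst)
{
    double saf = sqrt(rs / POWaf);
    double sst = sqrt(rs / POWst);
    int Ig = 0;
    double best = 1.0e30;

    for (int i = 0; i < 128; i++) {
        double e = 0.0;
        for (int k = 0; k < NS; k++) {
            double d = saf * CGaf[i] * SYNaf[k]
                     + sst * CGst[i] * SYNst[k] - r[k];
            e += d * d;                     /* equation 30 for index i */
        }
        if (e < best) { best = e; Ig = i; }
    }
    *Gaf = saf * CGaf[Ig];                  /* equation 31 */
    *Gst = sst * CGst[Ig];
    return Ig;                              /* gain quantization index */
}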
The parameter coding section 1331 converts the index of power Ipow, acquired by the frame power quantizing/decoding section 1302, the LSP code Ilsp, acquired by the LSP quantizing/decoding section 1306, the adaptive/fixed index AFSEL, acquired by the adaptive/fixed selector 1320, the index of the first random code vector after final-selection SSEL1, the index of the second random code vector after final-selection SSEL2 and the polarity information Isls2, acquired by the comparator B 1330, and the gain quantization index Ig, acquired by the parameter coding section 1331 itself, into a speech code, which is in turn sent to a transmitter 1334.
The adaptive codebook updating section 1333 performs a process of an equation 32 for multiplying the adaptive/fixed code vector AF(k), acquired by the comparator A 1322, and the random code vector ST
(k), acquired by the comparator B 1330, respectively by the final gain on the adaptive/fixed code vector side Gaf and the final gain on the random code vector side Gst, acquired by the parameter
coding section 1331, and then adding the results to thereby generate an excitation vector ex(k) (0≦k≦Ns-1), and sends the generated excitation vector ex(k) (0≦k≦Ns-1) to the adaptive codebook 1318.
ex(k) = Gaf × AF(k) + Gst × ST(k)   (32)

where
ex(k): excitation vector
AF(k): adaptive/fixed code vector
ST(k): random code vector
k: element number of a vector (0≦k≦Ns-1).
At this time, an old excitation vector in the adaptive codebook 1318 is discarded and is updated with a new excitation vector ex(k) received from the adaptive codebook updating section 1333.
(Eighth Mode)
A description will now be given of an eighth mode in which any excitation vector generator described in first to sixth modes is used in a speech decoder that is based on the PSI-CELP, the standard
speech coding/decoding system for PDC digital portable telephones. This decoder makes a pair with the above-described seventh mode.
FIG. 14 presents a functional block diagram of a speech decoder according to the eighth mode. A parameter decoding section 1402 obtains the speech code (the index of power Ipow, LSP code Ilsp,
adaptive/fixed index AFSEL, index of the first random code vector after final-selection SSEL1, index of the second random code vector after final-selection SSEL2, gain quantization index Ig and gain polarity
index Isls2), sent from the CELP type speech coder illustrated in FIG. 13, via a transmitter 1401.
Next, a scalar value indicated by the index of power Ipow is read from the power quantization table (see Table 3) stored in a power quantization table storage section 1405, is sent as decoded frame
power spow to a power restoring section 1417, and a vector indicated by the LSP code Ilsp is read from the LSP quantization table stored in an LSP quantization table storage section 1404 and is sent as a
decoded LSP to an LSP interpolation section 1406. The adaptive/fixed index AFSEL is sent to an adaptive code vector generator 1408, a fixed code vector reading section 1411 and an adaptive/fixed
selector 1412, and the index of the first random code vector after final-selection SSEL1 and the second random code vector after final-selection SSEL2 are output to an excitation vector generator
1414. The vector (CGaf(Ig), CGst(Ig)) indicated by the gain quantization index Ig is read from the gain quantization table (see Table 7) stored in a gain quantization table storage section 1403, the final gain on the adaptive/fixed code vector side Gaf to be actually applied to AF(k) and the final gain on the random code vector side Gst to be actually applied to ST(k) are
acquired from the equation 31 as done on the coder side, and the acquired final gain on the adaptive/fixed code vector side Gaf and final gain on the random code vector side Gst are output together
with the gain polarity index Isls2 to an excitation vector generator 1413.
The LSP interpolation section 1406 obtains a decoded interpolated LSP ωintp(n,i) (1≦i≦Np) subframe by subframe from the decoded LSP received from the parameter decoding section 1402, converts the
obtained ωintp(n,i) to an LPC to acquire a decoded interpolated LPC, and sends the decoded interpolated LPC to an LPC synthesis filter 1416.
The adaptive code vector generator 1408 convolutes some of the polyphase coefficients stored in a polyphase coefficients storage section 1409 (see Table 5) on vectors read from an adaptive codebook 1407,
based on the adaptive/fixed index AFSEL received from the parameter decoding section 1402, thereby generating adaptive code vectors to a fractional precision, and sends the adaptive code vectors to
the adaptive/fixed selector 1412. The fixed code vector reading section 1411 reads fixed code vectors from a fixed codebook 1410 based on the adaptive/fixed index AFSEL received from the parameter
decoding section 1402, and sends them to the adaptive/fixed selector 1412.
The adaptive/fixed selector 1412 selects either the adaptive code vector input from the adaptive code vector generator 1408 or the fixed code vector input from the fixed code vector reading section
1411, as the adaptive/fixed code vector AF(k), based on the adaptive/fixed index AFSEL received from the parameter decoding section 1402, and sends the selected adaptive/fixed code vector AF(k) to
the excitation vector generator 1413. The excitation vector generator 1414 acquires the first seed and second seed from the seed storage section 71 based on the index of the first random code vector
after final-selection SSEL1 and the second random code vector after final-selection SSEL2 received from the parameter decoding section 1402, and sends the seeds to the non-linear digital filter 72 to
generate the first random code vector and the second random code vector, respectively. Those reproduced first random code vector and second random code vector are respectively multiplied by the
first-stage information S1 and second-stage information S2 of the gain polarity index to generate an excitation vector ST(k), which is sent to the excitation vector generator 1413.
The excitation vector generator 1413 multiplies the adaptive/fixed code vector AF(k), received from the adaptive/fixed selector 1412, and the excitation vector ST(k), received from the excitation
vector generator 1414, respectively by the final gain on the adaptive/fixed code vector side Gaf and the final gain on the random code vector side Gst, obtained by the parameter decoding section
1402, performs addition or subtraction based on the gain polarity index Isls2, yielding the excitation vector ex(k), and sends the obtained excitation vector to the LPC synthesis filter 1416 and the adaptive codebook 1407. Here, an old excitation vector in the adaptive codebook 1407 is updated with a new excitation vector input from the excitation vector generator 1413.
The LPC synthesis filter 1416 performs LPC synthesis on the excitation vector, generated by the excitation vector generator 1413, using the synthesis filter which is constituted by the decoded
interpolated LPC received from the LSP interpolation section 1406, and sends the filter output to the power restoring section 1417. The power restoring section 1417 first obtains the mean power of
the synthesized vector of the excitation vector obtained by the LPC synthesis filter 1416, then divides the decoded frame power spow, received from the parameter decoding section 1402, by the
acquired mean power, and multiplies the synthesized vector of the excitation vector by the division result to generate a synthesized speech 518.
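The power restoration can be sketched as follows; the code follows the text literally (multiply by spow divided by the mean power). Whether a square root of this ratio is needed depends on whether spow is a power-domain or an amplitude-domain quantity, which the text does not state, so that point is left as a comment; the names are illustrative.

import numpy as np

def restore_power(synth, spow):
    # Mean power of the synthesized vector of the excitation vector.
    mean_pow = np.mean(synth ** 2)
    # Literal reading of the text: multiply by spow / mean_pow.
    # If spow is a power (not an amplitude), np.sqrt(spow / mean_pow)
    # would be the amplitude-domain scale instead.
    return synth * (spow / mean_pow)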
(Ninth Mode)
FIG. 15 is a block diagram of the essential portions of a speech coder according to a ninth mode. This speech coder has a quantization target LSP adding section 151, an LSP quantizing/decoding section 152 and an LSP quantization error comparator 153 added to the speech coder shown in FIG. 13, with parts of its functions modified.
The LPC analyzing section 1304 acquires an LPC by performing linear predictive analysis on a processing frame in the buffer 1301, converts the acquired LPC to produce a quantization target LSP, and
sends the produced quantization target LSP to the quantization target LSP adding section 151. The LPC analyzing section 1304 also has a particular function of performing linear predictive analysis on
a pre-read area to acquire an LPC for the pre-read area, converting the obtained LPC to an LSP for the pre-read area, and sending the LSP to the quantization target LSP adding section 151.
The quantization target LSP adding section 151 produces a plurality of quantization target LSPs in addition to the quantization target LSPs directly obtained by converting LPCs in a processing frame
in the LPC analyzing section 1304.
The LSP quantization table storage section 1307 stores the quantization table which is referred to by the LSP quantizing/decoding section 152, and the LSP quantizing/decoding section 152 quantizes/
decodes the produced plurality of quantization target LSPs to generate decoded LSPs.
The LSP quantization error comparator 153 compares the produced decoded LSPs with one another to select, in a closed loop, one decoded LSP which minimizes an allophone, and newly uses the selected
decoded LSP as a decoded LSP for the processing frame.
FIG. 16 presents a block diagram of the quantization target LSP adding section 151.
The quantization target LSP adding section 151 comprises a current frame LSP memory 161 for storing the quantization target LSP of the processing frame obtained by the LPC analyzing section 1304, a
pre-read area LSP memory 162 for storing the LSP of the pre-read area obtained by the LPC analyzing section 1304, a previous frame LSP memory 163 for storing the decoded LSP of the previous
processing frame, and a linear interpolation section 164 which performs linear interpolation on the LSPs read from those three memories to add a plurality of quantization target LSPs.
A plurality of quantization target LSPs are additionally produced by performing linear interpolation on the quantization target LSP of the processing frame and the LSP of the pre-read area, and the produced
quantization target LSPs are all sent to the LSP quantizing/decoding section 152.
The quantization target LSP adding section 151 will now be explained more specifically. The LPC analyzing section 1304 performs linear predictive analysis on the processing frame in the buffer to
acquire an LPC α(i) (1≦i≦Np) of a prediction order Np (=10), converts the obtained LPC to generate a quantization target LSP ω(i) (1≦i≦Np), and stores the generated quantization target LSP ω(i)
(1≦i≦Np) in the current frame LSP memory 161 in the quantization target LSP adding section 151. Further, the LPC analyzing section 1304 performs linear predictive analysis on the pre-read area in the
buffer to acquire an LPC for the pre-read area, converts the obtained LPC to generate a quantization target LSP ωf(i) (1≦i≦Np), and stores the generated quantization target LSP ωf(i) (1≦i≦Np) for the
pre-read area in the pre-read area LSP memory 162 in the quantization target LSP adding section 151.
Next, the linear interpolation section 164 reads the quantization target LSP ω(i) (1≦i≦Np) for the processing frame from the current frame LSP memory 161, the LSP ωf(i) (1≦i≦Np) for the pre-read area
from the pre-read area LSP memory 162, and decoded LSP ωqp(i) (1≦i≦Np) for the previous processing frame from the previous frame LSP memory 163, and executes conversion shown by an equation 33 to
respectively generate first additional quantization target LSP ω1(i) (1≦i≦Np), second additional quantization target LSP ω2(i) (1≦i≦Np), and third additional quantization target LSP ω3(i) (1≦i≦Np).
[ ω1(i) ]   [ 0.8  0.2  0.0 ] [ ωq(i)  ]
[ ω2(i) ] = [ 0.5  0.3  0.2 ] [ ωqp(i) ]   (33)
[ ω3(i) ]   [ 0.8  0.3  0.5 ] [ ωf(i)  ]
where
ω1(i): first additional quantization target LSP
ω2(i): second additional quantization target LSP
ω3(i): third additional quantization target LSP
i: LPC order (1≦i≦Np)
Np: LPC analysis order (=10)
ωq(i): decoded LSP for the processing frame
ωqp(i): decoded LSP for the previous processing frame
ωf(i): LSP for the pre-read area.
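A sketch of the equation 33 in Python follows; the weight values are copied as printed in the source equation, and all names are illustrative.

import numpy as np

# Interpolation weights of the equation 33, copied as printed.
W33 = np.array([[0.8, 0.2, 0.0],
                [0.5, 0.3, 0.2],
                [0.8, 0.3, 0.5]])

def additional_targets(w_q, w_qp, w_f):
    # Rows of the result are the additional targets w1(i), w2(i), w3(i);
    # inputs are the processing-frame LSP, the previous decoded LSP and
    # the pre-read-area LSP, each of length Np.
    base = np.stack([w_q, w_qp, w_f])   # shape (3, Np)
    return W33 @ base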
The generated ω1(i), ω2(i) and ω3(i) are sent to the LSP quantizing/decoding section 152. After performing vector quantization/decoding of all the four quantization target LSPs ω(i), ω1(i), ω2(i) and ω3(i), the LSP quantizing/decoding section 152 acquires power Epow(ω) of a quantization error for ω(i), power Epow(ω1) of a quantization error for ω1(i), power Epow(ω2) of a quantization error for ω2(i), and power Epow(ω3) of a quantization error for ω3(i), and carries out conversion of an equation 34 on the obtained quantization error powers to acquire reference values STDlsp(ω), STDlsp(ω1), STDlsp(ω2) and STDlsp(ω3) for selection of a decoded LSP.
[ STDlsp(ω)  ]   [ Epow(ω)  ]   [ 0.0010 ]
[ STDlsp(ω1) ] = [ Epow(ω1) ] - [ 0.0005 ]   (34)
[ STDlsp(ω2) ]   [ Epow(ω2) ]   [ 0.0002 ]
[ STDlsp(ω3) ]   [ Epow(ω3) ]   [ 0.0000 ]
where
STDlsp(ω): reference value for selection of a decoded LSP for ω(i)
STDlsp(ω1): reference value for selection of a decoded LSP for ω1(i)
STDlsp(ω2): reference value for selection of a decoded LSP for ω2(i)
STDlsp(ω3): reference value for selection of a decoded LSP for ω3(i)
Epow(ω): quantization error power for ω(i)
Epow(ω1): quantization error power for ω1(i)
Epow(ω2): quantization error power for ω2(i)
Epow(ω3): quantization error power for ω3(i).
The acquired reference values for selection of a decoded LSP are compared with one another, and the decoded LSP corresponding to the quantization target LSP whose reference value is minimum is selected and output as the decoded LSP ωq(i) (1≦i≦Np) for the processing frame; the selected decoded LSP is stored in the previous frame LSP memory 163 so that it can be referred to at the time of performing vector quantization of the LSP of the next frame.
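The biased selection of the equation 34 amounts to subtracting a fixed bias from each candidate's error power before taking the minimum, so that the interpolated candidates are slightly favored. A sketch, with illustrative names:

import numpy as np

# Bias values of the equation 34, in the order (w, w1, w2, w3).
BIAS34 = np.array([0.0010, 0.0005, 0.0002, 0.0000])

def select_decoded_lsp(decoded_lsps, error_powers):
    # decoded_lsps: list of the four decoded LSP vectors
    # error_powers: the four quantization error powers Epow(.)
    std = np.asarray(error_powers) - BIAS34
    return decoded_lsps[int(np.argmin(std))]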
According to this mode, by effectively using the high interpolation characteristic of an LSP (which does not cause an allophone even when synthesis is implemented by using interpolated LSPs), vector
quantization of LSPs can be so conducted as not to produce an allophone even for an area like the top of a word where the spectrum varies significantly. It is possible to reduce an allophone in a
synthesized speech which may occur when the quantization characteristic of an LSP becomes insufficient.
FIG. 17 presents a block diagram of the LSP quantizing/decoding section 152 according to this mode. The LSP quantizing/decoding section 152 has a gain information storage section 171, an adaptive
gain selector 172, a gain multiplier 173, an LSP quantizing section 174 and an LSP decoding section 175.
The gain information storage section 171 stores a plurality of gain candidates to be referred to at the time the adaptive gain selector 172 selects the adaptive gain. The gain multiplier 173
multiplies a code vector, read from the LSP quantization table storage section 1307, by the adaptive gain selected by the adaptive gain selector 172. The LSP quantizing section 174 performs vector
quantization of a quantization target LSP using the code vector multiplied by the adaptive gain. The LSP decoding section 175 has a function of decoding a vector-quantized LSP to generate a decoded
LSP and outputting it, and a function of acquiring an LSP quantization error, which is a difference between the quantization target LSP and the decoded LSP, and sending it to the adaptive gain
selector 172. The adaptive gain selector 172 acquires the adaptive gain by which a code vector is multiplied at the time of vector-quantizing the quantization target LSP of the processing frame by
adaptively adjusting the adaptive gain based on gain generation information stored in the gain information storage section 171, on the basis of, as references, the level of the adaptive gain by which
a code vector is multiplied at the time the quantization target LSP of the previous processing frame was vector-quantized and the LSP quantization error for the previous frame, and sends the obtained
adaptive gain to the gain multiplier 173.
The LSP quantizing/decoding section 152 vector-quantizes and decodes a quantization target LSP while adaptively adjusting the adaptive gain by which a code vector is multiplied in the above manner.
The LSP quantizing/decoding section 152 will now be discussed more specifically. The gain information storage section 171 stores four gain candidates (0.9, 1.0, 1.1 and 1.2) to which the adaptive gain selector 172 refers. The adaptive gain selector 172 acquires a reference value Slsp for selecting an adaptive gain from an equation 35, by dividing power ERpow, generated at the time of quantizing the quantization target LSP of the previous frame, by the square of an adaptive gain Gqlsp selected at the time of vector-quantizing the quantization target LSP of the previous processing frame.

Slsp = ERpow / Gqlsp²   (35)
where
Slsp: reference value for selecting an adaptive gain
ERpow: quantization error power generated when quantizing the LSP of the previous frame
Gqlsp: adaptive gain selected when vector-quantizing the LSP of the previous frame.
One gain is selected from the four gain candidates (0.9, 1.0, 1.1 and 1.2), read from the gain information storage section 171, from an equation 36 using the acquired reference value Slsp for selecting the adaptive gain. Then, the value of the selected adaptive gain Glsp is sent to the gain multiplier 173, and information (2-bit information) for specifying the type of the selected adaptive gain among the four types is sent to the parameter coding section.
Glsp = 1.2   if Slsp > 0.0025
       1.1   if Slsp > 0.0015
       1.0   if Slsp > 0.0008
       0.9   if Slsp ≦ 0.0008     (36)
where
Glsp: adaptive gain by which a code vector for LSP quantization is multiplied
Slsp: reference value for selecting an adaptive gain.
The selected adaptive gain Glsp and the error which has been produced in the quantization are saved as the variables Gqlsp and ERpow until the quantization target LSP of the next frame is subjected to vector quantization.
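The adaptive gain selection of the equations 35 and 36 reduces to one division and a threshold cascade; a sketch with illustrative names:

def select_adaptive_gain(er_pow_prev, gq_prev):
    # Equation 35: Slsp = ERpow / Gqlsp^2, from the previous frame.
    slsp = er_pow_prev / (gq_prev ** 2)
    # Equation 36, with the conditions evaluated from the top down.
    if slsp > 0.0025:
        return 1.2
    if slsp > 0.0015:
        return 1.1
    if slsp > 0.0008:
        return 1.0
    return 0.9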
The gain multiplier 173 multiplies a code vector, read from the LSP quantization table storage section 1307, by the adaptive gain selected by the adaptive gain selector 172, and sends the result to
the LSP quantizing section 174. The LSP quantizing section 174 performs vector quantization on the quantization target LSP by using the code vector multiplied by the adaptive gain, and sends its
index to the parameter coding section. The LSP decoding section 175 decodes the LSP, quantized by the LSP quantizing section 174, acquiring a decoded LSP, outputs this decoded LSP, subtracts the
obtained decoded LSP from the quantization target LSP to obtain an LSP quantization error, computes the power ERpow of the obtained LSP quantization error, and sends the power to the adaptive gain
selector 172.
This mode can suppress an allophone in a synthesized speech which may be produced when the quantization characteristic of an LSP becomes insufficient.
(Tenth Mode)
FIG. 18 presents the structural blocks of an excitation vector generator according to this mode. This excitation vector generator has a fixed waveform storage section 181 for storing three fixed
waveforms (v1 (length: L1), v2 (length: L2) and v3 (length: L3)) of channels CH1, CH2 and CH3, a fixed waveform arranging section 182 for arranging the fixed waveforms (v1, v2, v3), read from the
fixed waveform storage section 181, respectively at positions P1, P2 and P3, and an adding section 183 for adding the fixed waveforms arranged by the fixed waveform arranging section 182, generating
an excitation vector.
The operation of the thus constituted excitation vector generator will be discussed.
Three fixed waveforms v1, v2 and v3 are stored in advance in the fixed waveform storage section 181. The fixed waveform arranging section 182 arranges (shifts) the fixed waveform v1, read from the
fixed waveform storage section 181, at the position P1 selected from start position candidates for CH1, based on start position candidate information for fixed waveforms it has as shown in Table 8,
and likewise arranges the fixed waveforms v2 and v3 at the respective positions P2 and P3 selected from start position candidates for CH2 and CH3.
TABLE 8
Channel number   Sign   Start position candidate information for fixed waveform
CH1              ±1     P1: 0, 10, 20, 30, . . . , 60, 70
CH2              ±1     P2: 2, 12, 22, 32, . . . , 62, 72 or 6, 16, 26, 36, . . . , 66, 76
CH3              ±1     P3: 4, 14, 24, 34, . . . , 64, 74 or 8, 18, 28, 38, . . . , 68, 78
The adding section 183 adds the fixed waveforms, arranged by the fixed waveform arranging section 182, to generate an excitation vector.
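For illustration, the arrange-and-add operation of the fixed waveform arranging section 182 and the adding section 183 can be sketched in Python as follows. The subframe length of 80 samples is inferred from the position candidates in Table 8, and the truncation of a waveform at the subframe boundary is an assumption; all names are illustrative.

import numpy as np

def make_excitation(waveforms, positions, signs, length=80):
    # Place each channel's fixed waveform at its selected start position,
    # apply the channel sign, and sum the channels.
    ex = np.zeros(length)
    for v, p, s in zip(waveforms, positions, signs):
        n = min(len(v), length - p)   # clip at the subframe end (assumed)
        ex[p:p + n] += s * v[:n]
    return ex

# Example with Table 8 candidates: CH1 at 10, CH2 at 22, CH3 at 34.
v1, v2, v3 = np.ones(5), np.ones(4), np.ones(3)
ex = make_excitation([v1, v2, v3], [10, 22, 34], [+1, +1, +1])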
It is to be noted that code numbers corresponding, one to one, to combination information of selectable start position candidates of the individual fixed waveforms (information representing which
positions were selected as P1, P2 and P3, respectively) should be assigned to the start position candidate information of the fixed waveforms the fixed waveform arranging section 182 has.
According to the excitation vector generator with the above structure, excitation information can be transmitted by transmitting code numbers correlating to the start position candidate information
of fixed waveforms the fixed waveform arranging section 182 has, and the code numbers exist in a number equal to the product of the numbers of the individual start position candidates, so that an excitation vector close
to an actual speech can be generated.
Since excitation information can be transmitted by transmitting code numbers, this excitation vector generator can be used as a random codebook in a speech coder/decoder.
While the description of this mode has been given with reference to a case of using three fixed waveforms as shown in FIG. 18, similar functions and advantages can be provided if the number of fixed
waveforms (which coincides with the number of channels in FIG. 18 and Table 8) is changed to other values.
Although the fixed waveform arranging section 182 in this mode has been described as having the start position candidate information of fixed waveforms given in Table 8, similar functions and
advantages can be provided for other start position candidate information of fixed waveforms than those in Table 8.
(Eleventh Mode)
FIG. 19A is a structural block diagram of a CELP type speech coder according to this mode, and FIG. 19B is a structural block diagram of a CELP type speech decoder which is paired with the CELP type
speech coder.
The CELP type speech coder according to this mode has an excitation vector generator which comprises a fixed waveform storage section 181A, a fixed waveform arranging section 182A and an adding
section 183A. The fixed waveform storage section 181A stores a plurality of fixed waveforms. The fixed waveform arranging section 182A arranges (shifts) fixed waveforms, read from the fixed waveform
storage section 181A, respectively at the selected positions, based on start position candidate information for fixed waveforms it has. The adding section 183A adds the fixed waveforms, arranged by
the fixed waveform arranging section 182A, to generate an excitation vector c.
This CELP type speech coder has a time reversing section 191 for time-reversing a random codebook searching target x to be input, a synthesis filter 192 for synthesizing the output of the time
reversing section 191, a time reversing section 193 for time-reversing the output of the synthesis filter 192 again to yield a time-reversed synthesized target x', a synthesis filter 194 for
synthesizing the excitation vector c multiplied by a random code vector gain gc, yielding a synthesized excitation vector s, a distortion calculator 205 for receiving x', c and s and computing
distortion, and a transmitter 196.
According to this mode, the fixed waveform storage section 181A, the fixed waveform arranging section 182A and the adding section 183A correspond to the fixed waveform storage section 181, the fixed
waveform arranging section 182 and the adding section 183 shown in FIG. 18, the start position candidates of fixed waveforms in the individual channels correspond to those in Table 8, and channel
numbers, fixed waveform numbers and symbols indicating the lengths and positions in use are those shown in FIG. 18 and Table 8.
The CELP type speech decoder in FIG. 19B comprises a fixed waveform storage section 181B for storing a plurality of fixed waveforms, a fixed waveform arranging section 182B for arranging (shifting)
fixed waveforms, read from the fixed waveform storage section 181B, respectively at the selected positions, based on start position candidate information for fixed waveforms it has, an adding section
183B for adding the fixed waveforms, arranged by the fixed waveform arranging section 182B, to yield an excitation vector c, a gain multiplier 197 for multiplying a random code vector gain gc, and a
synthesis filter 198 for synthesizing the excitation vector c to yield a synthesized excitation vector s.
The fixed waveform storage section 181B and the fixed waveform arranging section 182B in the speech decoder have the same structures as the fixed waveform storage section 181A and the fixed waveform
arranging section 182A in the speech coder, and the fixed waveforms stored in the fixed waveform storage sections 181A and 181B have such characteristics as to statistically minimize the coding distortion in the equation 3, acquired through learning which uses the coding distortion computation of the equation 3 with a random codebook searching target as a cost function.
The operation of the thus constituted speech coder will be discussed.
The random codebook searching target x is time-reversed by the time reversing section 191, then synthesized by the synthesis filter 192 and then time-reversed again by the time reversing section 193,
and the result is sent as a time-reversed synthesized target x' to the distortion calculator 205.
The fixed waveform arranging section 182A arranges (shifts) the fixed waveform v1, read from the fixed waveform storage section 181A, at the position P1 selected from start position candidates for
CH1, based on start position candidate information for fixed waveforms it has as shown in Table 8, and likewise arranges the fixed waveforms v2 and v3 at the respective positions P2 and P3 selected
from start position candidates for CH2 and CH3. The arranged fixed waveforms are sent to the adding section 183A and added to become an excitation vector c, which is input to the synthesis filter
194. The synthesis filter 194 synthesizes the excitation vector c to produce a synthesized excitation vector s and sends it to the distortion calculator 205.
The distortion calculator 205 receives the time-reversed synthesized target x', the excitation vector c and the synthesized excitation vector s and computes coding distortion in the equation 4.
The distortion calculator 205 sends a signal to the fixed waveform arranging section 182A after computing the distortion. The process from the selection of start position candidates corresponding to
the three channels by the fixed waveform arranging section 182A to the distortion computation by the distortion calculator 205 is repeated for every combination of the start position candidates
selectable by the fixed waveform arranging section 182A.
Thereafter, the combination of the start position candidates that minimizes the coding distortion is selected, and the code number which corresponds, one to one, to that combination of the start
position candidates and the then optimal random code vector gain gc are transmitted as codes of the random codebook to the transmitter 196.
The fixed waveform arranging section 182B selects the positions of the fixed waveforms in the individual channels from start position candidate information for fixed waveforms it has, based on
information sent from the transmitter 196, arranges (shifts) the fixed waveform v1, read from the fixed waveform storage section 181B, at the position P1 selected from start position candidates for
CH1, and likewise arranges the fixed waveforms v2 and v3 at the respective positions P2 and P3 selected from start position candidates for CH2 and CH3. The arranged fixed waveforms are sent to the
adding section 183B and added to become an excitation vector c. This excitation vector c is multiplied by the random code vector gain gc selected based on the information from the transmitter 196,
and the result is sent to the synthesis filter 198. The synthesis filter 198 synthesizes the gc-multiplied excitation vector c to yield a synthesized excitation vector s and sends it out.
According to the speech coder/decoder with the above structures, as an excitation vector is generated by the excitation vector generator which comprises the fixed waveform storage section, fixed
waveform arranging section and the adding section, a synthesized excitation vector obtained by synthesizing this excitation vector in the synthesis filter has such a characteristic statistically
close to that of an actual target as to be able to yield a high-quality synthesized speech, in addition to the advantages of the tenth mode.
Although the foregoing description of this mode has been given with reference to a case where fixed waveforms obtained by learning are stored in the fixed waveform storage sections 181A and 181B,
high-quality synthesized speeches can also be obtained even when fixed waveforms prepared based on the result of statistical analysis of the random codebook searching target x are used or when
knowledge-based fixed waveforms are used.
While the description of this mode has been given with reference to a case of using three fixed waveforms, similar functions and advantages can be provided if the number of fixed waveforms is changed
to other values.
Although the fixed waveform arranging section in this mode has been described as having the start position candidate information of fixed waveforms given in Table 8, similar functions and advantages
can be provided for other start position candidate information of fixed waveforms than those in Table 8.
(Twelfth Mode)
FIG. 20 presents a structural block diagram of a CELP type speech coder according to this mode.
This CELP type speech coder includes a fixed waveform storage section 200 for storing a plurality of fixed waveforms (three in this mode: CH1:W1, CH2:W2 and CH3:W3), and a fixed waveform arranging
section 201 which has start position candidate information of fixed waveforms for generating start positions of the fixed waveforms, stored in the fixed waveform storage section 200, according to
algebraic rules. This CELP type speech coder further has an impulse response calculator 202 for each waveform, an impulse generator 203, a correlation matrix calculator 204, a time reversing section 191, a synthesis filter 192' for each waveform, a time reversing section 193 and a distortion calculator 205.
The impulse response calculator 202 has a function of convoluting three fixed waveforms from the fixed waveform storage section 200 and the impulse response h (length L=subframe length) of the
synthesis filter to compute three kinds of impulse responses for the individual fixed waveforms (CH1:h1, CH2:h2 and CH3:h3, length L=subframe length).
The synthesis filter 192' has a function of convoluting the output of the time reversing section 191, which is the result of time-reversing the random codebook searching target x to be input, and
the impulse responses for the individual waveforms, h1, h2 and h3, from the impulse response calculator 202.
The impulse generator 203 sets a pulse of an amplitude 1 (a polarity present) only at the start position candidates P1, P2 and P3, selected by the fixed waveform arranging section 201, generating
impulses for the individual channels (CH1:d1, CH2:d2 and CH3:d3).
The correlation matrix calculator 204 computes autocorrelation of each of the impulse responses h1, h2 and h3 for the individual waveforms from the impulse response calculator 202, and correlations
between h1 and h2, h1 and h3, and h2 and h3, and develops the obtained correlation values in a correlation matrix RR.
The distortion calculator 205 specifies the random code vector that minimizes the coding distortion, from an equation 37, a modification of the equation 4, by using three time-reversed synthesis
targets (x'1, x'2 and x'3), the correlation matrix RR and the three impulses (d1, d2 and d3) for the individual channels.
( Σ_{i=1}^{3} x'i^t di )² / ( Σ_{i=1}^{3} Σ_{j=1}^{3} di^t Hi^t Hj dj )   (37)
where
di: impulse (vector) for each channel (di(k), k=0 to L-1; a pulse of an amplitude 1 with a polarity, set at the selected start position candidate pi)
pi: start position candidates of the i-th channel
Hi: impulse response convolution matrix for each waveform (Hi = H Wi)
Wi: fixed waveform convolution matrix of the i-th channel,

Wi = [ wi(0)       0          0       ...     0
       wi(1)     wi(0)        0       ...     0
       wi(2)     wi(1)      wi(0)     ...     0
         :          :          :              :
       wi(Li-1)  wi(Li-2)     ...
         0       wi(Li-1)     ...
         :          :
         0          0         ...   wi(1)   wi(0) ]

(an L×L lower-triangular Toeplitz matrix whose j-th column contains wi(0), . . . , wi(Li-1) starting at row j), where wi is the fixed waveform (length: Li) of the i-th channel
x'i: vector obtained by time-reversed synthesis of x using Hi.
Here, transformation from the equation 4 to the equation 37 is shown for each of the denominator term (equation 38) and the numerator term (equation 39).
(x^t Hc)² = (x^t H(W1d1 + W2d2 + W3d3))²
          = (x^t (H1d1 + H2d2 + H3d3))²
          = ((x^t H1)d1 + (x^t H2)d2 + (x^t H3)d3)²
          = (x'1^t d1 + x'2^t d2 + x'3^t d3)²
          = ( Σ_{i=1}^{3} x'i^t di )²   (38)
where
x: random codebook searching target (vector)
x^t: transposed vector of x
H: impulse response convolution matrix of the synthesis filter
c: random code vector (c = W1d1 + W2d2 + W3d3)
Wi: fixed waveform convolution matrix
di: impulse (vector) for each channel
Hi: impulse response convolution matrix for each waveform (Hi = H Wi)
x'i: vector obtained by time-reversed synthesis of x using Hi.
‖Hc‖² = ‖H(W1d1 + W2d2 + W3d3)‖²
      = ‖H1d1 + H2d2 + H3d3‖²
      = (H1d1 + H2d2 + H3d3)^t (H1d1 + H2d2 + H3d3)
      = (d1^t H1^t + d2^t H2^t + d3^t H3^t)(H1d1 + H2d2 + H3d3)
      = Σ_{i=1}^{3} Σ_{j=1}^{3} di^t Hi^t Hj dj   (39)
where
H: impulse response convolution matrix of the synthesis filter
c: random code vector (c = W1d1 + W2d2 + W3d3)
Wi: fixed waveform convolution matrix
di: impulse (vector) for each channel
Hi: impulse response convolution matrix for each waveform (Hi = H Wi).
The operation of the thus constituted CELP type speech coder will be described.
To begin with, the impulse response calculator 202 convolutes three fixed waveforms stored and the impulse response h to compute three kinds of impulse responses h1, h2 and h3 for the individual
fixed waveforms, and sends them to the synthesis filter 192' and the correlation matrix calculator 204.
Next, the synthesis filter 192' convolutes the random codebook searching target x, time-reversed by the time reversing section 191, and the input three kinds of impulse responses h1, h2 and h3 for
the individual waveforms. The time reversing section 193 time-reverses the three kinds of output vectors from the synthesis filter 192' again to yield three time-reversed synthesis targets x'1, x'2
and x'3, and sends them to the distortion calculator 205.
Then, the correlation matrix calculator 204 computes autocorrelations of each of the input three kinds of impulse responses h1, h2 and h3 for the individual waveforms and correlations between h1 and h2, h1 and h3, and h2 and h3, and sends the obtained autocorrelation and correlation values to the distortion calculator 205 after developing them in the correlation matrix RR.
The above process having been executed as a pre-process, the fixed waveform arranging section 201 selects one start position candidate of a fixed waveform for each channel, and sends the positional
information to the impulse generator 203.
The impulse generator 203 sets a pulse of an amplitude 1 (a polarity present) at each of the start position candidates, obtained from the fixed waveform arranging section 201, generating impulses d1,
d2 and d3 for the individual channels and sends them to the distortion calculator 205.
Then, the distortion calculator 205 computes a reference value for minimizing the coding distortion in the equation 37, by using three time-reversed synthesis targets x'1, x'2 and x'3 for the
individual waveforms, the correlation matrix RR and the three impulses d1, d2 and d3 for the individual channels.
The process from the selection of start position candidates corresponding to the three channels by the fixed waveform arranging section 201 to the distortion computation by the distortion calculator
205 is repeated for every combination of the start position candidates selectable by the fixed waveform arranging section 201. Then, code number which corresponds to the combination of the start
position candidates that minimizes the reference value for searching the coding distortion in the equation 37 and the then optimal gain are specified with the random code vector gain gc used as a
code of the random codebook, and are transmitted to the transmitter.
The speech decoder of this mode has a similar structure to that of the eleventh mode in FIG. 19B, and the fixed waveform storage section and the fixed waveform arranging section in the speech coder have the same structures as the fixed waveform storage section and the fixed waveform arranging section in the speech decoder. The fixed waveforms stored in the fixed waveform storage section are fixed waveforms having such characteristics as to statistically minimize the coding distortion in the equation 3, acquired through training which uses the coding distortion equation (equation 3) with a random codebook searching target as a cost function.
According to the thus constructed speech coder/decoder, when the start position candidates of fixed waveforms in the fixed waveform arranging section can be computed algebraically, the numerator in the equation 37 can be computed by adding the three terms of the time-reversed synthesis target for each waveform, obtained in the previous processing stage, and then obtaining the square of the result. Further, the denominator in the equation 37 can be computed by adding the nine terms in the correlation matrix of the impulse responses of the individual waveforms obtained in the previous processing stage. This can ensure searching with about the same amount of computation as needed in a case where the conventional algebraic structural excitation vector (an excitation vector is constituted by several pulses of an amplitude 1) is used for the random codebook.
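A sketch of this fast evaluation of the equation 37 for one combination of start positions follows; unit-amplitude positive pulses are assumed (the sign handling is omitted), RR[i][j] is taken to be the precomputed matrix Hi^t Hj indexed by positions, and the names are illustrative.

def search_criterion(x_rev, RR, pos):
    # Numerator of the equation 37: three additions and one squaring,
    # since x'i^t di is just x'i evaluated at the pulse position.
    num = sum(x_rev[i][pos[i]] for i in range(3)) ** 2
    # Denominator: nine look-ups into the precomputed correlation matrix,
    # since di^t Hi^t Hj dj = (Hi^t Hj)[pos[i], pos[j]] for unit pulses.
    den = sum(RR[i][j][pos[i], pos[j]] for i in range(3) for j in range(3))
    return num / den

The full search would loop this function over every combination of the start position candidates and keep the combination that minimizes the coding distortion (i.e., maximizes this ratio).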
Furthermore, a synthesized excitation vector in the synthesis filter has such a characteristic statistically close to that of an actual target as to be able to yield a high-quality synthesized speech.
Although the foregoing description of this mode has been given with reference to a case where fixed waveforms obtained through training are stored in the fixed waveform storage section, high-quality
synthesized speeches can also be obtained even when fixed waveforms prepared based on the result of statistical analysis of the random codebook searching target x are used or when knowledge-based fixed
waveforms are used.
While the description of this mode has been given with reference to a case of using three fixed waveforms, similar functions and advantages can be provided if the number of fixed waveforms is changed
to other values.
Although the fixed waveform arranging section in this mode has been described as having the start position candidate information of fixed waveforms given in Table 8, similar functions and advantages
can be provided for other start position candidate information of fixed waveforms than those in Table 8.
(Thirteenth Mode)
FIG. 21 presents a structural block diagram of a CELP type speech coder according to this mode. The speech coder according to this mode has two kinds of random codebooks A 211 and B 212, a switch 213
for switching the two kinds of random codebooks from one to the other, a multiplier 214 for multiplying a random code vector by a gain, a synthesis filter 215 for synthesizing a random code vector
output from the random codebook that is connected by means of the switch 213, and a distortion calculator 216 for computing coding distortion in the equation 2.
The random codebook A 211 has the structure of the excitation vector generator of the tenth mode, while the other random codebook B 212 is constituted by a random sequence storage section 217 storing
a plurality of random code vectors generated from a random sequence. Switching between the random codebooks is carried out in a closed loop. In FIG. 21, x denotes a random codebook searching target.
The operation of the thus constituted CELP type speech coder will be discussed.
First, the switch 213 is connected to the random codebook A 211, and the fixed waveform arranging section 182 arranges (shifts) the fixed waveforms, read from the fixed waveform storage section 181,
at the positions selected from start position candidates of fixed waveforms respectively, based on start position candidate information for fixed waveforms it has as shown in Table 8. The arranged
fixed waveforms are added together in the adding section 183 to become a random code vector, which is sent to the synthesis filter 215 after being multiplied by the random code vector gain. The
synthesis filter 215 synthesizes the input random code vector and sends the result to the distortion calculator 216.
The distortion calculator 216 performs minimization of the coding distortion in the equation 2 by using the random codebook searching target x and the synthesized code vector obtained from the
synthesis filter 215.
After computing the distortion, the distortion calculator 216 sends a signal to the fixed waveform arranging section 182. The process from the selection of start position candidates corresponding to
the three channels by the fixed waveform arranging section 182 to the distortion computation by the distortion calculator 216 is repeated for every combination of the start position candidates
selectable by the fixed waveform arranging section 182.
Thereafter, the combination of the start position candidates that minimizes the coding distortion is selected, and the code number which corresponds, one to one, to that combination of the start
position candidates, the then optimal random code vector gain gc and the minimum coding distortion value are memorized.
Then, the switch 213 is connected to the random codebook B 212, causing a random sequence read from the random sequence storage section 217 to become a random code vector. This random code vector,
after being multiplied by the random code vector gain, is input to the synthesis filter 215. The synthesis filter 215 synthesizes the input random code vector and sends the result to the distortion
calculator 216.
The distortion calculator 216 computes the coding distortion in the equation 2 by using the random codebook searching target x and the synthesized code vector obtained from the synthesis filter 215.
After computing the distortion, the distortion calculator 216 sends a signal to the random sequence storage section 217. The process from the selection of the random code vector by the random
sequence storage section 217 to the distortion computation by the distortion calculator 216 is repeated for every random code vector selectable by the random sequence storage section 217.
Thereafter, the random code vector that minimizes the coding distortion is selected, and the code number of that random code vector, the then optimal random code vector gain gc and the minimum coding
distortion value are memorized.
Then, the distortion calculator 216 compares the minimum coding distortion value obtained when the switch 213 is connected to the random codebook A 211 with the minimum coding distortion value obtained when the switch 213 is connected to the random codebook B 212, and determines the switch connection information for the smaller coding distortion; the corresponding code number and the random code vector gain are determined as speech codes, and are sent to an unillustrated transmitter.
The speech decoder according to this mode which is paired with the speech coder of this mode has the random codebook A, the random codebook B, the switch, the random code vector gain and the
synthesis filter having the same structures and arranged in the same way as those in FIG. 21, a random codebook to be used, a random code vector and a random code vector gain are determined based on
a speech code input from the transmitter, and a synthesized excitation vector is obtained as the output of the synthesis filter.
According to the speech coder/decoder with the above structures, one of the random code vectors to be generated from the random codebook A and the random code vectors to be generated from the random
codebook B, which minimizes the coding distortion in the equation 2, can be selected in a closed loop, making it possible to generate an excitation vector closer to an actual speech and a
high-quality synthesized speech.
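The closed-loop selection of this mode can be sketched as an exhaustive search over both codebooks with a shared distortion callback; the names and the iteration protocol are illustrative.

def closed_loop_select(codebook_a, codebook_b, distortion):
    # codebook_a / codebook_b: iterables of candidate random code vectors
    # distortion: evaluates the equation-2 coding distortion (with the
    # optimal gain) for one candidate
    best = None
    for which, book in (("A", codebook_a), ("B", codebook_b)):
        for number, vec in enumerate(book):
            d = distortion(vec)
            if best is None or d < best[0]:
                best = (d, which, number)
    # (minimum distortion, switch connection information, code number)
    return best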
Although this mode has been illustrated as a speech coder/decoder based on the structure in FIG. 2 of the conventional CELP type speech coder, similar functions and advantages can be provided even if
this mode is adapted to a CELP type speech coder/decoder based on the structure in FIGS. 19A and 19B or FIG. 20.
Although the random codebook A 211 in this mode has the same structure as shown in FIG. 18, similar functions and advantages can be provided even if the fixed waveform storage section 181 takes
another structure (e.g., in a case where it has four fixed waveforms).
While the description of this mode has been given with reference to a case where the fixed waveform arranging section 182 of the random codebook A 211 has the start position candidate information of
fixed waveforms as shown in Table 8, similar functions and advantages can be provided even for a case where the section 182 has other start position candidate information of fixed waveforms.
Although this mode has been described with reference to a case where the random codebook B 212 is constituted by the random sequence storage section 217 for directly storing a plurality of random
sequences in the memory, similar functions and advantages can be provided even for a case where the random codebook B 212 takes other excitation vector structures (e.g., when it is constituted by
excitation vector generation information with an algebraic structure).
Although this mode has been described as a CELP type speech coder/decoder having two kinds of random codebooks, similar functions and advantages can be provided even in a case of using a CELP type
speech coder/decoder having three or more kinds of random codebooks.
(Fourteenth Mode)
FIG. 22 presents a structural block diagram of a CELP type speech coder according to this mode. The speech coder according to this mode has two kinds of random codebooks. One random codebook has the
structure of the excitation vector generator shown in FIG. 18, and the other one is constituted of a pulse sequences storage section which retains a plurality of pulse sequences. The random codebooks
are adaptively switched from one to the other by using a quantized pitch gain already acquired before random codebook search.
The random codebook A 211, which comprises the fixed waveform storage section 181, fixed waveform arranging section 182 and adding section 183, corresponds to the excitation vector generator in FIG.
18. A random codebook B 221 is comprised of a pulse sequences storage section 222 where a plurality of pulse sequences are stored. The random codebooks A 211 and B 221 are switched from one to the
other by means of a switch 213'. A multiplier 224 outputs an adaptive code vector which is the output of an adaptive codebook 223 multiplied by the pitch gain that has already been acquired at the
time of random codebook search. The output of a pitch gain quantizer 225 is given to the switch 213'.
The operation of the thus constituted CELP type speech coder will be described.
According to the conventional CELP type speech coder, the adaptive codebook 223 is searched first, and the random codebook search is carried out based on the result. This adaptive codebook search is
a process of selecting an optimal adaptive code vector from a plurality of adaptive code vectors stored in the adaptive codebook 223 (vectors each obtained by multiplying an adaptive code vector and
a random code vector by their respective gains and then adding them together). As a result of the process, the code number and pitch gain of an adaptive code vector are generated.
According to the CELP type speech coder of this mode, the pitch gain quantizer 225 quantizes this pitch gain, generating a quantized pitch gain, after which random codebook search will be performed.
The quantized pitch gain obtained by the pitch gain quantizer 225 is sent to the switch 213' for switching between the random codebooks.
The switch 213' connects to the random codebook A 211 when the value of the quantized pitch gain is small, by which it is considered that the input speech is unvoiced, and connects to the random
codebook B 221 when the value of the quantized pitch gain is large, by which it is considered that the input speech is voiced.
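The open-loop switching of this mode reduces to a single comparison; the threshold value below is purely illustrative, since the text only states that a small quantized pitch gain selects the random codebook A and a large one selects the random codebook B.

def choose_codebook(quantized_pitch_gain, threshold=0.5):
    # Small gain: input regarded as unvoiced -> random codebook A 211.
    # Large gain: input regarded as voiced   -> random codebook B 221.
    return "A" if quantized_pitch_gain < threshold else "B"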
When the switch 213' is connected to the random codebook A 211, the fixed waveform arranging section 182 arranges (shifts) the fixed waveforms, read from the fixed waveform storage section 181, at
the positions selected from start position candidates of fixed waveforms respectively, based on start position candidate information for fixed waveforms it has as shown in Table 8. The arranged fixed
waveforms are sent to the adding section 183 and added together to become a random code vector. The random code vector is sent to the synthesis filter 215 after being multiplied by the random code
vector gain. The synthesis filter 215 synthesizes the input random code vector and sends the result to the distortion calculator 216.
The distortion calculator 216 computes coding distortion in the equation 2 by using the target x for random codebook search and the synthesized code vector obtained from the synthesis filter 215.
After computing the distortion, the distortion calculator 216 sends a signal to the fixed waveform arranging section 182. The process from the selection of start position candidates corresponding to
the three channels by the fixed waveform arranging section 182 to the distortion computation by the distortion calculator 216 is repeated for every combination of the start position candidates
selectable by the fixed waveform arranging section 182.
Thereafter, the combination of the start position candidates that minimizes the coding distortion is selected, and the code number which corresponds, one to one, to that combination of the start
position candidates, the then optimal random code vector gain gc and the quantized pitch gain are transferred to a transmitter as a speech code. In this mode, the property of unvoiced sound should be
reflected on fixed waveform patterns to be stored in the fixed waveform storage section 181, before speech coding takes place.
When the switch 213' is connected to the random codebook B 221, a pulse sequence read from the pulse sequences storage section 222 becomes a random code vector. This random code vector is input to
the synthesis filter 215 through the switch 213' and multiplication of the random code vector gain. The synthesis filter 215 synthesizes the input random code vector and sends the result to the
distortion calculator 216.
The distortion calculator 216 computes the coding distortion in the equation 2 by using the random codebook search target x and the synthesized code vector obtained from the synthesis filter 215.
After computing the distortion, the distortion calculator 216 sends a signal to the pulse sequences storage section 222. The process from the selection of the random code vector by the pulse
sequences storage section 222 to the distortion computation by the distortion calculator 216 is repeated for every random code vector selectable by the pulse sequences storage section 222.
Thereafter, the random code vector that minimizes the coding distortion is selected, and the code number of that random code vector, the then optimal random code vector gain gc and the quantized
pitch gain are transferred to the transmitter as a speech code.
The speech decoder according to this mode which is paired with the speech coder of this mode has the random codebook A, the random codebook B, the switch, the random code vector gain and the
synthesis filter having the same structures and arranged in the same way as those in FIG. 22. First, upon reception of the transmitted quantized pitch gain, the decoder determines from its level
whether the switch 213' has been connected to the random codebook A 211 or to the random codebook B 221. Next, based on the code number and the sign of the random code vector, a synthesized
excitation vector is obtained as the output of the synthesis filter.
According to the speech coder/decoder with the above structures, two kinds of random codebooks can be switched adaptively in accordance with the characteristic of an input speech (the level of the transmitted quantized pitch gain is used for the determination in this mode), so that when the input speech is voiced, a pulse sequence can be selected as a random code vector whereas for a strong voiceless property, a random code vector which reflects the property of voiceless sounds can be selected. This can ensure generation of excitation vectors closer to the actual sound property and improvement of synthesized sounds. If the switching is performed in a closed loop as in the above-described mode, the functional effects can be further improved at the cost of increasing the amount of information to be transmitted.
Although this mode has been illustrated as a speech coder/decoder based on the structure in FIG. 2 of the conventional CELP type speech coder, similar functions and advantages can be provided even if
this mode is adapted to a CELP type speech coder/decoder based on the structure in FIGS. 19A and 19B or FIG. 20.
In this mode, a quantized pitch gain acquired by quantizing the pitch gain of an adaptive code vector in the pitch gain quantizer 225 is used as a parameter for switching the switch 213'. A pitch
period calculator may be provided so that a pitch period computed from an adaptive code vector can be used instead.
Although the random codebook A 211 in this mode has the same structure as shown in FIG. 18, similar functions and advantages can be provided even if the fixed waveform storage section 181 takes
another structure (e.g., in a case where it has four fixed waveforms).
While the description of this mode has been given with reference to the case where the fixed waveform arranging section 182 of the random codebook A 211 has the start position candidate information
of fixed waveforms as shown in Table 8, similar functions and advantages can be provided even for a case where the section 182 has other start position candidate information of fixed waveforms.
Although this mode has been described with reference to the case where the random codebook B 212 is constituted by the pulse sequences storage section 222 for directly storing a pulse sequence in the
memory, similar functions and advantages can be provided even for a case where the random codebook B 212 takes other excitation vector structures (e.g., when it is constituted by excitation vector
generation information with an algebraic structure).
Although this mode has been described as a CELP type speech coder/decoder having two kinds of random codebooks, similar functions and advantages can be provided even in a case of using a CELP type
speech coder/decoder having three or more kinds of random codebooks.
(Fifteenth Mode)
FIG. 23 presents a structural block diagram of a CELP type speech coder according to this mode. The speech coder according to this mode has two kinds of random codebooks. One random codebook takes
the structure of the excitation vector generator shown in FIG. 18 and has three fixed waveforms stored in the fixed waveform storage section, and the other one likewise takes the structure of the
excitation vector generator shown in FIG. 18 but has two fixed waveforms stored in the fixed waveform storage section. Those two kinds of random codebooks are switched in a closed loop.
The random codebook A 211, which comprises a fixed waveform storage section A 181 having three fixed waveforms stored therein, fixed waveform arranging section A 182 and adding section 183,
corresponds to the structure of the excitation vector generator in FIG. 18 which however has three fixed waveforms stored in the fixed waveform storage section.
A random codebook B 230 comprises a fixed waveform storage section B 231 having two fixed waveforms stored therein, a fixed waveform arranging section B 232 having start position candidate information
of fixed waveforms as shown in Table 9 and adding section 233, which adds two fixed waveforms, arranged by the fixed waveform arranging section B 232, thereby generating a random code vector. The
random codebook B 230 corresponds to the structure of the excitation vector generator in FIG. 18 which however has two fixed waveforms stored in the fixed waveform storage section.
TABLE 9
Channel number   Sign   Start position candidates for fixed waveforms
CH1              ±1     P1: 0, 4, 8, 12, 16, . . . , 72, 76 or 2, 6, 10, 14, 18, . . . , 74, 78
CH2              ±1     P2: 1, 5, 9, 13, 17, . . . , 73, 77 or 3, 7, 11, 15, 19, . . . , 75, 79
The other structure is the same as that of the above-described thirteenth mode.
The operation of the CELP type speech coder constructed in the above way will be described.
First, the switch 213 is connected to the random codebook A 211, and the fixed waveform arranging section A 182 arranges (shifts) three fixed waveforms, read from the fixed waveform storage section A
181, at the positions selected from start position candidates of fixed waveforms respectively, based on start position candidate information for fixed waveforms it has as shown in Table 8. The
arranged three fixed waveforms are output to the adding section 183 and added together to become a random code vector. This random code vector is sent to the synthesis filter 215 through the switch
213 and the multiplier 214 for multiplying it by the random code vector gain. The synthesis filter 215 synthesizes the input random code vector and sends the result to the distortion calculator 216.
The distortion calculator 216 computes coding distortion in the equation 2 by using the random codebook search target x and the synthesized code vector obtained from the synthesis filter 215.
After computing the distortion, the distortion calculator 216 sends a signal to the fixed waveform arranging section A 182. The process from the selection of start position candidates corresponding
to the three channels by the fixed waveform arranging section A 182 to the distortion computation by the distortion calculator 216 is repeated for every combination of the start position candidates
selectable by the fixed waveform arranging section A 182.
Thereafter, the combination of the start position candidates that minimizes the coding distortion is selected, and the code number which corresponds, one to one, to that combination of the start
position candidates, the then optimal random code vector gain gc and the minimum coding distortion value are memorized.
In this mode, the fixed waveform patterns to be stored in the fixed waveform storage section A 181 before speech coding are what have been acquired through training in such a way as to minimize
distortion under the condition of three fixed waveforms in use.
Next, the switch 213 is connected to the random codebook B 230, and the fixed waveform arranging section B 232 arranges (shifts) two fixed waveforms, read from the fixed waveform storage section B
231, at the positions selected from start position candidates of fixed waveforms respectively, based on start position candidate information for fixed waveforms it has as shown in Table 9. The
arranged two fixed waveforms are output to the adding section 233 and added together to become a random code vector. This random code vector is sent to the synthesis filter 215 through the switch 213
and the multiplier 214 for multiplying it by the random code vector gain. The synthesis filter 215 synthesizes the input random code vector and sends the result to the distortion calculator 216.
The distortion calculator 216 computes coding distortion in the equation 2 by using the random codebook search target X and the synthesized code vector obtained from the synthesis filter 215.
After computing the distortion, the distortion calculator 216 sends a signal to the fixed waveform arranging section B 232. The process from the selection of start position candidates corresponding
to the two channels by the fixed waveform arranging section B 232 to the distortion computation by the distortion calculator 216 is repeated for every combination of the start position candidates
selectable by the fixed waveform arranging section B 232.
Thereafter, the combination of the start position candidates that minimizes the coding distortion is selected, and the code number which corresponds, one to one, to that combination of the start
position candidates, the then optimal random code vector gain gc and the minimum coding distortion value are memorized. In this mode, the fixed waveform patterns to be stored in the fixed waveform
storage section B 231 before speech coding are what have been acquired through training in such a way as to minimize distortion under the condition of two fixed waveforms in use.
Then, the distortion calculator 216 compares the minimum coding distortion value obtained when the switch 213 is connected to the random codebook B 230 with the minimum coding distortion value obtained when the switch 213 is connected to the random codebook A 211; the switch connection information for whichever connection yielded the smaller coding distortion, the corresponding code number and the random code vector gain are determined as speech codes and are sent to the transmitter.
The speech decoder according to this mode has the random codebook A, the random codebook B, the switch, the random code vector gain and the synthesis filter having the same structures and arranged in
the same way as those in FIG. 23, a random codebook to be used, a random code vector and a random code vector gain are determined based on a speech code input from the transmitter, and a synthesized
excitation vector is obtained as the output of the synthesis filter.
According to the speech coder/decoder with the above structures, one of the random code vectors to be generated from the random codebook A and the random code vectors to be generated from the random
codebook B, which minimizes the coding distortion in the equation 2, can be selected in a closed loop, making it possible to generate an excitation vector closer to an actual speech and a
high-quality synthesized speech.
Although this mode has been illustrated as a speech coder/decoder based on the structure in FIG. 2 of the conventional CELP type speech coder, similar functions and advantages can be provided even if
this mode is adapted to a CELP type speech coder/decoder based on the structure in FIGS. 19A and 19B or FIG. 20.
Although this mode has been described with reference to the case where the fixed waveform storage section A 181 of the random codebook A 211 stores three fixed waveforms, similar functions and
advantages can be provided even if the fixed waveform storage section A 181 stores a different number of fixed waveforms (e.g., in a case where it has four fixed waveforms). The same is true of the
random codebook B 230.
While the description of this mode has been given with reference to the case where the fixed waveform arranging section A 182 of the random codebook A 211 has the start position candidate information
of fixed waveforms as shown in Table 8, similar functions and advantages can be provided even for a case where the section 182 has other start position candidate information of fixed waveforms. The
same is applied to the random codebook B 230.
Although this mode has been described as a CELP type speech coder/decoder having two kinds of random codebooks, similar functions and advantages can be provided even in a case of using a CELP type
speech coder/decoder having three or more kinds of random codebooks.
(Sixteenth Mode)
FIG. 24 presents a structural block diagram of a CELP type speech coder according to this mode. The speech coder acquires LPC coefficients by performing autocorrelation analysis and LPC analysis on input speech data 241 in an LPC analyzing section 242, encodes the obtained LPC coefficients to acquire LPC codes, and decodes the obtained LPC codes to yield decoded LPC coefficients.
Next, an excitation vector generator 245 acquires an adaptive code vector and a random code vector from an adaptive codebook 243 and an excitation vector generator 244, and sends them to an LPC
synthesis filter 246. One of the excitation vector generators of the above-described first to fourth and tenth modes is used for the excitation vector generator 244. Further, the LPC synthesis filter
246 filters the two excitation vectors, obtained by the excitation vector generator 245, with the decoded LPC coefficients obtained by the LPC analyzing section 242, thereby yielding two synthesized speeches.
A comparator 247 analyzes a relationship between the two synthesized speeches, obtained by the LPC synthesis filter 246, and the input speech, yielding optimal values (optimal gains) of the two
synthesized speeches, adds the synthesized speeches whose powers have been adjusted with the optimal gains, acquiring a total synthesized speech, and then computes a distance between the total
synthesized speech and the input speech.
Distance computation is also carried out on the input speech and multiple synthesized speeches, which are obtained by causing the excitation vector generator 245 and the LPC synthesis filter 246 to function with respect to all the excitation vector samples that are generated by the adaptive codebook 243 and the excitation vector generator 244. Then, the index of the excitation vector sample which provides the minimum one of the distances obtained from the computation is specified. The obtained optimal gains, the obtained index of the excitation vector sample and two excitation vectors corresponding
to that index are sent to a parameter coding section 248.
The parameter coding section 248 encodes the optimal gains to obtain gain codes, and the gain codes, the LPC codes and the index of the excitation vector sample are all sent to a transmitter 249. An actual
excitation signal is produced from the gain codes and the two excitation vectors corresponding to the index, and an old excitation vector sample is discarded at the same time the excitation signal is
stored in the adaptive codebook 243.
FIG. 25 shows functional blocks of a section in the parameter coding section 248, which is associated with vector quantization of the gain.
The parameter coding section 248 has a parameter converting section 2502 for converting input optimal gains 2501 to a sum of elements and a ratio with respect to the sum to acquire quantization target vectors; a target vector extracting section 2503 for obtaining a target vector by using old decoded code vectors, stored in a decoded vector storage section, and predictive coefficients stored in a predictive coefficients storage section; a decoded vector storage section 2504 where old decoded code vectors are stored; a predictive coefficients storage section 2505 where the predictive coefficients are stored; a distance calculator 2506 for computing distances between a plurality of code vectors stored in a vector codebook and the target vector obtained by the target vector extracting section, by using the predictive coefficients stored in the predictive coefficients storage section; a vector codebook 2507 where a plurality of code vectors are stored; and a comparator 2508, which controls the vector codebook and the distance calculator for comparison of the distances obtained from the distance calculator to acquire the number of the most appropriate code vector, acquires a code vector from the vector codebook based on the obtained number, and updates the content of the decoded vector storage section using that code vector.
A detailed description will now be given of the operation of the thus constituted parameter coding section 248. The vector codebook 2507 where a plurality of general samples (code vectors) of a
quantization target vector are stored should be prepared in advance. This is generally prepared by an LBG algorithm (IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. COM-28, NO. 1, PP 84-95, JANUARY 1980)
based on multiple vectors which are obtained by analyzing multiple speech data.
Coefficients for predictive coding should be stored in the predictive coefficients storage section 2505. The predictive coefficients will be discussed after the algorithm has been described. A value indicating an unvoiced state should be stored as an initial value in the decoded vector storage section 2504. One example would be a code vector with the lowest power.
First, the input optimal gains 2501 (the gain of an adaptive excitation vector and the gain of a random excitation vector) are converted to element vectors (inputs) of a sum and a ratio in the
parameter converting section 2502. The conversion method is illustrated in an equation 40.
P = Ga + Gs
R = Ga / (Ga + Gs)    (40)

where
(Ga, Gs): optimal gains
Ga: gain of an adaptive excitation vector
Gs: gain of a stochastic excitation vector
(P, R): input vector
P: sum
R: ratio.
It is to be noted that Ga above should not necessarily be a positive value. Thus, R may take a negative value. When Ga+Gs becomes negative, a fixed value prepared in advance is substituted.
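A small sketch of the conversion of the equation 40, with the fallback behavior for a non-positive sum; the fallback value 0.5 is an arbitrary placeholder, since the text only says a fixed value prepared in advance is substituted.

```python
def gains_to_sum_ratio(ga, gs, fallback_r=0.5):
    """Equation 40: convert the optimal gains (Ga, Gs) to (sum, ratio)."""
    p = ga + gs
    if p <= 0.0:
        return p, fallback_r  # fixed value substituted when Ga + Gs is not positive
    return p, ga / p
```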
Next, based on the vectors obtained by the parameter converting section 2502, the target vector extracting section 2503 acquires a target vector by using old decoded code vectors, stored in the decoded vector storage section 2504, and predictive coefficients stored in the predictive coefficients storage section 2505. An equation for computing the target vector is given by an equation 41.
Tp = P - Σ(i=1..l) (Upi × pi + Vpi × ri)
Tr = R - Σ(i=1..l) (Uri × pi + Vri × ri)    (41)

where
(Tp, Tr): target vector
(P, R): input vector
(pi, ri): old decoded vectors
Upi, Vpi, Uri, Vri: predictive coefficients (fixed values)
i: index indicating how old the decoded vector is
l: prediction order.
Then, the distance calculator 2506 computes a distance between a target vector obtained by the target vector extracting section 2503 and a code vector stored in the vector codebook 2507 by using the
predictive coefficients stored in the predictive coefficients storage section 2505. An equation for computing the distance is given by an equation 42.
Dn = Wp × (Tp - Up0 × Cpn - Vp0 × Crn)^2 + Wr × (Tr - Ur0 × Cpn - Vr0 × Crn)^2    (42)

where
Dn: distance between the target vector and the n-th code vector
(Tp, Tr): target vector
Up0, Vp0, Ur0, Vr0: predictive coefficients (fixed values)
(Cpn, Crn): code vector
n: the number of the code vector
Wp, Wr: weighting coefficients (fixed) for adjusting the sensitivity to distortion.
Then, the comparator 2508 controls the vector codebook 2507 and the distance calculator 2506 to acquire the number of the code vector which has the shortest distance computed by the distance
calculator 2506 from among a plurality of code vectors stored in the vector codebook 2507, and sets the number as a gain code 2509. Based on the obtained gain code 2509, the comparator 2508 acquires
a decoded vector and updates the content of the decoded vector storage section 2504 using that vector. An equation 43 shows how to acquire a decoded vector.
P = Σ(i=1..l) (Upi × pi + Vpi × ri) + Up0 × Cpn + Vp0 × Crn
R = Σ(i=1..l) (Uri × pi + Vri × ri) + Ur0 × Cpn + Vr0 × Crn    (43)

where
(Cpn, Crn): code vector
(P, R): decoded vector
(pi, ri): old decoded vectors
Upi, Vpi, Uri, Vri: predictive coefficients (fixed values)
i: index indicating how old the decoded vector is
l: prediction order
n: the number of the code vector.
An equation 44 shows the updating scheme.

pi = p(i-1), ri = r(i-1)    (i = l, ..., 1)
p0 = Cpn, r0 = Crn    (44)

where n: code of the gain.
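The target extraction, distance evaluation and state update of the equations 41 to 44 can be sketched as follows. This is a simplified numpy illustration under the reconstructed equations above, not the patent's code; the argument names are assumptions.

```python
import numpy as np

def quantize_gain(P, R, codebook, Up, Vp, Ur, Vr, state_p, state_r,
                  Wp=1.0, Wr=1.0):
    """Predictive gain VQ: equations 41 (target), 42 (distance), 44 (update).

    codebook: array of shape (N, 2) holding (Cpn, Crn).
    Up, Vp, Ur, Vr: predictive coefficients of length l+1; index 0 weights
    the current code vector, indices 1..l weight the old decoded vectors
    held in state_p and state_r (each of length l).
    """
    # Equation 41: remove the predictable part to form the target (Tp, Tr).
    Tp = P - np.dot(Up[1:], state_p) - np.dot(Vp[1:], state_r)
    Tr = R - np.dot(Ur[1:], state_p) - np.dot(Vr[1:], state_r)
    # Equation 42: weighted distance to every code vector, then pick the minimum.
    Cp, Cr = codebook[:, 0], codebook[:, 1]
    Dn = (Wp * (Tp - Up[0] * Cp - Vp[0] * Cr) ** 2
          + Wr * (Tr - Ur[0] * Cp - Vr[0] * Cr) ** 2)
    n = int(np.argmin(Dn))
    # Equation 44: shift the memory and store the chosen code vector at index 0.
    state_p[:] = np.concatenate(([Cp[n]], state_p[:-1]))
    state_r[:] = np.concatenate(([Cr[n]], state_r[:-1]))
    return n
```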
Meanwhile, the decoder, which should previously be provided with a vector codebook, a predictive coefficients storage section and a coded vector storage section similar to those of the coder,
performs decoding through the functions of the comparator of the coder of generating a decoded vector and updating the decoded vector storage section, based on the gain code transmitted from the coder.
A scheme of setting predictive coefficients to be stored in the predictive coefficients storage section 2505 will now be described.
Predictive coefficients are obtained by first quantizing a lot of training speech data, collecting the input vectors obtained from their optimal gains and the decoded vectors at the time of quantization to form a population, and then minimizing the total distortion indicated by the following equation 45 over that population. Specifically, the values of the predictive coefficients (Upi, Vpi, Uri, Vri) are acquired by solving the simultaneous equations derived by partial differentiation of the total distortion with respect to each coefficient.
Total = Σ(t=0..T) { Wp × (Pt - Σ(i=0..l) (Upi × p(t,i) + Vpi × r(t,i)))^2 + Wr × (Rt - Σ(i=0..l) (Uri × p(t,i) + Vri × r(t,i)))^2 }
p(t,0) = Cpn(t), r(t,0) = Crn(t)    (45)

where
Total: total distortion
t: time (frame number)
T: the number of pieces of data in the population
(Pt, Rt): optimal gain at time t
(p(t,i), r(t,i)): decoded vector at time t
Upi, Vpi, Uri, Vri: predictive coefficients (fixed values)
i: index indicating how old the decoded vector is
l: prediction order
(Cpn(t), Crn(t)): code vector at time t
n: the number of the code vector
Wp, Wr: weighting coefficients (fixed) for adjusting the sensitivity to distortion.
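Under the reconstruction of the equation 45 above, the training reduces to an ordinary least-squares fit per predicted component. A hedged numpy sketch (the array layout is an assumption):

```python
import numpy as np

def train_predictive_coefficients(targets, histories):
    """Least-squares fit of one row of predictive coefficients.

    targets:   array (T,) of Pt (or Rt) over the training population.
    histories: array (T, 2*(l+1)) whose columns are p(t,0..l) and r(t,0..l),
               with p(t,0) = Cpn(t) and r(t,0) = Crn(t) as in equation 45.
    """
    coeffs, *_ = np.linalg.lstsq(histories, targets, rcond=None)
    return coeffs  # (Up0..Upl, Vp0..Vpl) for targets = Pt, likewise for Rt
```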
According to such a vector quantization scheme, the optimal gains can be vector-quantized as they are; the feature of the parameter converting section permits the use of the correlation between the relative levels of the power and each gain; and the features of the decoded vector storage section, the predictive coefficients storage section, the target vector extracting section and the distance calculator ensure predictive coding of the gains using the mutual relations between the power and the two gains. Those features allow the correlation among the parameters to be utilized sufficiently.
(Seventeenth Mode)
FIG. 26 presents a structural block diagram of a parameter coding section of a speech coder according to this mode. According to this mode, vector quantization is performed while evaluating gain-quantization-originated distortion from two synthesized speeches corresponding to the index of an excitation vector and a perceptually weighted input speech.

As shown in FIG. 26, the parameter coding section has a parameter calculator 2602, which computes parameters necessary for distance computation from the input data (a perceptually weighted input speech, a perceptually weighted LPC synthesis of the adaptive code vector and a perceptually weighted LPC synthesis of the random code vector 2601), and further from a decoded vector stored in a decoded vector storage section and predictive coefficients stored in a predictive coefficients storage section; a decoded vector storage section 2603 where old decoded code vectors are stored; a predictive coefficients storage section 2604 where predictive coefficients are stored; a distance calculator 2605 for computing the coding distortion of the time when decoding is implemented with each of a plurality of code vectors stored in a vector codebook, by using the predictive coefficients stored in the predictive coefficients storage section; a vector codebook 2606 where a plurality of code vectors are stored; and a comparator 2607, which controls the vector codebook and the distance calculator for comparison of the coding distortions obtained from the distance calculator to acquire the number of the most appropriate code vector, acquires a code vector from the vector codebook based on the obtained number, and updates the content of the decoded vector storage section using that code vector.
A description will now be given of the vector quantizing operation of the thus constituted parameter coding section. The vector codebook 2606 where a plurality of general samples (code vectors) of a
quantization target vector are stored should be prepared in advance. This is generally prepared by an LBG algorithm (IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. COM-28, NO. 1, PP 84-95, JANUARY 1980)
or the like based on multiple vectors which are obtained by analyzing multiple speech data. Coefficients for predictive coding should be stored in the predictive coefficients storage section 2604.
Those coefficients in use are the same predictive coefficients as those stored in the predictive coefficients storage section 2505 discussed in the sixteenth mode. A value indicating an unvoiced state should be stored as an initial value in the decoded vector storage section 2603.
First, the parameter calculator 2602 computes parameters necessary for distance computation from the input perceptually weighted input speech, the perceptually weighted LPC synthesis of the adaptive code vector and the perceptually weighted LPC synthesis of the random code vector, and further from the decoded vector stored in the decoded vector storage section 2603 and the predictive coefficients stored in the predictive coefficients storage section 2604. The distances in the distance calculator are based on the following equation 46.
En = Σ(i=0..I) (Xi - Gan × Ai - Gsn × Si)^2
Gan = Orn × exp(Opn)
Gsn = (1 - Orn) × exp(Opn)
Opn = Yp + Up0 × Cpn + Vp0 × Crn
Orn = Yr + Ur0 × Cpn + Vr0 × Crn
Yp = Σ(j=1..J) (Upj × pj + Vpj × rj)
Yr = Σ(j=1..J) (Urj × pj + Vrj × rj)    (46)

where
Gan, Gsn: decoded gains
(Opn, Orn): decoded vector
(Yp, Yr): predictive vector
En: coding distortion when the n-th gain code vector is used
Xi: perceptually weighted input speech
Ai: perceptually weighted LPC synthesis of the adaptive code vector
Si: perceptually weighted LPC synthesis of the stochastic code vector
n: the number of the code vector
i: index of excitation data
I: subframe length (coding unit of the input speech)
(Cpn, Crn): code vector
(pj, rj): old decoded vectors
Upj, Vpj, Urj, Vrj: predictive coefficients (fixed values)
j: index indicating how old the decoded vector is
J: prediction order.
Therefore, the parameter calculator 2602 computes those portions which do not depend on the number of the code vector. What is to be computed are the predictive vectors and the correlations or powers among the perceptually weighted input speech and the two synthesized speeches. An equation for the computation is given by an equation 47.
Yp = Σ(j=1..J) (Upj × pj + Vpj × rj)
Yr = Σ(j=1..J) (Urj × pj + Vrj × rj)
Dxx = Σ(i=0..I) Xi × Xi
Dxa = Σ(i=0..I) Xi × Ai × 2
Dxs = Σ(i=0..I) Xi × Si × 2
Daa = Σ(i=0..I) Ai × Ai
Das = Σ(i=0..I) Ai × Si × 2
Dss = Σ(i=0..I) Si × Si    (47)

where
(Yp, Yr): predictive vector
Dxx, Dxa, Dxs, Daa, Das, Dss: values of the correlations or powers among the input speech and the synthesized speeches
Xi: perceptually weighted input speech
Ai: perceptually weighted LPC synthesis of the adaptive code vector
Si: perceptually weighted LPC synthesis of the stochastic code vector
i: index of excitation data
I: subframe length (coding unit of the input speech)
(pj, rj): old decoded vectors
Upj, Vpj, Urj, Vrj: predictive coefficients (fixed values)
j: index indicating how old the decoded vector is
J: prediction order.
Then, the distance calculator 2605 computes the coding distortion for each code vector stored in the vector codebook 2606 by using the parameters obtained by the parameter calculator 2602 and the predictive coefficients stored in the predictive coefficients storage section 2604. An equation for computing the distortion is given by an equation 48.
En = Dxx + Gan^2 × Daa + Gsn^2 × Dss - Gan × Dxa - Gsn × Dxs + Gan × Gsn × Das
Opn = Yp + Up0 × Cpn + Vp0 × Crn
Orn = Yr + Ur0 × Cpn + Vr0 × Crn    (48)

where
En: coding distortion when the n-th gain code vector is used
Dxx, Dxa, Dxs, Daa, Das, Dss: values of the correlations or powers among the input speech and the synthesized speeches
Gan, Gsn: decoded gains (obtained from Opn and Orn as in the equation 46)
(Opn, Orn): decoded vector
(Yp, Yr): predictive vector
Up0, Vp0, Ur0, Vr0: predictive coefficients (fixed values)
(Cpn, Crn): code vector
n: the number of the code vector.
Actually, Dxx does not depend on the number n of the code vector so that its addition can be omitted.
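Putting the equations 46 to 48 together, the codebook search becomes a scalar evaluation per code vector over the precomputed correlations. A minimal numpy sketch (Dxx omitted as noted above; the names are assumptions):

```python
import numpy as np

def search_gain_code(Yp, Yr, codebook, Up0, Vp0, Ur0, Vr0,
                     Dxa, Dxs, Daa, Das, Dss):
    """Evaluate equation 48 for every code vector and return the gain code."""
    Cp, Cr = codebook[:, 0], codebook[:, 1]
    Op = Yp + Up0 * Cp + Vp0 * Cr          # decoded vector, first element
    Or = Yr + Ur0 * Cp + Vr0 * Cr          # decoded vector, second element
    Ga = Or * np.exp(Op)                   # decoded gains of equation 46
    Gs = (1.0 - Or) * np.exp(Op)
    En = (Ga ** 2 * Daa + Gs ** 2 * Dss
          - Ga * Dxa - Gs * Dxs + Ga * Gs * Das)  # Dxx omitted: constant in n
    return int(np.argmin(En))
```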
Then, the comparator 2607 controls the vector codebook 2606 and the distance calculator 2605 to acquire the number of the code vector which has the shortest distance computed by the distance
calculator 2605 from among a plurality of code vectors stored in the vector codebook 2606, and sets the number as a gain code 2608. Based on the obtained gain code 2608, the comparator 2607 acquires
a decoded vector and updates the content of the decoded vector storage section 2603 using that vector. The code vector corresponding to the gain code is read from the vector codebook 2606, and the updating scheme of the equation 44 is used.
Meanwhile, the speech decoder should previously be provided with a vector codebook, a predictive coefficients storage section and a coded vector storage section similar to those of the speech coder,
and performs decoding through the functions of the comparator of the coder of generating a decoded vector and updating the decoded vector storage section, based on the gain code transmitted from the coder.
According to the thus constituted mode, vector quantization can be performed while evaluating gain-quantization-originated distortion from the two synthesized speeches corresponding to the index of the excitation vector and the input speech; the correlation between the relative levels of the power and each gain can be used; and predictive coding of the gains using the mutual relations between the power and the two gains is ensured by the decoded vector storage section, the predictive coefficients storage section and the distance calculator. This allows the correlation among the parameters to be utilized sufficiently.
(Eighteenth Mode)
FIG. 27 presents a structural block diagram of the essential portions of a noise canceler according to this mode. This noise canceler is installed in the above-described speech coder. For example, it
is placed at the preceding stage of the buffer 1301 in the speech coder shown in FIG. 13.
The noise canceler shown in FIG. 27 comprises an A/D converter 272, a noise cancellation coefficient storage section 273, a noise cancellation coefficient adjusting section 274, an input waveform
setting section 275, an LPC analyzing section 276, a Fourier transform section 277, a noise canceling/spectrum compensating section 278, a spectrum stabilizing section 279, an inverse Fourier
transform section 280, a spectrum enhancing section 281, a waveform matching section 282, a noise estimating section 284, a noise spectrum storage section 285, a previous spectrum storage section
286, a random phase storage section 287, a previous waveform storage section 288, and a maximum power storage section 289.
To begin with, initial settings will be discussed. Table 10 shows the names of fixed parameters and setting examples.
TABLE 10: Fixed Parameters and Setting Examples
frame length: 160 (20 msec for 8-kHz sampling data)
pre-read data length: 80 (10 msec for the above data)
FFT order: 256
LPC prediction order: 10
sustaining number of noise spectrum reference: 30
designated minimum power: 20.0
AR enhancement coefficient 0: 0.5
MA enhancement coefficient 0: 0.8
high-frequency enhancement coefficient 0: 0.4
AR enhancement coefficient 1-0: 0.66
MA enhancement coefficient 1-0: 0.64
AR enhancement coefficient 1-1: 0.7
MA enhancement coefficient 1-1: 0.6
high-frequency enhancement coefficient 1: 0.3
power enhancement coefficient: 1.2
noise reference power: 20000.0
unvoiced segment power reduction coefficient: 0.3
compensation power increase coefficient: 2.0
number of consecutive noise references: 5
noise cancellation coefficient training coefficient: 0.8
unvoiced segment detection coefficient: 0.05
designated noise cancellation coefficient: 1.5
Phase data for adjusting the phase should have been stored in the random phase storage section 287. Those are used to rotate the phase in the spectrum stabilizing section 279. Table 11 shows a case
where there are eight kinds of phase data.
TABLE 11: Phase Data
(-0.51, 0.86), (0.98, -0.17), (0.30, 0.95), (-0.53, -0.84), (-0.94, -0.34), (0.70, 0.71), (-0.22, 0.97), (0.38, -0.92)
Further, a counter (random phase counter) for using the phase data should have been stored in the random phase storage section 287 too. This value should have been initialized to 0 before storage.
Next, the static RAM area is set. Specifically, the noise cancellation coefficient storage section 273, the noise spectrum storage section 285, the previous spectrum storage section 286, the previous
waveform storage section 288 and the maximum power storage section 289 are cleared. The following will discuss the individual storage sections and a setting example.
The noise cancellation coefficient storage section 273 is an area for storing a noise cancellation coefficient whose initial value stored is 20.0. The noise spectrum storage section 285 is an area
for storing, for each frequency, mean noise power, a mean noise spectrum, a compensation noise spectrum for the first candidate, a compensation noise spectrum for the second candidate, and a frame
number (sustaining number) indicating how many frames earlier the spectrum value of each frequency has changed; a sufficiently large value for the mean noise power, designated minimum power for the
mean noise spectrum, and sufficiently large values for the compensation noise spectra and the sustaining number should be stored as initial values.
The previous spectrum storage section 286 is an area for storing compensation noise power, power (full range, intermediate range) of a previous frame (previous frame power), smoothing power (full range, intermediate range) of a previous frame (previous smoothing power), and a consecutive noise number; a sufficiently large value for the compensation noise power, 0.0 for both the previous frame power and the previous smoothing power, and the number of consecutive noise references as the consecutive noise number should be stored.
The previous waveform storage section 288 is an area for storing data of the output signal of the previous frame by the length of the last pre-read data for matching of the output signal, and all 0
should be stored as an initial value. The spectrum enhancing section 281, which executes ARMA and high-frequency enhancement filtering, should have the statuses of the respective filters cleared to 0
for that purpose. The maximum power storage section 289 is an area for storing the maximum power of the input signal, and should have 0 stored as the maximum power.
Then, the noise cancellation algorithm will be explained block by block with reference to FIG. 27.
First, an analog input signal 271 including a speech is subjected to A/D conversion in the A/D converter 272, and is input by one frame length+pre-read data length (160+80=240 points in the above
setting example). The noise cancellation coefficient adjusting section 274 computes a noise cancellation coefficient and a compensation coefficient from an equation 49 based on the noise cancellation
coefficient stored in the noise cancellation coefficient storage section 273, a designated noise cancellation coefficient, a learning coefficient for the noise cancellation coefficient, and a
compensation power increase coefficient. The obtained noise cancellation coefficient is stored in the noise cancellation coefficient storage section 273, the input signal obtained by the A/D
converter 272 is sent to the input waveform setting section 275, and the compensation coefficient and noise cancellation coefficient are sent to the noise estimating section 284 and the noise
canceling/spectrum compensating section 278.
q = q × C + Q × (1 - C)
r = Q / q × D    (49)

where
q: noise cancellation coefficient
Q: designated noise cancellation coefficient
C: learning coefficient for the noise cancellation coefficient
r: compensation coefficient
D: compensation power increase coefficient.
The noise cancellation coefficient is a coefficient indicating a rate of decreasing noise, the designated noise cancellation coefficient is a fixed coefficient previously designated, the learning
coefficient for the noise cancellation coefficient is a coefficient indicating a rate by which the noise cancellation coefficient approaches the designated noise cancellation coefficient, the
compensation coefficient is a coefficient for adjusting the compensation power in the spectrum compensation, and the compensation power increase coefficient is a coefficient for adjusting the
compensation coefficient.
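A small sketch of the per-frame adjustment of the reconstructed equation 49, using the Table 10 constants as defaults; the update form of q follows the reconstruction above, which is an assumption.

```python
def adjust_cancellation_coefficient(q, Q=1.5, C=0.8, D=2.0):
    """Equation 49: pull q toward the designated coefficient Q and derive
    the compensation coefficient r (defaults taken from Table 10)."""
    q = q * C + Q * (1.0 - C)
    r = Q / q * D
    return q, r
```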
In the input waveform setting section 275, the input signal from the A/D converter 272 is written, starting from the end, in a memory arrangement whose length is a power of 2, in such a way that FFT (Fast Fourier Transform) can be carried out; the front portion should be filled with 0. In the above setting example, 0 is written at positions 0 to 15 of the arrangement with a length of 256, and the input signal is written at positions 16 to 255. This arrangement is used as the real number portion for the 256-point (2^8) FFT. An arrangement having the same length as the real number portion is prepared for the imaginary number portion, and all 0 should be written there.
In the LPC analyzing section 276, a Hamming window is applied to the real number area set in the input waveform setting section 275, autocorrelation analysis is performed on the Hamming-windowed waveform to acquire autocorrelation values, and autocorrelation-based LPC analysis is performed to acquire linear predictive coefficients. Further, the obtained linear predictive coefficients are sent to the spectrum enhancing section 281.
The Fourier transform section 277 conducts discrete Fourier transform by FFT using the memory arrangement of the real number portion and the imaginary number portion, obtained by the input waveform
setting section 275. The sum of the absolute values of the real number portion and the imaginary number portion of the obtained complex spectrum is computed to acquire the pseudo amplitude spectrum
(input spectrum hereinafter) of the input signal. Further, the total sum of the input spectrum value of each frequency (input power hereinafter) is obtained and sent to the noise estimating section
284. The complex spectrum itself is sent to the spectrum stabilizing section 279.
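The zero-padded framing and pseudo amplitude spectrum can be sketched with numpy as follows. The 240-sample frame and 256-point FFT follow the setting example; summing the power over the non-redundant half of the spectrum is an interpretation.

```python
import numpy as np

def input_spectrum(frame, fft_len=256):
    """Zero-pad the 240-sample frame to 256 points and take the pseudo
    amplitude spectrum |Re| + |Im| in place of a true magnitude."""
    buf = np.zeros(fft_len)
    buf[fft_len - len(frame):] = frame      # input written at the end (16..255)
    spec = np.fft.fft(buf)
    pseudo_amp = np.abs(spec.real) + np.abs(spec.imag)
    # Sum over the non-redundant half (the spectrum is symmetric).
    input_power = pseudo_amp[:fft_len // 2 + 1].sum()
    return spec, pseudo_amp, input_power
```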
A process in the noise estimating section 284 will now be discussed.
The noise estimating section 284 compares the input power obtained by the Fourier transform section 277 with the maximum power value stored in the maximum power storage section 289, and stores the input power value as the maximum power value in the maximum power storage section 289 when the stored maximum power is smaller. If at least one of the following cases is satisfied, noise estimation is performed, and if none of them is met, noise estimation is not carried out.
(1) The input power is smaller than the maximum power multiplied by an unvoiced segment detection coefficient.
(2) The noise cancellation coefficient is larger than the designated noise cancellation coefficient plus 0.2.
(3) The input power is smaller than a value obtained by multiplying the mean noise power, obtained from the noise spectrum storage section 285, by 1.6.
The noise estimating algorithm in the noise estimating section 284 will now be discussed.
First, the sustaining numbers of all the frequencies for the first and second candidates stored in the noise spectrum storage section 285 are updated (incremented by 1). Then, the sustaining number of each frequency for the first candidate is checked, and when it is larger than the previously set sustaining number of noise spectrum reference, the compensation spectrum and sustaining number for the second candidate are set as those for the first candidate, and the compensation spectrum of the third candidate is set as that of the second candidate with the sustaining number set to 0. Note that in the replacement of the compensation spectrum of the second candidate, memory can be saved by not storing the third candidate and substituting a value slightly larger than the second candidate. In this mode, a spectrum which is 1.4 times greater than the compensation spectrum of the second candidate is substituted.
After renewing the sustaining numbers, the compensation noise spectrum is compared with the input spectrum for each frequency. First, the input spectrum of each frequency is compared with the compensation noise spectrum of the first candidate, and when the input spectrum is smaller, the compensation noise spectrum and sustaining number for the first candidate are set as those for the second candidate, and the input spectrum is set as the compensation spectrum of the first candidate with the sustaining number set to 0. In other cases than the mentioned condition, the input spectrum is compared with the compensation noise spectrum of the second candidate, and when the input spectrum is smaller, the input spectrum is set as the compensation spectrum of the second candidate with the sustaining number set to 0. Then, the obtained compensation spectra and sustaining numbers of the first and second candidates are stored in the noise spectrum storage section 285.
At the same time, the mean noise spectrum is updated according to the following equation 50.
si = si × g + Si × (1 - g)    (50)

where
si: mean noise spectrum
Si: input spectrum
g: 0.9 (when the input power is larger than half the mean noise power); 0.5 (when the input power is equal to or smaller than half the mean noise power)
i: number of the frequency.
The mean noise spectrum is a pseudo mean noise spectrum, and the coefficient g in the equation 50 adjusts the speed of learning the mean noise spectrum. That is, when the input power is smaller than the noise power, the segment is likely to be noise-only, so the learning speed is increased; otherwise, the segment is likely to contain speech, so the learning speed is reduced.
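A short sketch of the adaptive update in the equation 50 (variable names are assumptions):

```python
def update_mean_noise(mean_noise, input_spec, input_power, mean_noise_power):
    """Equation 50: leaky average; learn fast (g = 0.5) when the frame looks
    like noise, slowly (g = 0.9) when it probably contains speech."""
    g = 0.9 if input_power > 0.5 * mean_noise_power else 0.5
    return mean_noise * g + input_spec * (1.0 - g)
```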
Then, the total of the values of the individual frequencies of the mean noise spectrum is obtained to be the mean noise power. The compensation noise spectrum, mean noise spectrum and mean noise
power are stored in the noise spectrum storage section 285.
In the above noise estimating process, the capacity of the RAM constituting the noise spectrum storage section 285 can be saved by making a noise spectrum of one frequency correspond to the input spectra of a plurality of frequencies. As an example, consider the RAM capacity of the noise spectrum storage section 285 when a noise spectrum of one frequency is estimated from the input spectra of four frequencies with the 256-point FFT used in this mode. Considering that the (pseudo) amplitude spectrum is symmetrical with respect to the frequency axis, making an estimation for all the frequencies requires storing spectra of 128 frequencies and 128 sustaining numbers, thus requiring a RAM capacity of a total of 768 words, i.e., 128 (frequencies) × 2 (spectrum and sustaining number) × 3 (first and second candidates for compensation, and mean).

When a noise spectrum of one frequency is made to correspond to the input spectra of four frequencies, by contrast, the required RAM capacity is a total of 192 words, i.e., 32 (frequencies) × 2 (spectrum and sustaining number) × 3 (first and second candidates for compensation, and mean). It has been confirmed through experiments that in this 1-to-4 case the performance hardly deteriorates even though the frequency resolution of the noise spectrum decreases. Because this means does not estimate a noise spectrum from the spectrum of a single frequency, it also has the effect of preventing the spectrum from being erroneously estimated as a noise spectrum when a stationary sound (sine wave, vowel or the like) continues for a long period of time.
A description will now be given of a process in the noise canceling/spectrum compensating section 278.
A result of multiplying the mean noise spectrum, stored in the noise spectrum storage section 285, by the noise cancellation coefficient obtained by the noise cancellation coefficient adjusting section 274 is subtracted from the input spectrum (the result is hereinafter called the spectrum difference). When the RAM capacity of the noise spectrum storage section 285 is saved as described in the explanation of the noise estimating section 284, a result of multiplying the mean noise spectrum of the frequency corresponding to the input spectrum by the noise cancellation coefficient is subtracted. When the spectrum difference becomes negative, compensation is carried out by substituting a value obtained by multiplying the first candidate of the compensation noise spectrum, stored in the noise spectrum storage section 285, by the compensation coefficient obtained by the noise cancellation coefficient adjusting section 274. This is performed for every frequency. Further, flag data is prepared for each frequency so that the frequencies at which the spectrum difference has been compensated can be identified. For example, there is one area for each frequency, and 0 is set in case of no compensation while 1 is set when compensation has been carried out. This flag data is sent together with the spectrum difference to the spectrum stabilizing section 279. Furthermore, the total number of compensated frequencies (compensation number hereinafter) is acquired by checking the values of the flag data, and it is sent to the spectrum stabilizing section 279 too.
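A minimal numpy sketch of the per-frequency subtraction and compensation just described (variable names are assumptions):

```python
import numpy as np

def subtract_noise(input_spec, mean_noise, comp_noise_1st, q, r):
    """Spectrum difference with compensation of negative frequencies.
    Returns the difference, the per-frequency flags and the compensation count."""
    diff = input_spec - q * mean_noise
    flags = diff < 0.0
    diff[flags] = r * comp_noise_1st[flags]   # substitute the compensation value
    return diff, flags.astype(int), int(flags.sum())
```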
A process in the spectrum stabilizing section 279 will be discussed below. This process serves to reduce allophone feeling mainly in segments which do not contain speech.
First, the sum of the spectrum differences of the individual frequencies obtained from the noise canceling/spectrum compensating section 278 is computed to obtain two kinds of current frame powers, one for the full range and the other for the intermediate range. For the full range, the current frame power is obtained over all the frequencies (called the full range; 0 to 128 in this mode). For the intermediate range, the current frame power is obtained over a perceptually important intermediate band (called the intermediate range; 16 to 79 in this mode).
Likewise, the sum of the compensation noise spectra for the first candidate, stored in the noise spectrum storage section 285, is acquired as the current frame noise power (full range, intermediate range). When the compensation number obtained from the noise canceling/spectrum compensating section 278 is sufficiently large and at least one of the following three conditions is met, the current frame is determined to be a noise-only segment and the spectrum stabilizing process is performed.
(1) The input power is smaller than the maximum power multiplied by an unvoiced segment detection coefficient.
(2) The current frame power (intermediate range) is smaller than the current frame noise power (intermediate range) multiplied by 5.0.
(3) The input power is smaller than noise reference power.
In a case where the stabilizing process is not conducted, the consecutive noise number stored in the previous spectrum storage section 286 is decremented by 1 when it is positive, the current frame noise power (full range, intermediate range) is set as the previous frame power (full range, intermediate range), and they are stored in the previous spectrum storage section 286 before proceeding to the phase adjusting process.
The spectrum stabilizing process will now be discussed. The purpose for this process is to stabilize the spectrum in an unvoiced segment (speech-less and noise-only segment) and reduce the power.
There are two kinds of processes, and a process 1 is performed when the consecutive noise number is smaller than the number of consecutive noise references while a process 2 is performed otherwise.
The two processes will be described as follows.
(Process 1)
The consecutive noise number stored in the previous spectrum storage section 286 is incremented by 1, and the current frame noise power (full range, intermediate range) is set as the previous frame
power (full range, intermediate range) and they are stored in the previous spectrum storage section 286 before proceeding to the phase adjusting process.
(Process 2)
The previous frame power, the previous frame smoothing power and the unvoiced segment power reduction coefficient, stored in the previous spectrum storage section 286, are referred to and are changed according to an equation 51.

Dd80 = D80 × 0.5 + Dd80 × 0.5
Dd129 = D129 × 0.5 + Dd129 × 0.5    (51)

where
Dd80: previous frame smoothing power (intermediate range)
D80: previous frame power (intermediate range)
Dd129: previous frame smoothing power (full range)
D129: previous frame power (full range)
A80: current frame noise power (intermediate range)
A129: current frame noise power (full range).
Then, those powers are reflected on the spectrum differences. Therefore, two coefficients, one to be multiplied in the intermediate range (coefficient 1 hereinafter) and the other to be multiplied in
the full range (coefficient 2 hereinafter), are computed. First, the coefficient 1 is computed from an equation 52.
r1 = D80 / A80    (when A80 > 0)
r1 = 1.0    (when A80 ≦ 0)    (52)

where
r1: coefficient 1
D80: previous frame power (intermediate range)
A80: current frame noise power (intermediate range).
As the coefficient 2 is influenced by the coefficient 1, its acquisition procedure is slightly more complicated. The procedure is illustrated below.

(1) When the previous frame smoothing power (full range) is smaller than the previous frame power (intermediate range), or when the current frame noise power (full range) is smaller than the current frame noise power (intermediate range), the flow goes to (2), but goes to (3) otherwise.

(2) The coefficient 2 is set to 0.0, and the previous frame power (full range) is set as the previous frame power (intermediate range); then the flow goes to (6).

(3) When the current frame noise power (full range) is equal to the current frame noise power (intermediate range), the flow goes to (4), but goes to (5) otherwise.

(4) The coefficient 2 is set to 1.0, and then the flow goes to (6).

(5) The coefficient 2 is acquired from the following equation 53, and then the flow goes to (6).
r2 = (D129 - D80) / (A129 - A80)    (53)

where
r2: coefficient 2
D129: previous frame power (full range)
D80: previous frame power (intermediate range)
A129: current frame noise power (full range)
A80: current frame noise power (intermediate range).
(6) The computation of the coefficient 2 is terminated.
The coefficients 1 and 2 obtained in the above algorithm always have their upper limits clipped to 1.0 and their lower limits clipped to the unvoiced segment power reduction coefficient. A value obtained by multiplying the spectrum difference of each intermediate-range frequency (16 to 79 in this example) by the coefficient 1 is set as the spectrum difference, and a value obtained by multiplying the spectrum difference of each frequency in the full range excluding the intermediate range (0 to 15 and 80 to 128 in this example) by the coefficient 2 is set as the spectrum difference. Accordingly, the previous frame power (full range, intermediate range) is converted by the following equation 54.
D80 = A80 × r1
D129 = D80 + (A129 - A80) × r2    (54)

where
r1: coefficient 1
r2: coefficient 2
D80: previous frame power (intermediate range)
A80: current frame noise power (intermediate range)
D129: previous frame power (full range)
A129: current frame noise power (full range).
The various sorts of power data, etc. obtained in this manner are all stored in the previous spectrum storage section 286, and the process 2 is then terminated.
The spectrum stabilization by the spectrum stabilizing section 279 is carried out in the above manner.
Next, the phase adjusting process will be explained. While the phase is not changed in principle in the conventional spectrum subtraction, a process of altering the phase at random is executed here when the spectrum of a frequency has been compensated at the time of cancellation. This process enhances the randomness of the remaining noise, making it difficult for the noise to give a perceptually adverse impression.
First, the random phase counter stored in the random phase storage section 287 is obtained. Then, the flag data (indicating the presence/absence of compensation) of all the frequencies are referred
to, and the phase of the complex spectrum obtained by the Fourier transform section 277 is rotated using the following equation 55 when compensation has been performed.
Bs = Si × Rc - Ti × R(c+1)
Bt = Si × R(c+1) + Ti × Rc
Si = Bs
Ti = Bt    (55)

where
Si, Ti: complex spectrum (real and imaginary parts)
i: index indicating the frequency
R: random phase data
c: random phase counter
Bs, Bt: registers for computation.
In the equation 55, two random phase data are used as a pair. Every time the process is performed once, the random phase counter is incremented by 2, and is reset to 0 when it reaches the upper limit (16 in this mode). The random phase counter is stored in the random phase storage section 287, and the acquired complex spectrum is sent to the inverse Fourier transform section 280. Further, the total of the spectrum differences (spectrum difference power hereinafter) is computed and sent to the spectrum enhancing section 281.
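A sketch of the rotation of the reconstructed equation 55, applied only at compensated frequencies; treating the Table 11 pairs as unit phasors (cosine, sine) and advancing the counter per rotation are interpretations.

```python
PHASES = [(-0.51, 0.86), (0.98, -0.17), (0.30, 0.95), (-0.53, -0.84),
          (-0.94, -0.34), (0.70, 0.71), (-0.22, 0.97), (0.38, -0.92)]

def randomize_phase(spec, flags, counter):
    """Rotate the complex spectrum of every compensated frequency by a stored
    unit phasor (equation 55); the counter advances by 2 per rotation."""
    for i, flagged in enumerate(flags):
        if flagged:
            c, s = PHASES[counter // 2]
            re, im = spec[i].real, spec[i].imag
            spec[i] = complex(re * c - im * s, re * s + im * c)
            counter = (counter + 2) % 16      # wrap at the upper limit
    return spec, counter
```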
The inverse Fourier transform section 280 constructs a new complex spectrum based on the amplitude of the spectrum difference and the phase of the complex spectrum, obtained by the spectrum
stabilizing section 279, and carries out inverse Fourier transform using FFT. (The yielded signal is called a first order output signal.) The obtained first order output signal is sent to the
spectrum enhancing section 281.
Next, a process in the spectrum enhancing section 281 will be discussed.
First, the mean noise power stored in the noise spectrum storage section 285, the spectrum difference power obtained by the spectrum stabilizing section 279 and the noise reference power, which is a constant, are referred to in order to select an MA enhancement coefficient and an AR enhancement coefficient. The selection is implemented by evaluating the following two conditions.
(Condition 1)
The spectrum difference power is greater than a value obtained by multiplying the mean noise power, stored in the noise spectrum storage section 285, by 0.6, and the mean noise power is greater than
the noise reference power.
(Condition 2)
The spectrum difference power is greater than the mean noise power.
When the condition 1 is met, this segment is a "voiced segment": the MA enhancement coefficient is set to the MA enhancement coefficient 1-1, the AR enhancement coefficient is set to the AR enhancement coefficient 1-1, and the high-frequency enhancement coefficient is set to the high-frequency enhancement coefficient 1. When the condition 1 is not satisfied but the condition 2 is met, this segment is an "unvoiced segment": the MA enhancement coefficient is set to the MA enhancement coefficient 1-0, the AR enhancement coefficient is set to the AR enhancement coefficient 1-0, and the high-frequency enhancement coefficient is set to 0. When neither the condition 1 nor the condition 2 is satisfied, this segment is an "unvoiced, noise-only segment": the MA enhancement coefficient is set to the MA enhancement coefficient 0, the AR enhancement coefficient is set to the AR enhancement coefficient 0, and the high-frequency enhancement coefficient is set to the high-frequency enhancement coefficient 0.
Using the linear predictive coefficients obtained from the LPC analyzing section 276, the MA enhancement coefficient and the AR enhancement coefficient, an MA coefficient and an AR coefficient of the extreme enhancement filter are computed based on the following equation 56.

α(ma)i = αi × β^i
α(ar)i = αi × γ^i    (56)

where
α(ma)i: MA coefficient
α(ar)i: AR coefficient
αi: linear predictive coefficient
β: MA enhancement coefficient
γ: AR enhancement coefficient
i: number.
Then, the first order output signal acquired by the inverse Fourier transform section 280 is put through the extreme enhancement filter using the MA coefficient and AR coefficient. The transfer
function of this filter is given by the following equation 57.
(1 + α(ma)1 × Z^-1 + α(ma)2 × Z^-2 + ... + α(ma)j × Z^-j) / (1 + α(ar)1 × Z^-1 + α(ar)2 × Z^-2 + ... + α(ar)j × Z^-j)    (57)

where
α(ma)j: MA coefficients
α(ar)j: AR coefficients
j: order.
Further, to enhance the high frequency component, high-frequency enhancement filtering is performed by using the high-frequency enhancement coefficient. The transfer function of this filter is given by the following equation 58.

1 - δ × Z^-1    (58)

where
δ: high-frequency enhancement coefficient.
A signal obtained through the above process is called a second order output signal. The filter status is saved in the spectrum enhancing section 281.
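The cascade of the equations 56 to 58 can be sketched with scipy; the equation 58 body follows the reconstruction above, and carrying the filter states across frames, as the text requires, is left out for brevity.

```python
import numpy as np
from scipy.signal import lfilter

def enhance(signal, lpc, beta, gamma, delta):
    """Equations 56-58: bandwidth-expanded ARMA enhancement followed by
    first-order high-frequency emphasis (filter states not carried over)."""
    i = np.arange(1, len(lpc) + 1)
    ma = lpc * beta ** i                     # equation 56: alpha(ma)_i
    ar = lpc * gamma ** i                    # equation 56: alpha(ar)_i
    b = np.concatenate(([1.0], ma))          # numerator of equation 57
    a = np.concatenate(([1.0], ar))          # denominator of equation 57
    y = lfilter(b, a, signal)
    return lfilter([1.0, -delta], [1.0], y)  # equation 58: 1 - delta * z^-1
```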
Finally, the waveform matching section 282 makes the second order output signal, obtained by the spectrum enhancing section 281, and the signal stored in the previous waveform storage section 288,
overlap one on the other with a triangular window. Further, data of this output signal by the length of the last pre-read data is stored in the previous waveform storage section 288. A matching
scheme at this time is shown by the following equation 59.
Oj = (j × Dj + (L - j) × Zj) / L    (j = 0 ~ L-1)
Oj = Dj    (j = L ~ L+M-1)
Zj = O(M+j)    (j = 0 ~ L-1)    (59)

where
Oj: output signal
Dj: second order output signal
Zj: signal stored in the previous waveform storage section
L: pre-read data length
M: frame length.
It is to be noted that while data of the pre-read data length + frame length is output as the output signal, the portion of the output signal that can be handled as a final signal is only the segment of the frame length from the beginning of the data. This is because the later data of the pre-read data length will be rewritten when the next output signal is output. Because continuity is ensured over the entire segment of the output signal, however, the data can be used in frequency analysis such as LPC analysis or filter analysis.
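A numpy sketch of the triangular-window matching of the equation 59 (variable names are assumptions):

```python
import numpy as np

def waveform_match(second_order, prev_tail, L, M):
    """Equation 59: crossfade the first L samples with the previous frame's
    stored tail, and keep the last L samples for the next call."""
    out = second_order.copy()                 # length L + M
    j = np.arange(L)
    out[:L] = (j * second_order[:L] + (L - j) * prev_tail) / L
    new_tail = out[M:M + L]                   # Z_j = O_{M+j}
    return out, new_tail
```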
According to this mode, noise spectrum estimation can be conducted for a segment outside a voiced segment as well as in a voiced segment, so that a noise spectrum can be estimated even when it is not
clear at which timing a speech is present in data.
It is possible to enhance the characteristic of the input spectrum envelope with the linear predictive coefficients, and possible to prevent degradation of the sound quality even when the noise level is high.
Further, using the mean spectrum of noise can cancel the noise spectrum more effectively, and separate estimation of the compensation spectrum can ensure more accurate compensation.
It is possible to smooth the spectrum in a noise-only segment where no speech is contained, and this prevents allophone feeling from being caused by an extreme spectrum variation originating from noise cancellation.
The phase of the compensated frequency components can be given a random property, so that noise remaining uncanceled can be converted to noise which gives less perceptual allophone feeling.

Proper weighting can be given perceptually in a voiced segment, and allophone feeling originating from perceptual weighting can be suppressed in an unvoiced segment or an unvoiced syllable segment.
INDUSTRIAL APPLICABILITY
As apparent from the above, an excitation vector generator, a speech coder and a speech decoder according to this invention are effective in searching for excitation vectors and are suitable for improving the speech quality.
A New Kind of Science: The NKS Forum - Does M-theory with Wolfram's automaton explain the Pioneer anomaly?
David Brown
Registered: May 2009
Posts: 173
Does M-theory with Wolfram's automaton explain the Pioneer anomaly?
If M-theory with the Nambu transfer machine is a valid theory, then does it explain the Pioneer anomaly?
YES, INDEED! — or my theory is completely wrong.
When all known forces acting on each of the two Pioneer spacecraft are taken into consideration, a very small but unexplained force remains. It appears to cause a constant sunward acceleration of
(8.74±1.33) * 10**(-10) m/sec**2, for both spacecraft. If the positions of the spacecraft are predicted one year in advance based on measured velocity and known forces (mostly gravity), they are
actually found to be some 400 kilometers closer to the sun at the end of the year.
According to Turyshev & Toth, “Radio-metric Doppler tracking data received from the Pioneer 10 and 11 spacecraft from heliocentric distances of 20 – 70 AU has consistently indicated the presence of a
small anomalous blue-shifted frequency drift uniformly changing with a rate of ~ 6 * 10**(-9) Hz/sec (or cycles/sec**2). Various distributions of dark matter in the solar system have been proposed to
explain the anomaly … However, it would have to be a special smooth distribution of dark matter that is not gravitationally modulated as normal matter so obviously is. …
Ultimately, the search for the Pioneer anomaly came down to one question: “Is it the heat or not the heat?””
Slava G. Turyshev & Viktor T. Toth, ''The Pioneer Anomaly'', ''Living Rev. Relativity'' 13, 2010, 4
IT’S UNEXPECTED HEAT LOSS DUE TO EXCESS FREDKIN RED SHIFT! — or my theory is completely wrong.
I have suggested that the -1/2 in Einstein’s field equations needs to be replaced by -1/2 + FF/2. Note that, in my theory, the distribution of dark matter is very smooth, because so-called dark
matter is really a necessary adjustment to Einstein’s field equations. I have suggested a theory based upon 4 cosmological principles that I assigned the names of Witten, Wolfram, Fredkin, and
Einstein. The Wolfram and Fredkin principles are extremely controversial. In particular, I suggest that Newton’s force law should be replaced by:
Non-gravitational force = mass times acceleration.
Gravitational force = (mass times Newtonian gravitational acceleration) plus (mass times acceleration due to Fredkin dark matter force that INCREASES GRAVITATIONAL RED SHIFT BEYOND EINSTEIN’S RED
SHIFT PREDICTION by a very small consistent increase).
According to Einstein’s “The meaning of relativity”, pages 91-92, there is a gravitational redshift precisely calculable in terms of general relativity theory. If receivingstation-redshift(∆) is
defined to be the redshifted gravitational first time-derivative predicted by Einstein at distance ∆ from the sun precisely at the site of the receiving station for the Doppler tracking data, then:
FF * (∫ receivingstation-redshift(∆) d∆) / (2 epsilon AU) represents the Fredkin excess redshift for the Pioneer Doppler tracking data, where the integration is carried out for ∆ from 1 minus epsilon
to 1 plus epsilon astronomical units. (Almost all of the gravitational red shift for the Pioneer incoming signal occurs near the receiving station. According to my theory, not only does this
particular signal have an unexpectedly large gravitational redshift but so do all photons everywhere in our universe.)
THEREFORE, FF * (∫ receivingstation-redshift(∆) d∆) / (2 epsilon AU) must equal roughly sqrt(60) * 10**(-5) hertz if my theory has any hope of being correct. This value of FF must explain all the
dark matter in our observable universe, or else my theory is completely wrong.
According to my analysis, my theory fails without M-theory because there would be no Nambu transfer machine. Conversely, M-theory would fail without the Nambu transfer machine, because space roar
rules out a curling-up mechanism for M-theory.
M-theorists, at this stage, do not agree with me. If empirical evidence proves my theory is correct, then M-theorists shall rush to embrace it. If empirical evidence proves my theory is incorrect,
then I believe that M-theory would then have to be drastically revised but that M-theorists would stubbornly cling to what they have now.
If the cosmological constant has a value of roughly (10**-4) ((eV)**4) and we adopt for dark matter the value sqrt(60/4) * 10**-5 ((eV)**4), then sqrt(60/4) * 10**-5 * (72.8/22.7) ~ 10**-4 to a crude
What is my bizarre theory all about?
Hypothesis 1. Some form of M-theory is necessary because quantum field theory cannot predict gravitons.
Hypothesis 2. Space roar is the manifestation of the Wolframian updating parameter. Space roar is the proof that the Wolframian automaton with the Fredkin delivery machine and with the Nambu transfer
machine is nature’s way of implementing M-theory.
Hypothesis 3. Dark matter is virtual mass-energy with zero inertial mass-energy and positive gravitational mass-energy. Dark energy is virtual mass-energy mass with zero inertial mass-energy and
negative gravitational mass-energy. Real mass-energy is virtual mass-energy that is explicitly or implicitly measured by the Wolframian automaton. Einstein’s equivalence principle is totally correct
for real energy BUT FAILS BADLY FOR VIRTUAL MASS-ENERGY.
Hypothesis 4. M-theory is the smoothing of the Nambu transfer machine. In order to define the Nambu transfer machine, M-theorists need X, where X is to M-theory as Kepler’s laws are to Newtonian
mechanics. Precise information about the GZK paradox is what is needed.
Does this make sense to M-theorists? (Other theoretical physicists are not of much help because they don’t know M-theoretical mathematics.) Hugh Everett’s many worlds theory is important pioneering
work but can’t make new predictions. However, I say that M-theorists need my theory because it is the unique valid physical interpretation of M-theory.
Note added 12-30-2010: I noticed that Prof. Antonio F. Rañada has partially anticipated the content of this posting in his Jan. 2005 paper entitled “The Pioneer anomaly as acceleration of the
clocks”. He says that the frequency of photons increases uniformly and adiabatically because of the expansion of the universe and his phenomenological theory; whereas, I say that the frequency of the
photons increases uniformly and adiabatically because of my physical interpretation of M-theory with Wolfram’s automaton.
http://arxiv.org/abs/gr-qc/0410084 “The Pioneer anomaly as acceleration of the clocks”
Last edited by David Brown on 01-04-2011 at 11:54 AM
Report this post to a moderator | IP: Logged | {"url":"http://forum.wolframscience.com/showthread.php?postid=6414","timestamp":"2014-04-21T02:22:46Z","content_type":null,"content_length":"21610","record_id":"<urn:uuid:d356dd7a-1533-4cf2-beea-781a5ed9f16a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Drug test probability
Mumble.com uses tests with a 5% false positive and a 10% false negative. A mandatory second testing is being done.
a. if one employee is selected at random, what is the probability that the selected employee uses drugs and tests positive twice?
b. what is the probability that the employee tests positive twice?
c. if we know that the randomly chosen employee has tested positive twice, what is the probability that he uses drugs? | {"url":"http://mathhelpforum.com/statistics/208709-drug-test-probability.html","timestamp":"2014-04-16T06:27:40Z","content_type":null,"content_length":"35912","record_id":"<urn:uuid:b647e486-8b25-499a-b609-800df8fb9902>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evaluation of LS-DYNA Concrete Material Model 159
PDF Version
(6.84 MB)
PDF files can be viewed with the Acrobat® Reader®
Chapter 4. Regulation of Softening Formulation
One desirable attribute of the finite element method is convergence of the solution with reasonable mesh refinement. However, finite element solutions are known to have difficulty converging if the
materials being modeled contain softening formulations. With softening formulations, there is a tendency for the greatest amount of damage to accumulate in the smallest elements. This is the result
of modeling smaller fracture energy in the smaller elements. To overcome this tendency, a regulatory method was implemented to promote convergence by eliminating element-to-element variation in
fracture energy. Refer to this report's companion Users Manual for a theoretical discussion of the regulatory technique implemented.^(6) This chapter demonstrates convergence of results using the
regulatory technique implemented for both single element and single material simulations.
Single Elements in Tension and Compression
Calculations were performed with single elements as a first step in evaluating mesh objectivity of the softening response. Calculations were performed for four cube sizes: 12.7 mm (0.5 inches), 38.1
mm (1.5 inches), 76.2 mm (3 inches), and 152.4 mm (6 inches) on each side. Each cube is modeled with quarter symmetry using nodal boundary conditions. Two loading conditions are analyzed. These are
direct pull (uniaxial tensile stress) and unconfined compression (uniaxial compression stress). In each case, a uniform constant velocity is applied to the four nodes at one end of the element, while
the four nodes at the other end are restrained from axial motion (lateral motion is not restrained). The velocity is 2.54 millimeters per second (mm/sec) (0.1 inch per second (inch/sec)) and is held
constant for the duration of the simulations.
Stress-displacement curves for the direct pull simulations are shown in Figures 27 and 28. All elements exhibit linear elastic behavior to peak strength, followed by softening. The strength is
independent of element size, although the displacement at peak strength increases with increasing element size. If stress-strain rather than stress-displacement were plotted (not shown), then all
elements would peak at the same strain level (as expected).
Shifted stress-displacement results are shown in Figures 27 and 28. The curves are shifted by subtracting the displacement at peak stress from the displacement history. With such manipulation, the
peak stress of each curve is plotted at a shifted displacement of 0. Plotted in this manner, the softening portion is identical for each curve, as desired. Hence the fracture energy (area under the
softening portion of the stress-displacement curve) is constant from one element to another. These simulations demonstrate that the element-based softening theory is accurate and implemented without
any coding errors.
Stress-displacement results for the unconfined compression simulations are shown in Figures 29 and 30. Once shifted, the softening portions of the four stress-displacement curves are close to, but
not exactly equal to, each other. Careful checking of these results indicates that all four curves have the same fracture energy, but not the same shape. Note that all four curves have a point of
agreement around 0.25 mm (0.01 inches) of displacement. For displacements less than 0.25 mm (0.01 inches), the smaller elements soften more rapidly than the larger elements. For displacements greater
than 0.25 mm (0.01 inches), the smaller elements soften less rapidly than the larger elements. The overall result is consistent fracture energy from element to element.
psi = 145.05 MPa, mm = 0.039 inch
Figure 27. The fracture energy, which is the area under the softening portion of the stress-displacement curve, is independent of element size in the direct pull simulations (not shifted).
psi = 145.05 MPa, mm = 0.039 inch
Figure 28. The fracture energy, which is the area under the softening portion of the stress-displacement curve, is independent of element size in the direct pull simulations (shifted to displacement
at peak stress).
psi = 145.05 MPa, mm = 0.039 inch
Figure 29. Although the fracture energy is constant, the softening curves vary slightly with element size in the unconfined compression simulations (not shifted).
psi = 145.05 MPa, mm = 0.039 inch
Figure 30. Although the fracture energy is constant, the softening curves vary slightly with element size in the unconfined compression simulations (shifted to displacement at peak stress).
These results indicate that in compression, the regulatory method allows small differences in softening behavior to occur as a function of element size. However, the author suggests that the
regulation is adequate despite these small differences. Figure 29 and 30 show that the largest and smallest elements vary in size by a factor of 12. This variation in element size is much larger than
that typically meshed by most analysts. Within a region of interest, a factor of 2 variations in element size is typical. The 38.1-mm (1.5-inch) and 76.2-mm (3-inch) elements in Figures 29 and 30
vary in size by a factor of 2, and their difference in softening response is minimal.
Cylinders in Tension and Compression
The objective of this chapter is to demonstrate mesh size sensitivity, and its regulation, for direct pull and unconfined compression of a concrete cylinder. This chapter begins with a description of
each mesh used in the calculations, followed by results of calculations performed without regulation of the softening formulation. In general, these calculations do not converge to a unique solution
and therefore demonstrate mesh size sensitivity, which is undesirable. This demonstration is followed by a theoretical discussion of softening formulations, with and without regulation. The chapter
then discusses the results of calculations performed with regulation. The direct pull simulations demonstrate convergence to a unique solution with reasonable mesh refinement. The unconfined
compression calculations tend toward convergence, but do not actually converge. Finally, conclusions are drawn concerning application of the technique implemented for regulating convergence of
material models containing softening formulations.
Details of Each Cylinder Mesh
The specimen analyzed is a concrete cylinder that is 304.8 mm (12 inches) long and 152.4 mm (6 inches) in diameter. This simple structure is being analyzed because it isolates one softening material
model of interest, without interaction with other materials or complications from detailed boundary conditions or contact surfaces.
Cylinders of two mesh refinements are analyzed, as shown in Figure 31. The basic mesh contains 768 solid elements. This mesh refinement is intended to be typical of what most users would generate to
simulate a cylinder test. The more refined mesh contains 2,592 solid elements. This mesh is more refined than what most users would generate. In addition, one single element and a two element mesh
are analyzed. The single element is 304.8 mm (12 inches) long and 135.1 mm (5.32 inches) along each edge of the square cross section, in such manner that the volume of the single element is equal to
the volume of the cylinder. Each element of the two element mesh is 152.4 mm (6 inches) long, with a 135.1 mm (5.32 inches) by 135.1 mm (5.32 inches) cross section.
Two loading conditions are analyzed. These are direct pull (uniaxial tensile stress) and unconfined compression (uniaxial compression stress). In each case, a uniform constant velocity is applied to
the nodes at both ends of the cylinder. The velocity is 1.27 mm/sec. It is held constant for the 40 msec duration of the direct pull simulations and for the 1,500 msec duration of the unconfined
compression simulations.
One boundary condition is analyzed for direct pull of the cylinders. The boundary condition is no lateral constraint on the ends of the cylinder so that the cylinder is free to expand or contract
laterally along its entire length. This idealized condition produces a realistic damage mode (breaking of the cylinder along one band of elements).
One boundary condition is analyzed for the unconfined compression cylinder. The boundary condition is a fixed end condition (no lateral motion of the nodes along the top and bottom of the cylinder).
This condition produces a realistic double diagonal-type damage mode often seen in tests.
Figure 31. Refinement of each mesh used in sensitivity analyses.
Demonstration of Mesh Size Sensitivity
Direct Pull without Regulation. Stress-displacement curves and damage fringes for the direct pull simulations without regulation are given in Figures 32 and 33. The stress history for the single and
double elements is output directly from ls-post. The stress history for the cylinders is obtained from the cross-sectional force history that is output from ls-post. The cross section is located at
the axial midplane of the cylinder. The stress is calculated by dividing the force history by the initial cross-sectional area of the cylinder (which is 719.2 millimeters square (mm^2) (28.3 inches
square (inches^2)). Each displacement history is output by ls-post at the end nodes.
Figure 32 demonstrates that the softening response becomes more and more brittle as the mesh is refined. In particular, the stress-displacement behavior for the basic mesh does not converge to that
of the refined mesh. This situation is undesirable. The damage modes for the basic mesh and refined mesh (see Figure 33) are similar, although they appear to be different at first glance. Damage is
localized into a single row of elements that soften as they stretch. The row of elements is at one end of the refined mesh, but more toward the center of the basic mesh.
Parametric studies (not shown) indicate that the row of elements that damages is sensitive to the rate of loading. The rate of loading affects the uniformity of the stress distribution along the
length of the cylinder. The rate of loading used in these calculations (quasi-static at 1.27 mm/sec (0.05 inch/sec) is slow enough that the stress distribution is uniform at peak strength to within
about .001 of 1 percent. With a top-to-bottom uniform mesh, loading, and stress distribution, fracture is likely to occur anywhere along the length of the cylinder, and not necessarily at the center
of the cylinder. However, the addition of friction at each end of the cylinder would probably push damage more toward the center of the cylinder.
For the single element mesh, the damage mode is softening of the entire element. For the double element mesh, very slight damage initiates simultaneously in both elements. But then damage in one of
the two elements rapidly dominates, resulting in the softening and stretching of that one element, while the other remains effectively undamaged.
All direct pull calculations were conducted with the user-defined material model for concrete linked to version 970, using softening parameters of A = 0.2467 and B = 0.1 in tension (P < 0) and A =
1.058 and B = 100 in compression (P > 0). These parameters are more fully described in the next section.
psi = 145.05 MPa, mm = 0.039 inch
Figure 32. The stress-displacement curves calculated in direct pull with an unregulated softening formulation do not converge as the mesh is refined.
Figure 33. The damage mode calculated in direct pull with an unregulated softening formulation is damage within a single band of elements.
Unconfined Compression without Regulation. Stress-displacement curves and damage fringes for these fixed end unconfined compression simulations without regulation are given in Figures 34 and 35. The
softening response tends to become more brittle as the mesh is refined. However, lack of convergence is not nearly as pronounced as previously demonstrated in Figure 32 for the direct pull
simulations. In fact, the stress-displacement behavior for the basic mesh resembles that of the refined mesh. The author of this report considers the agreement between the basic and refined mesh
curves as acceptable.
In addition, note that the single element does not damage. With fixed end conditions, the single element simulates uniaxial strain response, not uniaxial stress response. Therefore, this simulation
displays hardening behavior (pushes the cap out without triggering damage) that is typically seen in tests of concrete under uniaxial strain conditions. The single element with fixed ends is a very
poor representation of a cylinder with fixed ends.
As seen in Figure 35, the damage mode for the basic mesh and refined mesh is similar. Damage is primarily localized into two diagonals, forming an X-shape. This damage is in agreement with the double
diagonal damage mode commonly observed in compression tests with fixed ends. For the double element mesh, both elements damage almost simultaneously and equally.
psi = 145.05 MPa, mm = 0.039 inch
Figure 34. The stress-displacement curves calculated in unconfined compression with an unregulated softening formulation are similar for the basic and refined meshes (fixed ends).
Figure 35. The damage mode calculated in unconfined compression with an unregulated softening formulation is a double diagonal (in the basic and refined meshes with fixed end conditions).
Description of Regulatory Technique
Softening is a reduction in strength with continued straining once a damage threshold is reached. Softening is modeled via a damage formulation. Without the damage formulation, most material models
predict perfectly plastic behavior for laboratory test simulations like direct pull and unconfined compression. Such behavior is not realistic for materials like concrete, rock, soil, composites, and
Softening is modeled with two parameters. Without regulation of mesh size dependency, these parameters are A and B, and are independent of element size. With regulation, these parameters are B and G
[f] rather than A and B. When the damage threshold is attained, the concrete material model internally solves for the value of A based upon the initial element size, the initial damage threshold, the
fracture energy G[f], and a user-specified input value for B.
Evaluation of Regulatory Technique
Direct Pull with Regulation. Stress-displacement curves and damage fringes for the direct pull simulations with regulation are given in Figures 36 and 37. Although the softening response becomes more
brittle as the mesh is refined, convergence is attained for the basic and refined meshes, as desired. In addition, the damage modes for the basic mesh and refined mesh are also in agreement: both
calculate fracture along a single band of elements.
psi = 145.05 MPa, mm = 0.039 inch
Figure 36. The stress-displacement curves calculated in direct pull with a regulated softening formulation converge as the mesh is refined.
Figure 37. The damage modes calculated in direct pull with a regulated softening formulation are in agreement for the basic and refined mesh simulations.
All direct pull calculations were conducted with the user-defined material model for concrete linked to version 970, using softening parameters of G[f] = 0.098 megapascal-millimeters (MPa-mm) (0.56
pounds per inch (lb/inch)) and B = 0.1 in tension (P < 0) with G[f] = 9.81 MPa-mm (56 lb/inch) and B = 100 in compression (P > 0).
Unconfined Compression with Regulation. Stress-displacement curves and damage fringes for these fixed end unconfined compression simulations with regulation are given in Figures 38 and 39. The
stress-displacement behavior for the refined mesh is nearly in agreement with that of the basic mesh, although the refined mesh is slightly more ductile. Greater ductility with mesh refinement is
opposite the trend observed for the direct pull simulations.
The damage mode for the basic and refined meshes (see Figure 39) is similar. Damage is roughly localized into two diagonals, forming an X-shape. However, the damage is less diffuse (more localized),
forming a sharper X, for the refined mesh than it is for the basic mesh. This situation suggests that a correlation may exist between diffusivity and softening response: the more diffuse the damage,
the more brittle the softening behavior. These results also suggest that additional mesh refinement is needed to attain complete convergence.
psi = 145.05 MPa, mm = 0.039 inch
Figure 38. The stress-displacement curves calculated in unconfined compression with a regulated softening formulation nearly converge as the mesh is refined (fixed ends).
Figure 39. The X-shaped damage mode calculated in unconfined compression with a regulated softening formulation is similar for the basic and refined mesh simulations (fixed ends).
The argument could be made, however, that the basic and refined mesh simulations conducted without regulation (Figures 34 and 35) are in just as good agreement as those conducted with regulation
(Figures 38 and 39). Hence, these simulations are inconclusive with regard to whether the proposed regulation scheme is necessary for the compressive response mode.
Note also that regulation has an opposite effect on the softening response in tension than it does in compression. In tension, regulation increases brittleness as the mesh is refined, at least until
convergence is attained. In compression, regulation decreases brittleness (increases ductility) as the mesh is refined, hopefully until convergence is attained. Hence, if an analysis is conducted
with a mesh that is too crude, there may be a tendency to underpredict tensile damage and overpredict compressive damage.
All unconfined compression calculations were conducted with concrete material model 159 in ls-dyna version 971, using softening parameters of G[f] = 0.098 MPa-mm (0.56 lb per inch (lb/inch)) and B =
0.1 in tension (P < 0) with G[f] = 9.81 MPa-mm (56 lb/inch) and B = 100 in compression
(P > 0).
Effect of Additional Mesh Refinement on Cylinder Response
A very refined mesh with 6,144 elements was created to further evaluate mesh size objectivity of the cylinder in compression. Refinement of this mesh is shown in Figure 40. Damage modes calculated
with and without regulation are also shown in Figure 40. The calculations were conducted with fixed end conditions. Both calculations exhibit two diagonal bands of damage, forming an X. In the
regulated calculation, these bands form simultaneously and about equally. In the unregulated calculation, one band forms prior to the other and exhibits greater damage. This X damage mode is the same
as that previously calculated with less mesh refinement (768 and 2,592 elements), except now the damage is more localized so that the bands are more distinct.
Stress-displacement histories are shown in Figure 41 for all calculations without regulation and in Figure 42 for all calculations with regulation. Without regulation, the history calculated with the
very refined mesh is more brittle than those calculated with less mesh refinement. With regulation, the history calculated with the very refined mesh is slightly more ductile than those calculated
with less mesh refinement. Agreement is reasonable, but not exact.
Figure 40. A crisp X-shaped band of damage is calculated for the very refined mesh, with or without regulation of the softening response.
psi = 145.05 MPa, mm = 0.039 inch
Figure 41. The stress-displacement curves calculated in unconfined compression without a regulated softening formulation become more brittle as the mesh is refined (fixed ends).
psi = 145.05 MPa, mm = 0.039 inch
Figure 42. The stress-displacement curves calculated in unconfined compression with a regulated softening formulation become more ductile as the mesh is refined (fixed ends).
Summary Regarding Regulation
The direct pull calculations demonstrate that the technique implemented for regulating mesh size sensitivity achieves convergence of the solution with reasonable mesh refinement, as proposed. The
direct pull calculations exercise the brittle damage mode only; therefore, this conclusion applies to the brittle damage mode only, and not the ductile damage mode.
The unconfined compression calculations support the application of the proposed regulation technique for the ductile damage mode, but do not indicate that it is absolutely necessary. Convergence is
reasonable, but not exact. The fixed end calculations produce a double diagonal (X-shaped) damage mode that becomes more localized (less diffuse) with mesh refinement. This damage mode is similar to
that seen in cylinder tests.
All calculations were performed for a single material without the use of contact surfaces or detailed boundary conditions. The appropriate element size to obtain convergence in more detailed
applications may be problem dependent.
Finally, as previously noted, regulation in tension increases brittleness, whereas regulation in compression decreases brittleness (as the mesh is refined), until convergence is attained. Hence, if
an analysis is conducted with a mesh that is too crude, there will be a tendency to underpredict tensile damage and overpredict compressive damage.
Previous | Table of Contents | Next | {"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/structures/05063/chapt4.cfm","timestamp":"2014-04-17T15:39:44Z","content_type":null,"content_length":"34930","record_id":"<urn:uuid:2839ca07-2973-4d4b-aa2e-e8e8e89f1bb5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symbolism and Terminology in Enzyme Kinetics
(Recommendation 1981)
Enzyme Reactions & Inhibition
Continued from Introduction, Definitions, Order of Reaction & Rate Constants
Contents of Section
4.1. Limiting Kinetics of Enzyme-Catalysed Reactions
At very low concentrations of substrate many enzyme-catalysed reactions display approximately second-order kinetics, with rate given by the following equation:
v = k[A] [E][0] [A] . . . . . . . . (8)
in which the symbol k[A] (or, in general, k[R] for a reactant R) is the apparent second-order rate constant or specificity constant and [E][0], which may also be written as [E][t] or [E][stoich], is
the total or stoichiometric concentration of catalytic centres. (This corresponds to the total enzyme concentration only if there is a single catalytic centre per molecule.) The rationale for the
subscript 0 is that the total enzyme concentration is normally the concentration at the instant of mixing, i.e. at time zero. Conversely, at very high substrate concentrations the same reactions
commonly display approximately first-order kinetics (zero-order with respect to substrate):
v = k[0] [E][0] . . . . . . . . (9)
in which k[0], which may also be written as k[cat] is the apparent first-order rate constant. Although these limiting types of behaviour are not universally observed, they are more common than
Michaelis-Menten kinetics (Section 4.2) and provide a basis for classifying inhibitory and other effects (Section 5) independently of the need for Michaelis-Menten kinetics.
The apparent second-order rate constants k[A] and k[B] of competing substrates A and B determine the partitioning between competing reactions, regardless of whether the substrate concentrations are
very small or not, and it is for this reason that the name specificity constant is proposed for this parameter of enzymic catalysis. The apparent first-order rate constant k[0] is a measure of the
catalytic potential of the enzyme and is called the catalytic constant.
The quantity k[0][E][0] is given the symbol V and the name limiting rate. It is particularly useful when k[0] cannot be calculated because the total catalytic-centre concentration is unknown, as in
studies of enzymes of unknown purity, sub-unit structure and molecular mass. The symbol V[max] and the names maximum rate and maximum velocity are also in widespread use although under normal
circumstances there is no finite substrate concentration at which v = V and hence no maximum in the mathematical sense. The form V[max] is convenient in speech as it avoids the need for a cumbersome
distinction between 'capital V' and 'lower case v'. When a true maximum does occur (as insubstrate inhibition; Section 4.3) the symbol v[max] (not V[max]) and the name maximum rate may be used for
the true maximum value of v but care should be taken to avoid confusion with the limiting rate.
4.2. Michaelis-Menten Kinetics
Sometimes the relationship between the rate of an enzyme-catalysed reaction and the substrate concentration takes the form
where V and K[mA] are constants at a given temperature and a given enzyme concentration. The reaction is then said to display Michaelis-Menten kinetics. (The term hyperbolic kinetics is also
sometimes used because a plot of v against [A] has the form ot a rectangular hyperbola through the origin with asymptotes v = V and [A] = -K[mA]. This term, and others that imply the use of
particular kinds of plot. should be used with care to avoid ambiguity, as they can be misleading if used out of context.) The constant V is the limiting rate, with the same meaning as in Section 4.1.
The second constant K[mA] is known as the Michaelis constant for A; the alternative name Michaelis concentration may also be used and has the advantage of emphasizing that the quantity concerned has
the dimensions of a concentration and is not, in general, an equilibrium constant. When only one substrate is being considered the qualifier A may be omitted, so that the symbol becomes K[m]. When
the qualifier is included its location is a matter of typographical convenience; no particular significance attaches to such variants as K[mA]. The Michaelis constant (or Michaelis concentration) is
the substrate concentration at which v = 0.5 V, and its usual unit is mol dm^-3, which may be written as mol L^-1 or M. The term Michaelis constant and the symbol K[m] should not be used when
Michaelis-Menten kinetics are not obeyed (see Section 4.3).
For a reaction obeying Michaelis-Menten kinetics the rate in the limit of very low substrate concentrations is v = V[A]/K[mA], and comparison with Eqn (8) shows that V/K[mA] = k[A] [E][0]. In the
limit of very high substrate concentrations v = V, and comparison with Eqn (9) gives V = k[0] [E][0]. The Michaelis constant K[mA] is therefore k[0]/k[A], and Eqn (10) can be written as
An indefinitely large number of mechanisms generate Michaelis-Menten kinetics, and still more generate limiting behaviour of the kind described in Section 4.1. Consequently there is no general
definition of any of the kinetic parameters k[A], k[0], V and K[mA] in terms of the rate constants for the elementary steps of a particular mechanism.
4.3. Non-Michaelis-Menten Kinetics
When the kinetic behaviour does not conform to Eqn (10) or Eqn (11) the reaction is said to exhibit non-Michaelis-Menten kinetics. If the Michaelis-Menten equation is obeyed approximately over a
restricted range of substrate concentrations it may be convenient to regard this behaviour as a deviation from this equation rather than as an unrelated phenomenon. For example, a reaction may obey
an equation of the following form
in which the constants V', K'[mA] and K[iA] are used for illustration without any implication of universal or standard definitions. If K[iA] is large compared with KmA the behaviour predicted by Eqn
(12) will approximate to that predicted by Eqn (11), with V and K[mA] replaced by V' and K[mA], in the lower range of substrate concentrations. However, with Eqn (12) the rate passes through a
maximum as the concentration increases, and there is said to be inhibition by substrate, and the constant K[iA], which has the dimensions of a concentration, is called the substrate inhibition
When more complex kinds of non-Michaelis-Menten behaviour occur it is usually unhelpful to use terminology and symbolism suggestive of the Michaelis-Menten equation; instead the approach discussed in
Section 10 is appropriate. In all cases it is advisable to avoid the term Michaelis constant and the symbol K[m] when the Michaelis-Menten equation is not obeyed, because it is defined as a parameter
of that equation. The symbol [A][0.5] or [A][1/2], not K[mA], may be used for the value of [A] at which v = 0.5 V.
5.1. Michaelis-Menten Kinetics
Regardless of the number of substrates, a reaction is said to obey Michaelis-Menten kinetics if the rate equation can be expressed in the following form:
which can be regarded as a generalization of Eqn (11). (Z is used here as an example of a product as suggested in Section 2.) Each term in the denominator of the rate expression contains unity or any
number of product concentrations in its numerator. and a coefficient k and any number of substrate concentrations raised to no higher than the first power in its denominator. The constant k[0]
corresponds to k[0] in Eqn (11); each other coefficient is assigned a subscript for each substrate concentration in the denominator of the term concerned and a superscript for each product
concentration in the numerator. The term 1/k[0] must be present, together with one term for each substrate of the form 1/k[A] [A], but the terms in products of concentrations. such as those shown in
Eqn ( 13) with coefficients k[AB] and k[0], k[A], k[AB],
Note The conventional Scottish pronunciation of this name may be expressed in the International Phonetic Alphabet as [di:'jel], with only slightly more stress on the second syllable than the
Eqn (13) can be applied to reactions with any number of substrates and products and can also be extended to some kinds of inhibition by substrate, i.e. to the simpler kinds of non-Michaelis-Menten
kinetics. It is thus an equation of considerable generality. It is simplest, however, to consider terminology in the context of a two-substrate reaction, and this will be done in Section 5.2.
5.2. Michaelis-Menten Kinetics of a Two-Substrate Reaction
For a two-substrate reaction in the absence of products Eqn (13) simplifies to the following equation:
If the concentration of one substrate, known as the constant substrate, is held constant, while that of the other, known as the variable substrate, is varied, the rate is of the form of the
Michaelis-Menten equation in terms of the variable substrate, because Eqn (14) can be rearranged to
(cf. Eqn 10), where
is known as the apparent catalytic constant, and
is known as the apparent specificity constant for A. It follows from Eqns (16) and (17) that k[0] and k[A] when [B] is extrapolated to an infinite value. This relationship provides the basis for
defining the catalytic constant and the specificity constants in reactions with more than one substrate: in general, the catalytic constant of an enzyme is the value of v/[E][0] obtained by
extrapolating all substrate concentrations to infinity; for any substrate A the specificity constant is the apparent value when all other substrate concentrations are extrapolated to infinity.
Eqn (14) may also be rearranged into a form resembling Eqn (11), as follows:
in which V = k[0][E][0] is the limiting rate, which may also, subject to the reservations noted in section 4.1, be called the maximum rate or maximum velocity and symbolized as Vmax; KmA = k[0]/k[A]
is the Michaelis constant for A; K[mB] = k[0]/k[B] is the Michaelis constant for B; and K[iA] = k[B]/k[AB] is the inhihition constant for A. In some mechanisms K[iA] is equal to the true dissociation
constant for the EA complex: when this is the case the alternative symbol K[iA] and the name substrate-dissociation constant for A (cf. section 3.2) may be used. If Eqn(18) is interpreted
operationally rather than as the equation for a particular mechanism it is arbitrary whether the constant in the denominator is written with K[iA]K[mB] (as shown) or as K[mA]K[iB], where K[iB] = k[A]
/k[AB]. However, for some mechanisms only one of the two ratios k[B]/k[AB] and k[A]/k[AB] has a simple mechanistic interpretation and this may dictate which inhibition constant it is appropriate to
The term inhibition constant and the symbol K[iA] derive from the fact that the quantity concerned is related to (and in the limiting cases equal to) the inhibition constant K[ic] or K[iu] (as
defined below in Section 6.4) measured in experiments where the substrate is treated as an inhibitor of the reverse reaction. However, the relationships are not always simple and quantities such as K
[iA] in Eqn (18) can be and nearly always are defined and measured without any reference to inhibition experiments. For these reasons some members of the panel feel that the symbolism and terminology
suggested are not completely satisfactory. No alternative system has so far gained wide support, however.
An apparent Michaelis constant for A (and similarly for B) may be defined by dividing Eqn (16) by Eqn (17):
This equation provides the basis for defining the Michaelis constant for any substrate in a reaction with more than one substrate: the Michaelis constant for A, K[mA], is the value of the apparent
Michaelis constant for A when the concentrations of all substrates except A are extrapolated to infinity. This definition applies to reactions with any numbers of substrates, as also does the
definition of the limiting rate V as k[0] [E][0], but in other respects it becomes very cumbersome to define constants resembling K[iA] for reactions with more than two substrates. The symbolism of
Eqn (13) (or the alternative in terms of Dalziel coefficients) is readily extended to reactions with three or more substrates, however.
6.1. Reversible and Irreversible Inhibitions
Sometimes the effect of an inhibitor can be reversed by decreasing the concentration of inhibitor (e.g. by dilution or dialysis). The inhibition is then said to be reversible. If, once inhibition has
occurred, there is no reversal of inhibition on decreasing the inhibitor concentration the inhibition is said to be irreversible; irreversible inhibition is an example of enzyme inactivation. The
distinction between reversible and irreversible inhibition is not absolute and may be difficult to make if the inhibitor binds very tightly to the enzyme and is released very slowly. Reversible
inhibitors that behave in a way that is difficult to distinguish from irreversible inhibition are called tight-binding inhibitors.
6.2. Linear and Non-Linear Inhibition
Sometimes the effect of an inhibitor I can be expressed by multiplying one or more of the terms in the denominator of the general rate expression (Eqn 13) by factors of the form (1 + [I]/K[i]). The
inhibition is then said to be linear and K[i] which has the dimensions of a concentration, is called an inhibition constant for the inhibitor I. The word linear in this definition refers to the fact
that the inhibition is fully specified by terms in the denominator of the rate expression that are linear in inhibitor concentration, not to the straightness of any plots that may be used to
characterize the inhibition experimentally.
If the inhibition cannot be fully expressed by means of linear factors in the denominator the inhibition is said to be non-linear.
Linear inhibition is sometimes called complete inhibition, and the contrasting term partial inhibition is sometimes used for a type of non-linear inhibition in which saturation with inhibitor does
not decrease the rate to zero. These latter terms are discouraged because they can be misleading, implying, for example, that the rate may indeed be decreased to zero in 'complete inhibition' at
non-saturating concentrations of inhibitor.
If a reaction occurs in the absence of inhibitor with rate v[0] and in the presence of inhibitor with rate v[i], the degree of inhibition is defined as
As this quantity is a ratio of rates it is dimensionless. The subscripts '0' and 'i' are useful for distinguishing between uninhibited and inhibited reactions respectively when they are required
together, but are usually omitted when no confusion is likely.
6.4. Classification of Inhibition Types
Provided that an enzyme behaves in accordance with the limiting behaviour described in Section 4.1 both in the absence of inhibitor (which is always true if Michaelis-Menten kinetics are obeyed and
is also true more generally), the type of inhibition may be classified according to whether it affects the apparent value of k[A], the apparent value of k[0], or both.
If the apparent value of k[A] is decreased by the inhibitor the inhibition is said to have a competitive component, and if the inhibitor has no effect on the apparent value of k[0] the inhibition is
said to be competitive.. In linear inhibition there is a linear effect on 1/:
and the constant K[ic] is called the competitive inhibition constant for I.
Conversely, if there is an effect on the apparent value of k[0] the inhibition has an uncompetitive component, and if the innibitor has no effect on the apparent value of k[A] the inhibition is said
to be uncompetitive. In linear inhibition there is a linear effect on 1/:
and constant K[iu] is called uncompetitive inhibition constant for 1.
If both competitive and uncompetitive components are present in the inhibition it is said to be mixed. The term non-competitive inhibition is sometimes used instead of mixed inhibition, but this
usage is discouraged, first because the same term is often used for the special case of mixed inhibition in which K[ic] = K[iu], second because it suggests that mixed inhibition is the antithesis of
competitive inhibition whereas this description actually applies more accurately to uncompetitive inhibition, and third because the shorter word mixed expresses clearly the fact that both competitive
and uncompetitive components are present.
Mixed inhibition as defined here encompasses such a broad range of behaviour that it may sometimes be helpful to subdivide it further. The case in which K[ic] < K[iu] may then be called predominantly
competitive inhibition, the case with K[ic] = K[iu] may be called pure non-competitive inhibition, and the case with K[ic] > K[iu] may be called predominantly uncompetitive inhibition. The classical
term for pure non-competitive inhibition was simply non-competitive inhibition, but this term has become ambiguous because of its widespread use for all kinds of mixed inhibition and because of this
ambiguity it is discouraged for all purposes.
Both K[ic] and K[iu] have the dimensions of concentrations and may therefore be expressed in mol dm^-3, mol L^-1 or M. In contexts where distinction between K[ic] and K[iu] is unnecessary or
inappropriate the general symbol K[i] may be used for either. In the past there has been no generally understood symbol for the uncompetitive inhibition constant, which has been variously represented
as K[i], K'[i], K[ii], etc. A new and unambiguous symbol seems required, therefore, and K[iu] is proposed. Although the competitive inhibition constant has much more uniformly been expressed as K[i],
the occasional use of the same symbol for the uncompetitive inhibition constant, together with the view that a logical and symmetrical symbolism is desirable, has suggested that the symbol K[ic]
should be used for the competitive inhibition constant whenever any ambiguity might attend the use of the more general symbol K[i].
As K[ic] and K[iu] can in principle be determined by measuring the effects of inhibitor on the slopes and ordinate intercepts respectively of plots of 1/v against 1/[A] they, have sometimes been
symbolized as K[is] (for K[i] slope) and K[ii] (for K[i] intercept) respectively. Slopes and intercepts are not consistent from one kind of plot to another, however; for example, the slope and
intercept in a plot of [A]/v against [A] correspond, respectively, to the intercept and slope of a plot of 1/v against 1/[A]. Such symbols are therefore ambiguous and should not be used except in
explicit reference to particular plots.
In reactions with more than one substrate the classification of inhibitors as competitive, uncompetitive or mixed is not absolute but depends on which substrate is variable (in the sense of Section
5.2). For example, a particular inhibitor may cause variation in competitive inhibitor with respect to A but a mixed inhibitor with respect to B. In such systems the inhibition constants K[ic] and K
[iu] refer to the limiting behaviour for saturating concentrations of all substrates except for the variable substrate. Inhibition constants observed at non-saturating concentrations of the constant
substrates are apparent values and may be symbolized as
For some mechanisms some inhibition constants may be true dissociation constants. Whether this is true or not it does not form part of the definitions of the inhibition types and inhibition constants
given above, which are purely operational, in keeping with the policy set out in Section 1. When symbols are required for the dissociation constants of particular species they should be explicitly
defined in a way that avoids confusion with the operationally defined inhibition constants. A system of the following kind may be appropriate, but if used it should be explicitly defined in context.
For a binary complex, e.g. EI, the dissociation constant may be symbolized as K with the name of the complex as subscript, e.g. K[EI] For higher complexes where the nature of the dissociation needs
to be specified, a full stop (period) may be used to separate the parts of the complex that dissociate from one another; for example, K[EI.S] may be used for the dissociation of EIS into EI + S,
whereas K[ES.I] may be used for the dissociation of the same complex into ES + I.
The products of nearly all enzyme-catalyzed reactions behave as inhibitors when they are present in the reaction mixture. When considered in this light they are called product inhibitors and the
phenomenon is known as product inhibition. Product inhibition is always reversible (at least in principle) but in other respects occurs in the same varieties as other kinds of inhibition and requires
no special discussion or definitions.
Continue with Activation, pH, kinetics, mechanism, summary.
Return to home page for Enzyme Kinetics. | {"url":"http://www.chem.qmul.ac.uk/iubmb/kinetics/ek4t6.html","timestamp":"2014-04-17T15:51:00Z","content_type":null,"content_length":"32097","record_id":"<urn:uuid:458b217a-5b1c-4263-b0a6-5b75ee1f3d30>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Kathy uses the following steps to construct a perpendicular line through a point C on a line segment. Step 1: From point C, draw two arcs intersecting the line segment in points A and B. Step 2:
Using slightly more compass width draw two arcs from point A, one above and the other below the line segment. Step 3: Using the same width of the compass, draw two arcs from point B, above and below
the line segment. Step 4: Label the point of intersection of the arcs above the line segment as F and below the line as E. Step 5: Using a compass, join E and F. Part A: Which is the first incorrect
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Part A: Which is the first incorrect step? Part B: Using complete sentences, explain your answer for Part A. Part C: Explain why a compass works for the construction done by Kathy.
Best Response
You've already chosen the best response.
Problem is, that is how I would construct perpendicular lines. :/
Best Response
You've already chosen the best response.
same here...there needs to be something wrong though Is it that she drew an arc above the line when she was supposed to do it below?
Best Response
You've already chosen the best response.
|dw:1336341976935:dw| I think it is...
Best Response
You've already chosen the best response.
I think perpendicular line 'through' a point suggests you need both though
Best Response
You've already chosen the best response.
Actually you would need both anyway as you wouldn't be able to align your ruler without both points.
Best Response
You've already chosen the best response.
You already have point C above the line segment so there is no need for another point above the line segment. So I think it might be that.
Best Response
You've already chosen the best response.
This is my answer: Part A: The first incorrect step is that she drew an arc above the line segment (Step 2). Part B: Step 2 is the first incorrect step because point C is already above the line
segment so there is no need to create another point above the line segment, only below it. Part C: A compass works for the construction because you are able to draw arcs of same lengths from the
points created on the line segment.
Best Response
You've already chosen the best response.
Was there a diagram, as I don't see where it specifies where C is. I assumed it was on the line.
Best Response
You've already chosen the best response.
I'm pretty sure it was above the line...I'm good now thanks
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fa6f3d9e4b029e9dc366114","timestamp":"2014-04-16T18:59:52Z","content_type":null,"content_length":"62661","record_id":"<urn:uuid:5a006a74-e0a1-4bd8-897f-3372a4cc2ed7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
statistics - correlation and regression
Number of results: 1,149
statistics - correlation and regression
When x increases by 1, y increases by 1/4. 1 is correct, but if both variables increase, is that a positive or negative correlation?
Tuesday, October 19, 2010 at 1:31am by PsyDAG
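A quick justification of that answer: the least-squares slope and the correlation coefficient always share the same sign, since $b_1 = r \cdot s_y / s_x$ and the standard deviations $s_x$, $s_y$ are positive. A slope of $+1/4$ therefore implies a positive correlation.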
statistics - correlation and regression
For the table that follows, answer the following questions:

x    y
1   -1/4
2   -1/2
3   -3/4
4    ?

* Would the correlation between x and y in the table shown above be positive or negative? I think it would be negative, but I am not sure that is correct.
* Find the value of y in the ...
Tuesday, October 19, 2010 at 1:31am by mary-please help figure out this problem.
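A minimal numpy sketch of the check, assuming the missing entry continues the pattern y = -x/4 (so y = -1 at x = 4):

    import numpy as np
    x = np.array([1, 2, 3, 4])
    y = np.array([-0.25, -0.50, -0.75, -1.00])   # assumed fourth value: -1
    print(np.corrcoef(x, y)[0, 1])               # -1.0, a perfect negative correlation

Since y falls as x rises, the correlation is indeed negative.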
Statistics - Correlation Coefficient/Regression
y = -15.4 + 14.4x. The correlation coefficient is correct because it is the geometric mean of the two regression coefficients; the second regression coefficient is 0.066428.
Saturday, June 21, 2008 at 8:35pm by saqib
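As a numeric check of the geometric-mean claim: with regression coefficients $b_{yx} = 14.4$ and $b_{xy} = 0.066428$,

$r = \pm\sqrt{b_{yx}\,b_{xy}} = \sqrt{14.4 \times 0.066428} \approx \sqrt{0.9566} \approx 0.978,$

taking the positive root because both regression coefficients are positive.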
A) If a regression experiment has correlation coefficient r = 0.75, what percent of total variation is explained by the regression? B) If 85% of total variation is explained by the regression, and
there is an inverse relationship between y and x, then what is the correlation ...
Sunday, April 17, 2011 at 6:12pm by SROBSON
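A short worked answer via the coefficient of determination: for part A, $r^2 = 0.75^2 = 0.5625$, so about 56.25% of the total variation is explained. For part B, $r^2 = 0.85$ together with an inverse relationship gives $r = -\sqrt{0.85} \approx -0.922$.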
A. If the correlation coefficient is 0.65, what is the sign of the slope of the regression line? B. As the correlation coefficient decreases from -0.97 to -0.99, do the points of the scatter plot
move toward the regression line, or away from it?
Friday, October 21, 2011 at 2:59pm by Jen
If the correlation coefficient is -0.54, what is the sign of the slope of the regression line? - As the correlation coefficient decreases from 0.86 to 0.81, do the points of the scatter plot move
toward the regression line, or away from it
Thursday, October 4, 2012 at 11:01am by mom
college statistics
If the correlation coefficient is 0.32, what is the sign of the slope of the regression line? As the correlation coefficient decreases from 0.78 to 0.71, do the points of the scatter plot move toward the regression line or away from it?
Saturday, May 26, 2012 at 10:19am by melba
correlation and regression
the data given below has y = 3x-5 for its regression equation. find the standard error of estimate
Tuesday, December 4, 2012 at 4:09pm by reta
can someone please help me fast before midnight. a)I need the correlation between x and y b)slope of linear regression c)y intercept of linear regression for the points (1,8)(2,6)(3,4)(4,2)(5,5)
Monday, May 5, 2008 at 10:53pm by john
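A minimal numpy sketch that produces all three quantities asked for in a), b), and c):

    import numpy as np
    x = np.array([1, 2, 3, 4, 5])
    y = np.array([8, 6, 4, 2, 5])
    r = np.corrcoef(x, y)[0, 1]             # correlation, about -0.707
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line y = slope*x + intercept
    print(r, slope, intercept)              # approx -0.707, -1.0, 8.0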
A perfect positive or negative correlation means that A) the explanatory causes the y-variable. B) 100% of the variation is explained. C) we get the same regression equation if we switch the x and y
variables. D) the slope of the regression equation is 1.0 I chose B
Wednesday, September 18, 2013 at 4:05pm by Robby
x    y
1   -10.0
2   -20.0
3   -30.0
4   -40.0
5   -50.0

- What would be the slope of this regression line?
- Would the correlation between x and y be positive or negative?
- How would you interpret these data in terms of linear regression?
Tuesday, October 25, 2011 at 8:35pm by Jen
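A brief reading of that table: each unit increase in x lowers y by exactly 10, so every point lies on the line $y = -10x$. The slope is $-10$, the intercept is $0$, and the correlation is a perfect negative, $r = -1$.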
college statistics
Construct a scatterplot for the (x, y) values x: 1, 2, 3, 4, 5 and y: 0.5, 0.0, -0.5, -1.0, -1.5. What would be the slope of this regression line? Would the correlation between x and y be positive or negative? How would you interpret these data in terms of regression?
Saturday, May 26, 2012 at 3:28pm by melba
Selected students means you are dealing with a sample from a population. The best you can do with a sample is to estimate the correlation coefficient of the population, or the correlation coefficient
of the sample. See the formulas and examples from: http://stattrek.com/...
Saturday, April 6, 2013 at 5:16pm by MathMate
The following linear equation, y = b0 + b1x, is a regression line with y-intercept b0 and slope b1. Define These Terms 1) correlation coefficient 2) Linear regression equation
Thursday, May 19, 2011 at 6:25pm by Leslie
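Standard definitions that answer both items: 1) The correlation coefficient is $r = \dfrac{\sum (x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum (x_i-\bar{x})^2 \sum (y_i-\bar{y})^2}}$, a unitless measure of linear association between $-1$ and $+1$. 2) The linear regression equation is the fitted line $\hat{y} = b_0 + b_1 x$, where $b_1 = r\,s_y/s_x$ and $b_0 = \bar{y} - b_1\bar{x}$.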
Correlation coefficients range from -1 to +1 where the two extremes enable the researcher to make almost perfect "predictions". On the other hand, the closer the correlation coefficient approaches
zero, the less reliable would be the indication of the regression line. Can you ...
Friday, July 26, 2013 at 6:57pm by MathMate
correlation and regression
No data is given.
Tuesday, December 4, 2012 at 4:09pm by MathGuru
The correlation is 0.14. The Pearson r is the correlation coefficient and is a measure of correlation, which can be positive or negative. The closer the correlation is to 0, the weaker it is. The
closer to +1 or -1, the stronger it is. If you have a correlation of +1 or -1, ...
Saturday, June 27, 2009 at 3:26pm by MathGuru
1. For the following scores,

X  Y
1  6
4  1
1  4
1  3
3  1

a. Sketch a scatter plot and estimate the value of the Pearson correlation.
b. Compute the Pearson correlation.

2. For the following set of scores,

X  Y
6  4
3  1
5  0
6  7
4  2
6  4

a. Compute the Pearson correlation.
b. Add 2 points ...
Monday, March 4, 2013 at 1:06am by Cole Brown
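A small sketch that computes part b of each exercise, reading the flattened numbers as (X, Y) rows as laid out above:

    import numpy as np
    x1, y1 = np.array([1, 4, 1, 1, 3]), np.array([6, 1, 4, 3, 1])
    x2, y2 = np.array([6, 3, 5, 6, 4, 6]), np.array([4, 1, 0, 7, 2, 4])
    print(np.corrcoef(x1, y1)[0, 1])  # -10/12, about -0.833
    print(np.corrcoef(x2, y2)[0, 1])  # 11/16 = 0.6875

A side fact likely relevant to the truncated part b of exercise 2: adding a constant to every score shifts the mean but not the deviations, so it leaves the Pearson correlation unchanged.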
algebra 1
Enter these data points into some graphing software, and ask the computer to perform a linear regression on the data (find the best-fit line that goes through the data points) The computer should
also be able to calculate the correlation coefficient. If the correlation ...
Wednesday, October 24, 2012 at 11:28am by Jennifer
c and d look good. The correlation coefficient is a number between -1 and +1, so its magnitude lies between 0 and 1. If there is no relationship between the predicted values and the actual values, the correlation coefficient is 0 or very low (the predicted values are no better than random numbers). As the strength of ...
Monday, January 16, 2012 at 7:53pm by Steve
R square is the coefficient of determination. Is Multiple R the coefficient of correlation?
Monday, June 13, 2011 at 2:06pm by Thara
Construct a scatterplot for the (x, y) values below, and answer the following questions. You do NOT need to submit your scatterplot with your answer; however, show all other work.

x    y
1    2.5
2    5.0
3    7.5
4   10.0
5   12.5

- What would be the slope of this regression line?
- Would ...
Sunday, October 7, 2012 at 9:58am by Anonymous
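The same reading applies here as for the decreasing table above: y rises by 2.5 for each unit of x, so every point lies on $y = 2.5x$; the slope is $2.5$ and the correlation is a perfect positive, $r = +1$.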
Healthy breakfast contains the rating of 77 cereals and the number of grams of sugar contained in each serving. A simple linear regression model considering "sugar" as the explanatory variable and
"rating" as the response variable produced the following regression line: rating...
Wednesday, September 19, 2012 at 12:17am by Gee
The coefficient of determination deals with variation and is a measure of how well any given regression line represents its data. If a regression line passes through every point, all of the variation
could be explained. The further the regression line is away from the points, ...
Sunday, April 17, 2011 at 6:12pm by MathGuru
Statistics:Correlation and Least Square Regression
Correlation and Least Square Regression A long term study of changing environmental conditions in Chesapeake Bay found the following annual average salinity readings in one location in the bay: Year
Salinity (%) 1971 13.4 1972 9.8 1973 15.1 1974 14.7 1975 15.1 1976 14 1977 15....
Tuesday, September 11, 2007 at 4:24pm by Carma
Using an online calculator, I have the following: 12 data pairs (x,y): ( 12.0 , 24.0 ); ( 14.0 , 30.0 ); ( 15.0 , 36.0 ); ( 18.0 , 38.0 ); ( 20.0 , 65.0 ); ( 16.0 , 44.0 ); ( 14.0 , 36.0 ); ( 13.0 ,
30.0 ); ( 18.0 , 39.0 ); ( 19.0 , 76.0 ); ( 20.0 , 80.0 ); ( 22.0 , 85.0 ); ...
Thursday, May 30, 2013 at 8:12pm by MathGuru
Friday, December 30, 2011 at 12:31am by PsyDAG
Given the following scatter diagram , the sample correlation coefficient r: Has a positive linear correlation Has a negative linear correlation. Has little or no correlation. Looks close to + 1.00
Thursday, August 19, 2010 at 6:40pm by Karen
Which type of correlation would you expect to see between the distance traveled and the time it takes to travel that distance? A. positive correlation B. negative correlation C. no correlation idk
why this question is confusing me, but it is
Monday, November 11, 2013 at 11:20pm by hermes
Which type of correlation would you expect to see between the distance traveled and the time it takes to travel that distance? A. positive correlation B. negative correlation C. no correlation idk
why this question is confusing me, but it is
Tuesday, November 12, 2013 at 12:25am by hermes
Alex puts his spare change in a jar every night. If he has $11.09 at the end of January, $22.27 at the end of February, $44.35 in April, $75.82 in July, $89 in August, and $114.76 at the end of
October, perform a linear regression on this data to complete the following items. ...
Thursday, January 23, 2014 at 9:03am by Beth
For certain x and y series which are correlated, the two lines of regression are 5x-6y+9=0 , 15x-8y+130=0. The correlation coefficient is :
Sunday, May 19, 2013 at 4:43am by anand nanda
2. Find the value of x^3 +2x^2 -3 when x =3 Answer-42 12. What type of correlation would you expect between the number of minutes a candle burns and the height of the candle? a. no correlation b.
positive correlation c. negative correlation d. equal correlation Answer-c 13. ...
Sunday, August 7, 2011 at 5:06pm by lizzie
a car dealer estimates that they sell two cars on average per week. they only have space in their showroom for 4 cars. if stock replacement cannot take place within a week, what is the probability
assuming that weekly car sales follow a poisson distribution that, the dealer ...
Friday, April 27, 2007 at 3:51am by mary
True 1. CORRELATION Correlation means that two variables (sets of data) have some type of association with each other, such that as one variable increases, the other also increases (a positive
correlation), or decreases (a negative correlation).
Tuesday, November 5, 2013 at 10:41am by Kim
Hi, I'm not sure if I'm giving the correct answer for this problem. The question states; The values of computers in the years after its purchase date are listed in the following table. Year 0=1500,
Year1=1200, Year2=1100,Year3 +1000, Year 4 = 800, Year 5 =400, Year 6= 200. ...
Tuesday, July 9, 2013 at 7:41am by Joan
Using the following Minitab output write the straight line (least squares) equation and the correlation coefficient. The regression equation is Sales = 27.2 + 0.252 Exp Predictor Coef SE Coef T P
Constant 27.195 5.781 4.70 0.005 Exp 0.2517 0.4201 0.60 0.575 S = 8.97833 R-Sq = ...
Wednesday, March 6, 2013 at 7:06pm by BettieLee
algebra 1
Natalie performs a chemistry experiment where she records the tenperture of an ongoing reaction.The solution is 93.5 C after 3 minutes;90 C after 5 minutes,84.8 C after 9 minutes; 70.2 C after 18
minute; 54.4 C after 30 minutes;42.5 C after 37 minutes; and 24.9 C after 48 ...
Wednesday, October 24, 2012 at 11:28am by Janet
In reagards to correlation and regression how can that be applied to accounting. I am trying to find some websites where i could apply one of the above to accounting.
Wednesday, April 30, 2008 at 8:40pm by Caryn
research evaul 342
If the coefficient of correlation is –0.81, then the percentage of the variation in y that is explained by the regression line is 81%. A. True B. False
Tuesday, July 27, 2010 at 12:34am by peter
Make up a scatter diagram with 10 dots for each of the following situations: 1. perfect positive linear correlation, 2. large but not perfect positive linear correlation, 3. small positive linear
correlation, 4. large but not perfect negative linear correlation, 5. no ...
Sunday, March 18, 2012 at 11:27pm by Dawm
Please correct if I have stated the right correlation to the following relationships: a) volume of auto traffic and auto accident rate -> 0 correlation b) pesticide and crop yield -> negative
correlation c) consumption and GDP -> positive correlation
Tuesday, January 25, 2011 at 9:45pm by Anonymous
interpret an r value of -0.82. a)strong negative correlation B)weak negative correlation C)strong positive correlation D)no correlation
Friday, November 13, 2009 at 12:01pm by princess
The data set healthy breakfast contains the ratings of 77 cereals and the number of grams of sugar contained in each searving. A simple linear regression model considering Sugar as the explanatory
variable and Rating as the response variable produced the following regression ...
Thursday, September 13, 2012 at 1:39am by Gee
I need a definition for this constant rate of change when graphing in mathematics and for Positive and Negative correlation and also what the pattern is in this a) 1000, 100, 10, 1 I'll give you a
definition for positive and negative correlation. Correlation is a linear ...
Wednesday, January 10, 2007 at 5:21pm by austin
Math-Algebra II
How would you interpret the findings of a correlation study that reported a linear correlation coefficient of − 1.34? The only thing I know is that it represents a negative correlation. Please shed
any light you can.
Wednesday, January 26, 2011 at 10:25pm by Michelle
Statistics - Correlation Coefficient/Regression
I used an online calculator and came up with this: y = a + bx where: a= -15.4 b= 14.4 r = 0.978 Your formula looks correct. It's easy to make errors with the calculations when doing these kinds of
Saturday, June 21, 2008 at 8:35pm by MathGuru
A linear regression wquation of best best fit between a student's attendance and the degree of success in school is h = .5x + 68.5. The correlation coefficient, r, for these data would be
Monday, December 10, 2012 at 3:56pm by Kyler
Natalie performs a chemistry experiment where she records the temperature of an ongoing reaction. The solution is 93.5º C after 3 minutes; 90º C after 5 minutes, 84.8 C after 9 minutes; 70.2º C after
18 minute; 54.4º C after 30 minutes; 42.5ºC after 37 minutes; and 24.9º C ...
Thursday, May 2, 2013 at 1:35am by Brittany
Statistics: Calculate a Pearson's r
Once you determine the correlation coefficient from a Pearson's r, square the correlation coefficient to find the Coefficient of Determination. The Coefficient of Determination shows the strength of
the relationship between two variables. The ratio of explained variance to ...
Tuesday, October 18, 2011 at 3:25pm by MathGuru
Yes, but some scientists will refer to r^2 as the correlation coefficient while others will refer to r as the correlation coefficient. So, what you should do is take the square root and say something
like "the correlation coeffcient is given by r = 0.469". And then it doesn't ...
Thursday, March 6, 2008 at 5:00pm by Count Iblis
5. Compute the two regression equations on the basis of the following data: X Y Mean 40 45 Standard Deviation 10 9 Given that the coefficient of correlation between X & Y is 0.50. Also estimate the
value of Y for X=48?
Saturday, October 9, 2010 at 2:00am by Neeraj Bhanot
You would need to determine a regression equation for this data. Linear regression determines the best-fitting straight line through the points. A regression equation helps you make predictions for y
based on the value of x.
Tuesday, December 10, 2013 at 11:55pm by MathGuru
If you do this using a calculator, you will have to use the calculator's instructions for finding a regression equation, correlation coefficient (r) and coefficient of determination (r^2).
Tuesday, December 10, 2013 at 8:49pm by MathGuru
X Y 0 23 2 27 2.3 29 4.1 32 5 35 What is the correlation for this data set? 10) Find the regression line for this data set.
Saturday, May 25, 2013 at 3:58pm by Ana
Pre- study scores versus post- study scores for a class of 120 college freshman english students were considerated. The residual plot for the least squares regression line showed no pattern. The
least squares regression line was y^=0.2+0.9x withwith a correlation coefficient r...
Thursday, September 13, 2012 at 2:20am by Gee
Job Sat. INTR. EXTR. Benefits 5.2 5.5 6.8 1.4 5.1 5.5 5.5 5.4 5.8 5.2 4.6 6.2 5.5 5.3 5.7 2.3 3.2 4.7 5.6 4.5 5.2 5.5 5.5 5.4 5.1 5.2 4.6 6.2 5.8 5.3 5.7 2.3 5.3 4.7 5.6 4.5 5.9 5.4 5.6 5.4 3.7 6.2
5.5 6.2 5.5 5.2 4.6 6.2 5.8 5.3 5.7 2.3 5.3 4.7 5.6 4.5 5.9 5.4 5.6 5.4 3.7 6.2...
Tuesday, September 21, 2010 at 1:31pm by chris
Statistics - Correlation Coefficient/Regression
'm having some difficulty with the following question: The following data shows expenditures (in millions of dollars) and case (sales in millions) for 7 major soft drink brands. Calculate the
correlation coefficient. Is this significant at the 5% level (ie, α=.05)? ...
Saturday, June 21, 2008 at 8:35pm by Hannah
True or False and explain: If the slope of a regression line is large, the correlation between the variables will also be large. I was thinking false but am now second guesing myself. Thanks.
Thursday, April 15, 2010 at 7:47pm by Anonymous
Statistics and Research Methods
Here are a few hints: The Pearson r is the correlation coefficient and is a measure of correlation, which can be positive or negative. The closer the correlation is to 0, the weaker it is. The closer
to +1 or -1, the stronger it is. If you have a correlation of +1 or -1, then ...
Thursday, April 11, 2013 at 7:48pm by MathGuru
Construct a scatter plot for the given data. Determine whether there is a positive linear correlation, negative linear correlation, or no linear correlation.
Sunday, December 6, 2009 at 12:18pm by jen
The scatter diagram of the lengths and widths of the petals of a type of flower is football shaped; the correlation is 0.8. The average petal length is 3.1 cm and the SD is 0.25 cm. The average petal
width is 1.8 cm and the SD is 0.2 cm. 1. One of the petals is 0.2 cm wider ...
Thursday, March 28, 2013 at 6:21am by Judy
The scatter diagram of the lengths and widths of the petals of a type of flower is football shaped; the correlation is 0.8. The average petal length is 3.1 cm and the SD is 0.25 cm. The average petal
width is 1.8 cm and the SD is 0.2 cm. 1. One of the petals is 0.2 cm wider ...
Thursday, March 28, 2013 at 6:37am by Judy
In conducting multiple regression analyses (MRAs), a major technical concern involves any high correlation of predictor (regressor) variables included in the model. The term for this is ....? A.
Redundant predictors B. Close "knit" IVs C. Multicollinearity D. Correlative ...
Sunday, September 2, 2012 at 9:03pm by Pat
Statistics 15.59
Computer database GROWCO describes the characterisitics of 100 companies identified by Fortune as the fastest growing. Two of the variables listed are revenue and net income (millions of dollars) for
the most recent four quarters. Determine the linear regression equation ...
Wednesday, June 16, 2010 at 12:16am by Julia Brown
the following sample observations were randomly selected: x: 2,5,6,8,9,11,15 y: 22,23,16,18,19,13,12 a)calculate the correlation coefficient, r b) determine the regression equation c) determine the
value of Y when X=20
Monday, November 29, 2010 at 10:33am by skitter
Jade counted the number of the students on each sports time at her school and the number of wins the team had last season. If she displays her data on a scatter plot, what type of relationship will
she most likely see between the number of students on a team and the number of ...
Tuesday, October 18, 2011 at 8:45pm by Please help
A linear regression equation of best fit between a student's attendance and the degree of success in school is h = .5x + 68.5. The correlation coefficient, r, for these data would be? 1. <?r<1 2. -1
<?r<0 3. r=0 4. r=-1
Monday, December 10, 2012 at 3:59pm by Kyler
stats - URGENT!!!!
the following sample observations were randomly selected: x: 2,5,6,8,9,11,15 y: 22,23,16,18,19,13,12 a)calculate the correlation coefficient, r b) determine the regression equation c) determine the
value of Y when X=20
Tuesday, November 30, 2010 at 10:17am by skitter
stats - URGENT!!!!
the following sample observations were randomly selected: x: 2,5,6,8,9,11,15 y: 22,23,16,18,19,13,12 a)calculate the correlation coefficient, r b) determine the regression equation c) determine the
value of Y when X=20
Tuesday, November 30, 2010 at 10:56am by skitter
stats - URGENT!!!!
the following sample observations were randomly selected: x: 2,5,6,8,9,11,15 y: 22,23,16,18,19,13,12 a)calculate the correlation coefficient, r b) determine the regression equation c) determine the
value of Y when X=20
Tuesday, November 30, 2010 at 12:55pm by skitter
stats - URGENT!!!!
the following sample observations were randomly selected: x: 2,5,6,8,9,11,15 y: 22,23,16,18,19,13,12 a)calculate the correlation coefficient, r b) determine the regression equation c) determine the
value of Y when X=20
Tuesday, November 30, 2010 at 3:05pm by skitter
Coefficient of Determination is explained variation divided by total variation (or more simply, the correlation coefficient squared). To find the correlation coefficient, take the square root of .74
for your answer. The Coefficient of Determination shows the strength of the ...
Saturday, January 31, 2009 at 11:53pm by MathGuru
Mth plz...help
A linear regression equation of best fit between a student's attendence and the degree of sucess in school is h = 0.5x + 68.5. the correlation coefficent, r, for these data would be (1) 0 < r < 1 (2)
-1< r < 0 (3) r = 0 (4) r = -1
Sunday, September 21, 2008 at 11:12am by Ashlee
For 1): Y(hat) comes from substituting an x value into a regression equation and solving for y(hat). Y(hat) is also called the predicted y value in a regression equation. Let's use an example.
Suppose the regression equation is this: y(hat) = 2.75 + .5x If x = 1, then y(hat...
Tuesday, March 19, 2013 at 10:10pm by MathGuru
1. In an ANOVA, one group has a much larger sample mean than the other. The analyst decides to remove this group and to conduct the analysis on the remaining groups only. Which of the following
statements is correct? Select one: a. This analysis is not correct, as now the ...
Monday, October 14, 2013 at 7:36am by Jumbo
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Thursday, November 8, 2012 at 9:35am by John
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Wednesday, November 14, 2012 at 1:02pm by joey
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Thursday, November 15, 2012 at 10:26am by joey
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Friday, November 16, 2012 at 10:34am by John
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Sunday, November 18, 2012 at 10:23am by John
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Monday, December 17, 2012 at 11:14am by Jill
Suppose the correlation between two variables is -0.57. If each of the y-values is multiplied by -1, which of the following is true about the new scatterplot? It slopes up to the right, and the
correlation is -0.57 It slopes up to the right, and the correlation is +0.57 It ...
Tuesday, December 18, 2012 at 10:56am by Jill
Using an online calculator: 4 data pairs (x,y): ( 3.00 , 5.00 ); ( 4.00 , 5.00 ); ( 2.00 , 3.00 ); ( 1.00 , 4.00 ); Regression equation: y = 3 + .5x Correlation Coefficient: r = 0.674 Check with your
own calculations.
Tuesday, August 28, 2012 at 6:12pm by MathGuru
The eggs of a species of bird have an average diameter of 23 mm and an SD of 0.45 mm. The weights of the chicks that hatch from these eggs have an average of 6 grams and an SD of 0.5 grams. The
correlation between the two variables is 0.75 and the scatter diagram is roughly ...
Friday, March 15, 2013 at 4:07pm by Kimberly
Pre study scores versus post-study scores for a class of 120 college freshman english students were considerated. The residual plot for the least squares regression line showed no pattern. The least
square regression line was y = 0.2 + 0.9x with a correlation coefficient r = 0...
Tuesday, September 18, 2012 at 11:59pm by Gee
For the regression equation, Y=bX+a, which of the following X,Y points will be on the regression line? a, a,b b, b,a c, 0,a d, 0,b
Friday, October 26, 2012 at 3:54am by Robbie
statistics in psychology
11. Make up a scatter diagram with 10 dots for each of the following situations: (a) perfect positive linear correlation, (b) large but not perfect positive linear correlation, (c) small positive
linear correlation, (d) large but not perfect negative linear correlation, (e) no...
Monday, July 25, 2011 at 10:25am by desiree
Positive correlation means both variables increase/decrease together. Negative correlation means one variable increases while the other decreases. What is the consistent difference between the y
values to determine the missing value? How good is the predictability of y from x...
Thursday, November 10, 2011 at 7:40pm by PsyDAG
Business Statistics
Given the least squares regression line = -2.88 + 1.77x, and a coefficient of determination of 0.81, the coefficient of correlation is: A) -0.88 B) +0.88 C) +0.90 D) –0.90 I picked C +0.90
Sunday, March 15, 2009 at 7:50pm by Stuart
A set of X and Y scores has SSx=10, SSY=20, and SP=8, what is the slope for the regression equation? a. 8/10 b. 8/20 c. 10/8 d. 20/8 For the regression equation, Y=bX+a, which of the following X,Y
points will be on the regression line? a. a,b b. b,a c. 0, a d. 0, b
Monday, October 22, 2012 at 7:54pm by Robbie
Determine the regression line of x+2y-5=0 and 3x+3y-8=0 (i) y on x and (ii) x on y, (iii) Find r using the regression coefficients
Monday, September 26, 2011 at 8:43am by Deepti
Is is possible for the regression equation to have none of the actual (observed) data points located on the regression line?
Sunday, December 2, 2012 at 11:32pm by Julia
The scatter diagram of the lengths and widths of the petals of a type of flower is football shaped; the correlation is 0.8. The average petal length is 3.1 cm and the SD is 0.25 cm. The average petal
width is 1.8 cm and the SD is 0.2 cm. (a)One of the petals is 0.2 cm wider ...
Thursday, March 28, 2013 at 7:59pm by pat
Explain the concept of correlation and how to interpret correlation coefficients of 0.3,0, and -0.95
Sunday, August 7, 2011 at 9:57am by Toha
Of you perceived a linear model, then the coefficient of correlation was .7, a strong correlation.
Monday, August 8, 2011 at 2:48pm by bobpursley
Not sure what your are asking, but I will give an answer. 1. positive correlation 2. negative correlation
Wednesday, February 6, 2013 at 5:40am by PsyDAG
How would you interpret the findings of a correlation study that reported a linear correlation coefficient of +0.3?
Wednesday, July 4, 2012 at 7:07pm by Jane
Here is a hint: It doesn't matter if the correlation is negative or positive, the closer it is to zero, the weaker the correlation.
Wednesday, April 9, 2014 at 12:53pm by MathGuru
1. Correlation coefficient is number that indicated the degree of linear relationship between two variables. It can vary between -1 and +1. 2. I searched Google under the key words "'linear
regression equation'" to get these possible sources: http://www.google.com/search?...
Thursday, May 19, 2011 at 6:25pm by PsyDAG
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=stastics-+correlation+and+regression","timestamp":"2014-04-20T19:08:51Z","content_type":null,"content_length":"42792","record_id":"<urn:uuid:e7900548-e62e-43db-bbfa-3e9d90863ebf>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Leibniz Rule - quick question
February 27th 2009, 05:30 AM #1
Jan 2008
Leibniz Rule - quick question
I'm working through an example following Leibniz rule...but havning trouble following it:
We differentiate:
using Leibniz's Rule:
So just looking at the first integral,
I don't understand the b'(x)f(x,b(x)) part of the rule. Here b(x) = x so b'(x) = 1. But for f(x,b(x)) this would become f(x,x) right? I'm not sure why this is the same as y(x) though??
Thanks in advance!
There are 2 Leibniz's you have to apply:
both will generate:
$<br /> \ldots + e^{x-x} y(x) +\ldots - e^{x-x} y(x) + \ldots$
Quickly applied the rule, and found:
$<br /> 1+3\int_{0}^{x} e^{3(x-t)} y(t) dt - 3\int_{x}^{1} e^{3(t-x)} y(t) dt<br />$
I might have made a mistake. So be please becareful.
-O works then hopes.
February 27th 2009, 07:38 AM #2
Jan 2009 | {"url":"http://mathhelpforum.com/calculus/76022-leibniz-rule-quick-question.html","timestamp":"2014-04-19T08:33:36Z","content_type":null,"content_length":"32163","record_id":"<urn:uuid:c18f2e3c-ecdc-4be3-b316-80509dff8f93>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kendoku Puzzles
KENDOKU puzzles are built on square grids of typically 5x5 cells (although puzzles having sizes in the range 4x4 up to 9x9 can also be made). To solve them you must place numbers into the puzzle
cells in such a way that each row and column contains each of the digits from 1 up to the size of the puzzle. In this respect they are similar to Sudoku puzzles. Unlike Sudoku puzzles, you are not
given any starting digits. Instead, the puzzle is divided into Domains which are areas surrounded by a bold outline, and containing from two up to four cells. Each domain contains a hint consisting
of a number and one of the mathematical symbols + x — /. The number is the result of applying the mathematical operation represented by the symbol to the digits contained within the domain. This will
provide enough information to allow each of the digits to be determined. Each puzzle has a unique solution, and no guessing is required.
The following graphics show a KENDOKU puzzle in the
Magnum Opus
Solve screen and a PNG graphic file produced by the program's print function, showing the complete puzzle solution. See also a
full size version of a PDF file
showing the solution.
The key features of the Magnum Opus Kendoku Construction function are as follows:-
Fully automatic construction of puzzles.Puzzles can be built in sizes ranging from 4x4 up to 9x9.Puzzles can be built with difficulty settings of Easy, Moderate and Hard.A multi-build function
allows the construction of many puzzles (thousands if necessary) in a single operation.Fully manual construction is also available. This allows puzzles found in the printed media to be entered
into Magnum Opus.Puzzles can be printed in an unprecedented range of user controlled colors and formats. Output can be sent direct to the printer or, if you install the appropriate software, it
can be sent to a PDF file. The recommended software for this purpose is Primo PDF which you can download free from their website, although you might consider sending them a small donation if you
find their program useful. Mac users will be able to create a PDF file directly from the screen provided by the printer driver.Formatted output can also be exported from the Print screen to
graphic files which conform to the BMP, GIF, JPG or PNG formats.A fully interactive Solve function is available to allow you to solve the puzzles "on screen" without needing to print them. | {"url":"http://www.wordandnumberpuzzles.com/kendoku.html","timestamp":"2014-04-21T13:28:45Z","content_type":null,"content_length":"5839","record_id":"<urn:uuid:4e044f9f-0ed1-4c44-9520-999f56bed7b3>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
John McAfee is a Heinlein hero
December 10, 2012
By Andrew
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
“A small group of mathematicians”
Jenny Davidson points to this article by Krugman on Asimov’s Foundation Trilogy. Given the silliness of the topic, Krugman’s piece is disappointingly serious (“Maybe the first thing to say about
Foundation is that it’s not exactly science fiction – not really. Yes, it’s set in the future, there’s interstellar travel, people shoot each other with blasters instead of pistols and so on. But
these are superficial details . . . the story can sound arid and didactic. . . . you’ll also be disappointed if you’re looking for shoot-em-up action scenes, in which Han Solo and Luke Skywalker
destroy the Death Star in the nick of time. . . .”). What really jumped out at me from Krugman’s piece, though, was this line:
In Foundation, we learn that a small group of mathematicians have developed “psychohistory”, the aforementioned rigorous science of society.
Like Davidson (and Krugman), I read the Foundation books as a child. I remember the “psychohistory” part, of course, but not that it was invented by mathematicians. That seems so retro! Back in the
day, there were only a few sorts of technical academic fields, and one of these was mathematics. Thus you had Mandelbrot inventing fractals, Turing inventing computer science, and Ulam inventing the
Nowadays, I think of mathematicians as a sort of eccentric band of specialists, working for decades on problems that only they care about, while earning money teaching intro calc and training
graduate students to work for Steven A. Cohen. I’m not saying that’s a fair impression—it would be just as correct for a mathematician to describe statisticians as an eccentric band of mathematical
plodders who make a virtue of their mediocrity and call it practicality—but it’s the impression I get. If I were writing a novel about an exciting new science, I might have it be invented by a
biologist or a computer scientist or even a rogue economist, but I probably wouldn’t think that something so applied would come out of the minds of a band of mathematicians.
A modern Heinlein hero
Perhaps Krugman will next write something on Robert Heinlein, whose writings, like Asimov’s, provide endless retro amusement (for those of us who are amused by such things), with the characteristic
Heinlein hero being someone like a garage mechanic who develops a faster-than-light space drive in his basement workshop. Updating that to the present day, we’d end up with someone like John McAfee,
that internet zillionaire who turned up in Guatemala the other day. Actually, McAfee sounds like a perfect Heinlein hero: a super-rich retired businessman with a fascination with airplanes, guns, and
drugs, and a 20-year-old girlfriend.
Of course, if we were really living in a Heinlein story, McAfee would actually have a time machine in his backyard, and that girlfriend would be a reincarnation of McAfee’s cat.
P.S. At the very end of the Krugman article:
The Foundation Trilogy by Isaac Asimov, introduced by Nobel Prize-winning economist Paul Krugman, is published by The Folio Society priced £75.00.
I know the British economy hasn’t been doing so well lately, but £75 is still a good chunka change, no? What I wonder is, how many people who buy this book will really want to read it all the way
through. Reading about the Foundation trilogy can be fun, but I can’t imagine the book itself can be very easy or pleasant to read at this point. I just feel that at this point I’ve read so many
smooth works of fiction and journalism over the years, that it might be difficult to read something that wooden in style. (Heinlein would be much more readable, I’d think.)
Please comment on the article here: Statistical Modeling, Causal Inference, and Social Science | {"url":"http://www.statsblogs.com/2012/12/10/john-mcafee-is-a-heinlein-hero/","timestamp":"2014-04-17T18:46:10Z","content_type":null,"content_length":"38297","record_id":"<urn:uuid:480a8591-cb3e-461d-aca0-d64715871917>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
The perimeter of a semicircle - Math Central
Hi Lisa,
The perimeter of a circle is π × d where d is the diameter so half of that is ^1/[2] π × d. But If you are going to walk around a semicircle you need to go half way around the circle and then across
a diagonal to get back to where you started so the perimeter is ^1/[2] π × d + d. | {"url":"http://mathcentral.uregina.ca/QQ/database/QQ.09.07/h/lisa12.html","timestamp":"2014-04-18T05:34:28Z","content_type":null,"content_length":"7040","record_id":"<urn:uuid:81ac246b-d0dd-4541-8319-204f608526fd>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mt Pleasant, WI Math Tutor
Find a Mt Pleasant, WI Math Tutor
...I had college calculus, so I can handle the math. Overall, I have a broad educational and teaching base and am used to teaching adults. I held the 63 for almost twenty years in AZ and had to
take some continuing education tests.
44 Subjects: including algebra 2, English, statistics, writing
...I have found that students are very successful when working on a one-on-one basis when you find something that interests them. I have a passion for teaching and a desire for student success. A
bit about my teaching experience: I have taught first, third, fourth, and fifth grades with a typical class size of around 25 students.
19 Subjects: including prealgebra, reading, algebra 1, statistics
...I enjoy teaching and helping students learn and master subject areas that they never thought they could excel at. I look forward to helping you learn and seeing you dominate those subjects
that might initially seem impossible. Please feel free to contact me.
11 Subjects: including prealgebra, SAT math, business, GMAT
...I provide information to students on test taking skills, time management, and simple approaches to complicated problems. Also I provide information on which problems should be performed using
the calculator to obtain the correct answer in short amount of time. We work together to create Basic formula sheet to help them memorize the important topics needed for the test.
11 Subjects: including algebra 1, algebra 2, calculus, ACT Math
...I have performed in many stage productions. I have received drama and improvisation training. I was a member of Player's Workshop and Second City's Children's Theater in Chicago and a member
of an amateur improvisation group.
39 Subjects: including ACT Math, logic, geometry, prealgebra
Related Mt Pleasant, WI Tutors
Mt Pleasant, WI Accounting Tutors
Mt Pleasant, WI ACT Tutors
Mt Pleasant, WI Algebra Tutors
Mt Pleasant, WI Algebra 2 Tutors
Mt Pleasant, WI Calculus Tutors
Mt Pleasant, WI Geometry Tutors
Mt Pleasant, WI Math Tutors
Mt Pleasant, WI Prealgebra Tutors
Mt Pleasant, WI Precalculus Tutors
Mt Pleasant, WI SAT Tutors
Mt Pleasant, WI SAT Math Tutors
Mt Pleasant, WI Science Tutors
Mt Pleasant, WI Statistics Tutors
Mt Pleasant, WI Trigonometry Tutors | {"url":"http://www.purplemath.com/Mt_Pleasant_WI_Math_tutors.php","timestamp":"2014-04-18T08:56:34Z","content_type":null,"content_length":"23915","record_id":"<urn:uuid:a21e13df-45d1-458d-9ad3-972c5dbd33e9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00524-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from June 2009 on My Brain is Open
Graceful Tree Conjecture (GTC) is one of my favorite open problems. I posted it on Open Problem Garden couple of years back. I enjoy reading papers related to graceful labeling and try to keep as
up-to-date as possible with the progress towards GTC. Here is a brief introduction to GTC.
Graceful Labeling : Label the vertices of a simple undirected graph $G(V,E)$ (where $|V|=n$ and $|E|=m$) with integers from $0$ to $m$. Now label each edge with absolute difference of the labels
of its incident vertices. The labeling is said to be graceful if the edges are labeled $1$ through $m$ inclusive (with no number repeated).
A graph is called graceful if it has at least one such labeling. This labeling was originally introduced in 1967 by Rosa [Rosa'67]. The name graceful labeling was coined later by Golomb. It is easy
to see that stars and paths are graceful. Some known graceful graphs are paths, stars, complete bipartite graphs, prism graphs, wheel graphs, caterpillar graphs, olive trees, and symmetrical trees,
gear graphs, rectangular grids.
Gracefully labeled graphs have applications in coding theory and communication network addressing. Look at this paper [Basak'04] for an application in MPLS Multicasting. The following conjecture is
due to Kotzig, Ringel and Rosa.
Graceful Tree Conjecture (GTC) : All trees are graceful.
Ringel’s Conjecture : If $T$ is a fixed tree with $m$ edges, then complete graph on $2m+1$ vertices decomposes into $2m+1$ copies of $T$.
Exercise : The existence of a graceful labeling of a given graph G with n edges is a sufficient condition for the existence of a cyclic decomposition of a complete graph of order 2n+1 into
subgraphs isomorphic to G. In particular, GTC implies Ringel’s conjecture.
A caterpillar graph is a tree such that if all leaves and their incident edges are removed, the remainder of the graph forms a path. Here’s a cute exercise. There is an elegant inductive proof.
Exercise : Prove that all caterpillar graphs are graceful.
Open Problems :
□ Graceful Tree Conjecture (GTC) is still open. There are couple of papers on arxiv claiming to have settled GTC. I went through the proofs and found some mistakes.
□ As an intermediate problem, prove that all lobster graphs are graceful. This is also an open problem. Some special cases of lobsters (eg : lobsters with perfect matching) are proved graceful
□ Complexity of graceful labeling is open. A related problem called harmonious labeling was shown to be NP-complete.
□ Improve approximate factors of relaxed graceful labeling [Bussel'02].
References :
• [Rosa'67] Alex Rosa, On certain valuations of the vertices of a graph. Theory of Graphs, (1967) 349–355
• [Bussel'02] Frank Van Bussel: Relaxed Graceful Labellings of Trees. Electr. J. Comb. 9(1): (2002)
• [Morgan'02] David Morgan: All Lobsters with Perfect Matchings are Graceful. Electronic Notes in Discrete Mathematics 11: 503-508 (2002)
• [Basak'04] Ayhan Basak, MPLS Multicasting Using Caterpillars and a Graceful Labelling Scheme, Eighth International Conference on Information Visualisation (IV’04), pp.382-387, 2004
List Coloring of Planar Graphs
A proper coloring of a graph is an assignment of colors to vertices of a graph such that no two adjacent vertices receive the same color. A graph is k-colorable if it can be properly colored with k
colors. For example, the famous Four Color Theorem states that “Evey planar graph is 4-colorable“. This is tight, since a complete graph on four vertices is 4-colorable but not 3-colorable. Deciding
if a graph is 3-colorable is NP-hard. It is natural to ask which planar graphs are 3-colorable.
Grotzsch’s Theorem : Every triangle-tree planar graph is 3-colorable.
Dvorak, Kawarabayashi and Thomas [DKT '08] presented a very short proof of Grotzsch’s theorem and a linear-time algorithm for 3-coloring such graphs.
Given a graph and given a set L(v) of colors for each vertex v, a list coloring is a proper coloring such that every vertex v is assigned a color from the list L(v). A graph is k-list-colorable (or k
-choosable) if it has a proper list coloring no matter how one assigns a list of k colors to each vertex.
If a graph is k-choosable then it is k-colorable (set each L(v) = {1,…k}). But the converse is not true. Following is a bipartite graph (2-colorable) that is not 2-choosable (corresponding lists are
A graph is k-degenerate if each non-empty subgraph contains a vertex of degree at most k. The following fact is easy to prove by induction :
Exercise : A k-degenerate graph is (k+1)-choosable
Are there k-degenerate graphs that are k-choosable ? Following are some known results and open problems :
• Every bipartite planar graph is 3-choosable [Alon & Tarsi '92]. It is easy to prove that every bipartite planar graph is 3-degenerate.
• Every planar is 5-choosable [Thomassen '94]. Note that every planar graph is 5-degenerate. There are planar graphs which are not 4-choosable [Voigt '93].
• Every planar graph of girth at least 5 is 3-choosable. This implies grotzsch’s theorem in a very cute way [Thomassen '03]. There are planar graphs of girth 4 which are not 3-choosable [Voigt
Open Problems :
□ There is a conjecture stating “Every 3-colorable planar graph is 4-choosable”. Prove or disprove this conjecture. A positive answer implies four color theorem !!
□ Another open problem (I learnt this problem from Robin Thomas‘s course on Graph Minors in Spring’2008) is “Find a linear-time algorithm to 3-list-color planar graphs of girth 5″. Thomassen’s
Proof [Thomassen '03] gives a quadratic algorithm. I could not convert his proof into a linear-time algorithm. Probably it requires a new proof. It might be possible to get an $O(n{\log}n)$
algorithm using the data structures of [KK'03].
References :
• [Alon & Tarsi '92] N. Alon, M. Tarsi: Colorings and orientations of graphs. Combinatorica 12(2): 125-134 (1992)
• [Thomassen '94] C. Thomassen: Every Planar Graph Is 5-Choosable. J. Comb. Theory, Ser. B 62(1): 180-181 (1994)
• [Voigt '93] M. Voigt: List colourings of planar graphs. Discrete Mathematics 120(1-3): 215-219 (1993)
• [Thomassen '03] C. Thomassen: A short list color proof of Grötzsch’s theorem. J. Comb. Theory, Ser. B 88(1): 189-192 (2003)
• [Voigt '95] M. Voigt : A not 3-choosable planar graph without 3-cycles. Discrete Mathematics 146(1-3): 325-328 (1995)
• [DKT '08] Z. Dvorak and K. Kawarabayashi and R. Thomas : Three-coloring triangle-free planar graphs in linear time. SODA 2009.
• [KK'03] Lukasz Kowalik, Maciej Kurowski: Short path queries in planar graphs in constant time. STOC 2003: 143-148
Impagliazzo’s Worlds
I am back in Atlanta after attending Valiant’s birthday celebrations, STOC 2009 and Impagliazzo’s Worlds workshop. Another upcoming workshop at Center for Computational Intractability is “Barriers in
Computational Complexity” workshop from August 25-29, 2009. Today’s post is a brief introduction to the five Impagliazzo’s Worlds [Impagliazzo'95].
Algorithmica is the world in which $P=NP$ or $NP \subseteq BPP$.
Heuristica is the world where NP problems are intractable in the worst case but tractable on average.
Pessiland is the world in which there are hard average-case problems, but no one-way functions. As the name suggests this is the worst possible world.
Minicrypt is the world in which one-way functions exist, but public-key cryptography is impossible.
Cryptomania is the world in which public-key cryptography is possible.
Open Problems : An obvious open problem is which world do we live in ? Does Pessiland exist ?
References :
• [Impagliazzo'95] Russell Impagliazzo : A Personal View of Average-Case Complexity. Structure in Complexity Theory Conference 1995: 134-147 [ps] | {"url":"http://kintali.wordpress.com/2009/06/","timestamp":"2014-04-19T17:01:53Z","content_type":null,"content_length":"60273","record_id":"<urn:uuid:1ce42edc-b0d4-4d25-adf0-83cfb2a6ed4b>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lessons In Electric Circuits -- Volume IV
Chapter 2: BINARY ARITHMETIC
It is imperative to understand that the type of numeration system used to represent numbers has no impact upon the outcome of any arithmetical function (addition, subtraction, multiplication,
division, roots, powers, or logarithms). A number is a number is a number; one plus one will always equal two (so long as we're dealing with real numbers), no matter how you symbolize one, one, and
two. A prime number in decimal form is still prime if it's shown in binary form, or octal, or hexadecimal. π is still the ratio between the circumference and diameter of a circle, no matter what
symbol(s) you use to denote its value. The essential functions and interrelations of mathematics are unaffected by the particular system of symbols we might choose to represent quantities. This
distinction between numbers and systems of numeration is critical to understand.
The essential distinction between the two is much like that between an object and the spoken word(s) we associate with it. A house is still a house regardless of whether we call it by its English
name house or its Spanish name casa. The first is the actual thing, while the second is merely the symbol for the thing.
That being said, performing a simple arithmetic operation such as addition (longhand) in binary form can be confusing to a person accustomed to working with decimal numeration only. In this lesson,
we'll explore the techniques used to perform simple arithmetic functions on binary numbers, since these techniques will be employed in the design of electronic circuits to do the same. You might take
longhand addition and subtraction for granted, having used a calculator for so long, but deep inside that calculator's circuitry all those operations are performed "longhand," using binary
numeration. To understand how that's accomplished, we need to review the basics of arithmetic.
Adding binary numbers is a very simple task, and very similar to the longhand addition of decimal numbers. As with decimal numbers, you start by adding the bits (digits) one column, or place weight,
at a time, from right to left. Unlike decimal addition, there is little to memorize in the way of rules for the addition of binary bits:
0 + 0 = 0
1 + 0 = 1
0 + 1 = 1
1 + 1 = 10
1 + 1 + 1 = 11
Just as with decimal addition, when the sum in one column is a two-bit (two-digit) number, the least significant figure is written as part of the total sum and the most significant figure is
"carried" to the next left column. Consider the following examples:
. 11 1 <--- Carry bits -----> 11
. 1001101 1001001 1000111
. + 0010010 + 0011001 + 0010110
. --------- --------- ---------
. 1011111 1100010 1011101
The addition problem on the left did not require any bits to be carried, since the sum of bits in each column was either 1 or 0, not 10 or 11. In the other two problems, there definitely were bits to
be carried, but the process of addition is still quite simple.
As we'll see later, there are ways that electronic circuits can be built to perform this very task of addition, by representing each bit of each binary number as a voltage signal (either "high," for
a 1; or "low" for a 0). This is the very foundation of all the arithmetic which modern digital computers perform.
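
As a side illustration (ours, not part of the original lesson), the column-by-column procedure can be mimicked in a few lines of C; the function names ripple_add and print_bits are our own choices for this sketch:

#include <stdio.h>

/* Print the low n bits of x, most significant bit first. */
void print_bits(unsigned x, int n)
{
    for (int i = n - 1; i >= 0; i--)
        putchar('0' + ((x >> i) & 1));
    putchar('\n');
}

/* Add two n-bit numbers one column at a time, just as in longhand
   binary addition: each column sums two bits plus the carry arriving
   from the column to its right.  Any carry out of the leftmost
   column is lost, as it would be in a fixed-width adder circuit. */
unsigned ripple_add(unsigned a, unsigned b, int n)
{
    unsigned sum = 0, carry = 0;
    for (int i = 0; i < n; i++) {
        unsigned column = ((a >> i) & 1) + ((b >> i) & 1) + carry;
        sum |= (column & 1) << i;    /* this column's sum bit      */
        carry = column >> 1;         /* carry into the next column */
    }
    return sum;
}

int main(void)
{
    /* 1001001 + 0011001 = 1100010, the middle example above:
       0x49 = 1001001[2], 0x19 = 0011001[2] */
    print_bits(ripple_add(0x49, 0x19, 7), 7);
    return 0;
}

Each pass of the loop plays the role of one full adder stage: two input bits plus a carry in, yielding one sum bit plus a carry out.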
With addition being easily accomplished, we can perform the operation of subtraction with the same technique simply by making one of the numbers negative. For example, the subtraction problem of 7 -
5 is essentially the same as the addition problem 7 + (-5). Since we already know how to represent positive numbers in binary, all we need to know now is how to represent their negative counterparts
and we'll be able to subtract.
Usually we represent a negative decimal number by placing a minus sign directly to the left of the most significant digit, just as in the example above, with -5. However, the whole purpose of using
binary notation is for constructing on/off circuits that can represent bit values in terms of voltage (2 alternative values: either "high" or "low"). In this context, we don't have the luxury of a
third symbol such as a "minus" sign, since these circuits can only be on or off (two possible states). One solution is to reserve a bit (circuit) that does nothing but represent the mathematical sign:
. 101[2] = 5[10] (positive)
. Extra bit, representing sign (0=positive, 1=negative)
. |
. 0101[2] = 5[10] (positive)
. Extra bit, representing sign (0=positive, 1=negative)
. |
. 1101[2] = -5[10] (negative)
As you can see, we have to be careful when we start using bits for any purpose other than standard place-weighted values. Otherwise, 1101[2] could be misinterpreted as the number thirteen when in
fact we mean to represent negative five. To keep things straight here, we must first decide how many bits are going to be needed to represent the largest numbers we'll be dealing with, and then be
sure not to exceed that bit field length in our arithmetic operations. For the above example, I've limited myself to the representation of numbers from negative seven (1111[2]) to positive seven
(0111[2]), and no more, by making the fourth bit the "sign" bit. Only by first establishing these limits can I avoid confusion of a negative number with a larger, positive number.
Representing negative five as 1101[2] is an example of the sign-magnitude system of negative binary numeration. By using the leftmost bit as a sign indicator and not a place-weighted value, I am
sacrificing the "pure" form of binary notation for something that gives me a practical advantage: the representation of negative numbers. The leftmost bit is read as the sign, either positive or
negative, and the remaining bits are interpreted according to the standard binary notation: left to right, place weights in multiples of two.
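
As a quick sketch (again ours, assuming a four-bit field like the one above), reading a sign-magnitude pattern in C means treating the leftmost bit purely as a flag:

#include <stdio.h>

/* Decode a four-bit sign-magnitude pattern: bit 3 is only a sign
   flag (0 = positive, 1 = negative); bits 2..0 hold the magnitude. */
int sign_magnitude_value(unsigned pattern)
{
    int magnitude = pattern & 0x7;               /* right three bits */
    return (pattern & 0x8) ? -magnitude : magnitude;
}

int main(void)
{
    printf("%d\n", sign_magnitude_value(0x5));   /* 0101 -> +5 */
    printf("%d\n", sign_magnitude_value(0xD));   /* 1101 -> -5 */
    return 0;
}

Notice that the sign flag requires its own special-case handling; it never participates in the arithmetic itself.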
As simple as the sign-magnitude approach is, it is not very practical for arithmetic purposes. For instance, how do I add a negative five (1101[2]) to any other number, using the standard technique
for binary addition? I'd have to invent a new way of doing addition in order for it to work, and if I do that, I might as well just do the job with longhand subtraction; there's no arithmetical
advantage to using negative numbers to perform subtraction through addition if we have to do it with sign-magnitude numeration, and that was our goal!
There's another method for representing negative numbers which works with our familiar technique of longhand addition, and also happens to make more sense from a place-weighted numeration point of
view, called complementation. With this strategy, we assign the leftmost bit to serve a special purpose, just as we did with the sign-magnitude approach, defining our number limits just as before.
However, this time, the leftmost bit is more than just a sign bit; rather, it possesses a negative place-weight value. For example, a value of negative five would be represented as such:
Extra bit, place weight = negative eight
. |
. 1011[2] = -5[10] (negative)
. (1 x -8[10]) + (0 x 4[10]) + (1 x 2[10]) + (1 x 1[10]) = -5[10]
With the right three bits being able to represent a magnitude from zero through seven, and the leftmost bit representing either zero or negative eight, we can successfully represent any integer
number from negative seven (1001[2] = -8[10] + 1[10] = -7[10]) to positive seven (0111[2] = 0[10] + 7[10] = 7[10]).
Representing positive numbers in this scheme (with the fourth bit designated as the negative weight) is no different from that of ordinary binary notation. However, representing negative numbers is
not quite as straightforward:
zero 0000
positive one 0001 negative one 1111
positive two 0010 negative two 1110
positive three 0011 negative three 1101
positive four 0100 negative four 1100
positive five 0101 negative five 1011
positive six 0110 negative six 1010
positive seven 0111 negative seven 1001
. negative eight 1000
Note that the negative binary numbers in the right column, being the sum of the right three bits' total plus the negative eight of the leftmost bit, don't "count" in the same progression as the
positive binary numbers in the left column. Rather, the right three bits have to be set at the proper value to equal the desired (negative) total when summed with the negative eight place value of
the leftmost bit.
Those right three bits are referred to as the two's complement of the corresponding positive number. Consider the following comparison:
positive number      two's complement
---------------      ----------------
      001                  111
      010                  110
      011                  101
      100                  100
      101                  011
      110                  010
      111                  001
In this case, with the negative weight bit being the fourth bit (place value of negative eight), the two's complement for any positive number will be whatever value is needed to add to negative eight
to make that positive value's negative equivalent. Thankfully, there's an easy way to figure out the two's complement for any binary number: simply invert all the bits of that number, changing all
1's to 0's and vice versa (to arrive at what is called the one's complement) and then add one! For example, to obtain the two's complement of five (101[2]), we would first invert all the bits to
obtain 010[2] (the "one's complement"), then add one to obtain 011[2], or -5[10] in three-bit, two's complement form.
Interestingly enough, generating the two's complement of a binary number works the same if you manipulate all the bits, including the leftmost (sign) bit at the same time as the magnitude bits. Let's
try this with the former example, converting a positive five to a negative five, but performing the complementation process on all four bits. We must be sure to include the 0 (positive) sign bit on
the original number, five (0101[2]). First, inverting all bits to obtain the one's complement: 1010[2]. Then, adding one, we obtain the final answer: 1011[2], or -5[10] expressed in four-bit, two's
complement form.
It is critically important to remember that the place of the negative-weight bit must be already determined before any two's complement conversions can be done. If our binary numeration field were
such that the eighth bit was designated as the negative-weight bit (10000000[2]), we'd have to determine the two's complement based on all seven of the other bits. Here, the two's complement of five
(0000101[2]) would be 1111011[2]. A positive five in this system would be represented as 00000101[2], and a negative five as 11111011[2].
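
Both conversions reduce to a single expression in C. The sketch below is ours; the parameter n stands in for the bit field width that, as stressed above, must be fixed before converting:

#include <stdio.h>

/* Two's complement of an n-bit pattern: take the one's complement
   (invert every bit), add one, and keep only the low n bits.  The
   field width n must be settled before the conversion is done. */
unsigned twos_complement(unsigned x, int n)
{
    unsigned mask = (1u << n) - 1;
    return (~x + 1u) & mask;
}

int main(void)
{
    printf("%u\n", twos_complement(5, 4));   /* 0101 -> 1011, prints 11  */
    printf("%u\n", twos_complement(5, 7));   /* 0000101 -> 1111011 (123) */
    return 0;
}

Printed in binary, 11 is 1011[2] and 123 is 1111011[2], matching the four-bit and seven-bit conversions worked out above.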
We can subtract one binary number from another by using the standard techniques adapted for decimal numbers (subtraction of each bit pair, right to left, "borrowing" as needed from bits to the left).
However, if we can leverage the already familiar (and easier) technique of binary addition to subtract, that would be better. As we just learned, we can represent negative binary numbers by using the
"two's complement" method and a negative place-weight bit. Here, we'll use those negative binary numbers to subtract through addition. Here's a sample problem:
Subtraction: 7[10] - 5[10] Addition equivalent: 7[10] + (-5[10])
If all we need to do is represent seven and negative five in binary (two's complemented) form, all we need is three bits plus the negative-weight bit:
positive seven = 0111[2]
negative five = 1011[2]
Now, let's add them together:
. 1111 <--- Carry bits
. 0111
. + 1011
. ------
. 10010
. |
. Discard extra bit
. Answer = 0010[2]
Since we've already defined our number bit field as three bits plus the negative-weight bit, the fifth bit in the answer (1) will be discarded to give us a result of 0010[2], or positive two, which
is the correct answer.
Another way to understand why we discard that extra bit is to remember that the leftmost bit of the lower number possesses a negative weight, in this case equal to negative eight. When we add these
two binary numbers together, what we're actually doing with the MSBs is subtracting the lower number's MSB from the upper number's MSB. In subtraction, one never "carries" a digit or bit on to the
next left place-weight.
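
Here is a hedged C sketch (ours) of the whole maneuver: form the two's complement of the subtrahend, add, and mask the result back down to the field width, which is precisely the discard-the-extra-bit step:

#include <stdio.h>

/* Subtract b from a within an n-bit field by adding a to the two's
   complement of b.  Masking the sum back down to n bits performs
   the "discard extra bit" step from the worked example above. */
unsigned subtract_via_add(unsigned a, unsigned b, int n)
{
    unsigned mask  = (1u << n) - 1;
    unsigned neg_b = (~b + 1u) & mask;   /* two's complement of b   */
    return (a + neg_b) & mask;           /* add, drop the carry out */
}

int main(void)
{
    /* 0111 + 1011 = 10010, discard the fifth bit, leaving 0010 */
    printf("%u\n", subtract_via_add(7, 5, 4));   /* prints 2 */
    return 0;
}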
Let's try another example, this time with larger numbers. If we want to add -25[10] to 18[10], we must first decide how large our binary bit field must be. To represent the largest (absolute value)
number in our problem, which is twenty-five, we need at least five bits, plus a sixth bit for the negative-weight bit. Let's start by representing positive twenty-five, then finding the two's
complement and putting it all together into one numeration:
+25[10] = 011001[2] (showing all six bits)
One's complement of 11001[2] = 100110[2]
One's complement + 1 = two's complement = 100111[2]
-25[10] = 100111[2]
Essentially, we're representing negative twenty-five by using the negative-weight (sixth) bit with a value of negative thirty-two, plus positive seven (binary 111[2]).
Now, let's represent positive eighteen in binary form, showing all six bits:
. 18[10] = 010010[2]
Now, let's add them together and see what we get:
. 11 <--- Carry bits
. 100111
. + 010010
. --------
. 111001
Since there were no "extra" bits on the left, there are no bits to discard. The leftmost bit on the answer is a 1, which means that the answer is negative, in two's complement form, as it should be.
Converting the answer to decimal form by summing all the bits times their respective weight values, we get:
(1 x -32[10]) + (1 x 16[10]) + (1 x 8[10]) + (1 x 1[10]) = -7[10]
Indeed -7[10] is the proper sum of -25[10] and 18[10].
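
That weighted-sum reading translates directly into code. In this sketch (ours again), the leftmost bit contributes its negative weight and the remaining bits contribute their ordinary positive weights:

#include <stdio.h>

/* Interpret an n-bit two's complement pattern as a signed value:
   the leftmost bit weighs negative 2 to the (n-1) power; all other
   bits keep their ordinary positive place weights. */
int decode_twos_complement(unsigned pattern, int n)
{
    int value = (int)(pattern & ((1u << (n - 1)) - 1));
    if (pattern & (1u << (n - 1)))
        value -= 1 << (n - 1);           /* add the negative weight */
    return value;
}

int main(void)
{
    printf("%d\n", decode_twos_complement(0x39, 6));   /* 111001 -> -7 */
    return 0;
}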
One caveat with signed binary numbers is that of overflow, where the answer to an addition or subtraction problem exceeds the magnitude which can be represented with the allotted number of bits.
Remember that the place of the sign bit is fixed from the beginning of the problem. With the last example problem, we used five binary bits to represent the magnitude of the number, and the left-most
(sixth) bit as the negative-weight, or sign, bit. With five bits to represent magnitude, we have a representation range of 2^5, or thirty-two integer steps from 0 to maximum. This means that we can
represent a number as high as +31[10] (011111[2]), or as low as -32[10] (100000[2]). If we set up an addition problem with two binary numbers, the sixth bit used for sign, and the result either
exceeds +31[10] or is less than -32[10], our answer will be incorrect. Let's try adding 17[10] and 19[10] to see how this overflow condition works for excessive positive numbers:
. 17[10] = 10001[2] 19[10] = 10011[2]
. 1 11 <--- Carry bits
. (Showing sign bits) 010001
. + 010011
. --------
. 100100
The answer (100100[2]), interpreted with the sixth bit as the -32[10] place, is actually equal to -28[10], not +36[10] as we should get with +17[10] and +19[10] added together! Obviously, this is not
correct. What went wrong? The answer lies in the restrictions of the six-bit number field within which we're working. Since the magnitude of the true and proper sum (36[10]) exceeds the allowable
limit for our designated bit field, we have an overflow error. Simply put, six places doesn't give enough bits to represent the correct sum, so whatever figure we obtain using the strategy of
discarding the left-most "carry" bit will be incorrect.
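
The failure is easy to reproduce mechanically. In the sketch below (ours), the six-bit mask plays the role of the fixed bit field, and decoding the masked sum yields -28 rather than +36:

#include <stdio.h>

/* Reproduce the failure: add 17 and 19 in a six-bit field, then read
   the masked result back as a six-bit two's complement value.  The
   true sum (+36) exceeds the +31 ceiling of the field, so the
   decoded answer comes out wrong. */
int main(void)
{
    int n = 6;
    unsigned mask = (1u << n) - 1;
    unsigned raw  = (17u + 19u) & mask;           /* 100100 binary    */
    int value = (int)(raw & (mask >> 1))          /* positive weights */
              - (int)(raw & (1u << (n - 1)));     /* negative weight  */
    printf("%d\n", value);                        /* prints -28       */
    return 0;
}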
A similar error will occur if we add two negative numbers together to produce a sum that is too low for our six-bit binary field. Let's try adding -17[10] and -19[10] together to see how this works
(or doesn't work, as the case may be!):
. -17[10] = 101111[2] -19[10] = 101101[2]
. 1 1111 <--- Carry bits
. (Showing sign bits) 101111
. + 101101
. --------
. 1011100
. |
. Discard extra bit
FINAL ANSWER: 011100[2] = +28[10]
The (incorrect) answer is a positive twenty-eight. The fact that the real sum of negative seventeen and negative nineteen was too low to be properly represented with a five bit magnitude field and a
sixth sign bit is the root cause of this difficulty.
Let's try these two problems again, except this time using the seventh bit for a sign bit, and allowing the use of 6 bits for representing the magnitude:
. 17[10] + 19[10] (-17[10]) + (-19[10])
. 1 11 11 1111
. 0010001 1101111
. + 0010011 + 1101101
. --------- ---------
. 0100100[2] 11011100[2]
. |
. Discard extra bit
. ANSWERS: 0100100[2] = +36[10]
. 1011100[2] = -36[10]
By using bit fields sufficiently large to handle the magnitude of the sums, we arrive at the correct answers.
In these sample problems we've been able to detect overflow errors by performing the addition problems in decimal form and comparing the results with the binary answers. For example, when adding +17
[10] and +19[10] together, we knew that the answer was supposed to be +36[10], so when the binary sum checked out to be -28[10], we knew that something had to be wrong. Although this is a valid way
of detecting overflow, it is not very efficient. After all, the whole idea of complementation is to be able to reliably add binary numbers together and not have to double-check the result by adding
the same numbers together in decimal form! This is especially true for the purpose of building electronic circuits to add binary quantities together: the circuit has to be able to check itself for
overflow without the supervision of a human being who already knows what the correct answer is.
What we need is a simple error-detection method that doesn't require any additional arithmetic. Perhaps the most elegant solution is to check for the sign of the sum and compare it against the signs
of the numbers added. Obviously, two positive numbers added together should give a positive result, and two negative numbers added together should give a negative result. Notice that whenever we had
a condition of overflow in the example problems, the sign of the sum was always opposite of the two added numbers: +17[10] plus +19[10] giving -28[10], or -17[10] plus -19[10] giving +28[10]. By
checking the signs alone we are able to tell that something is wrong.
But what about cases where a positive number is added to a negative number? What sign should the sum be in order to be correct. Or, more precisely, what sign of sum would necessarily indicate an
overflow error? The answer to this is equally elegant: there will never be an overflow error when two numbers of opposite signs are added together! The reason for this is apparent when the nature of
overflow is considered. Overflow occurs when the magnitude of a number exceeds the range allowed by the size of the bit field. The sum of two identically-signed numbers may very well exceed the range
of the bit field of those two numbers, and so in this case overflow is a possibility. However, if a positive number is added to a negative number, the sum will always be closer to zero than either of
the two added numbers: its magnitude must be less than the magnitude of either original number, and so overflow is impossible.
Fortunately, this technique of overflow detection is easily implemented in electronic circuitry, and it is a standard feature in digital adder circuits: a subject for a later chapter.
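In software the same sign-comparison rule takes only a few lines. The following Python sketch is an illustration of the rule rather than of any particular adder circuit; the function name and the field widths are chosen to match the earlier examples:

```python
def add_with_overflow_check(a, b, bits):
    """Two's-complement addition with the sign-based overflow test."""
    mask = (1 << bits) - 1
    raw = (a + b) & mask                       # discard any carry out of the field
    sign_bit = 1 << (bits - 1)
    result = raw - (1 << bits) if raw & sign_bit else raw
    # Overflow is possible only when both addends share a sign, and it has
    # occurred when the sum's sign differs from theirs.
    overflow = (a >= 0) == (b >= 0) and (result >= 0) != (a >= 0)
    return result, overflow

print(add_with_overflow_check(17, 19, 6))    # (-28, True): six bits are too few
print(add_with_overflow_check(-17, -19, 6))  # (28, True)
print(add_with_overflow_check(17, 19, 7))    # (36, False): seven bits suffice
```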
The singular reason for learning and using the binary numeration system in electronics is to understand how to design, build, and troubleshoot circuits that represent and process numerical quantities
in digital form. Since the bivalent (two-valued) system of binary bit numeration lends itself so easily to representation by "on" and "off" transistor states (saturation and cutoff, respectively), it
makes sense to design and build circuits leveraging this principle to perform binary calculations.
If we were to build a circuit to represent a binary number, we would have to allocate enough transistor circuits to represent as many bits as we desire. In other words, in designing a digital
circuit, we must first decide how many bits (maximum) we would like to be able to represent, since each bit requires one on/off circuit to represent it. This is analogous to designing an abacus to
digitally represent decimal numbers: we must decide how many digits we wish to handle in this primitive "calculator" device, for each digit requires a separate rod with its own beads.
A ten-rod abacus would be able to represent a ten-digit decimal number, or a maximum value of 9,999,999,999. If we wished to represent a larger number on this abacus, we would be unable to, unless
additional rods could be added to it.
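The link between field width and representable range can be made concrete with a quick computation. This short Python sketch (an editorial illustration only) prints the ranges for the ten-rod abacus and for a few common binary bit widths:

```python
# Ten decimal rods, as on the abacus described above:
rods = 10
print(rods, "rods: 0 to", 10**rods - 1)              # 0 to 9999999999

# Common binary field widths, unsigned and two's complement:
for bits in (4, 8, 16, 32):
    print(bits, "bits unsigned: 0 to", 2**bits - 1)
    print(bits, "bits signed:", -(2**(bits - 1)), "to", 2**(bits - 1) - 1)
```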
In digital, electronic computer design, it is common to design the system for a common "bit width:" a maximum number of bits allocated to represent numerical quantities. Early digital computers
handled bits in groups of four or eight. More modern systems handle numbers in clusters of 32 bits or more. To more conveniently express the "bit width" of such clusters in a digital computer,
specific labels were applied to the more common groupings.
Eight bits, grouped together to form a single binary quantity, is known as a byte. Four bits, grouped together as one binary number, is known by the humorous title of nibble, often spelled as nybble.
A multitude of terms have followed byte and nibble for labeling specific groupings of binary bits. Most of the terms shown here are informal, and have not been made "authoritative" by any standards
group or other sanctioning body. However, their inclusion into this chapter is warranted by their occasional appearance in technical literature, as well as the levity they add to an otherwise dry subject:
• Bit: A single, bivalent unit of binary notation. Equivalent to a decimal "digit."
• Crumb, Tydbit, or Tayste: Two bits.
• Nibble, or Nybble: Four bits.
• Nickle: Five bits.
• Byte: Eight bits.
• Deckle: Ten bits.
• Playte: Sixteen bits.
• Dynner: Thirty-two bits.
• Word: (system dependent).
The most ambiguous term by far is word, referring to the standard bit-grouping within a particular digital system. For a computer system using a 32 bit-wide "data path," a "word" would mean 32 bits.
If the system used 16 bits as the standard grouping for binary quantities, a "word" would mean 16 bits. The terms playte and dynner, by contrast, always refer to 16 and 32 bits, respectively,
regardless of the system context in which they are used.
Context dependence is likewise true for derivative terms of word, such as double word and longword (both meaning twice the standard bit-width), half-word (half the standard bit-width), and quad
(meaning four times the standard bit-width). One humorous addition to this somewhat boring collection of word-derivatives is the term chawmp, which means the same as half-word. For example, a chawmp
would be 16 bits in the context of a 32-bit digital system, and 18 bits in the context of a 36-bit system. Also, the term gawble is sometimes synonymous with word.
Definitions for bit grouping terms were taken from Eric S. Raymond's "Jargon Lexicon," an indexed collection of terms -- both common and obscure -- germane to the world of computer programming.
Lessons In Electric Circuits copyright (C) 2000-2014 Tony R. Kuphaldt, under the terms and conditions of the Design Science License.
Non-Linear Behaviour
material non-linear, general:
general set of benchmarks:
• a review of numerous potential benchmark problems, not worked up into formal benchmarks
Benchmark P15.
2D single element tests subject to secondary creep (Norton law):
• plane stress with constant uniaxial load
• plane stress with constant applied uniaxial displacement
• plane stress with constant biaxial load (3 different ratios)
• plane stress with constant applied biaxial displacement (3 different ratios)
• plane strain with constant unequi-biaxial load
• plane strain with constant applied unequi-biaxial displacement
3D single element tests subject to secondary creep (Norton law):
• with constant unequi-triaxial load
• with constant applied unequi-triaxial displacement
axisymmetric multi-element test subject to secondary creep (Norton law):
• internally pressurised thick hollow cylinder
2D single element tests subject to primary creep (Norton-Bailey law):
• plane stress with constant uniaxial load
• plane stress with constant uniaxial stepped load
• plane stress with constant applied uniaxial displacement
• plane stress with constant biaxial load (2 different ratios)
• plane stress with constant biaxial stepped load (2 different ratios)
• plane stress with constant applied biaxial displacement (2 different ratios)
3D single element test subject to primary creep (Norton-Bailey law):
• with constant unequi-triaxial load
2D single element tests subject to combined primary and secondary creep:
• plane stress with constant uniaxial load
• plane stress with constant uniaxial stepped load
• plane stress with constant applied uniaxial displacement
Benchmarks R0027, R0049 (subset of above), R0080 (subset of above).
3D multi-element test subject to secondary Norton creep:
• torsional creep of long prismatic circular shaft under uniform twist held constant with time
• torsional creep of long prismatic circular shaft under uniform twist steadily increasing with time
3D multi-element test subject to primary creep (Norton-Bailey law) with time hardening:
• torsional creep of a square shaft under constant twist
• torsional creep of a square shaft under steady twist rate
2D plane strain thick cylinder under temperature transient followed by fluctuating internal pressure:
• both elastic-plastic and creep behaviour over 4 stages; non-proportional loading
Axisymmetric multi-element test subject to temperature-dependent secondary creep (Norton law):
• thick hollow cylinder with temperature gradients and thermally dependent creep laws
Benchmarks R0026, R0030, R0049 (subset of above), R0080 (subset of above).
2D single element tests:
• plane strain with perfect plasticity
• plane strain with isotropic hardening
• plane strain comparing kinematic and isotropic hardening
• plane stress with perfect plasticity
• plane stress with isotropic hardening
3D single element tests:
• with perfect plasticity
• with isotropic hardening
Axisymmetric test with perfect plasticity
Axisymmetric test with isotropic hardening
Plane strain cylinder incompressibility test
Benchmarks P06, R0049 (subset of above), R0072 (subset of above), R0080 (subset of above).
2D two-bar assembly under constant axial load and cyclic temperatures of differing values in each bar:
• with kinematic hardening and thermal load in one bar only, to test elastic shakedown
• with kinematic hardening and larger thermal load amplitude, to test alternating plasticity
• with kinematic hardening and thermal load in both bars, to test different elastic shakedown in each bar
• with perfect plasticity and thermal load in one bar only, to test thermal ratcheting
2D beam with constant end load and cyclic temperature gradient:
• with kinematic hardening, to test elastic/plastic shakedown and cyclic plastic strain accumulation
2D plane strain rigid punch test under prescribed displacements, with many elements:
• with perfect plasticity, testing the solution towards the limit load
• with isotropic bilinear hardening
Thin square plate with uniform load and perfect plasticity:
• with simply supported edges, testing the solution towards the limit load
• with clamped edges, testing the solution towards the limit load
2D plane strain thick cylinder under temperature transient followed by fluctuating internal pressure:
• with both elastic-plastic and creep behaviour over 4 stages; non-proportional loading
2D plane stress half of an eccentric tube under constant internal pressure and cyclically varying through-thickness temperature gradient:
• with perfect plasticity, testing ratcheting and alternating plasticity
Benchmarks R0026, R0030, R0049 (subset of above), R0072 (subset of above), R0080 (subset of above).
geometric non-linearity:
Large displacements in truss elements:
• the unrotated element (1)
• limit point behaviour with one variable (2)
• hardening with one variable (3)
• bifurcation problem (4)
• limit point with two variables (5)
• hardening with two variables (6)
• snap back (7)
Benchmarks P03, R0065 (subset of above), R0072 (subset of above), R0080 (subset of above).
Large displacements and rotations in thin 2D beams and axisymmetric shells:
• rigid body rotation (NLGB1)
• straight cantilever with end moment (NLGB2)
• curved cantilever with end moment (NLGB3)
• straight cantilever with transverse free end point load (NLGB4)
• straight cantilever with axial free end point load (NLGB5)
• symmetric shallow pinned arch buckling (NLGB6)
• unsymmetric deep arch buckling (NLGB7)
• Lee’s frame buckling problem (NLGB8)
• built-in circular plate with uniformly distributed load (NLGS1)
• annular plate with inside edge load (NLGS2)
• truncated cone with outside edge load (NLGS3)
• built-in spherical cap with central point load (NLGS4)
Benchmarks P10, R0010, R0065 (subset of above), R0072 (subset of above), R0080 (subset of above).
Assembly tests for 3D beams and shells:
• elastic large deflection response of a Z-shaped cantilever beam under end load (3DNLG-1)
• elastic large deflection response of a pear shaped cylinder under end shortening (3DNLG-2)
• elastic lateral buckling of a right angle frame under in-plane end moments (3DNLG-3)
• lateral-torsional buckling of an elastic cantilever subjected to a transverse follower force (3DNLG-4)
• large deflection of a curved elastic cantilever under transverse end load (3DNLG-5)
• buckling of a flat plate with an initial imperfection when subjected to in-plane shear (3DNLG-6)
• large deflection elastic response of a hinged spherical shell with uniform pressure loading (3DNLG-7)
• collapse of a straight pipe under pure bending (3DNLG-8)
• large elastic deflection of a pinched hemispherical shell (3DNLG-8)
• elasto-plastic behaviour of a stiffened cylindrical shell panel under compressive end load (3DNLG-10)
Benchmarks R0024, R0029, R0065 (subset of above), R0080 (subset of above).
A survey of possible benchmarks for 3D beams:
• elastic lateral-torsional buckling of a cantilever (B1)
• elastic lateral buckling of a right angle frame under in-plane end moments (B2)
• lateral buckling of an I beam cantilever subjected to transverse force (B3)
• elastic circular ring subjected to non-uniform pressure (B4)
• elastic large deflection of a cantilever under follower transverse force (B5)
• elasto-plastic large deflection of a cantilever under transverse end load (B6)
• elastic large deflection of pinned deep arch subjected to central load (B7)
• elastic large deflection of cantilever 45° bend subject to end load (B8)
• elastic large deflection analysis of a prestressed cable (B9)
• elastic large deflection behaviour of a cable under gravity load (B10)
• torsional buckling of end loaded cantilever (B11)
• elastic large deflection behaviour of Lee’s frame (B12)
• elastic large deflection of curved beam under non-conservative in-plane load (B13)
• clamped-hinged deep circular arch subjected to a point load (B14)
• large deflection of a diamond shaped frame (B15)
• planar square frame loaded at the midpoints on opposite sides (B16)
• cantilever under end moment (B17)
• large deflection of a cantilever right angle frame under end load (B18)
• elasto-plastic collapse of a 90° pipe bend (B19)
• elasto-plastic collapse of a thin walled elbow under in-plane loading and internal pressure (B20)
• uniform collapse of a curved pipe under pure bending and internal pressure (B21)
• elasto-plastic large deflection response of a Z-shaped cantilever beam (B22)
• elasto-plastic analysis of I beams in torsion (B23)
• finite rigid body rotation of a curved arch (B24)
• collapse analysis of elasto-perfectly plastic triangular truss structure (B25)
• elasto-plastic large deformation of a clamped beam under point load (B26)
• plastic collapse of a two bay two story frame (B27)
Benchmark R0009.
A survey of possible benchmarks for 3D shells:
• elastic large deflection response of a centrally loaded spherical shell (S1)
• large deflection elasto-plastic analysis of a cylindrical shell roof (S2)
• thermal buckling of a simply supported rectangular plate (S3)
• elastic post-buckling behaviour of a square plate subjected to shear (S4)
• elastic large deflection of a cantilever plate with end moment (S5)
• elasto-plastic buckling of a clamped imperfect spherical cap subjected to uniform pressure (S6)
• elastic large deflection analysis of a pinched hemispherical shell (S7)
• elastic large deflection of a hyperbolic paraboloid under concentrated moment at corners (S8)
• elastic large deflection of a cylindrical shell under point load (S9)
• elastic large deflection of a curved beam (S10)
• finite rigid body rotation of a singly curved shell (S11)
• elastic large deflection response of a pear shaped cylinder under end shortening (S12)
• buckling and post-buckling response of axially compressed perfect and imperfect elliptical cylinders (S13)
• elastic large deflection response of a cylindrical shell with two diametrically opposite cutouts (S14)
• elastic large deflection response of a clamped square plate under uniform pressure (S15)
• post buckling analysis of cylindrical shell under axial compression (S16)
• post buckling behaviour of cylindrical panel under central point load (S17)
• expansion bellows under axial end load (S18)
• elasto-plastic response of a spherical cap under a central load (S19)
• large deflection of a clamped cylindrical panel subjected to uniform pressure (S20)
• snap through of hinged cylindrical shell (S21)
• post buckling of a simply supported circular plate loaded in radial edge compression (S22)
• elastic lateral buckling and large deflection response of a narrow cantilever beam (S23)
• elasto-plastic lateral buckling of a narrow cantilever beam (S24)
• large deflection elasto-plastic response of a circular plate (S25)
Benchmark R0009.
A survey of possible benchmarks for 3D branched shells:
• elasto-plastic behaviour of a stiffened cylindrical shell panel under compressive end load (BS1)
• post buckling of stiffened panel under axial compression (BS2)
• elastic buckling of T-ring stiffened cylinders under external pressure (BS3)
• buckling of spherical shells ring supported at the edges (BS4)
• elastic buckling of I-ring stiffened cylinders under external pressure (BS5)
• 3D finite rigid body rotation of stiffened shell connected to a hinged bar (BS6)
• non-symmetric elasto-plastic bifurcation buckling of torispherical head with axisymmetric nozzle (BS7)
• elasto-plastic large deflection response of a multi-mitre pipe bend (BS8)
Benchmark R0009.
contact analysis:
2D multi-element plane strain tests:
• contact patch test
• rigid punch on deformable foundation
• Hertzian contact
• sliding wedge with linear springs
• bending of a plate over a stiff cylinder
• sliding and rolling of a ring on a rigid surface
2D multi-element plane stress tests:
• cantilevered beam loaded against a rigid curvilinear surface
2D solid multi-element tests:
• two contacting rings
• buckling of a curved column with self-contact
2D axisymmetric multi-element tests:
• interference between two cylinders
Benchmark R0081.
Multi-element 2D/3D tests of a range of physical features:
• 2D contact of cylindrical roller
• 3D punch with rounded edges
• 3D sheet metal forming
• 3D loaded pin
• 3D steel roller on rubber
Benchmark R0094.
Patent application title: Methods and apparatus for teaching reading and math skills to individuals with dyslexia, dyscalculia, and other neurological impairments
The present invention includes a phonetic alphabet with clarifiers and modifiers that aid in the teaching of reading skills to individuals with dyslexia, dyscalculia, and other neurological impairments when the letters of the present invention are connected horizontally in series with clarifiers and modifiers to form a word, a phrase, a sentence, and/or a paragraph. The present invention further includes mathematical symbols for teaching math skills to individuals with dyslexia, dyscalculia, and other neurological impairments when predetermined geometric shapes are arranged to form numbers of a base 10 counting system that are capable of being used in addition, subtraction, multiplication, and division. The present invention of mathematical symbols includes whole numbers, real numbers, integers, fractions, and decimals. The present invention also includes 2D and 3D tools and methods of using same.
1. A phonetic language system comprising: an alphabet consisting of only 49 letters to form a word with a vowel or a consonant; and a plurality of word modifiers and clarifiers positionable horizontally
prefixing or suffixing the vowel or consonant of the word.
2. The phonetic language system according to claim 7, wherein the alphabet and the plurality of modifiers and clarifiers are embodied on a keyboard operably connected to a computer.
3. The phonetic language system according to claim 1, further comprising tone variation marks being positionable above the word.
4. The phonetic language system according to claim 1, further comprising volume variation marks being positionable above the word.
5. The phonetic language system according to claim 1, wherein the clarifiers suffixing the word distinguish a meaning of the word having the same phonetic spelling.
6. The phonetic language system according to claim 1, wherein the clarifiers suffixing the word define a state of the word.
7. A direct representation mathematical learning system having a plurality of geometric shapes, wherein the plurality of geometric shapes comprise: a large square having a value of "10" forming a
perimeter of a 10 based shape and having a color, a small triangle having a value of "1" and a color, wherein 10 small triangles are capable of being positioned in the perimeter of the 10 based shape
for a value of "10," a large triangle having a value of "5" and a color, wherein the small triangle color is a different color than the large triangle color; wherein the large triangle color is the
same as the large square color; wherein 2 large triangles are capable of being positioned in the perimeter of the 10 based shape for a value of "10;" an outer 4 shape having a value of 4 and a color,
wherein the outer 4 shape is capable of having two sides being positioned adjacent to the perimeter of the 10 based shape, wherein 2 outer 4 shapes are capable of being positioned in the perimeter of
the 10 based shape for a value of "8;" an inner 4 shape having a value of "4" and a color, wherein the inner 4 shape being capable of being surrounded by other geometric shapes of the plurality of
geometric shapes within the perimeter of the 10 based shape, wherein the inner 4 shape color is the same as the outer 4 shape color; a small square having a value of "2" and a color, wherein 5 small
squares are capable of being positioned in the perimeter of the 10 based shape for a value of "10;" an inner 3 shape having a value of "3" and having a geometry that is completely surrounded by
the other geometric shapes when positioned in the perimeter of the 10 based shape; and an outer 3 shape having a value of "3" and having a geometry of 3 small triangles connected thereto at their
respective corner points and having sides capable of being positioned adjacent to the perimeter of the 10 based shape.
8. The direct representation mathematical learning system according to claim 7, further comprising a longitudinal spacer having a null value being sized to fill a gap formed adjacent to the small
9. The direct representation mathematical learning system according to claim 7, further comprising an L-shaped spacer having a value of "10" or "0" having side lengths equivalent to a side length of the
perimeter of the 10 based shape.
10. The direct representation mathematical learning system according to claim 7, wherein the plurality of geometric shapes are embodied in two dimensional pieces.
11. The direct representation mathematical learning system according to claim 7, wherein the plurality of geometric shapes are embodied in one or more cubes.
12. A method of using a direct representation mathematical learning system as claimed in claim 7 to solve a mathematical equation, the method steps comprising: converting numerical values of the mathematical equation based on its place value into a first grouping of 10 based symbols as shown in FIG. 11b, wherein each symbol of the grouping of 10 based symbols has a value of "10" or less;
determining an operation of the mathematical equation selected from the group consisting of addition, subtraction, multiplication, and division; breaking down one or more 10 based symbols of the
first grouping of 10 based symbols into a second grouping of 10 based symbols having smaller values than the one or more 10 based symbols; and executing the operation of the mathematical equation by
arranging a second grouping of 10 based symbols into a third grouping of 10 based symbols.
13. The method according to claim 12, wherein the operation is addition, the step of executing further comprises grouping by a place value the symbols of the first grouping of 10 based symbols and/or the symbols of the second grouping of 10 based symbols up to a "10" value to form a third grouping of 10 based symbols for the place value.
14. The method according to claim 12, wherein the operation is subtraction, the step of executing further comprises grouping by a place value the symbols of the first grouping of 10 based symbols and/or the symbols of the second grouping of 10 based symbols to less than a "10" value to form a third grouping of 10 based symbols for the place value.
15. The method according to claim 12, wherein the step of converting further comprising using an L-shaped spacer to represent "10" or "0."
16. The method according to claim 12, wherein the step of breaking down further comprising using an L-shaped spacer to place therein the broken down one or more 10 based symbols.
17. The method according to claim 12, wherein the step of executing further comprising using an L-shaped spacer to place therein the third grouping of 10 based symbols.
18. The method according to claim 12, wherein the step of converting further comprising using a longitudinal spacer to separate symbols representing "
19. The method according to claim 12, wherein the step of breaking down further comprising using a longitudinal spacer to separate symbols representing "
20. The method according to claim 12, wherein the step of executing further comprising using a longitudinal spacer to separate symbols representing "
This is a Continuation In Part application of non-provisional application U.S. Ser. No. 12/927,356, titled Methods and Apparatus for Teaching Reading and Math Skills to Individuals with Dyslexia and Other Neurological Impairments, including Phonetose, SHAPE MATH®, Conceptual Clarifiers, Internet Speaking Reference Chart, Speaking Phonetose Program, Phonetic Hangman, Alternating Line Highlighting, and English Grid, filed Nov. 12, 2010, which claims priority from U.S. Provisional Patent Application No. 61/260,481, titled METHODS AND APPARATUS FOR TEACHING READING AND MATH SKILLS TO INDIVIDUALS WITH DYSLEXIA AND OTHER NEUROLOGICAL IMPAIRMENTS, INCLUDING PHONETOSE, SHAPE MATH, CONCEPTUAL CLARIFIERS, INTERNET SPEAKING REFERENCE CHART, SPEAKING PHONETOSE PROGRAM, PHONETIC HANGMAN, ALTERNATING LINE HIGHLIGHTING, AND ENGLISH GRID, filed Nov. 12, 2009, both incorporated herein by reference.
FIELD OF THE INVENTION [0002]
Teaching methods and apparatus for teaching reading and math skills to individuals with dyslexia and other neurological impairments.
BACKGROUND OF THE INVENTION [0003]
Dyslexia is a learning disorder that manifests itself primarily as a difficulty with reading and spelling. It is separate and distinct from reading difficulties resulting from other causes, such as a
non-neurological deficiency with vision or hearing, or from poor or inadequate reading instruction. It is estimated that dyslexia affects between 5% and 17% of the U.S. population.
Although dyslexia is thought to be the result of a neurological difference, it is not an intellectual disability. Dyslexia is diagnosed in people of all levels of intelligence: below average,
average, above average, and highly gifted.
Dyslexia symptoms vary according to the severity of the disorder as well as the age of the individual.
Pre-school age children. It is difficult to obtain a certain diagnosis of dyslexia before a child begins school, but many dyslexic individuals have a history of difficulties that began well before
kindergarten. Children who exhibit these symptoms have a higher risk of being diagnosed as dyslexic than other children. Some of these symptoms are:
Learns new words slowly;
Has difficulty rhyming words, as in nursery rhymes;
Late in establishing a dominant hand.
Early elementary school-age children:
Difficulty learning the alphabet;
Difficulty with associating sounds with the letters that represent them (sound-symbol correspondence);
Difficulty identifying or generating rhyming words, or counting syllables in words (phonological awareness);
Difficulty segmenting words into individual sounds, or blending sounds to make words (phonemic awareness);
Difficulty with word retrieval or naming problems;
Difficulty learning to decode words;
Confusion with before/after, right/left, over/under, and so on;
Difficulty distinguishing between similar sounds in words; mixing up sounds in multisyllable words (auditory discrimination) (for example, "aminal" for animal, "bisghetti" for spaghetti).
Older elementary school children:
Slow or inaccurate reading.
Very poor spelling;
Difficulty associating individual words with their correct meanings;
Difficulty with time keeping and concept of time;
Difficulty with organization skills;
Due to fear of speaking incorrectly, some children become withdrawn and shy or become bullies out of their inability to understand the social cues in their environment;
Difficulty comprehending rapid instructions, following more than one command at a time or remembering the sequence of things;
Reversals of letters (b for d) and a reversal of words (saw for was) are typical among children who have dyslexia. Reversals are also common for children age 6 and younger who don't have dyslexia.
But with dyslexia, the reversals persist;
Children with dyslexia may fail to see (and occasionally to hear) similarities and differences in letters and words, may not recognize the spacing that organizes letters into separate words, and may
be unable to sound out the pronunciation of an unfamiliar word.
The complexity of a language's orthography, or writing and spelling system, has a direct impact on how difficult it is to learn to read in that language; formally, this is the orthographic depth.
Although English has an alphabetic orthography, it is a complex or deep orthography that employs spelling patterns at several levels. The major structural categories that make up English spelling are
letter-sound correspondences, syllables, and morphemes. Some other languages, such as Spanish, have alphabetic orthographies that employ only letter-sound correspondences, so-called shallow
orthographies. It is relatively easy to learn to read in languages like Spanish; it is much more difficult to learn to read in languages that have more complex orthographies, as in English.
Logographic writing systems, notably Chinese characters, pose additional difficulties.
From a neurological perspective, different types of writing, for example, alphabetic as compared to pictographic, require different neurological pathways in order to read, write and spell. Because
different writing systems require different parts of the brain to process the visual notation of speech, children with reading problems in one language might not have a reading problem in a language
with a different orthography. The neurological skills required to perform the tasks of reading, writing, and spelling can vary between different writing systems and as a result different neurological
skill deficits can cause dyslexic problems in relation to different orthographies.
There is no cure for dyslexia, but dyslexic individuals can learn to read and write with appropriate educational support. For alphabet writing systems, the fundamental aim is to increase a child's
awareness of correspondences between graphemes and phonemes, and to relate these to reading and spelling. It has been found that training focused towards visual language and orthographic issues
yields longer-lasting gains than mere oral phonological training. The best approach is determined by the underlying neurological cause(s) of the dyslexic symptom.
People with dyslexia are often gifted in math. Their three-dimensional visualization skills help them "see" math concepts more quickly and clearly than non-dyslexic people. Unfortunately,
difficulties in directionality, rote memorization, reading, and sequencing can make the math tasks so difficult that their math gifts are never discovered. In particular, many dyslexic children and
teens have problems in some areas of math, especially the multiplication tables, fractions, decimals, percentages, ratio and statistics. Thus, good methods for teaching math to dyslexic individuals
emphasize their visualization skills.
The present invention provides methods and apparatus for teaching reading and math skills to individuals with dyslexia and dyscalculia. The same methods and apparatus may also provide similar aid to
individuals with similar cognitive disorders, such as aphasia. The tools of the present invention can be used in combination with other programs and systems. Primary users are dyslexics, but other
customers could include students studying English as a second language and the Conceptual Clarifier's could be used by those suffering from aphasia and averbia. Conversely, the tools could also be
used to help dyslexics learn other languages. The syllable form of PHONETOSE® can be used by a student to help learn syllable languages such as Korean and Japanese. The tone system could also help a
dyslexic student learn Mandarin or Cantonese Chinese.
SUMMARY OF INVENTION [0034]
The present invention includes a phonetic alphabet with clarifiers and modifiers that aid in the teaching of reading skills to individuals with dyslexia, dyscalculia, and other neurological impairments when the letters of the present invention are connected horizontally in series with clarifiers and modifiers to form a word, a phrase, a sentence, and/or a paragraph. The present invention further includes mathematical symbols for teaching math skills to individuals with dyslexia, dyscalculia, and other neurological impairments when predetermined geometric shapes are arranged to form numbers of a base 10 counting system that are capable of being used in addition, subtraction, multiplication, and division. The present invention of mathematical symbols includes whole numbers, real numbers, integers, fractions, and decimals. The present invention also includes 2D and 3D tools and methods of using same.
One embodiment of the phonetic language system of the present invention includes: a letter guide that lists letters and corresponding sounds for a written orthography in one or more scripts for each
letter; conceptual clarifiers for distinguishing homophones; and a letter organization system of the letters. The system further comprising a computer keyboard layout that converts the written
orthography for use on a computer standard keyboard. The conceptual clarifiers are a series of abstract symbols positionable at an end of each homophone of the homphones to distinguish the each
homophone from other homophones having the same phonetic spelling. The system further comprising a coding system including "@" for a vowel, "*" for a consonant, "#" for the rest of the word, and "/"
for a syllable division. The system further comprises a computer-based system for constructing words on a timeline and generating the sound of the each letter in the written orthography. A
computer-based system can include a display of the letters in the written orthography wherein activation of each letter generates the coding system and the sound for each letter.
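To illustrate how such a coding system might operate, here is a hypothetical Python sketch. The vowel set, the function name, and the syllable division of the example word are assumptions made for illustration only, and the "#" (rest-of-word) code is not modeled; the filing itself does not specify an implementation:

```python
# "@" marks a vowel, "*" marks a consonant, "/" marks a syllable division.
VOWELS = set("aeiou")  # placeholder vowel inventory for this illustration

def code_pattern(syllables):
    """Encode a word, given as a list of syllables, into the @/*// notation."""
    coded = []
    for syllable in syllables:
        coded.append("".join("@" if ch in VOWELS else "*" for ch in syllable))
    return "/".join(coded)

print(code_pattern(["com", "pu", "ter"]))  # *@*/*@/*@*
```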
BRIEF DESCRIPTION OF THE DRAWINGS [0036]
FIGS. 1A-1L are illustrations of the PHONETOSE® letter guide, including a list of all the PHONETOSE® letters and sounds they make in all three scripts in both upper and lower case;
FIG. 1M is a list of syllable writing examples of the present invention in PHONETOSE®;
FIG. 1N is a list of syllable possibilities of the present invention in PHONETOSE®;
[0039] FIG. 1O is an example of tone variation marks of the present invention in PHONETOSE®;
[0040] FIG. 1P is an example of "in syllable" tone variation marks of the present invention in PHONETOSE®;
[0041] FIG. 1Q is an example of the volume variation mark of the present invention in PHONETOSE®;
FIGS. 1S-1Y are the 8 and 5 tone range of the present invention in PHONETOSE®;
FIG. 2 is an exemplary letter organization system of PHONETOSE®;
FIG. 2A is an example of how to look up homophones in the PHONETOSE® letter organization system using the Internet speaking reference chart;
[0045] FIG. 3 is one embodiment of a PHONETOSE® keyboard layout;
FIGS. 3A-E are magnified views of select keys of the keyboard shown in FIG. 3;
FIGS. 4A-C are the list of Conceptual Clarifiers;
FIGS. 5A-E are a list of English Grid examples;
FIGS. 6A-B are illustrations of one embodiment of a PHONETOSE® speaking program;
[0050] FIG. 7 is an example of Phonetic Hangman;
FIG. 8 shows an example of the horizontal form of syllable PHONETOSE® for the word "computer" in Phonetic Hangman;
[0052] FIG. 9 shows an example of the vertical form of syllable PHONETOSE® for the word "computer" in Phonetic Hangman;
FIGS. 10A-B are examples of the Alternate Line Highlighting transparencies;
FIG. 11A is an example of a representation of 28 according to one embodiment of the present invention;
FIGS. 11B-12 show SHAPE MATH® numbers 1-10, a fifty, one hundred and 60 pattern, and the zero spacers;
FIG. 13 shows the ascending SHAPE MATH® colors;
FIGS. 14-15 show different kinds of ten shapes;
[0058] FIG. 17 shows the written form of numbers 1-10;
FIGS. 18A-B demonstrate mental SHAPE MATH®;
FIGS. 19A-B show the quantities of 10 and 100;
FIGS. 20A-F show place value addition;
FIGS. 21A-F show place value subtraction;
FIGS. 23-25 show manipulation within the ten shape pattern;
FIGS. 26-34 are examples of multiplication techniques;
FIGS. 35-38 demonstrate time conversion;
FIG. 39 shows a SHAPE MATH® cube laid flat;
FIGS. 40-44 demonstrate mental division;
FIGS. 45-46 demonstrate direct mental division;
FIGS. 47A-B demonstrate long division;
FIGS. 48-58E show SHAPE MATH® currency and demonstrate its application;
FIGS. 59-69C show SHAPE MATH® fractions and demonstrate their application; and
FIGS. 70-77 demonstrate the use of percentages in SHAPE MATH®.
DETAILED DESCRIPTION OF THE INVENTION [0073]
The present invention includes a phonetic alphabet with clarifiers and modifiers and mathematical symbols and methods of teaching and using same. The phonetic alphabet and method of teaching and use
is referred to herein as PHONETOSE®. The mathematical symbols and method of use is referred to herein as SHAPE MATH®. The present invention discloses examples of color schemes associated with numbers
or symbols or blocks or geometric shapes. These examples are for illustrative purposes and are not meant to limit the invention.
PHONETOSE® or "Fa3nedo2s" is a one-to-one phonemic correspondence alternative written orthography for English that is used as a tool to help dyslexic phonemic awareness and used for many of the other
inventions as a reference tool. The principal design is to help dyslexics learn to read by allowing them to learn each sound in English without the confusion of digraphs, trigraphs, and diphthongs.
PHONETOSE® is designed to avoid many of the problems of other writing systems by applying an understanding of the unique difficulties of dyslexics. The principles and concepts of PHONETOSE® are
described below.
Letter Properties. Every letter follows the principle of self containment, the principle that each letter is unique and without the use of diacritics, which are symbols above or below the character
to distinguish sounds. However, PHONETOSE® does have its own set of modifiers. The key to these modifiers is that they are in the same horizontal, linear visual path as the letter they are modifying
(for example, the modifier is not above or below the adjacent character). This avoids the visual tracking difficulties that come with severe dyslexia. With diacritics, a severe dyslexic can read
through the word and miss the diacritics because they are not in the same visual path. These modifiers are a backwards slash over a character and a circle made from the last line of the character
similar to the circle around the "@" sign. However, these letters with the modifiers should be taught as wholly unique letters. This is similar to how the Korean alphabet is taught as a syllable
language like Japanese when its syllable parts could be taught as separate components. There are also many letters that are wholly unique to PHONETOSE®. Many of these are combinations of the two
letters that make the Digraph of these sounds in English. This fusion allows for a more easy transition to English writing. While other letters have absolutely no connection to written English.
PHONETOSE® letters do not have names as in English were a "w" is called "doubleyoo". In PHONETOSE®, the name of the letter is the sound it makes. The Japanese also use the system of not having names
for their letters. To further clarify a sound you can refer to its characteristics. For instance, you could say whether it is a constant sound like "h" or a sound with a quick attack like "k". It
could be a voiced sound like "b" or an unvoiced sound like "p". It can be a consonant or vowel. Notably, a vowel in PHONETOSE® is a little bit different than in English. A vowel in PHONETOSE® is a
sound that is made entirely with the voice box and is one of the sounds required to be paired with a consonant to make a syllable. These classifying characteristics clear up the confusion between
vowels and consonants that is found in other classifying systems where "h", "w" and "r" are in a lingual limbo. The reason there are no names for the letters is so the student will think of the sound
and not its name. The names of letters are a form of disassociation that can cause another layer of confusion that a student must then filter through. This is not a problem in PHONETOSE®.
There are multiple types of PHONETOSE® script. There are Cursive, Printed and PHONETOSEJI® forms. The first two are as their names suggest but the third one is a bit more complicated. PHONETOSEJI® is
a way of typing PHONETOSE® on a conventional QWERTY keyboard without special software. It is very similar to the Romanization of the Japanese language in that it is a Romanized version of PHONETOSE®.
It uses a combination of letters and numbers that correlate to the single letters of the other two script forms. Letter/number combinations correlate to the placement of the characters in the letter
chart of all the PHONETOSE® letters (see FIG. 2). For example, in this script the "or" sound in English would be expressed "r3" because this sound is the third "r" sound. This is not the preferred
PHONETOSE® script and is only used when access to a PHONETOSE® keyboard is not available. It lacks the one to one phonemic correspondence of the other scripts but the fact that numbers are used for
the digraphs instead of two letters eliminates a lot of the confusion that comes from English spellings. Most importantly, these letter/number combinations are 100% consistent. Also, three consonant
digraphs are reintroduced for this form; ch, th, sh. It is less confusing to reintroduce these than to give them letter/number combinations. As a teaching tool, this PHONETOSE® script can be used as
a transition step to get the student used to tracking two characters to make a single sound in a 100% consistent environment. This is a crucial step to learning many languages including French and
Letter Sounds. Referring to FIGS. 1A-L, PHONETOSE® has a Letter Guide that lists forty-nine (49) PHONETOSE® letters and sounds they make in all three scripts in both upper and lower case. The letters
"C". "Q," and "X" are not included in the present invention phonetic alphabetic.
Conceptual Clarifiers. Now referring to FIGS. 4A-C, the Conceptual Clarifiers are the homophone distinguishing system for PHONETOSE®. It is a series of symbols that only come at the end of a word
that distinguish words that sound the same but have different meanings from each other. A more complete explanation of Conceptual Clarifiers, as well as a list of the Conceptual Clarifiers and their
meanings can be found below in the section named "Conceptual Clarifiers."
Grouping System. Referring to FIG. 2 (PHONETOSE® Letter Organization System), the PHONETOSE® letter grouping system and alphabetical order are based on a few principles applied to several different
areas of letter groups. The first group is the sounds that "o" makes. This section is comprised of six letters. The short "o" sound comes first as it does with all of the vowels. Next comes the long
"o" which follows the pattern of the rest of the vowels. If the explanation of these groupings is to be practical and in order to eliminate the need to explain what sound I am referring to multiple
times, I will now switch to the PHONETOSEJI® script for these sounds. Next is o3, which is placed in the last row of the first column so it is next to the letter it is most similar to, which is o6. It
is followed by o4 which is in the first row of the second column so it is next to the short "a" sound, which is the sound it is most similar to. Underneath that is the sound "8" which is there
because this sound, if reversed, makes an o2 sound. The o6 is last and its placement has already been explained with the placement of o3.
The rest of the vowels are grouped in a separate section. The first row is made up of the short forms of the letters. As mentioned in the chart, this "u" makes the particular "u" sound found in the
word hull. This is because this is the only sound wholly unique to "u" and, as a PHONETOSE® principle, when a letter is used to make multiple sounds in English it only makes one unique sound in
PHONETOSE®. This is the same reason that "y" only makes the "y" sound in words like young, even though it can make 4 other sounds in English: i, i2, e2 and a2. The next row is all the long forms of
vowels. The following row is the third form letters which are the third form of a vowel. The a3 letter was made to solve the confusion over "u". "u" has two short forms and two long forms for a total
of four different possible sounds. To cut through the confusion I made a separate letter for the important sound of a3. The other letter in this row, r3, was also created to avoid confusion. "R2" is
a combined letter of the sounds "o" and "r" but this combination looks like an English "or". To eliminate this confusion there are separate letters for the r2 sound and the r3 sound. In PHONETOSE®,
all three "r's" are vowels because they can be made solely with the larynx and can serve as a vowel when paired with consonants. Such as in the name Kirk, car and core.
Next are consonants that have been paired together because they have similar sounds. Unlike the next two sections, they do not have an unvoiced pair or a letter whose only distinction is being the
non-voiced form of a voiced letter. The letters made closer to the front of the mouth go first followed by those that are progressively made further back in the mouth. The first row is m, n and n2
because these are all very similar sounds. The next row is w and y for the same reasons.
The next two sections are the voiced and unvoiced pairs. This can be a single group, but is preferably split into two sections. The first section is the rest of the consonants that have voiced and
unvoiced pairs and have what is termed a "quick attack". This is a sound that cannot be continued indefinitely because the sound itself has a definite beginning middle and end. You can liken it to a
symbol crash. You cannot audio loop a symbol crash and get a consistent unvarying sound. You will just hear the symbol crash multiple times. Letters like this are "t", "b" and "k". On the left column
are the unvoiced letters and on the right are the voiced letters. Starting from the top are the letters made most forward in the mouth. At the bottom are those made closest to the back of the mouth.
The second section of voiced and unvoiced pairs are the unvarying sounds. These are the sounds that, if audio looped, you would not hear any variation from the beginning to the end. Letters like this
include, "f", "s" and "z". This section uses the same characteristics of how far forward or back in the mouth the sound is made and the left and right columns in this section are for unvoiced and
voiced pairs.
The next section is a combination of Conceptual Clarifiers and Phonetic letters. Whenever a "t" or "d" is used to make a word past tense, the "d" or "t" from the past tense section is used. The past
tense "t" is always on the left in this section of the chart because it is an unvoiced letter. The past tense "d" is on the right because it is a voiced letter. These two letters are also placed high
enough to be with the quick attack letters because "d" and "t" are quick attack sounds. Underneath these letters on the chart are the plural, possessive and plural and possessive "s" and "z". These
are placed low enough to be with the constant state sounds because "s" and "z" are constant state sounds. These six letters were created to eliminate the need for apostrophes and to help clarify the
meaning of words. Also, these "z" letters are included because in some words the "z" sound is used to make a word plural such as the word eyes. The "s" and "z" sounds in this Past Tense section are
placed low so they correspond with the constant state sound section across the bottom of the chart.
The last group are the "oddballs" comprised of the letters "l" and "h". They do not fit into any other category except for constant state sounds. These two letters are placed in the noted order because "l" is made further forward in the mouth and only has a voiced form, so it is on the right. "h", on the other hand, only has an unvoiced form, so it is on the left. See FIG. 2.
In the future, another classification method could be whether a letter is made up of two sounds or one. For example, i2 can also be made with an a3 and e2, and a2 can be made with e and e2. Color may
be used to distinguish these sections in order to avoid destroying the complex organizational structure that is already in place.
Keyboard layout. Referring to FIG. 3 (Keyboard Layout), to overlay 49 PHONETOSE® letters onto a keyboard designed for 26 keys, each key is configured to function as the equivalent of three keys. FIG. 3C shows an enlargement of the o key to show
the three possible symbols it can produce based on which modifier keys are used. The upper left symbol is what the key will produce if pressed with no modifying keys. The upper right symbol is what
the key will produce if the clarifying shift (see FIG. 3D) is pressed. The lower left symbol is what will be produced if the long vowel modifying key is pressed (see FIG. 3E). These modifying keys
are called dead keys. You press the dead key first then you press the character that it will modify. In the case of the clarifier shift (see FIG. 3D), you can do combinations of clarifiers by
pressing the clarifier shift (see FIG. 3D) and then pressing one or more clarifiers. The clarifier shift (see FIG. 3D) is turned off when the spacebar is pressed. The clarifier is always at the end
of a word so the spacebar is always an effective way of deactivating the clarifier shift (see FIG. 3D).
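The dead-key behaviour described above amounts to a small state machine: the long vowel shift modifies exactly one following keypress, while the clarifier shift stays active until the spacebar is pressed. Here is a hypothetical Python sketch of that behaviour; the key names ("LV", "CS") and the two symbol tables are illustrative placeholders rather than anything specified in the filing:

```python
LONG_VOWEL = {"o": "o2"}     # placeholder: key -> long-vowel letter (PHONETOSEJI form)
CLARIFIER = {"1": "<past>"}  # placeholder: key -> clarifier symbol

def process(keys):
    out, pending_long, clarifying = [], False, False
    for key in keys:
        if key == "LV":                      # long vowel dead key: affects next key only
            pending_long = True
        elif key == "CS":                    # clarifier shift: stays on until spacebar
            clarifying = True
        elif key == " ":                     # spacebar ends the word and the shift
            clarifying = False
            out.append(" ")
        elif pending_long:
            out.append(LONG_VOWEL.get(key, key))
            pending_long = False
        elif clarifying:
            out.append(CLARIFIER.get(key, key))
        else:
            out.append(key)
    return "".join(out)

print(process(["g", "LV", "o", "CS", "1", " ", "n", "o"]))  # go2<past> no
```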
The placement of the modifier keys (see FIGS. 3A-3E) in relation to the rest of the keys plays an important function in eliminating dyslexic confusion and increasing efficiency. On this keyboard all the vowels are on the left side and all of the consonants are on the right. The long vowel shift (see FIG. 3E) is on the right side and pressed with the index finger when the hand is on the home row. This separates the work load and prevents the person from having to change hand positions constantly in order to get the long vowels. The clarifier shift (see FIG. 3D) is on the third row from the bottom and pressed with the middle finger of the left hand. The modifier keys (see FIGS. 3A-3D) are intentionally pressed with two different fingers to prevent dyslexics from confusing them. They are also placed on different rows for the same reason. The dead key long vowel shift (see FIG.
3E) can also be held down while pressing another key at the same time to modify a key.
The placement of the letters is designed to be as efficient as possible. The keyboard layout is configured to accommodate a different written orthography than English. The PHONETOSE® keyboard has a placement for punctuation: "t" is replaced with the long vowel shift (see FIG. 3E) and "h" is replaced with the PHONETOSE® letter for the "th" sound. The letters "q", "x" and "c" are nonexistent on the PHONETOSE® keyboard, since they are not included in the phonetic language of the present invention. All of the "r's" are placed on the left side because in PHONETOSE® "r" is a vowel. The number line has
been used for PHONETOSE® letters and clarifiers. The placement of the letters on the fourth row from the bottom is designed to correspond to the placement of the longest fingers because these are the
only ones that can reach the fourth row without the hand needing to be lifted. A long vowel shift (see FIG. 3E) or standard shift must be used to access the numbers on the number line. All of these
symbols, i.e., ! @ # $ % & *, have been moved to the third row with "*" and "( )" remaining on the fourth row but the position has been altered. In the clarifier space, the "8" key is the symbol for
abbreviation. See FIG. 3.
Now turning to FIGS. 3A-B for a magnified image of select keys, in order to switch to the QWERTY English layout, a user presses the one and two keys at the same time. To switch back, a user presses
the same two keys again. This is the section of the keyboard covered with a combination Union Jack and United States flag in FIGS. 3A and 3B.
There are three different ways of writing numbers in PHONETOSE®. You can spell out the word of the number phonetically followed by the symbol clarifier. You can use the Arabic numerals or what the
majority of nations think of as normal numbers (1,2,3,4,5,6,7,8,9). The last is called SHAPE MATH® and is a nonabstract geometrically based numeral system that is covered further in the document in
the section called SHAPE MATH®.
There is one more way to write PHONETOSE® and it is the most unusual. This other writing method is designed for those that are transitioning from a syllable-based language to English. In some cases, a dyslexic who has been raised in a language that makes more phonetic sense than English (such as Italian or Japanese) will have the unusual symptom of only being dyslexic when they try to learn a problem language such as English or French. The syllable form of PHONETOSE® can be written horizontally or vertically and is based on the three-part compound letter syllable writing system of Korean. Because there are over 130 different syllables possible in English, unlike other languages such as Japanese with 48, it is beneficial to use the Korean-like system of making the symbol for a syllable out of its component parts, which makes any combination possible without requiring a specialized symbol for each syllable. This writing system can also be used as a way to get students used
to dividing words according to syllables. This is a very useful skill for reading because many rules as demonstrated elsewhere in the document are dependent on the number of syllables and syllable
There are three different ways of writing the syllable form of PHONETOSE®. The first way is to have the upper part of the symbol contain the first sound of the syllable. Below that is the vowel, and
to the right are any subsequent consonants ending the syllable. Even if the vowel is the initial sound in a syllable, it is still below all the other sounds. A consonant blend stays on the first part
of the symbol, and the rest of the symbol goes underneath it. All parts of the syllable are connected. If the syllable is just a single sound, the sound is enlarged to be the size of a compound
syllable. The next syllable is not connected to the last. At the end stroke of the end character of a syllable is a downward mark that tells the user that there are more syllables in this word. As a
result, a dyslexic reader always knows when one word ends and another one begins. The last syllable of the word does not have this downward mark and can have just the normal last stroke or a
conceptual clarifier. The second way of writing the syllable form of PHONETOSE® is to have the different syllables connected. However, this way of writing can only be used horizontally because
the distinction between syllables would be impossible vertically. The third way is to simply group the PHONETOSE® letters in printed form in the same pattern as the other two ways of writing
syllables already mentioned. You can write the cursive form horizontally or vertically.
Dyslexics may prefer the vertical writing because writing up and down is either relative to the force of gravity or relative to the body. It also avoids the confusion over bilateral symmetry and the confusion over left and right that causes dyslexics so much trouble.
An alteration to the "L" in this form of writing has been made to make a clearer distinction between the short "e" and "L". This modified "L" can be seen in FIG. 1M. Examples of syllable PHONETOSE®
writing can be found in FIG. 1M ("Syllable Writing Examples"). This form of writing could be the solution to the PHONETOSE® speaking program. Applying this syllable language to the PHONETOSE®
speaking program would make it sound as fluent as the hypothetical Japanese version of that program. An interface could enable a user to input the individual letters and have the machine group them
into syllables.
Alternatively, a good organizing system for all of the syllables from the PHONETOSE® alphabetical order can be provided. This system allows the student to learn patterns in the English syllables and
to start thinking in syllables instead of adding them on as an afterthought as they do in current Orton Gillingham programs. With this technique, new rules can be derived from the patterns that can
only be noticed when you convert English into a syllable language. The organization system is referenced according to the first sound of the syllable. Syllable groups are further broken down
according to whether the first sound, the last sound or a combination of the two is a consonant or a consonant blend such as kl and gl. This grouping and all the different kinds of syllables in
English is shown in the English Grid technique by FIG. 1N ("Syllable Possibilities"). Since "r" is a vowel, the *r blends are considered a consonant vowel blend. Phonemic correspondence of English is
drastically increased when viewed from a syllable perspective. This grouping can allow for the speaking PHONETOSE® program to use English letters arranged in syllables.
Tone Languages. In some languages tone is a distinguishing attribute of sounds. The same sound to an English speaker would be two different sounds to somebody from a tone language. Tone in this sense
means the pitch or frequency variation in the sound will distinguish one sound from another. One of the main reasons the present invention does not use diacritics in PHONETOSE® was so frequency
variation and volume variation can be placed above words without being confused with diacritics. One continuous line is used from one end of a word to the other. The angle of the line denotes how
quick the frequency or volume change is. An immediate change from one section of a word to another will be a square looking wave. A gradual transition will look more like a gradual stock market
climb. This is like the upward turn in frequency that you hear in a question. If a user is writing in the syllable form of PHONETOSE® he or she uses the same continuous line, but it is placed on the
right side of the syllables starting from the top. When writing horizontally, a user places the line above the syllables in the same way as he or she would normally write PHONETOSE®. Volume and frequency do not usually vary within a syllable, so the syllable form of PHONETOSE® does not inhibit the use of frequency variation marks. Frequency variation that is specific to a
particular word is very rare in English, but there are a few examples, like in the adapted French expression "la de da". This expression would sound very odd if said with the same relative frequency
for each syllable. In PHONETOSE® it is written like the two examples in FIGS. 1O and 1P ("Tone Variation Marks"). The waveform looking symbol to the leftmost part and topmost part of these lines tell
you that this is a frequency variation line and not a volume variation line.
A volume variation line would look like the example in FIG. 1Q ("Volume Variation Marks").
The few examples of English words that could be better clarified with a frequency variation mark always involve frequencies relative to the rest of the word. For example, the "la" syllable only has to be higher in frequency than the next syllable for it to sound correct. English does not have distinct tones, only occasional relative tones. Mandarin Chinese has frequency variation that is dependent on whether the syllable moves from high to low or low to high or other combinations of varying pitch within a syllable. In this case, the angle of the line would show a user whether the pitch is increasing, decreasing, or staying the same within the syllable. The next example is not a real Chinese word, but a made-up word that demonstrates how this system would work for a language like Mandarin Chinese. FIG. 1O ("Tone Variation Marks") and FIG. 1P ("In Syllable Tone Variation Marks") provide examples of how to write "cho4", with the beginning part of the syllable starting with a low frequency and increasing in frequency. The next syllable is "yo2n", with the beginning of the syllable having a high pitch lowering to a lower pitch. The last syllable, "so2", has the frequency variation of the first. There is also a much more precise way of encoding syllable tones in PHONETOSE® called tone letters.
Tone Letters. Tone letters similar to the ones in the international phonetic alphabet can go into the structure of the syllable. This system can represent pitch shifts, the tone for an entire syllable, or the tone for a word. Like the international phonetic alphabet, this tone system is based on a musical staff. In the present invention, the highest three frequencies point toward the vowel in a
syllable. The two lowest frequencies face away from the vowel so they do not overlap the vowel or the consonant. These tone lines are put on the lines that connect the vowels to the consonants in the
syllable. The centermost tone has a downward stroke to make it clearer that it is the center tone. To apply Frequency Variation Marks to an entire word, a user marks the first syllable of the word
and assumes that it applies to the entire word if no other syllable has a Frequency Variation Mark. For contour tones, a user can do different combinations of the high and low marks. If the entire
syllable is a single tone, a user can mark either one of the syllable-vowel-consonant connection sections. Dyslexics would often confuse which one to mark, so allowing either mark is beneficial.
While extremely rare, some languages have eight tones. In order to code eight tones, instead of five, add a three staff section at the top of the system structure that is indented from the previous
tone line to distinguish these three extra tones from the others. See FIG. 1R (8 and 5 tone range). The indentation and the lines always face towards the vowel. This add-on will make this system compatible with the previous system. A person switching between 5 tone and 8 tone
languages will not have to learn two completely different systems. A user dealing with a syllable that does not have a consonant at the beginning or the end or no consonants simply draws the lines
that would connect the consonants as if they were there, and draws the tone lines as normal. If the spoken language requires a third pitch in the same syllable, a user can add another tone symbol
after the final consonant of the syllable, and it will be facing away from the vowel. These symbols are designed so that the letters will not get in the way. It is easy to remember because there is
no other way for these symbols to be oriented, since the letters and tone symbols would overlap otherwise. Relative orientation of this kind is not a problem for dyslexics, so the system is oriented relative to the vowel.
Conceptual Clarifiers
Now referring to FIGS. 4A-C, the Conceptual Clarifiers are the homophone distinguishing system that is used by PHONETOSE®, wherein a series of abstract symbols at the end of a word distinguishes
words that sound the same but have different meanings from each other within the one to one phonemic correspondence context of PHONETOSE®. It is also used by English Grid (discussed below) to show
conceptual context.
Since the PHONETOSE® language has a one to one phonemic correspondence, each letter only making one sound, homophones like "to", "too" and "two" are written exactly the same way. The only difference
between these words is their meaning. This is where the Conceptual Clarifiers come in. They have broad meanings meant to distinguish these homophones from each other. They make no sound of their own
and have certain traits that prevent them from being confused with phonetic letters. In cursive PHONETOSE® these characters are made from the last stroke of the last letter in the word. All of the
Clarifiers are designed to not have a last stroke that could be connected to another cursive letter. The purpose for this is to physically make it impossible for the conceptual clarifiers to go
anywhere else, but their proper place at the end of a word. In the printed form of PHONETOSE® it is simply drawn from the last letter. This is done by underlining the last letter and extending the
Clarifier from the underline stroke (see FIG. 1S).
Referring to FIGS. 4A-C, the Clarifiers are a very flexible system and a user can use different combinations of Clarifiers to adjust or create new meanings. For example, the Clarifier for group of
people is a combination of the Clarifiers for amount and person. The Clarifier for food is composed of plant, animal and part, because food consists of plant and animal parts. There are many such combinations, and as long as the meaning is clear, more combinations are possible.
There is the potential for Conceptual Clarifiers to greatly help those that are not native speakers of English. For example, if a foreigner does not know the word for knife, he could type the word
"cut" and attach the "thing" Clarifier. Conversely, he could type the words "knife", "saw", "blade", "ax", "hatchet" or "scissors" and attach the "action" Clarifier.
There is an application for these Clarifiers to be used in English writing in addition to PHONETOSE®. The same technique that was described for foreigners could be used to help those suffering from
phonemic aphasia or averbia. In the case of averbia, a person, because of brain injury, cannot recall any verbs. If this person is using a computer interface, he or she could type in the noun and put
the action Clarifier on the end of the word. The computer would detect the difference between the word and the Clarifier and bring up a list of words that he could have meant. If he or she also
cannot read verbs, there could be pictures to further clarify. In the case of anomic aphasia, a broader condition causing a person to have trouble naming a wide variety of things, the sufferer
could use the Clarifier the same way as the foreigner.
Some sufferers cannot name any colors. In this case, they could name anything that was that color. He or she would put the state of being clarifier at the end of the word. Then, he or she could
select the word they wanted from the menu. For example, let us say someone is trying to think of the word green. He or she can type the word "apple" and then attach the "state of being" clarifier. A
list of all descriptors for the word "apple" is generated, i.e., "sweet", "juicy", "red", "green", "yellow", etc. . . . and he or she would be able to choose green. This process can work in many
different ways. A user would also be able to do a list for clarifiers that match the words that are being used. For example, if a user is looking for the word "regime", he or she would type the word
"Communism" and attach the nonphysical "thing" clarifier, which then would generate the list: "government", "regime", "Cold War", "socialism", "propaganda", "revolution" and "the Iron Curtain". The
user would then choose "regime". If the user had put the place clarifier on, the generated list would include "China", "Cuba", "Soviet Russia", "Russia", "East Germany" and so on.
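For illustration, the menu generation just described can be sketched in a few lines of Python. Everything in this sketch is a hypothetical stand-in: the clarifier tags, the association table and the vocabulary are illustrative assumptions, and a real interface would draw on a large semantic lexicon.

# Hypothetical clarifier-based lookup; tags and word lists are
# illustrative assumptions, not part of the specification.
ASSOCIATIONS = {
    ("apple", "state of being"): ["sweet", "juicy", "red", "green", "yellow"],
    ("communism", "nonphysical thing"): [
        "government", "regime", "Cold War", "socialism",
        "propaganda", "revolution", "the Iron Curtain",
    ],
    ("communism", "place"): ["China", "Cuba", "Soviet Russia", "Russia",
                             "East Germany"],
    ("cut", "thing"): ["knife", "saw", "blade", "ax", "hatchet", "scissors"],
}

def suggest(word, clarifier):
    """Return candidate words for a word tagged with a Conceptual Clarifier."""
    return ASSOCIATIONS.get((word.lower(), clarifier), [])

# A user who cannot recall "regime" types "Communism" plus the
# nonphysical thing Clarifier and picks from the generated list.
print(suggest("Communism", "nonphysical thing"))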
There are two characters that are not really clarifiers but are in this category because they have nowhere else to go. These two symbols are how PHONETOSE® handles abbreviations, acronyms and
contractions. When a contraction is made, it is spelled out phonetically like any other PHONETOSE® word and then it is separated by a slash between the last syllable or dominant sound in the
contraction. For example: don't (see FIG. 1T), wasn't (see FIG. 1U), doesn't (see FIG. 1V).
The thought process behind this is that since the word is phonetically spelled out, putting the separator where the missing letter would be doesn't make sense. Thus, a separation point is needed that can be logically worked out without thinking of the origin of the words. Syllable division turned out to be the most logical choice.
The other strange outlier symbol is the one used for abbreviations. Abbreviations have a symbol (see FIG. 1W) placed around them. The end of this symbol has a clarifier of the abbreviation attached to it. The abbreviations are printed if they maintain the English letters, and if they are written out phonetically as they are pronounced, they are written in cursive PHONETOSE®. Words that have been accepted into English as words but were once abbreviations or acronyms need not have the symbol placed around them; examples would be laser or scuba. But they can be placed in the symbol if you wish to make reference to their abbreviated nature. It is a way for the student to not confuse English words even when he has to use them. Words from foreign languages that are put directly into a document without changing their spelling to PHONETOSE® are also printed. If both the English abbreviation and the PHONETOSE® spelling are given, it is written as shown in FIG. 1X. In more casual writing, the printing for such acronyms and abbreviations can be enough to distinguish them from proper PHONETOSE® words.
Alternatively, if a word's abbreviation is the intended pronunciation, no symbol is needed around it if it is spelled out in PHONETOSE®. If the symbols surround it, then the full word is pronounced.
For example, if a user intends to have the abbreviation "intro" pronounced "introduction", then he or she would write it with the abbreviation symbol (see FIG. 1W) around it. If a user wants the word to be pronounced "intro" (see FIG. 1Y), then he or she does not. This rule exists to prevent confusion with abbreviations that are pronounced like normal words.
This same rule applies to acronyms. If a user intends for the letters to be pronounced, then he or she writes out the name of each letter with the Symbol Clarifier after each one and an optional
Clarifier on the last letter referring to what the abbreviation is. Instead, if he or she wants the reader to read what the abbreviation refers to, then he or she puts the abbreviation symbol
around it. The important thing is that the user always knows how the abbreviation is supposed to be read.
English Grid
Referring to FIGS. 5A-E, English Grid is an alternative coding system for already existing rules known as Orton Gillingham rules. Specifically, English Grid is a way of referencing Orton Gillingham
rules by using a series of symbols that show word context in a series of levels that can show spelling, pronunciation and conceptual context. PHONETOSE® is used as a pronunciation guide.
English Grid is designed to create the context of reading rules and a set of symbols that replace long explanations. A student, instead of remembering a long explanation, can remember the visual
layout that shows the context of how a reading rule really works so it can be more easily applied to words that the student does not know. This is not possible with the current Orton Gillingham
coding system, because they use letters as symbols for consonants and vowels. For example, the letter "C" is used as the symbol for consonants but this does not work in the actual context of letters
because it is a letter itself. The symbol for vowel is an even bigger problem because it is a "v" which is a consonant but it represents vowels. In contrast to Orton Gillingham, the present invention
has four symbols that represent different aspects of word context, a vowel is represented with an "@", a consonant is represented with a "*", the rest of the word is represented with a "#", and
syllable division is represented with a "/". Note: the "/" is also used in Orton Gillingham programs. Using these four concepts, as represented with symbols, things that otherwise take many words to
describe can be described in just a few symbols. For example, as shown in FIG. 5A, an "a" at the beginning of a word or at the end of a word makes the sound of a short "u". You would have to use 16
words to express this, but this rule with English Grid is just six symbols. English Grid goes way beyond just reducing the number of characters.
For a few rules, the way that the word is constructed is dependent upon whether a prefix or suffix is used. A prefix is coded as "#)", and a suffix is coded as "(#". These two codes are designed so
the "(" is facing the letters of the word. If it is important for the suffix to start with a certain letter, the user can put the letter inside a suffix, i.e., "(t#". In the event a user wants to
encode an entire word that is part of another word, as in the case of compound words, he or she can encode it as "(#)". As an example "fire(#)" could refer to firehouse, firefighter etc.
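Because the four symbols are so regular, a grid pattern can be checked mechanically. The following rough Python sketch tests a word against an English Grid context pattern under my own reading of the symbols: "#" is treated as any run of letters, and "@" and "*" as a single vowel or consonant, which is an interpretation rather than a rule stated in this section; the "/", prefix and suffix codes are omitted for brevity.

import re

VOWELS = "aeiou"

def grid_to_regex(pattern):
    """Translate an English Grid context pattern into a regular expression."""
    parts = []
    for ch in pattern:
        if ch == "@":
            parts.append("[%s]" % VOWELS)       # one vowel
        elif ch == "*":
            parts.append("[b-df-hj-np-tv-z]")   # one consonant
        elif ch == "#":
            parts.append("[a-z]*")              # the rest of the word
        else:
            parts.append(re.escape(ch))         # a literal letter
    return "^" + "".join(parts) + "$"

def matches(word, pattern):
    return re.match(grid_to_regex(pattern), word.lower()) is not None

# The rule of FIG. 5A: an "a" at the beginning or end of a word makes the
# short "u" sound; under this reading its two contexts are "a#" and "#a".
print(matches("about", "a#"))   # True: "a" begins the word
print(matches("sofa", "#a"))    # True: "a" ends the word
print(matches("cat", "#a"))     # False: the "a" is not word-final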
Further English Grid examples are shown in the four figures labeled FIG. 5B. The top space of each figure is reserved for context clues, i.e., Conceptual Clarifiers or national flags. A Conceptual
Clarifier from the homophone distinguishing system used in PHONETOSE® can be placed in the top space to indicate when to use a particular rule, such as when to use "ee" at the end of a word, instead
of "y", to make the long "e" sound. FIG. 5B(i) is the rule that says when the word is an action with a long "e" at the end of the word, and it has more than one syllable, the word uses a "y" at the
end of the word to make the long "e" sound. FIG. 5B(ii) is a rule that says if the word is a person or thing, the long "e" sound at the end of the word is spelled "ee". FIG. 5B(iii) says if the word
is an action and has a long "i" at the end of a one syllable word, the word is spelled with a "y". FIG. 5B(iv) is a rule that says if the word is derived from either Italian or Japanese, the long "e"
sound is made with an "i". To show this, flags can also be placed in the topmost portion of the grid to demonstrate when a particular rule should be used. Spellings vary according to the country they
come from. It is a much faster process in this contextual visual format. This technique also makes learning spelling in other languages a lot easier. Rules for Norwegian, French, Spanish and even
Japanese can be written in this grid format.
The bottom blocks in FIG. 5B are used for syllable numbers. In English the number of syllables determines what kind of spelling you use and what sound the letters will make. Since the "/" is the symbol for a syllable, if the rule applies when there is one syllable there will be one "/". If the rule requires that the word have two or more syllables, there are two "/"s. The slash can also be used inside the English letter block to show the context of whether the rule comes before or after the end of a syllable. This applies to the "@" and "*" as well.
As a further example, an "s" surrounded by two vowels will sound like a "z", as seen in FIG. 5C. A "u" at the end of a syllable will be long, as shown in FIG. 5D. The grouping of rules is also important. Rules that are related to each other are grouped together so the relation between them is visually represented. See FIG. 5E (English Grid Sample Page). It is better for these groupings to occur in asymmetrical patterns to assist in dyslexic visual tracking. It is an advantage for English Grid to not follow a traditional
grid pattern.
Internet Speaking Reference Chart
See FIG. 2 and FIG. 2A; the present invention further includes a speaking webpage in which clicking on a PHONETOSE® letter causes the computer to pronounce the sound that it makes. This makes the reference chart at FIG. 2 easy to use--even for those that cannot read--and also lessens the dependence on teachers for instruction. When a user clicks on the picture of the letter, he or she will be able to see all the English Grid rules for the letter, and, if he or she clicks on the play button directly underneath the picture, he or she will hear the sound of that letter. This process can be implemented in iWeb by putting a picture over top of the video file. The picture is of the same letter as the video file. A hyperlink is then connected from the picture that is over top of the video file to another website page that has the English Grid rules for that letter on it. This complicated technique is used because video files in iWeb cannot have hyperlinks, but pictures can.
Additional information on this program, which otherwise has been called the "Gillinet", is set forth below and shown in FIG. 2A. The Gillinet can also be used to reference homophones. On the page
that shows the English Grid rules for a particular sound is a hyperlink that takes you to a page of homophones distinguished with the Conceptual Clarifier. They are arranged with the Conceptual
Clarifier after the English spelling of the words and the PHONETOSE® pronunciation are underneath all of the homophones that are pronounced the same way. They are grouped according to initial sounds.
Below is an example of a rule page with a link to a homophone page. The icon for the homophones for this sound is in the upper left corner. The next page is the homophone page (see FIG. 2B).
Speaking PHONETOSE® Program
Now referring to FIGS. 6A-B, the Speaking PHONETOSE® Program uses a nonlinear editing system as a one-to-one phonemic correspondence speaking program (see FIG. 6A) by using the sounds in PHONETOSE®
to allow the student to construct words phonetically on a timeline.
Turn to FIG. 6A for another embodiment of this invention using the nonlinear editing program called Final Cut Pro, which can be used to take video clips of the sounds that the letters make and
link them with pictures of the letters. Other similar editing programs can also be used. The user can take the letters from the browser in the upper left-hand corner of the program and drag them into
the timeline at the lower center of the program and string together lines of phonetic parts. The user can then play these parts and have the program say the words aloud. To make it easier to
understand longer words, a blank video clip can be added for a space between syllables. This program helps students understand the word more clearly, and also helps them learn how to separate
syllables. All three previously mentioned scripts are in this program. The Conceptual Clarifiers can be accessed as well. In one embodiment, if a user puts the clarifier at the end of a word, it will
not make any sound, but if the user double clicks on it in the browser, after a short delay, the program will say what the clarifier means.
In one embodiment, this process is accomplished by having a section of the clip with no audio, followed by a section of the clip with the explanation. A user can set in and out points for a clip.
These points determine how much of the clip will be shown when a user plays it in the timeline. A user sets the out point to before the silence. However, if a user double-clicks on the clip and presses play, the program will play the clip in the viewer, where the user will be able to hear the explanation after the program plays through the short silence at the beginning. As such, the user can access
the entire clip in the viewer, and not just what he or she selected for the timeline.
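The word-construction idea itself does not depend on a video editor. For illustration, a minimal Python sketch using the standard wave module concatenates one recording per PHONETOSE® sound into a spoken word, with a silent clip standing in for the syllable gap; the directory layout, file names and phonetic spelling below are assumptions, and every clip is assumed to share one sample format.

import wave

def speak(phonemes, out_path="word.wav"):
    """Concatenate per-sound recordings into one spoken word."""
    frames, params = [], None
    for p in phonemes:
        # "/" marks a syllable break, rendered as a short silent clip
        path = "silence.wav" if p == "/" else "sounds/%s.wav" % p
        with wave.open(path, "rb") as clip:
            if params is None:
                params = clip.getparams()
            frames.append(clip.readframes(clip.getnframes()))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)
        for chunk in frames:
            out.writeframes(chunk)

# A word built sound by sound, with "/" between syllables
speak(["k", "u", "m", "/", "p", "yoo", "/", "t", "er"])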
This program serves as another way of referencing symbols without requiring a teacher, even for the illiterate. One problem with the program is that because the sounds are simply put together, they
lack the stress or accent that makes speech sound natural. In other embodiments, this issue is overcome by having the letters reference recordings chosen according to the letter combinations, such that the program makes the same sounds but with their accent or stress altered. Most importantly, the word or letter combinations would be grouped according to syllables, with a
unique recording being referenced for that particular syllable.
Conceptual clarifiers can also be accessed through this program. The organization of the Conceptual Clarifiers in the PHONETOSE® speaking final cut program is similar to the organization of the
PHONETOSE® alphabetical order. They are set up in asymmetrical groups with the broadest conceptual clarifier at the top showing what kind of clarifiers are underneath it. For example: the person
clarifier is on top with the name, title, action, body part, state of being, and foreigner clarifiers underneath it because these are all clarifiers that deal with people.
FIG. 6B is an isolated view of the PHONETOSE® letters in the browser shown in 6A.
Phonetic Hangman
Referring to FIGS. 7, 8 and 9, in Phonetic Hangman, instead of guessing the letters that make up a mystery word, a player guesses the sounds. In this invention, two "hang people" are provided because
there are a lot more sounds than traditional Latin or English letters. When a sound is guessed, a player says the sound, not the letter, because there are no names for the letters. If you use
sub-clarifying letters in the word, the person running the game puts down any of the letters that make that sound. For instance, in the word "sounds" (which is depicted in FIG. 7), there is a normal
"s" and a plural "s". If a player guesses "s", that person gets credit for both of these occurrences. To make the game easier, you can give the student or the person guessing the clarifier for the
word. An example of this hangman structure is seen in FIG. 7. At the top of the structure is the printed form of PHONETOSE®; below that is the cursive form of PHONETOSE®; and below that is the syllable form of PHONETOSE®. The clarifier at the end of the word
means "language part". If a player uses the horizontal form of syllable PHONETOSE® the next syllable would be to the right if this word had more than one syllable. If a player uses the vertical form
of PHONETOSE® the next syllable would be underneath this one if there is another syllable to this word. FIG. 8 shows an example of the horizontal form of syllable PHONETOSE® for the word "computer"
in Phonetic Hangman; FIG. 9 shows an example of the vertical form for the same word. This game can allow people to learn the clarifiers, phonetic structure, and even syllable usage. The syllable form allows games to go faster
because knowing the kind of syllable greatly reduces the possibilities. For the example of "sounds" as shown in FIG. 7, the types of consonants and vowels can be shown as "*@***". In Phonetic Hangman, the last blank space of a syllable in a multi-syllable word has a downward mark--the same way the last letter of a
syllable in a multi-syllable word does. This way, a player can tell where one syllable begins and another syllable ends.
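A small Python sketch of the scoring rule follows, assuming an illustrative phonetic breakdown of "sounds": each slot pairs the sound a player must say with the letter the game runner writes down, and the plural "s" shares the guess key "s", so a single guess of "s" reveals both occurrences as described above. The miss limit for the two "hang people" is also an assumption.

# Each slot: (sound to guess, letter written down); both the breakdown
# and the miss limit are illustrative assumptions.
WORD = [("s", "s"), ("ow", "ow"), ("n", "n"), ("d", "d"), ("s", "plural-s")]
MAX_MISSES = 12  # two "hang people" give roughly twice the usual misses

def play(guess_order):
    revealed, misses = set(), 0
    for sound in guess_order:
        if any(key == sound for key, _ in WORD):
            revealed.add(sound)    # credit every slot sharing this key
        else:
            misses += 1            # a wrong sound advances the hang people
        board = " ".join(letter if key in revealed else "_"
                         for key, letter in WORD)
        print("guess %r: %s  (misses: %d/%d)"
              % (sound, board, misses, MAX_MISSES))

# Guessing "s" fills in both the normal "s" and the plural "s" at once.
play(["s", "t", "ow", "n", "d"])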
Alternating Line Highlighting
Alternate line highlighting (see FIG. 10) is a transparent overlay of alternating lines of color (for example, green, yellow, red, blue, orange) that assists with visual tracking for dyslexics. The transparency has a series of highlighting lines that go down it in a repeating color pattern. The colored lines are designed to be the height of the letters and the width of the page. The overlay is laid over top of a page to assist in dyslexic visual tracking.
There are two basic designs. The first is an overlay that is just a transparency that can go over scripts or papers 124 (see FIG. 10B). The second is a transparency that is longer and has a crease in
it that allows it to be held easily in a book 122 (see FIG. 10A).
SHAPE MATH® Basic Introduction
With SHAPE MATH®, basic math operations can be calculated by visualizing and manipulating shapes that each represent a quantity. Instead of adding abstract symbols, which is hard for a dyscalculic, a
SHAPE MATH® user can visualize the combination of various shapes.
FIG. 11A is an example of the number 28 when expressed with SHAPE MATH® symbols. These symbols will be described in more detail later in the document.
This section demonstrates the basic principles of SHAPE MATH® by introducing the one, two, three and five shapes and demonstrating how they can be combined to make a ten shape. These numbers were
selected because they are the basis of all the larger numbers.
The yellow outer three shape 3b represents the quantity of 3 and the green two shape 2a (shown in FIG. 11B) represents the quantity of 2. A two shape 2a and three shape 3b, when combined together,
make a green and yellow five shape 5b which represents the quantity of 5. This composite five shape is equivalent to the solid red five shape 5a because the value of a SHAPE MATH® number is
determined by its size and outline.
A two shape 2a, three shape 3b and five shape 5a can be arranged to make the square shape that represents 10 10f (see FIG. 11B). The ten shape is the standard unit of value when combining shapes
because modern math operates on a base ten system. There are many different ways to make a ten shape, including a solid red square 10a.
Every shape in SHAPE MATH® can be broken down into one triangles 1a (see FIG. 11B) which represent the quantity of 1. For example, ten shape 10e is made out of 10 one triangles 1a and the zero
spacers (see FIG. 12).
One of the purposes of the zero spacers (see FIG. 12) is to allow ten equally sized right triangles (ten one triangles 1a) to be compiled into a square (ten shape 10). A square is easy to visualize
and without the spacers, a square could only be made from differently sized one triangles 1a. The zero spacers are also used to fit other numbers into the ten square 10 and assist in visual tracking
(which is hard for dyscalculics and dyslexics) by separating inner and outer shapes. The zero spacers represent the number 0 if you rearrange them into an L shape 11b (see FIG. 11B).
Shape Colors
The following section explains examples of the SHAPE MATH® colors from 1-5 but is not intended to limit the present invention to any particular color scheme. The primary purpose of color or shading
differences is to assist the student in distinguishing the numeric shapes.
The SHAPE MATH® numbers and colors go 1 blue, 2 green, 3 yellow, 4 orange, and 5 red (see FIG. 13).
Scientifically, blue is a warmer color than red but SHAPE MATH® colors ascend according to the scale of cultural subjective perception. Cool objects from everyday life tend to be blue and thus blue
falls on the cool end of this spectrum. This is the opposite of red which typically signifies that something is hot, such as a flame. Thus, red falls on the hot end of this spectrum. The intuitively
ascending colors are easier to remember and visualize.
Inner and Outer Shapes
It is important to note that some SHAPE MATH® numbers can have different forms which are labeled either inner or outer. An outer shape, when arranged into a ten shape, touches its border. An inner
shape, when arranged into a ten shape, is entirely surrounded by other shapes. The inner six is an exception to this rule and the specifics of this will be addressed later in the document.
Numbers 1-5
The following section explains some of the important details about SHAPE MATH® numbers below 5.
The inner four shape 4f can be made by combining 2 two shapes 2a (shown in FIG. 11B). Inner 4 shapes are typically orange 4e and when placed within the ten square, they are always surrounded by other
shapes such as the 2 outer three shapes 3b seen in ten shape 10c.
The outer four shape 4b (see FIG. 11B) is composed of a two shape 2a, 2 one triangles 1a and two zero spacers as seen in four shape 4c. It is larger only because of the zero spacers. It is important
to put the zero spacers in when forming the outer four shape so that it lines up with other shapes within the square ten shape.
The inner three shape 3d and outer three shape 3b (shown in FIG. 11B) follow the same rules of nomenclature that were previously applied to the inner and outer four shapes.
Numbers 6-10
At this point, SHAPE MATH® numbers 1-5 have been introduced and briefly explained. The following are various examples of SHAPE MATH® numbers 6-10 and how they are made by combining numbers 1-5. While
this section does not exhaust all possibilities, it shows some of the more common ways of compiling the larger digits.
Six shape 6b is made with 2 outer three shapes 3b (shown in FIG. 11B) while the equivalent six shape 6c is made from inner three shape 3d and outer three shape 3b. Additionally, six shape 6g is made
with five shape 5a and one shape 1a (see FIG. 11B).
Seven shape 7a is made with five shape 5a and two shape 2a while the equivalent seven shape 7b is made with outer three shape 3b and inner four shape 4e (see FIG. 11B). Additionally, seven shape 7c
is made with outer three shape 3b and 2 two shapes 2a.
Eight shape 8b is made with 2 four shapes 4b (see FIG. 11B).
Nine shape 9a is made with five shape 5a and four shape 4b (see FIG. 11B).
The ten shape (shown in FIG. 14) can be made out of 2 outer three shapes 3b and 2 two shapes 2a while the ten shape (shown in FIG. 15) can be made out of 2 outer three shapes 3b and 4 one shapes 1a.
FIG. 16A-16F show examples of various SHAPE MATH® numbers. FIG. 16A is a 3 shape composed of 3 one shapes. FIG. 16B is a four shape composed of a three shape and a one shape. FIG. 16C is a five shape
composed of a four shape and a one shape. FIG. 16D is a six shape composed of a four shape and two one shapes. FIG. 16E is a seven shape composed of a four shape, a one shape and a two shape. FIG. 16F is a ten shape composed of a four shape and two three shapes.
Written Form
The next section will cover the written form of SHAPE MATH® and further explain the zero spacers.
A user of the written form of SHAPE MATH® can complete most of the common operations that are traditionally performed by writing problems with Arabic numerals. Instead of Arabic numerals, however,
SHAPE MATH® written numbers are used to express and track the quantities involved. These written SHAPE MATH® numbers, which omit the colors, look very similar to the standard SHAPE MATH® numbers with
the exception of the written one shape 1 and written two shape 2 (shown in FIG. 11B). Without color, one shape 1a and two shape 2a could be confused with five shape 5a and ten shape 10a,
respectively. Thus, zero spacers are placed around the written one and two shapes to distinguish them from five and ten shapes. Zero spacers are used because they do not represent any value and when
oriented against each other into a right triangle, they form the corner of an empty ten shape 11b. Placing the one and two shapes within the context of the corner of a ten shape will distinguish them
by showing their relative size to 10. For added distinction, the written one shape 1 and two shape 2 are turned to make them look different from the written five shape 5 and ten shape 10.
Now turning to FIG. 17 for all of the SHAPE MATH® written numbers, which include zero spacers 11b, one shape 1, two shape 2, outer three shape 3, inner three shape 3a, outer four shape 4, inner four shape 4a, five shape 5,
outer six shape 6, inner six shape 6a, seven shape 7, eight shape 8, nine shape 9 and ten shape 10.
The lines on the written eight shape 8 and nine shape 9 are extended to exaggerate the specific qualities of their shape and make them different from the written ten shape 10 when handwriting is
naturally sloppy. If a user of the system gets the written outer four shape 4 confused with other numbers, they can add an extended line as in the case of the written eight shapes 8 and nine shapes 9.
If someone using the system gets confused by the negative space inside of the written outer six shape 6 they can cross out the center. This will make sure they do not think there is a written four
shape 4a in the center, which would indicate the quantity of ten.
The following section will explain the basics of mental SHAPE MATH® addition using several examples. The process described is also how a student would use the SHAPE MATH® pieces, a learning tool that
will be described later in the document.
For the problem 8+2 eight shape 8a and 2 one shapes 1a can be visualized and the one shapes placed on each flat corner of the eight shape 8a (see FIG. 18A). This arranges the addends (8 and 2) of the
addition problem into a ten shape 10a (see FIG. 18B).
For the problem 5+2=7 five shape 5a and 2 shape 2a can be visualized and combined to create the mental image of seven shape 7a (see FIG. 11B).
For the problem 3+4 outer three shape 3b and inner four shape 4e can be visualized and combined to create the mental image of seven shape 7b (see FIG. 11B).
For the problem 7+3 seven shape 7a and outer three shape 3b are combined to make ten shape 10e (see FIG. 11B).
For the problem 3+3+4 outer three shape 3b can be combined with another outer three shape 3b to create six shape 6b (see FIG. 11B). Then, inner four shape 4e can be imagined in the empty space within
six shape 6b to create ten shape 10c (see FIG. 11B).
In SHAPE MATH®, the completion of very basic addition problems is no different than the creation of larger numbers from base numbers and very little memorization is required.
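As a minimal sketch of the same mechanics, the make-ten move used in the examples above can be written out in Python; the printed steps are a textual stand-in for the shapes a student would visualize.

def add_digits(a, b):
    """Add two digits the SHAPE MATH® way: fill a ten shape first."""
    if a + b <= 10:
        print("combine a %d shape and a %d shape -> a %d shape"
              % (a, b, a + b))
        return a + b
    fill = 10 - a                  # how much of b completes the ten shape
    print("split the %d shape into %d + %d" % (b, fill, b - fill))
    print("%d fills the %d shape into a ten shape" % (fill, a))
    print("result: a ten shape and a %d shape = %d"
          % (b - fill, 10 + b - fill))
    return 10 + b - fill

add_digits(8, 2)   # 8 + 2 snaps directly into a ten shape
add_digits(7, 5)   # split 5 into 3 + 2; a ten shape and a 2 shape = 12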
Written SHAPE MATH® and Place Value
The next section outlines the specifics of written SHAPE MATH® addition using a place value system that allows SHAPE MATH® users to work with larger numbers.
Place value is when a number's place within the context of another number determines that digit's worth. The 1 in the number 10 represents ten things while the 1 in the context of 100 represents one hundred things. Similarly, in SHAPE MATH®, a one shape 1 followed by a zero shape 11, seen in place value representation 11aa (shown in FIG. 19A), represents the quantity of ten, while a one shape 1 followed by 2 zero shapes 11, shown in place value representation 11bb (seen in FIG. 19B), represents the quantity of one hundred.
The following section will demonstrate the use of the place value system in SHAPE MATH® as it applies to larger addition and subtraction problems. When using a place value system, one can perform
mathematical operations one place value at a time. These place values are organized into columns in both standard math and SHAPE MATH®. Since each column in a ten based system can hold only the digits 0-9, one can use the SHAPE MATH® techniques for basic addition to complete the math necessary for each column. Instead of memorizing the sums of all possible combinations of digits in each column, a SHAPE MATH® user can apply the mechanical principles of basic SHAPE MATH® addition to find the sums of those columns. When the sum of the ones column is ten or greater, that ten shape must be carried to the next column where it is expressed with a one shape. To perform the basic operations within each column, someone using SHAPE MATH® would draw arrows to track the movement
of the shapes being combined, redraw those shapes at the ends of the arrows and then cross off the original location of the relocated shapes.
The following example demonstrates this process.
FIGS. 20A-F show the addition problem 85+5. The first step of the method, seen in FIG. 20A, is to express the problem with SHAPE MATH® numerals using eight shape 8 in the ten's place (or left column) and five shape 5 in the one's place (or right column), both in the top row, to form 85
and five shape 5 in the one's place of the bottom row to represent the quantity of 5. The next step, shown in FIG. 20B, is to draw an arrow to take the lower five shape 5 and add it to the upper five
shape 5 to form a ten shape 10. The lower five shape 5 is then crossed out. The next step, shown in FIG. 20C, is to cross out the ten shape 10 from the one's place (or right column) and carry that
quantity to the ten's place (or left column) where it becomes a one shape 1. A zero shape 0 is then placed in the one's column (right column) of the answer row as seen in FIG. 20D because everything
in the one's column has been crossed out. The next step, shown in FIG. 20E, is to add the 1 and the 8 of the ten's column. This is done by drawing an arrow from one shape 1 to eight shape 8, crossing off one shape 1 and re-drawing one shape 1 at the end of the arrow to
make the eight shape 8 into a nine shape 9. In the final step, shown in FIG. 20F, nine shape 9 of the ten's column is brought down to the answer row. Together, the nine shape 9 and zero shape 11 of the answer row represent the quantity of 90, which is the solution to the problem.
Now, turning to FIGS. 21A-F for the subtraction problem 70-25. Like the previously explained addition problem, the first step of the method is to express the problem with SHAPE MATH® numerals as
shown in FIG. 21A using a seven shape 7 in the ten's place (or left column) and a zero shape 11 in the one's place (or right column) of the minuend (top row), to form 70 and a two shape 2 in the
ten's column (left column) and a five shape 5 in the ones column (right column) of the subtrahend (bottom row) to form 25. Since five shape 5 of the one's column of the bottom row is greater than the
zero shape 11 of the ones column of the top row, the quantity of ten must be borrowed from the ten's column. This is done by crossing out a one shape 1 from the seven shape 7 of the tens column,
drawing an arrow from that one shape 1 to the space above the zero shape 11 in the one's column and drawing a ten shape 10 at the end of the arrow as demonstrated in FIG. 21B. The next step is subtracting five shape 5 from the borrowed ten shape 10. To do this, an arrow is drawn from five shape 5 to ten shape 10 and the ten shape 10 is divided into two five shapes 5. The five shape 5 from the bottom row is crossed off along with one of the five shapes 5 of the top row as demonstrated in FIG. 21C. The uncrossed shapes of the ones column can then be totaled and moved to the answer row. In this example, a five shape 5 remains and is drawn in the answer row as shown in FIG. 21D. The next step is to subtract the ten's column of the bottom row from the ten's column of the top row. To do this, the two shape 2 of the bottom row is divided into 2 one shapes 1. An arrow is then drawn from one shape 1 of the bottom row to one shape 1 of the top row and both one shapes are crossed off as demonstrated in FIG. 21D. This step only subtracts the first one shape 1 of the bottom row. The next one shape 1 of the bottom row is subtracted by crossing it off, dividing the five shape 5 from the top row into a four shape 4 and a one shape 1, and crossing off the one shape 1 created by this division as demonstrated in FIG. 21E. Next, the uncrossed shapes of the tens column can be added for a total of 4, which is expressed as a four shape 4 in the answer row of the tens column to complete the answer of 45 as seen in FIG. 21F.
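Both walk-throughs follow the ordinary carry-and-borrow procedure, which the following Python sketch mirrors: the arrows and crossed-out shapes on paper correspond to a completed ten shape carrying left and a borrowed ten shape moving right.

def column_add(x, y):
    """Column addition: a completed ten shape carries to the next column."""
    result, place, carry = 0, 1, 0
    while x or y or carry:
        s = x % 10 + y % 10 + carry
        carry = s // 10                  # the ten shape moves left
        result += (s % 10) * place
        x, y, place = x // 10, y // 10, place * 10
    return result

def column_sub(x, y):
    """Column subtraction (x >= y): ten shapes are borrowed from the left."""
    result, place, borrow = 0, 1, 0
    while x or y:
        d = x % 10 - y % 10 - borrow
        borrow = 1 if d < 0 else 0       # borrow a ten shape if needed
        result += (d + 10 * borrow) * place
        x, y, place = x // 10, y // 10, place * 10
    return result

print(column_add(85, 5))    # 90, as in FIGS. 20A-F
print(column_sub(70, 25))   # 45, as in FIGS. 21A-F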
The next section introduces the patterns of ten shapes (referred to hereafter as ten shape patterns) used to represent multiples of ten. The fifty pattern 50a (shown in FIG. 11B) is the basis of the ten shape patterns. Ten shape patterns below 50 maintain the basic structure of the fifty pattern with the necessary number of ten shapes removed. Ten shape patterns above fifty are combinations of fifty patterns and lower ten shape patterns. The ten shape patterns representing 10 through 100 are shown in FIG. 22. The X pattern is used to represent 50 because X is easy to visualize mentally
and the corners of each cube touch the corner of the center cube which creates negative space to distinguish each cube from the other. The ten shape patterns are used to complete addition and
subtraction problems for multiples of ten using a system of direct representation instead of a place value.
For the problem 50-20 (see FIG. 23), a SHAPE MATH® user will first imagine a fifty pattern 50. The user will then imagine removing a twenty pattern 13a, leaving a 30 pattern 13b.
For the problem 40+30 (see FIG. 24), a SHAPE MATH® user will imagine both a 40 pattern 13c and a 30 pattern 13b arranged side by side. Then the user will imagine sliding the top ten shape 10 of the
30 pattern 13b into the center space of the 40 pattern 13c. The result is a 50 pattern 50 next to a 20 pattern 13a, which together constitute a 70 pattern 13e, the answer to the problem. Because the
basic structure of the fifty pattern is maintained, mentally manipulating the pieces is both easier and results in recognizable patterns.
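A short Python sketch of the same moves treats each multiple of ten as a count of ten shapes and describes the result in terms of fifty patterns; the wording of the output is my own.

def describe(n):
    """Describe a multiple of ten as fifty patterns plus leftover tens."""
    fifties, tens = divmod(n // 10, 5)
    parts = ["a fifty pattern"] * fifties
    if tens:
        parts.append("a %d pattern" % (tens * 10))
    return " + ".join(parts) if parts else "nothing"

def tens_add(a, b):
    return (a // 10 + b // 10) * 10   # slide the ten shapes together

def tens_sub(a, b):
    return (a // 10 - b // 10) * 10   # remove ten shapes from the pattern

print(describe(tens_sub(50, 20)))   # "a 30 pattern"
print(describe(tens_add(40, 30)))   # "a fifty pattern + a 20 pattern"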
Mental SHAPE MATH® Not Divisible By Ten
The next section will explain SHAPE MATH® addition of larger numbers not divisible by ten while using direct representation instead of the place value system. While the previously described written
form of SHAPE MATH®, which does rely on a place value system, allows users to do problems of any size, it is very difficult to do mentally because you have to keep track of a lot of things in your
head at one time. On the other hand, direct representation allows users to visually regroup the quantities involved without worrying about their place values. Because of this, each shape always
maintains the same value throughout the completion of the problem and is easier to track and manipulate. A SHAPE MATH® user will begin by constructing the problems using both ten shape patterns and
SHAPE MATH® numerals. In the case of addition, the numbers below ten will be combined into ten shapes which will then be placed within the structure of the 50 pattern to make their final sums easily recognizable.
For the problem 18+15 (see FIGS. 25A-E), a student will imagine a ten shape 10a and an eight shape 8a to represent 18 and a ten shape 10a and a five shape 5a to represent 15 as demonstrated in FIG.
25A. It is important to note that the SHAPE MATH® numerals are being arranged into the structure of the ten shape patterns. The next step is to combine any shapes less than ten into ten shapes.
First, the five shape 5a from addition representation 14a is divided into a three shape 3b and 2 one shapes 1a as demonstrated in FIG. 25B. Next, the one shapes 1 are moved to either side of the
eight shape 8 as shown in FIG. 25C to complete ten shape 15a which is shown in FIG. 25D. A user will then convert the ten shape 15a of fourth addition representation 14d to the standard red ten shape
10a, to complete the mental image shown in FIG. 25E. At this point, the SHAPE MATH® numerals are in standard form and arranged into the ten shape pattern structure, making the final quantity of 33
easily recognizable to someone with practice using SHAPE MATH®.
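The walk-through compresses into a short Python sketch: keep the ten shapes already present, apply the make-ten split to the leftover digits, and read the total off the resulting pattern; the printed summary stands in for the mental image.

def mental_add(a, b):
    """Add numbers below 100 the SHAPE MATH® way."""
    tens = a // 10 + b // 10        # the ten shapes already present
    ones = a % 10 + b % 10          # e.g. 8 + 5
    if ones >= 10:                  # split to fill a new ten shape
        tens, ones = tens + 1, ones - 10
    print("%d ten shape(s) and a %d shape -> %d"
          % (tens, ones, tens * 10 + ones))
    return tens * 10 + ones

mental_add(18, 15)   # 3 ten shapes and a 3 shape -> 33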
Multiplication of Lower Numbers
The next section will introduce the principles behind SHAPE MATH® multiplication for lower numbers. The goal when multiplying lower numbers is to rearrange the SHAPE MATH® numbers into groups of 5 or
10 so their quantities can be visualized easily. This is done by mentally placing instances of the multiplied shape within ten shapes.
For the problem 3×3, shown in FIG. 26, a student would imagine an outer three shape 3b, then a second outer three shape 3b to form the outline of a six shape 6b, then an inner three shape 3d within
six shape 6b so that the user imagines nine shape 15a.
The next section will explain the first SHAPE MATH® multiplication technique: rearrangement. Rearrangement, which was just demonstrated in its simplest form, involves moving some of the shapes to
make the product of multiplication problems more recognizable.
For the problem 3×4, (shown in FIGS. 27A-C) which has a product greater than ten, the student will first visualize 4 three shapes, making sure to place as many three shapes as possible within the
outline of a ten shape. The first 3 three shapes are imagined as nine shape 15a and consist of 2 three shapes 3b and one three shape 3d while the 4th three shape 3d is imagined by itself as shown in
FIG. 27A. FIG. 11B shows three shape 3b when separated from other shapes. In the next step, shown in FIG. 27B, the isolated inner three shape 3d (from above) is divided into a two shape 2a and
a one shape 1a. Then, the one shape 1a is moved into the empty space within shape 15a. When all the space within the outline of shape 15a has been filled, it can now be imagined as standard ten shape
10a shown in FIG. 27C. After the one shape has been moved, only the two shape 2a remains next to ten shape 10a so that the quantity of 12 can be easily recognized.
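A compact Python sketch of rearrangement: lay down the copies of the multiplied shape, fill ten shapes whenever possible, and read the product off as ten shapes plus a remainder.

def rearrange(shape, copies):
    """Fill ten shapes with copies of a shape; return (tens, remainder)."""
    total = shape * copies
    tens, remainder = divmod(total, 10)
    print("%d copies of a %d shape fill %d ten shape(s) with a %d shape "
          "left over -> %d" % (copies, shape, tens, remainder, total))
    return tens, remainder

rearrange(3, 4)   # 3 x 4: one ten shape and a two shape -> 12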
Counting Onward
The next section will explain the second SHAPE MATH® multiplication technique: counting onward. This technique involves counting the ten or five shapes and then tallying what is left over. If the
rearrangement technique has already been properly applied, the left over quantities from counting onward will be minimal.
For the problem 6×4 (shown in FIGS. 28 A-E) the student would imagine 4 six shapes 6g arranged into the context of a ten shape pattern as seen in FIG. 28A. It is important to note that shapes are
almost always imagined in the context of this pattern if the quantities allow it. Imagining them this way makes them easier to visualize and remember and gives a basic structure to their arrangement
that can remain consistent throughout the various multiplication techniques that may be applied to a single problem. When applying the technique of counting onward to this problem, a student would
first count the five shapes 5a, for a total of 4, which should be easily recognized as the quantity of twenty when the images are mentally placed into a ten shape pattern as shown in FIG. 28B (ignore
arrows for now). If it is easier to visualize, one can combine the five shapes 5a from 16b into 2 ten shapes 10a, the results of which are shown in FIG. 28C. After the fives are counted, the one
shapes 1a shown in FIG. 28D can then be counted (effectively rearranged), totaling 4 in this case and represented by four shape 4g from FIG. 28E, which is easily recognizable as the quantity of 24.
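Counting onward reduces to the following Python sketch: split each copy of the shape into its five shape and its leftover, count the five shapes first, then tally the leftovers.

def counting_onward(shape, copies):
    """Multiply by counting five shapes first, then the leftovers."""
    fives = copies * (shape // 5)    # each 6 shape holds one five shape
    leftovers = copies * (shape % 5) # each 6 shape leaves one one shape
    total = fives * 5 + leftovers
    print("%d five shape(s) = %d, plus %d leftover(s) -> %d"
          % (fives, fives * 5, leftovers, total))
    return total

counting_onward(6, 4)   # four fives make 20, four ones make 4 -> 24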
The next section will explain the third SHAPE MATH® multiplication technique: subtraction. This technique is used when one of the multipliers is 8 or 9. A student will complete the problem as if the
8 or 9 shapes are actually ten shapes. Then, the student would count the shapes that had to be added to the 8 or 9 shapes and subtract that total from the product created by completing the problem as
if they were ten shapes. See FIG. 11B for representations of eight shapes, nine shapes, and ten shapes.
For the problem 3×9, (shown in FIG. 29) a student would create the problem 3×10 by imagining 3 ten shapes 10a as a thirty pattern as shown in first multiplication representation 17a. Then, the
student will convert the ten shapes 10a into 9 shapes as shown in second multiplication representation 17b. In order to convert the ten shapes 10a into nine shapes 9a, a one shape 1a had to be
removed from each ten shape 10a. These one shapes 1a are visually represented in third multiplication representation 17c, and total 3, which will be obvious to any student creating this mental image.
The quantity of 3 is then subtracted from the quantity of 30 using techniques from the subtraction section and the answer of 27 is calculated.
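The subtraction technique reduces to one line of arithmetic, sketched here in Python: treat each copy as a ten shape, then remove the one shapes that had to be imagined to round the factor up.

def multiply_by_nearly_ten(copies, factor):
    """Multiply by 8 or 9 via the subtraction technique."""
    as_tens = copies * 10            # pretend every copy is a ten shape
    added = copies * (10 - factor)   # the one shapes that were imagined
    print("%d x 10 = %d, minus %d -> %d"
          % (copies, as_tens, added, as_tens - added))
    return as_tens - added

multiply_by_nearly_ten(3, 9)   # 30 - 3 = 27, as in FIG. 29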
The next section will explain the fourth SHAPE MATH® multiplication technique: splitting. When working with multiplication problems such as 5×6 or higher, a dyscalculic will not have the working
memory to employ only the rearrangement and counting onward techniques. Splitting allows SHAPE MATH® users to divide larger multiplication problems into two smaller problems.
The problem 6×7 can be broken down into 2 instances of the problem 3×7. The sub-problem 3×7 can be solved more easily and its answer can then be added to itself to find the answer to 6×7. When
possible, it is best to split the even multiplier of a multiplication problem. This is because even numbers can be split evenly and easily. One could complete the problem 3×7 in several ways using
the previous multiplication techniques. If the student works within the ten shape pattern, the answer to 3×7, shown in FIG. 30A, will be composed of 2 ten shapes 10a and 1 one shape 1a. To add this
quantity to itself (or double it), a student can imagine a second instance of the mental image for 21 seen in FIG. 30A and then slide both instances together to form the pattern shown in FIG. 30B,
which is easily recognizable as a representation of the quantity of 42.
Some larger problems require special techniques to complete. The problem 9×9, for example, contains no even multipliers. Instead of splitting the 9's in half they must be broken into thirds. In order
to do this, a SHAPE MATH® student will first imagine the nine shape 9f from first multiplication representation 20a (shown in FIG. 31). This nine shape 19a is composed of 2 three shapes 3b and 1
three shape 3d for a total of 3 three shapes. A student can move the mental images of these shapes apart to visually represent the division of a nine shape into 3 three shapes, a process demonstrated
in second multiplication representation 20b.
Once the 9 has been broken into 3's, one can complete the simpler problem of 3×9 and triple the answer. The earlier SHAPE MATH® techniques are sufficient to find that 3×9=27. However, adding together
3 instances of 27 can be confusing and may require too much working memory to use only the addition techniques from earlier sections. In order to find the answer to 27+27+27, a SHAPE MATH® user must
be familiar with 60 base operations. Much like a ten shape is a single conceptual unit that represents the quantity of ten, a 60 pattern 60a (shown in FIG. 11B) is a single conceptual unit that
represents the quantity of 60. In the context of this problem, a SHAPE MATH® user will first arrange the quantity of 27 into 2 ten shapes 10a, a five shape 5a and a two shape 2a to make a 27 pattern
shown in FIG. 32A. The user will then create a mental image of three 27 patterns 21a within the context of a 90 pattern shown in FIG. 32B. The 90 pattern 21b uses the 60 base system described earlier
and is composed of a 60 pattern 60a and 30 pattern 13b. When the user replaces the thirty patterns 13b with 27 patterns 21a, the mental image from FIG. 32C is created. Imagining the 27 patterns 21a
within the 90 pattern 21b helps to simultaneously visualize all the shapes involved because they are placed within the context of larger conceptual units. Once the user has imagined the pattern from
FIG. 32C, they can use the subtraction technique of multiplication explained earlier to determine that 1 three shape 3b (see FIG. 11B) is missing from each thirty pattern, for a total of 3 three
shapes. These 3 three shapes can be compiled into a 9 shape 20a (shown in FIG. 31) using the addition techniques described earlier. This nine shape can then be subtracted from the 90 pattern shown in
FIG. 32B so that the user has the mental image (shown in FIG. 33), which can be recognized as the quantity of 81. This process may seem complicated to someone already familiar with standard math,
however, it is easier for a dyscalculic to perform these steps than to memorize nearly 100 solutions to 100 problems. Each step in the process follows a logical progression that manipulates shapes
within conceptual units that are easy to visualize and group together.
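Summarized in conventional notation: 9×9 = 3×(3×9) = 3×27, and since each thirty pattern in the 90 pattern is short 1 three shape, 3×27 = 90 − (3×3) = 90 − 9 = 81.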
Place Value Multiplication
Once the techniques for completing times tables up to 9×9 are learned, a SHAPE MATH® user can apply them within the place value system of standard math in order to complete larger problems. For
example, FIG. 34 shows the problem of 38×4 as it would be seen in standard math. The previously described techniques can be used to multiply 4×8 and 4×3 to complete the basic multiplication needed
when completing the problem within the structure of standard math. The problem is not demonstrated with a figure including SHAPE MATH® numbers because the process can be completed by applying the
previously explained multiplication techniques to the structure of multiplication seen in standard math.
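In conventional notation, this is ordinary place value multiplication: 38×4 = (30×4) + (8×4) = 120 + 32 = 152.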
Time Conversion
Now, turning to FIG. 35, the next section will demonstrate how SHAPE MATH® is applied to converting units of time (seconds, minutes, hours). Because the system of time has a base of 60, the 60
pattern 60a is used to express minutes or hours. If the 60 pattern 60a expresses an hour, then each ten shape 10a within the pattern expresses ten minutes, for a total of 60 minutes. Similarly, when
a 60 shape 60a is used to express one minute, each ten shape 10a within the pattern expresses 10 seconds for a total of 60 seconds. The 60 shapes are easy to visualize in groups because each 60
pattern 60a contains a fifty pattern 50a (see FIG. 11B), and when those 60 patterns 60a are placed side by side (as shown in FIG. 35), the 50 patterns 50a within them form a 100 pattern 100a, which
is separated by outline 23a. One should note that outline 23a exists in this figure purely to clarify which ten shapes 10a compose the 50 patterns 50a. When doing time conversion within the 60
pattern structure 60a, a SHAPE MATH® user will count the number of ten shapes 10a and add a zero to that total to calculate how many minutes are represented.
Minutes to Hours
To convert 80 minutes into hours, the minutes are expressed as 8 ten shapes 10a and placed according to the convention of the 60 pattern to create the mental image shown in FIG. 36. The first 6 ten
shapes 10a make up a 60 pattern 60a and the remaining 2 ten shapes 10a make up a 20 pattern 20a. Conceptually, a SHAPE MATH® user will realize that the 60 pattern 60a represents one hour and the 20
pattern 20a represents the remaining 20 minutes, for a total of 1 hour and 20 minutes.
Another way of completing this problem is creating the mental image of an 80 pattern 24a (shown in FIG. 37). The user will mentally distinguish the 60 pattern (separated by outline 25a) from the
remaining 2 ten shapes 10a for a total of 1 hour and 20 minutes.
Hours to Minutes
In order to convert hours to minutes, the ten shapes are totaled by conceptually separating the 50 patterns 50a, grouping them into 100 patterns (if necessary) and tallying the remaining ten shapes
10a. For example, to convert 4 hours into minutes (a process shown in FIGS. 38 A-C), a student would first create the mental image of 4 sixty patterns 60a as seen in FIG. 38A. Then, the fifty
patterns 50a are distinguished and grouped into 2 one hundred patterns as seen in FIG. 38B. These 2 one hundred patterns 100a represent 200 minutes. The remaining ten shapes 10a, seen in FIG. 38C,
are added for a total of 4 and represent 40 minutes, which can be added to the 200 minutes represented by the 100 patterns for the answer of 240 minutes.
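For readers who want to check these conversions numerically, here is a minimal Python sketch of the same base-60 logic (the function names are illustrative and not part of SHAPE MATH®):

def minutes_to_hours(minutes):
    # each full 60 pattern is one hour; the leftover ten shapes are minutes
    hours, leftover = divmod(minutes, 60)
    return hours, leftover    # minutes_to_hours(80) -> (1, 20)

def hours_to_minutes(hours):
    # each hour contributes one 60 pattern, i.e. 60 minutes
    return hours * 60         # hours_to_minutes(4) -> 240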
SHAPE MATH® Cube
The next section will explain the SHAPE MATH® cube, an instructional tool to aid in learning and operating with SHAPE MATH®. The SHAPE MATH® cube is a physical cube with each side displaying a
particular ten shape. FIG. 39 displays the faces of the cube before they are folded. A SHAPE MATH® student can use the SHAPE MATH® cube as a quick reference when there is a need to conceptualize
particular ten shapes. For example, if a student needs to conceptualize a ten shape consisting of 2 four shapes, they could quickly browse the cube until their eyes find the four shapes 4b shown on
ten cube face 27a. The SHAPE MATH® cube is most useful in division when it becomes necessary to determine the multiples of a number within another number, as well as the quantity that remains after
those multiples are determined.
Mental Division
The next section will explain mental division using SHAPE MATH® and the SHAPE MATH® cube. When learning division, a student will start by dividing ten by various numbers. The problem 10/8 would be
solved by first searching the SHAPE MATH® cube for face 27a (shown separately in FIG. 40) which displays an eight shape 8b (compounded from 2 four shapes) and two one shapes 1a. This visual indicates
that the quantity of ten contains 1 instance of the quantity of 8 with 2 as a remainder.
Similarly, when calculating 10/3, a student would first search for cube face 27b (shown in FIG. 41). The 2 outer three shapes 3b and 1 inner three shape 3d can then be counted for a total of 3. The
remaining 1 shape is easily recognized as the remainder so that cube face 27b indicates the answer of 3 with a remainder of 1.
It is important to note that when dividing even numbers by 2, it is easier to divide the shape in half than to visualize and count the 2 shapes within a larger SHAPE MATH® number. For the problem 10/
2, it is easier to visualize ten shape 10a (FIG. 11B) and divide it in half to make ten shape 10b (FIG. 11B) and determine the answer of 5. The alternative is to visualize cube face 27c (shown in
FIG. 42) representing ten shape 10d, count the 2 shapes 2a and recognize that the remaining one shapes 1a can be combined to make 3 more two shapes 2d (FIG. 11B) for a total of 5.
When dividing with a divisor (number going into the dividend) larger than ten, a SHAPE MATH® user will visualize the dividend (the number the divisor goes into) and then estimate the quotient
(answer). The user will then visually distinguish a number of ten shapes within the dividend that is equal to their estimation of the quotient (answer). Those ten shapes are then combined with
smaller shapes from what remains of the dividend in order to convert each ten shape into the quantity of the divisor. The instances of these groupings are then totaled for the answer, with a
remainder existing for certain problems. This process may seem extremely confusing, however, an example demonstrates that it is simpler in practice.
Now turning to FIG. 43 illustrating the completion of 37/13 (shown in FIG. 43), the basic goal is to isolate ten shapes within the dividend (37) and combine them with left over shapes from that
dividend to create instances of the divisor (13). In this problem, the process effectively combines ten shapes with three shapes to find the instances of 13 within 37. One would first draw a thirty
seven pattern seen in FIG. 43A and consisting of 3 ten shapes 10a and seven shape 29a. For the purposes of this example, full color figures will be used instead of pencil drawings. The next step is
to estimate the quotient (answer) and distinguish a number of ten shapes equal to that estimate. In this case, one would most likely estimate that 13 goes into 37 2 times and thus distinguish 2 ten
shapes 10a marked in this example with a circle (not typically drawn in practice) as seen in FIG. 43B. Then, the distinguished ten shapes 10a are combined with inner three shape 3d and outer three
shape 3b, a process that is marked by lines 29b (drawn in practice) in order to distinguish 2 separate instances of 13 (the divisor) within 37 (the dividend) as seen in FIG. 43C. It is important to
note that seven shape 29a was chosen during the first step because it includes three shapes, the amount needed above ten to complete the divisor (13). The 2 instances of 13 (the divisor) that could
be distinguished within 37 (the dividend) represent the quotient (the answer). Thus, before calculating the remainder, we find the quotient so far is 2.
After the quotient (answer) is calculated, the remainder must then be calculated to determine an exact solution to the problem. This is done by counting the quantity of the shapes not included in the
answer thus far. In this example, a ten shape 10a and a one shape 1a, as seen in FIG. 43D and marked by circles not typically drawn, are left as a remainder, and total 11 so that the final answer of 2
with remainder 11 is calculated. FIG. 44 shows this example as a SHAPE MATH® student would write it in practice.
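In conventional notation the example verifies as 37 = (2×13) + 11, so 37/13 = 2 with remainder 11.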
Mental Division (Direct)
Certain division problems are simple enough to perform mentally by visualizing directly the instances of a divisor within a quotient. For example 30/5 (see FIG. 45) is simple enough for a student to
imagine 3 ten shapes 10a in a thirty pattern 13a and split them into 6 five shapes 5a. Even some problems that do not divide evenly (the shapes not the quantities) can be fully visualized mentally
and the answers counted from this image. For example 30/3 can be solved by visualizing 3 instances of ten shape 27b (shown in FIG. 46). The 3 three shapes per ten shape 27b can be added for a total
of 9 and the remaining 3 one shapes 1a combined to make a 10th three shape, indicating 10 as the solution to 30/3.
Long Division
The next section will explain the techniques for completing long division with decimal solutions instead of remainders. The problem 43/4 can be completed with the previously described techniques to
yield the answer: 10 r3 (10 with remainder 3). However, sometimes solutions must be calculated in the form of a decimal. In SHAPE MATH®, remainders are converted to decimals in a way that is similar
to standard math. The remainder is first multiplied by 10 and then divided by the original divisor. If the answer to this step also contains a remainder, the process must be repeated until the
solution no longer contains a remainder. In this case the remainder is three and thus converts to 30 which is then divided by the original divisor of 4. Some SHAPE MATH® users can complete this
problem (30/4) mentally, however, FIG. 47A demonstrates the written form of this equation and shows some of the conventions of written division not yet covered. The ten shapes 10 of the thirty pattern 30a are first broken into four shapes 4
by drawing slashes 29a. This divides each ten shape 10 into 2 four shapes 4 and 2 one shapes 1 (reference numbers are only added to the top ten shape). A SHAPE MATH® user will then count the four
shapes 4 by marking them 1 through 6. The remaining one shapes 1 are then totaled into groups of 4. In this case, one four shape 4 was made from totaling the one shapes 1 and marked with a `7` placed
within the thirty pattern 30a. Since 2 one shapes 1 remain from this process, we know that 30/4=7 r2. Sometimes larger problems require SHAPE MATH® users to track these steps using standard long division notation. FIG. 47B shows the problem of 43/4 as a SHAPE MATH® user would write it to track their progress. The `X` 31a is placed over the subtraction step because SHAPE MATH® does not use subtraction to discover the
remainder, but instead comes to this total by counting the remaining shapes that cannot be grouped into multiples of the divisor. In this case, 2 one shapes 1 remained after 30 was divided into 7
four shapes 4. Since the remainder of 2 still exists, the process must be repeated. Remainder 2 is multiplied by 10 and thus converted to 20 and then divided by the original divisor of 4. Written as
a completed equation, this process reads 20/4=5 r0. Since no remainder exists, we can now complete the final answer using the sub-answers from each step of this process. In this example, while
completing (43/4) we calculated the following equations: 43/4=10 r3, 30/4=7 r2, 20/4=5 r0. Each time the remainder of the previous equation was multiplied by ten and became the dividend of the next
equation. To find the final solution, the answers to each sub-equation must be written without remainders in order of completion. In this case the answers when written in this way read 1075. At this
point, the decimal must be added to complete the solution. The placement of the decimal depends on the number of remainders that were given a zero (multiplied by ten). In this case 2 zeros were added
and the decimal is thus placed two spaces from the far right making 1075 into 10.75, the decimal answer to 43/4.
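The remainder-times-ten procedure described above is exactly how decimal digits are produced in ordinary long division. Here is a minimal Python sketch of the same loop (the names are illustrative, not part of the patent):

def divide_to_decimal(dividend, divisor, max_steps=10):
    # whole part first: 43/4 = 10 r3
    quotient, remainder = divmod(dividend, divisor)
    digits = str(quotient)
    zeros_added = 0
    while remainder != 0 and zeros_added < max_steps:
        remainder *= 10                          # give the remainder a zero
        digit, remainder = divmod(remainder, divisor)
        digits += str(digit)                     # append the sub-answer
        zeros_added += 1
    # place the decimal point one space per added zero
    return float(digits) / (10 ** zeros_added)

# divide_to_decimal(43, 4) -> 10.75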
The next section will introduce the concepts and tools used to apply SHAPE MATH® to money. Much like each digit has a corresponding shape, each denomination of U.S. currency also has a corresponding
SHAPE MATH® shape that is sized relative to its value. The base shape for SHAPE MATH® money is the one triangle, which is used to make the SHAPE MATH® penny 32a (shown in FIGS. 48 and 49). The higher
SHAPE MATH® coins (nickel 32b, dime 32c, and quarter 32d) which are shown in FIG. 49 are all built from SHAPE MATH® pennies and can be combined using the same principles of standard SHAPE MATH®
shapes. For example, 10 SHAPE MATH® pennies 32a can be combined into a ten shape 32e which is the equivalent in size and value to a SHAPE MATH® dime 32c, (as shown in FIG. 50).
The SHAPE MATH® coins will aid a dyscalculic when working out problems that deal with change. For example, a SHAPE MATH® student may need to make 33 cents change. Using the SHAPE MATH® coins, the
denominations can be combined physically to create a 33 pattern. Turning to FIG. 51A, this construction will begin with a SHAPE MATH® quarter 32d, which is a particular 25 pattern. Then, a student
will picture the addition of a SHAPE MATH® nickel 32b in order to make a particular 30 pattern 33a (see FIG. 51B). Finally, 3 SHAPE MATH® pennies are added to complete 33 pattern 33b (see FIG. 51C).
This additive process can also be used to complete addition problems as well. For example, a student may have 2 dimes 32c, a quarter 32d, and 4 pennies 32a (shown in FIG. 52A). If that student needed
to calculate the sum of these coins, they could arrange the corresponding SHAPE MATH® coins into the structure of the fifty pattern in order to show their total value. When 2 dimes, a quarter, and 4
pennies are rearranged into the structure of the fifty pattern, they create a 49 pattern (shown in FIG. 52B). Much like the application of the fifty pattern elsewhere in SHAPE MATH®, arranging the
coins in this way makes their total value more obvious. A student may see a forty pattern and a nine shape or they may see a fifty pattern missing a one shape. In either case, the arrangement into
logical patterns and consistency of relative sizes allows a dyscalculic to visualize the addition of coins. In this case, the sum of 49 cents is made obvious.
SHAPE MATH® can also be used in subtraction problems dealing with money. Since most subtraction problems are too difficult to imagine while using SHAPE MATH® coins, a written form of SHAPE MATH®
subtraction is used in which the minuend is represented by a pattern, the subtrahend crossed off, and the difference counted from the remaining shapes.
Now, turning to FIG. 53 for the problem (1 dollar-68 cents), a student would first imagine a 100 pattern to represent the 100 pennies within a dollar (shown in FIG. 53). Then the quantity of 68 would
be crossed out from that 100 pattern by crossing out the fifty pattern 50a, then a ten shape 10 and finally an eight shape 8. Once the quantity of 68 has been removed, the remaining shapes can be
totaled for the answer. In this case, 3 ten shapes 10 and 2 one shapes 1 remain indicating the answer of 32 cents.
Thus far, coins have been represented with SHAPE MATH®. However, bills can also be represented within the same consistent system. A one dollar bill, for example, is composed of 100 SHAPE MATH®
pennies (as shown in FIG. 54). One should note the pennies are arranged into a one hundred pattern. This is the basic pattern of arrangement for all representations of a SHAPE MATH® dollars made from
coins. FIG. 55 shows the basic 100 pattern dollar and then a dollar composed of SHAPE MATH® nickels 35b that fits within this pattern. The SHAPE MATH® dollar composed of various coins is only used in
smaller problems. For problems that involve manipulating larger bills, a different representation of the dollar is used.
FIG. 56 shows the SHAPE MATH® bills in descending order from top to bottom. The one shape 1 represents the one dollar bill 36a, five of which can fit into the five shape 5 which represents the five
dollar bill 36b and so on. Each denomination is represented by the shape or pattern from SHAPE MATH® that corresponds with the quantity of dollars expressed. Each bill also fits into larger bills a
number of times that is appropriate for their relative quantities. Students will work with physical cut outs of the images displayed in FIG. 56 to become comfortable with visualizing the quantities
involved. For the purpose of specification the SHAPE MATH® bills are shown in FIG. 56 as follows: SHAPE MATH® dollar 36a, SHAPE MATH® five dollar bill 36b, SHAPE MATH® 10 dollar bill 36c, SHAPE MATH®
20 dollar bill 36d, SHAPE MATH® 50 dollar bill 36e, SHAPE MATH® 100 dollar bill 36f. These images, mental or physical, can be manipulated to add and subtract bills of currency.
Now turning to FIGS. 57A-B for the problem ($27+a five dollar bill), a student would first arrange a SHAPE MATH® money 27 pattern 38a out of a SHAPE MATH® 20 dollar bill 36d, a SHAPE MATH® 5 dollar
bill 36b and 2 SHAPE MATH® dollars 36a as seen in FIG. 57A. Then, to add a five dollar bill, a student would first separate the 2 SHAPE MATH® dollars 36a from the 27 pattern 38a and then add a SHAPE
MATH® five dollar bill 36b in their place as demonstrated by arrows. This will create SHAPE MATH® money thirty pattern 38b beside 2 SHAPE MATH® dollars 36a shown in FIG. 57B, to create a SHAPE MATH®
money 32 pattern that can be recognized easily as an expression of 32 dollars, the answer to the problem.
SHAPE MATH® money subtraction with bills is almost identical, in practice, to subtraction of SHAPE MATH® coins. The appropriate written SHAPE MATH® pieces are determined and manipulated to calculate
the difference between 2 quantities (solution to the subtraction problem).
When calculating subtraction problems with both dollars and cents, a slightly different process is used. The student constructs two separate patterns, one for dollars and one for cents, which are
separated by a decimal point. For the problem $10-$5.17 (seen in FIGS. 58A-E) a student would start by writing the subtrahend (5.17) just above a one triangle 1 with lines leading to and forming a
box around a 100 pattern 100a, which is placed to the right of a decimal point as shown in FIG. 58A. It is important to note that the one triangle 1 and 100 pattern 100a each represent one dollar in different ways. The one triangle 1 refers to SHAPE MATH® dollar 36a and the 100 pattern 100a refers
to SHAPE MATH® dollar 35a. Once this structure has been written, the dollars must be calculated. The minuend ($10) is written as a ten shape 10 to the left of the decimal point as seen in FIG. 58B.
Then, the $5 from the subtrahend ($5.17) is subtracted by dividing ten shape 10 into 2 five shapes 5 and crossing one off as shown in FIG. 58C. Next, the cents must be subtracted, which requires we borrow from the remaining five dollars of the minuend. To do this, the remaining five shape 5 is divided into a four shape 4 and a one shape 1,
the one shape 1 is crossed off and an arrow is drawn to connect this one shape 1 with the one shape 1 drawn originally to indicate that one dollar was borrowed and converted into 100 cents, as shown in FIG. 58D. The remaining 17 cents from the subtrahend is then subtracted by crossing this quantity off the 100 pattern, as shown in FIG. 58E. At this point, the quantities that remain uncrossed can be totaled for the answer. The four shape 4 to the left of the decimal point represents 4 dollars while the 83 pattern 13f to the right of
the decimal point represents 83 cents, for the solution of $4.83. This process can eventually be internalized so that the student imagines these operations instead of writing them. This method is
applied to situations that require change be made when payment is made with a particular bill and thus applies only to subtraction problems with even dollar amounts as the minuend.
The next section will cover the representation and manipulation of fractions within SHAPE MATH®. The standard representations of fractions such as 3/4 or 5/7 are abstract and hard for a dyscalculic
to conceptualize or manipulate. With SHAPE MATH®, however, fractions are displayed directly and with proper relative sizes. For example, a SHAPE MATH® representation of 3/4 41a is shown in FIG. 59.
The denominator (4) is represented by the outline of the SHAPE MATH® fraction and is shown isolated below as four shape 4. The numerator (3) is represented by the shaded portion of the SHAPE MATH®
fraction and is shown isolated below as three shape 3. Finally, the missing portion of the whole (not represented in standard math fractions) is represented by empty space and shown isolated as one
shape 1. SHAPE MATH® fractions work very similarly to pie charts which also display colored portions within the context of a larger whole. SHAPE MATH® representations, however, use specific shapes
with exact quantities that can express the specific parts of a fraction.
It is important to note that FIG. 59 displays a SHAPE MATH® fraction that is composed of physical SHAPE MATH® pieces. The SHAPE MATH® pieces were mentioned above and they are a physical learning tool
for SHAPE MATH® students. Every SHAPE MATH® piece has a colored front side and a reverse side which is white with an outline of the color from the front side. When working with fractions, the front
sides of pieces are used to display the denominator while the reverse sides are used to display portions of the whole that are not present (negative space). FIG. 60 shows the front side of a three
shape 3b next to the reverse side of that same three shape 3bb (shown with black dotted lines to make the outline of the faint colors more obvious).
Because SHAPE MATH® fractions are displayed so directly, simple operations such as addition can be performed by visually manipulating the quantities displayed. The problem 2/7+ 5/7 (seen in FIG. 61)
is shown in first fraction addition representation 42a as it would appear if constructed from SHAPE MATH® pieces. To solve this problem, the numerators (colored portions) are combined as shown in
second fraction addition representation 42b. The colored two shape 2 is combined with colored five shape 5 to make a colored seven shape 7. It is important to note that filling an entire SHAPE MATH®
fraction with solid colors will always be equivalent to 1. The answer 7/7 is visually represented as a normal seven shape.
Fraction Addition Different Denominators
Now turning to FIGS. 62A-C, when fractions of 2 different denominators are added, a common denominator must be found and the fractions involved must be converted into fractions with that common
denominator. In the case of 2/4+3/8 (shown in FIG. 62A) Addend 1 43a has a denominator of 4 while addend 2 43b has a denominator of 8. In this case, the first denominator (4) can be multiplied by 2
to convert it into the second denominator (8). Since the perimeter shape of a SHAPE MATH® fraction determines its denominator, the perimeter shape 43c of addend 1 (4 shape) must be doubled to make
the perimeter shape 43d of addend 2 (8 shape). This process can be seen in FIG. 62B, in which the perimeter shape 43f (denominator) is doubled along with the colored portion (numerator) so that 2/4
converts to 4/8. Now, turning to FIG. 62C we have the problem 4/8+3/8 43d and the process of addition is the same intuitive combining of shapes from many other parts of SHAPE MATH®. In this case, the
three shape 3a from 3/8 is converted into 3 one shapes 1 which replace 3 of the white one shapes 1 of 4/8 for the final answer of 7/8 43e.
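In conventional notation: 2/4 + 3/8 = 4/8 + 3/8 = 7/8, where 2/4 becomes 4/8 by doubling both the numerator and the denominator.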
Written Form of SHAPE MATH® Fractions
SHAPE MATH® fractions also have a written form. The written form of 1/4 is shown in FIG. 63. The outside shape of written fractions still represents the denominator (four shape 4 in this case). The
shape for the numerator (one shape 1 in this case), however, is signified with a plus sign (+) drawn within the shape. The portion missing from the denominator (3 shape 3d in this case) is marked
with a minus sign (-).
Multiplying Fractions
Now turning to FIGS. 64A-D to multiply fractions in SHAPE MATH®, the first step is to break the first multiplicand into one shapes and distinguish its numerator with an outline. Then, an instance of
the second multiplicand is drawn in each of the one shapes from the first multiplicand. The quantity of all these instances of the second multiplicand is totaled for the denominator of the answer.
Finally, attention is drawn to the numerator outline drawn earlier. The shapes within this numerator outline that have a plus sign (+) are totaled to find the numerator of the answer. In the case of
1/4 (first multiplicand)×2/3 (second multiplicand) 44a (shown in FIG. 64A) the fraction 1/4 44b is converted into 4 shape 44c which is composed of 4 one shapes 1 (shown in FIG. 64B). The numerator of 1/4 is then distinguished with outline 44e that has a plus sign (+) 44f attached to it shown in fraction multiplication representation 44d (shown in FIG. 64C). Then an instance of the second multiplicand (2/3) is placed in each of the one shapes within the first multiplicand as shown in second fraction multiplication representation 44f. Now that the
problem is presented in this way, the denominator and numerator can be counted. To count the denominator, the one shapes are counted from each of the instances of 2/3 totaling 12 in this case. For
the numerator, the one triangles with a plus sign (+) within the numerator outline are counted totaling 2 in this case. This gives us the final answer of 2/12. In SHAPE MATH®, this is presented with
a negative ten shape 10 and a positive two shape 2 (shown in FIG. 64D). The denominator of 12 is represented by the combined ten shape 10 and two shape 2. The numerator is represented with a two shape with a plus sign. It should be noted that a SHAPE MATH® user can
still apply the principles from SHAPE MATH® multiplication to multiply the numerators then the denominators, as seen in standard math, but completing the problem with the method just described helps
the SHAPE MATH® user conceptualize fraction multiplication.
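In conventional notation the same count appears as: 1/4 × 2/3 = (1×2)/(4×3) = 2/12, where the 12 one triangles form the denominator and the 2 marked triangles form the numerator.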
Fraction Division with Equal Denominators
When calculating the division of SHAPE MATH® fractions of equal denominators the first step is to establish the denominator of the answer. To do this the numerator shape from the divisor is drawn in
the answer space. Turning to FIGS. 65A-C for the problem 2/4÷3/4, the divisor (3/4) 45a has a numerator of 3 45f, therefore 3 shape 3a is drawn in the answer space 45b as shown in first fraction
division representation 45c (see FIG. 65). The next step is to determine the number of times the numerator of the dividend fits into the denominator of the answer. In this problem, the dividend 45g
has a numerator 45h of 2 therefore 2 shape 2a is drawn with a plus sign inside of denominator 3a of the answer as shown in second fraction division representation 45d (see FIG. 65). Finally, the
space in the denominator of the answer that is not already occupied is given a negative sign. In this example, a negative sign is drawn in one shape 1a as shown in third fraction division
representation 45e (see FIG. 65).
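In conventional notation: 2/4 ÷ 3/4 = 2/3, since fractions with equal denominators divide as their numerators.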
Improper and Mixed Fractions
FIG. 66 shows the equivalent fractions of (3/2) and (1 1/2) written as SHAPE MATH® fractions. Improper fraction representation 46a shows (3/2). In SHAPE MATH®, the denominator of an improper fraction is
expressed with a denominator outline. In this case the denominator outline 46b surrounds a two shape 2a. The numerator of improper fractions is indicated by the total quantity of SHAPE MATH® numbers;
in this case a two shape 2a and one shape 1a, when combined, indicate a numerator of 3. This same quantity expressed as a mixed SHAPE MATH® fraction is shown in mixed fraction representation 46c,
which represents 1 1/2. The whole number shape 46d expresses the quantity of 1 and the fraction 2/2 simultaneously. It expresses the quantity of 1 because its outline is a one shape 1. It expresses
the fraction 2/2 because of the two shape 2 inside this one shape 1. The two shape 2 also indicates a denominator of 2 for fraction 46e (1/2), which is seen to the right of whole number shape 46d.
The importance of including this indicator (two shape 2) within the whole number shape will be apparent when doing operations with mixed fractions. The fraction shape 46e indicates 1/2 and follows
the conventions of SHAPE MATH® fractions thus far. Together, the whole number shape of 1 46d and the fraction 1/2 46e make up the mixed fraction 1 1/2 46c.
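In conventional notation the equivalence is: 3/2 = 2/2 + 1/2 = 1 + 1/2 = 1 1/2.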
Addition with Mixed Fractions of Equal Denominators
The calculation of mixed fractions in SHAPE MATH® uses a place value system. The right column houses instances of fractions below 1 while the left column houses instances of 1 expressed as fractions
(ex. 2/2, 3/3). Turning to FIGS. 67A-D for the problem of [1 (2/7)] + [2 (6/7)], the first step, shown in FIG. 67A, is to present the relevant fractions as SHAPE MATH® fractions within the typical place value organization of addition. To represent 1 2/7, whole number shape 47a is used to express 1 or 7/7. The
seven shape 7 is displayed within the one shape 1 to indicate the relevant denominator. Fraction shape 47b is used to express 2/7 and follows the SHAPE MATH® fraction conventions explained thus far.
Below these two shapes the mixed fraction 2 6/7 is represented. Whole number shape 47c represents the quantity of 2 (more specifically 2 instances of 7/7). Fraction shape 47d represents 6/7. The
second step, shown in FIG. 67B, is to add the fraction shapes contained in the fraction column (right column). To do this, the constituent shapes are recombined into a recognizable shape or pattern. In this case, the one shape 1
from fraction shape 47b is brought down to the empty space in the denominator of fraction shape 47d so that the fraction 7/7 47k is made. Whenever the numerator of a fraction shape in the fraction
column equals the denominator (when the fraction is full) it is erased and redrawn as a whole number shape. In this case, whole number shape 47a is drawn to represent 7/7 or the quantity of 1 as
shown in FIG. 67C. The final step, shown in FIG. 67D, is to compile the answer. The whole number shapes can be counted and combined into a larger shape. In this case, the 4 instances of 7/7 47a are drawn into a larger whole number 4 shape 47j to
indicate the answer of 4 in the whole number column. Next, the remaining fraction shape from the fraction column is brought down to the answer space. In this case, fraction shape 47i is brought to
the answer space and represents 1/7. Next to each other, whole number shape 47j (quantity 4) and fraction shape 47i (quantity 1/7) display the mixed fraction 4 1/7.
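In conventional notation: 1 2/7 + 2 6/7 = 3 + 8/7 = 3 + 1 1/7 = 4 1/7, matching the carry of a full 7/7 into the whole number column.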
Subtraction of Mixed Fractions with Equal Denominators
The subtraction of SHAPE MATH® mixed fractions is very similar to addition. Like addition, the relevant mixed fractions are expressed in a place value system using SHAPE MATH® mixed fractions.
Turning to FIGS. 68A-G for the problem [2 (1/5)] - [1 (4/5)], FIG. 68A shows how the relevant quantities would be expressed as SHAPE MATH® mixed fractions within a place value system. Whole number shape 48b expresses the quantity of 2 (more specifically 2 instances of 5
/5) while fraction shape 48c expresses 1/5. Below this, whole number shape 48d expresses 1 (more specifically 1 instance of 5/5) and fraction shape 48e expresses 4/5. The second step is to subtract
subtrahend 48e (4/5) from minuend 48c (1/5). An arrow is drawn from one shape 1 of the subtrahend to 1 shape 1 of the minuend and each one shape is crossed off as demonstrated in FIGS. 68B and 68C.
This only subtracts 1/5 of the subtrahend's 4/5. Since the subtrahend (4/5) is larger than the minuend's fraction (1/5), a unit must be borrowed from the whole number column. In this process, shown in
FIG. 68D, an arrow is drawn from whole number shape 48h to the right of the fraction column and a five shape 5 is drawn at the end of this arrow. The whole number shape becomes a five shape because 5
is the relevant denominator. The five shape 5 inside the outline of the whole number shape helps to make this more clear and obvious to a SHAPE MATH® user. Once 5/5 is borrowed from the whole number
column, the remaining 3/5 of the subtrahend can be subtracted from this borrowed shape. To complete this step, shown in
FIG. 68E, an arrow is drawn from the remaining 3 shape 3 in the subtrahend to the borrowed five shape 5, and a three shape 3 is drawn within this borrowed five shape 5 and given a negative symbol while
the three shape 3 from the subtrahend is crossed off. The next step is to rewrite the problem with all crossed off shapes omitted and the remaining shapes reorganized so that the numerator is contained
within a single shape. In this case, 5 shape 5 from FIG. 68E is rewritten as 5 shape 5 in FIG. 68F. Once the fraction column has been rewritten, the subtrahend of the whole number column must be
subtracted from the minuend of the whole number column. In this case, shown in FIG. 68G, an arrow is drawn from 5/5 48j in the bottom row to 5/5 48j of the top row and each is crossed off. Finally,
shape 48m (all that remains) is brought down to the answer row to indicate the answer of 2/5.
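In conventional notation: 2 1/5 − 1 4/5 = 11/5 − 9/5 = 2/5, matching the borrow of 5/5 from the whole number column.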
Addition with Improper Fractions
Adding improper fractions with SHAPE MATH® is almost identical to standard SHAPE MATH® addition. FIGS. 69A-C show the problem 3/2+1/2 as it would be completed with written improper SHAPE MATH® fractions. FIG. 69A shows the problem after it has been converted to SHAPE MATH® symbols. The top addend 48o represents 3/2 with denominator outline 48p surrounding two shape 2 and thus indicating a denominator of 2.
The bottom addend 48q represents 1/2. The process of addition is shown in FIG. 69B and demonstrates the process of drawing an arrow from one shape 1 of bottom addend 48q to top addend 48o, crossing this one shape and redrawing it at the end of the arrow. This eliminates the
numerator of bottom addend 48q and completes the answer shape. Because this answer shape has a full two shape 2 outside denominator outline 48p, another denominator outline 48s is drawn around two
shape 2, shown in FIG. 69C. The denominator outlines each represent the quantity of 1 and can be totaled in this case for the answer, which is written as two shape 2 in the answer space.
This section will show how percentages are calculated using SHAPE MATH®. When calculating percentages in SHAPE MATH®, the one shapes from each dollar of the total bill are converted into percentage
shapes. A percentage shape is best understood within the context of a one shape. Turning to FIG. 70A, percentage representation 48u shows a one shape composed of 10 ten percent pieces which are each
one shapes themselves. In FIG. 70B, a ten percent piece can also be broken down into 10 one shapes, which each represent 1 percent of the total shape, as they are each ten percent of ten percent.
Percentage representation 48v shows a one shape composed of 9 10 percent pieces and 10 one percent pieces.
Just as 10 percent pieces are composed of 1 percent pieces, higher percentage pieces that are multiples of ten can be made by compiling 10 percent pieces. Turning to third percentage representation
51g of FIG. 71B, a twenty percent piece 51c is composed of 2 ten percent pieces 51d while a 30 percent piece 51e is composed of 3 ten percent pieces 51d (see FIG. 71A). One should note that some
percentages have different specific percentage pieces. Fourth percentage representation 51h (see FIG. 71C) shows that 30 percent can be represented with both thirty percent piece 51e and thirty
percent piece 51f. These various forms are required to fit percentage pieces into a triangular one shape, which represents 100 percent. It should be noted that a unique set of zero spacers 52a (shown
in FIG. 73) is also needed to achieve this effect. FIGS. 72A-C show all the common percentage pieces within the one shape that represents 100 percent. One shape 49a (see FIG. 72A) is composed of 10
ten percent pieces, one shape 49b (see FIG. 72A) is composed of 5 twenty percent pieces, one shape 49c (see FIG. 72B) is composed of 3 thirty percent pieces 49g and 1 ten percent piece, one shape 49d
(see FIG. 72B) is composed of 2 forty percent pieces 49h and 1 twenty percent piece, one shape 49e (see FIG. 72B) is composed of 2 fifty percent pieces and finally, one shape 49f (see FIG. 72C) is
composed of 1 eighty percent piece 49i and 1 twenty percent piece. It should be noted that the eighty percent piece looks identical to 2 forty percent pieces. The interior lines and spacers from the
compiled 40 percent pieces are left in the eighty percent piece to distinguish it from an outer four shape. FIG. 73 shows the same one shapes from FIGS. 72A-C as they would appear if the percentage
pieces were pulled apart.
When calculating a percentage, a specific percentage piece will be replicated into a larger shape that represents the answer (referred to hereafter as an answer shape). The number of replications
depends on the total from which the percentage is taken. Turning to FIG. 74 for an example, the answer shape 51a when calculating 20 percent of 5 would be composed of 5 20 percent pieces.
Essentially, each 20 percent piece from the answer shape 51a corresponds to a one shape 1a from the total 51b. While a student would not imagine this process, the visualization shown in this figure
would be used as a teaching tool to help a student understand the process involved and what each step represents. Once this conceptual understanding is established, the process can be completed in
practice by counting out each percentage piece as it is compiled into the answer shape. In this case, 5 20 percent pieces are counted as they are compiled into answer shape 51a. Because answer shape
51a completes a one shape 41a, it represents 1 dollar, which is 20 percent of 5 dollars.
There is a shortcut to calculate 20% tips that helps for larger checks. As shown in the previous example, 20% of $5=$1. This means that, when calculating 20% tips, each five shape in the total bill
can represent a one shape in the tip. The procedure for calculating a 20% tip on a $27 check using this short cut goes as follows.
Turning to FIG. 75A, a 27 shape 53a is imagined that represents a 27 dollar check as shown in tip calculation representation 54a. Each five shape 5a within the 27 shape is counted and converted to a
one shape 1a in the tip shape 53b, totaling 5 five shapes from the check and thus 5 one shapes in the tip shape 53b. This calculates 20 percent of 25 dollars and yields the answer of 5 dollars. To
calculate 20 percent of what remains from the check, which consists of a two shape 2a in this example, the normal SHAPE MATH® percentage techniques are applied. Turning to second tip calculation
representation 54b shown in FIG. 75B, the two shape 2a is divided into 2 one shapes 1a that each correlate with a 20 percent piece 53c in the answer. These 2 twenty percent pieces are combined into a
forty percent piece 53d that represents 40 cents. A 20 percent tip on a 27 dollar check is therefore 5 dollars (five shape 53b composed of 100 percent pieces) and 40 cents (a 40 percent piece 53d).
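The five-shape shortcut can be checked numerically. A minimal Python sketch (the function name is illustrative):

def tip_20_percent(dollars):
    # each $5 of the check contributes $1 of tip;
    # each remaining dollar contributes a 20 percent piece, i.e. $0.20
    fives, rest = divmod(dollars, 5)
    return fives * 1.00 + rest * 0.20

# tip_20_percent(27) -> 5.4, i.e. 5 dollars and 40 cents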
FIG. 76 demonstrates 20% of 4 dollars which does not yield an answer of even dollars. The 4 shape 4e can be broken into 4 one shapes 1a which each correspond to a 20 percent piece 51c and 53c in the
answer shape. In practice, a student would simply count each 20 percent piece 51c and 53c as they compile the answer shape 55a stopping at 4 in this case. This answer shape represents 80 percent of a
dollar or 80 cents. With practice, students will eventually memorize the different answer shapes that are multiples of ten cents. Until then, the answer shape can usually be recognized by either
breaking it into more easily recognized base shapes or totaling the missing shapes from a full one shape and subtracting them from 100. These missing shapes are subtracted from 100 because each full
one shape in the answer represents 100 percent of a dollar.
Examples thus far have taken 20 percent of a total, however, other percentages can be calculated. FIG. 77 shows the calculation of 15 percent of 2. When working with percentages that are not a
multiple of ten, the problem is broken into stages that are easier to calculate. In this example, 10 percent of 2 is calculated, 5 percent of 2 is calculated and then these calculations are added for
the final answer. To calculate 10 percent of 2, a 10 percent piece is replicated 2 times to make sub answer shape 57a as shown in first tip finder representation 56a. Then, 5 percent of 2 is
calculated by compiling 2 five percent pieces 57b which are added to sub answer shape 57a to make answer shape 57c as shown in second tip finder representation 56b. The mental process behind
visualizing the 5 percent pieces involves understanding that 5 percent is 50 percent of 10 percent so that a 10 percent piece can be broken in half to make 2 five percent pieces. Answer shape 57c can
be converted to answer shape 57d, which replaces 2 ten percent pieces with 20 percent piece 53c and replaces 2 five percent pieces with ten percent piece 51d indicating the final answer of 0.3 or 30
cents as shown in third tip finder representation 56c.
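In conventional notation: 15% of 2 = (10% of 2) + (5% of 2) = 0.20 + 0.10 = 0.30, i.e. 30 cents.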
While the disclosure has been described in detail and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that various changes and modifications can be made
therein without departing from the spirit and scope of the embodiments. Thus, it is intended that the present disclosure cover the modifications and variations of this disclosure provided they come
within the scope of the appended claims and their equivalents.
radian to degrees
radian to degrees
As you might know, a calculator can be in radian or degree mode. This is needed when trying to use tan and sin.
I have noticed that C++ calculates tan in radian mode. I know that you can convert a degree value to radians by multiplying it by PI and dividing by 180.
Is there any way to tell c++ to switch to degree mode so that I don't have to keep doing the long way with radian? thanks
and if there isn't, how can I use PI in c++?
(what's the word so that it recognizes it)
For PI in C++, I usually declare a constant to hold it, such as:
#define pi 3.14159265
It works well enough for me.
The Brain
I would also suggest just performing the radian to degree conversion yourself... just keep in mind that there are precision limitations.
Here is a recent post that addresses pi precision.
Originally Posted by Mr.Pink
Is there any way to tell c++ to switch to degree mode so that I don't have to keep doing the long way with radian? thanks
No, but you could define your own function:
#include <cmath>

double tand(double degrees)
{
    // convert degrees to radians before calling the radian-based tan()
    static const double twoPiBy360 = (2 * 3.141592) / 360.0;
    return tan( twoPiBy360 * degrees );
}
In the cmath header there is a constant called M_PI that is the value of pi. (Note that M_PI is a POSIX extension, not part of standard C++; on some compilers it requires defining _USE_MATH_DEFINES before the include.)
For PI in C++, I usually declare a constant to hold it, such as:
#define pi 3.14159265
It works well enough for me.
Much better to use a const in this case - what if you use the pi combination in a string?
String literals aren't parsed for macro replacements.
#include <iostream>
using namespace std;
#define Hello 5
int main()
{
    cout << "Hello World" << endl;
}

Output: Hello World
Still, it's better to just use the predefined M_PI.
Why, it's simple mathematics sir. First, you should have a const double for pi; something to the tune of:
const double pi = atan(1)*4; // needs #include <cmath>. Or, if you wanted to just go ahead and write it out:
const double pi = 3.14159265358979323; //A lot of that'll be truncated. ;)
Radians to degrees is simply 180/pi, sir...which is something to the tune of 57.295, I believe. You can just multiply your radian value by a variable equal to that. You'd probably name the
variable something spiffy like "RTD"(Radians To Degrees;)
And same goes for "DTR", some const variable to the tune of .01745 (which is the opposite: pi/180).
Not sure if I was any help whatsoever, but hey.:)
I remembered all those digits off the top of my head. That's downright special. :cool: | {"url":"http://cboard.cprogramming.com/cplusplus-programming/61965-radian-degrees-printable-thread.html","timestamp":"2014-04-23T15:36:58Z","content_type":null,"content_length":"11273","record_id":"<urn:uuid:4e4d58d2-9317-454c-8b55-a88d3e30470b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector Geometry
Vector geometry is a first class citizen in Paper.js. It is a great advantage to understand its basic principles when learning to write scripts for it. After all, there is a reason for the word
Vector in Vector Graphics.
While building Scriptographer we found vector geometry to be a powerful way of working with positions, movement and paths. Once understood, it proves to be a lot more intuitive and flexible than
working with the x- and y- values of the coordinate system directly, as most other visually oriented programming environments do.
As an example of the elegance of vector geometry, here is an interactive example of a brush tool. With only 24 lines of code, it produces a mouse tool that acts like a brush, with a variable
thickness depending on speed and a sense of a natural expression.
Click and drag in the view below:
This script is developed step by step in the Working with Mouse Vectors tutorial, along with explanations about each line of code. But before looking at such an applied example, it is crucial to
understand the basic principles of vector geometry outlined here.
In many ways, vectors are very similar to points. Both are represented by an x and a y coordinate. But while points describe absolute positions, vectors represent relative information; a way to get
from one point to another. Here is a step-by-step example that explains the relation between vectors and points.
We start by creating two Point objects to describe two absolute locations in the document, defined by their coordinate values:
var point1 = new Point(50, 50);
var point2 = new Point(110, 200);
In order to get from point1 to point2, we can say we need to move 60 to the right (in x-direction), and 150 down (in y-direction). These values are the result of subtracting the x- and y-coordinates
of point1 from the ones of point2:
var x = point2.x - point1.x;
// = 110 - 50 = 60
var y = point2.y - point1.y;
// = 200 - 50 = 150;
In other words, by adding these two values to the coordinates of point1, we end up at point2.
Instead of using these two separate values, it is much easier to use a vector as a container for them. To calculate this vector, we can simply subtract point1 from point2 instead of the two
separate subtractions in the previous step:
var vector = point2 - point1;
// = { x: 110, y: 200 } - { x: 50, y: 50 }
// = { x: 60, y: 150 }
Please note:
Read more about mathematical operations in the Mathematical Operations tutorial.
The result of this subtraction (vector) is still a Point object. Technically, there is no distinction between points and vectors. It is just their meaning that changes: A point is absolute, a vector
is relative.
Vectors can also be described as arrows. Similar to arrows, they point in a certain direction, and also indicate an amount of distance to move in that direction. An alternative and often more useful
way of describing a vector is therefore by angle and length.
The Point object exposes this alternative notation through the point.angle and point.length properties, which both can be modified too.
console.log(vector.length);
// 161.55494
console.log(vector.angle);
// 68.19859
Please note:
By default, all angles in Paper.js are measured in degrees. Read more about angles and rotation in the chapter about Rotating Vectors.
It is so important that we repeat it again: Vectors contain relative information. All a vector tells us is in which direction and how far to move.
The easiest use of such a vector is to add it to an absolute position of a point. The result will again be an absolute point, which will be at a position shifted from the originating point by the
amount specified by the vector. In this way we can add the same vector to many points, as illustrated in the image below. The vectors you see are all the same, but the points resulting from adding
it to a group of existing points in different locations all differ.
As shown by the simple examples above, the power of vectors really comes into play when we use them in mathematical calculations, treating them as if they were simple values. Here an overview of the
different possible operations.
A vector can be added to another, and the result is the same as if we superposed two descriptions of how to get from one place to another, resulting in a third vector.
We start with four points:
var point1 = new Point(50, 0);
var point2 = new Point(40, 100);
var point3 = new Point(5, 135);
var point4 = new Point(75, 170);
As seen in Points and Vectors, we can now calculate the two vectors by subtracting the points from each other:
var vector1 = point2 - point1;
// = { x: 40, y: 100 } - { x: 50, y: 0 }
// = { x: -10, y: 100 }
var vector2 = point4 - point3;
// = { x: 75, y: 170 } - { x: 5, y: 135 }
// = { x: 70, y: 35 }
To start at startPoint, follow vector1 and then vector2, we could first add vector1 to the startPoint, retrieve the resulting tempPoint and then add vector2 to that to get to the desired endPoint.
var tempPoint = startPoint + vector1;
var endPoint = tempPoint + vector2;
But if we would like to apply the same combined vector to many points, this calculation would be unnecessarily complicated, as we would have to go through the tempPoint each time.
Instead, we can just add vector1 to vector2 and use the resulting object as a new vector that describes the combined movement.
var vector = vector1 + vector2;
But we can also do the opposite and subtract a vector from another instead of adding it. The result is the same as if we would go in the opposite direction of the vector that we are subtracting.
var vector = vector1 - vector2;
Please note:
The results of these operations is the same as the addition or subtraction of each vector's x and y coordinates. It would not work however to add or subtract the length or angle values.
It is quite easy to imagine what a multiplication or division with a numerical value would do to a vector: Instead of saying "go 10 meters into that direction", it would for example correspond to "3
times 10 meters into that direction". A multiplied vector does not change its angle. But its length is changed, by the amount of the multiplied value.
var bigVector = smallVector * 3;
Or, to go the other way:
var smallVector = bigVector / 3;
Please note:
Due to a limitation of Javascript, we need to make sure that the vector to be multiplied or divided is on the left-hand side of the operation. This is because the left-hand side defines the nature of
the type returned from the operation. To write the following would therefore produce invalid results:
var bigVector = 3 * smallVector;
So we learned that multiplying or dividing a vector changes its length without modifying its angle. But we can also change the length property on vector objects directly:
First we create a vector by directly use the Point constructor, since vectors and points are actually the same type of objects:
var vector = new Point(24, 60);
// 64.62198
Now we change the vector's length property. This is similar to the multiplication in the previous example, but modifies the object directly:
vector.length = vector.length * 3;
console.log(vector.length);
// 193.86593
We can also set the length to a fixed value, stretching or shrinking the vector to this length:
vector.length = 100;
Another way to change the vector's length is the point.normalize() method. In mathematics, to normalize a vector means to resize it so its length is 1. normalize() handles that for us, and also
accepts an optional parameter that defines the length to normalize to, if we would like it to be other than 1.
We start with the same vector as in the example above on line 1. Let's look at the normalized version of this vector:
var vector = new Point(24, 60);
var normalizedVector = vector.normalize();
console.log(vector.length);
// 64.62198
console.log(normalizedVector.length);
// 1
Note that the length of normalizedVector is now 1, while the original vector remains unmodified. normalize() does not modify the vector it is called on; instead it returns a new normalized vector object.
Now what happens if we normalize to 10 instead?
var normalizedVector = vector.normalize(10);
console.log(normalizedVector.length);
// 10
As expected, the returned vector has a length of 10. Note that we could also multiply the first normalized vector with 10:
var normalizedVector = vector.normalize() * 10;
console.log(normalizedVector.length);
// 10
Rotating vectors is a powerful tool for constructing paths and shapes, as it allows us to define a relative direction at a certain angle rotated away from another direction, for example sideways. The
Working with Mouse Vectors tutorial shows a good example of this, where rotated vectors are used to construct paths in parallel to the direction and position of the moved mouse.
All angles in Paper.js are measured in degrees, and are oriented clockwise. The angle values start from the horizontal axis and expand downwards. At 180° they flip to -180°, which is the same, since
going halfway around a circle in the left or right direction results in the same position. This does not prevent you from setting angles to something higher than 180° though.
There are two ways to change the angle of a vector. The obvious one is by setting the vector's angle property to a new value. Let's first set up a vector that points 100 coordinates down and 100 to
the right, and log its angle and length:
var vector = new Point(100, 100);
console.log(vector.angle);
// 45
Since we are going in equal amounts down and to the right, it has an angle of 45°. Let's log its length so we can check it after we have rotated the vector:
console.log(vector.length);
// 141.42136
Now we rotate it by 90° clockwise by setting its angle to 45° + 90° = 135° and log the length again:
vector.angle = 135;
console.log(vector.length);
// 141.42136
Note how the length has not changed. All we changed is the vector's direction. If we log the whole vector again, we will see that its coordinates are not the same anymore:
console.log(vector);
// { x: -100, y: 100 }
Instead of setting the angle directly to 135, we could have also explicitly increase it by 90°:
vector.angle = vector.angle + 90;
A simpler way of writing such an increase of a value is to use the += operator, as it prevents us from writing vector.angle twice:
vector.angle += 90;
Note that mathematical operations (addition, subtraction, multiplication and division) and methods such as rotate() and normalize() do not modify the involved vector and point objects. Instead, they
return the result as a new object. This means they can be chained and combined in expressions:
var point = event.middlePoint
+ event.delta.rotate(90);
Changing a vector's angle or length on the other hand directly modifies the vector object, and can only be used outside of such expressions. Since we are directly modifying objects, we need to be
careful about what we modify and use the clone() function when the original object shall not be modified.
var delta = event.delta.clone();
delta.angle += 90;
var point = event.middlePoint + delta;
The example script below is provided to help you familiarise yourself with the concept of vectors.
Play around with it to get a feeling for how vectors work, and try to use it to repeat the principles learned in this tutorial. | {"url":"http://paperjs.org/tutorials/geometry/vector-geometry/","timestamp":"2014-04-20T03:12:25Z","content_type":null,"content_length":"29412","record_id":"<urn:uuid:8438e756-a4c8-4b61-aff3-c271152c1fed>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
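The interactive script itself did not survive in this excerpt. As a stand-in, here is a minimal sketch of the kind of script meant (the anchor point and the drawing details are my own, not the original script): it draws the vector from a fixed anchor to the mouse and displays its length and angle.

var anchor = new Point(200, 200);

function onMouseMove(event) {
    // clear the previous frame
    project.activeLayer.removeChildren();
    // the vector from the fixed anchor to the mouse position
    var vector = event.point - anchor;
    var line = new Path.Line(anchor, event.point);
    line.strokeColor = 'black';
    // show the two quantities discussed in this tutorial
    var label = new PointText(new Point(20, 30));
    label.fillColor = 'black';
    label.content = 'length: ' + vector.length.toFixed(2)
        + ', angle: ' + vector.angle.toFixed(2);
}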
Negative infinity in Lisp
I'm looking for the standard way to represent negative infinity in Lisp. Is there a symbolic value which is recognised by Lisp's arithmetic functions as less than all other numbers?
Specifically, I'm looking for an elegant way to write the following:
(defun largest (lst)
  "Evaluates to the largest number in lst"
  (if (null lst)
      negative-infinity   ; <- a placeholder name for the value I'm looking for
      (max (car lst) (largest (cdr lst)))))

Tags: lisp, fixnum
2 Answers
ANSI Common Lisp has bignum, which can be used to represent arbitrarily large numbers as long as you have enough space, but it doesn't specify an "infinity" value. Some implementations
may, but that's not part of the standard.
In your case, I think you've got to rethink your approach based on the purpose of your function: finding the largest number in a list. Trying to find the largest number in an empty
list is invalid/nonsense, though, so you want to provide for that case. So you can define a precondition, and if it's not met, return nil or raise an error. Which in fact is what the
built-in function max does.
(apply #'max '(1 2 3 4)) => 4
(apply #'max nil) => error
EDIT: As pointed by Rainer Joswig, Common Lisp doesn't allow arbitrarily long argument lists, thus it is best to use reduce instead of apply.
(reduce #'max '(1 2 3 4))
Thanks. I was not aware max took an arbitrary number of arguments, but that provides an elegant solution. – jforberg Dec 12 '11 at 14:20
Since functions in Common Lisp don't allow arbitrarily long argument lists, it is best to replace APPLY with REDUCE. See the value of the variable CALL-ARGUMENTS-LIMIT. An implementation supports argument lists only up to CALL-ARGUMENTS-LIMIT in length. In your example it would mean that an implementation may fail to compute the maximum on lists of length CALL-ARGUMENTS-LIMIT + 1. Note that the value of CALL-ARGUMENTS-LIMIT can be as small as 50 (!). – Rainer Joswig Dec 12 '11 at 16:48
@RainerJoswig: Oh you're totally right. – Daimrod Dec 12 '11 at 16:58
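Putting the accepted answer's advice together, the original function can avoid any infinity value by treating the empty list separately (a sketch based on the answer above, returning nil for an empty list):

(defun largest (lst)
  "Return the largest number in LST, or NIL if LST is empty."
  (when lst
    (reduce #'max lst)))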
There is nothing like that in ANSI Common Lisp. Common Lisp implementations (and even math applications) differ in their representation of negative infinity.
For example in LispWorks for double floats:
CL-USER 23 > (* MOST-NEGATIVE-DOUBLE-FLOAT 10)
n-1,n,n+1, who [should] care?!
Terry Speed wrote a column in the latest IMS Bulletin (the one I received a week ago) about the choice of the denominator in the variance estimator. That is, should s² involve n (number of observations), n−1 (degrees of freedom), n+1, or anything else in its denominator? I find the question more interesting than the answer (sorry, Terry!) as it demonstrates quite forcibly that there is not a single possible choice for this estimator of the variance but that instead the “optimal” estimator is determined by the choice of the optimality criterion (this connects with the books of Chang and of Vasishth and Broe I discussed earlier, as well as with the Stein effect, of course). I thus deem it worthwhile to impress upon all users of statistics that there is no such single optimal choice, that unbiasedness is not a compulsory property—just as well since most parameters cannot be estimated in an unbiased manner!—and that there is room for a subjective choice of a “best” estimator, as paradoxical as it may sound to non-statisticians.
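To make the trade-off concrete (standard facts, added here for reference rather than taken from the column): writing $S = \sum_{i=1}^n (x_i - \bar{x})^2$ for an i.i.d. sample,

$$\mathbb{E}\left[\frac{S}{n-1}\right] = \sigma^2, \qquad \mathbb{E}\left[\frac{S}{n}\right] = \frac{n-1}{n}\,\sigma^2,$$

so the n−1 divisor gives unbiasedness, the n divisor gives the (biased) Gaussian maximum likelihood estimator, and, under normality, the n+1 divisor minimizes the mean squared error among estimators of the form S/c.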
3 Responses to “n-1,n,n+1, who [should] care?!”
1. Bonjour Christian,
the use of the terminology population variance (divided by n) and sample variance (divided by n-1) used in many textbooks, and implicitly assumed in many packages or calculators is illogical (I
view here the variance as a function of n numbers that does not change given the statistical context) and leads to a considerable amount of confusion indeed and many problems in the classroom.
And it is something that I always avoided personally. Of course dividing by n-p-1 with p regressors in the basic linear model will also be the unbiased choice.
2. I completely agree that there are multiple criteria for an optimal estimator. But I wonder if there is any statistician who believes there is only one optimal choice. Not to mention Bayesianists,
even a frequentist would consider at least two criteria, the bias and variance of an estimator, for the “optimal” consideration. In fact, even for bias alone there are several criteria
(unconditional and conditional bias), especially in sequential testing problems. Chang’s book does not imply that bias or other criteria should be the only criterion. However, it does use the
fact of a biased MLE to question the interpretation of likelihood as the relative plausibility of various values of the parameter, suggesting a necessity of looking into the meaning of likelihood
further, especially when the MLE is biased.
□ Sorry Mark if I repeat myself: the MLE is almost always biased. “Almost always” is understood in the sense that for almost every parameterisation of the distribution, there is no unbiased
estimator of the corresponding parameter and hence the MLE cannot be unbiased. See e.g. Lehmann and Casella (section 8.4, p.144). If this does not seem intuitive enough, think of the standard
deviation in the normal model: there is no unbiased estimator of this quantity… | {"url":"http://xianblog.wordpress.com/2013/02/05/n-1nn1-who-should-care/","timestamp":"2014-04-21T14:44:00Z","content_type":null,"content_length":"39802","record_id":"<urn:uuid:517623ac-c2b3-45bf-a4c9-92dcd2b1c736>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monotone Sequence Theorem: literature thing
Sometimes the literature gives a proof, and then a reference, but the reference is not really to that proof. Here is an example. The theorem in question is the following:
For any seq of n^2+1 distinct reals there is either a dec subseq of length n or an inc subseq of length n+1.
Proofs from the book
the authors attribute the following argument to Erdos-Szekeres:
Let the seq be a(1),...,a(n^2+1). Associate to each a(i) the number t(i), which is the length of a longest inc subseq starting at a(i). If there is an i such that t(i) ≥ n+1 then we are done. If not, then the function mapping a(i) to t(i) maps a set of size n^2+1 to a set of size n. Hence there is some s such that n+1 elements of the seq map to s. These n+1 elements form a dec seq.
The actual proof in the Erdos-Szekeres paper is this (paraphased):
Let f(n) be the minimum number of points out of which we can always select an inc or dec subseq of length n. We show f(n+1) ≤ f(n)+2n-1. Let a(1),...,a(f(n)+2n-1) be a sequence of distinct reals. Out of the first f(n) of them there is an inc or dec subseq of length n. Call the last element of that subseq b(1). Remove it. (There is now a seq of length f(n)+2n-2.) Repeat the process to obtain b(1), b(2), ..., b(2n). Let A be the b(i)'s that come from inc subseqs. Let B be the b(i)'s that come from dec subseqs. It is easy to show that A is itself a dec subseq and that B is itself an inc subseq. If A or B has n+1 elements then we are done. The only case left is that A and B each have n elements. Let a be the last element in A and b be the last element in B. It is easy to show that a=b. But one of them was removed before the other, so this is impossible.
1. I suspect that in talks Erdos gave he did the proof now attributed to Erdos-Szekeres and hence people naturally assumed that this is the paper it was in.
2. I am surprised that Proofs from the book gave this proof. There is, IMHO, a better proof. I do not know whose it is.
Let the seq be a(1),...,a(n^2+1). Map every 1 ≤ i ≤ n^2+1 to (x,y) where x (resp. y) is the length of the longest inc (resp. dec) subseq that ends with a(i). It is easy to show that this map is 1-1: if i < j, then a(i) < a(j) forces the first coordinate to strictly increase, and a(i) > a(j) forces the second to. If all of the (x,y) that are mapped to are in [n]x[n] then the domain is bigger than the range, contradiction.
3. Erdos-Szekeres first proved this theorem in 1935. Martin and Joseph Kruskal proved it 15 years later without knowing that Erdos-Szekeres had proven it; though by the time Joseph Kruskal published it he knew. I have gathered up 5 proofs of the theorem that I know here.
4. In those days it was harder to find out if someone else had done what you had done since they didn't have google, but it may have (in some cases) been easier since there were so many fewer
researchers- you could just call them. OH- but long distance was expensive back then.
10 comments:
1. Of the three proofs you give here, only the "Proofs from the book" one does not use the phrase "it is easy to show". That, in my opinion, makes it the better proof.
2. Anonymous Rex, 4:08 PM, November 14, 2008
Some comments:
You say in the post that any sequence of n^2+1 distinct reals contains either a decreasing subsequence of length n or an increasing subsequence of length n+1. You can actually make both of those
n+1, as you do in your collection of five proofs.
Also, why not make it more general? Any sequence of length ab+1 contains either a decreasing subsequence of length a+1 or an increasing subsequence of length b+1. Moreover, it's worth mentioning
that this is tight, by considering the following sequence of length ab: b, b-1, ..., 2, 1; 2b, 2b-1, ..., b+2, b+1; ...; ab, ab-1, ..., (a-1)b+2, (a-1)b+1.
Finally, the last proof in the PDF contains a minor error. You say that the co-domain of the map f is [n+1]x[n+1]. The actual co-domain could be much larger, e.g. if you consider the sequence 1, 2, ..., n^2+1.
3. Anon 1 (indirectly) raises a good question with regard to what level of details a proof should contain. I am an advocate for spelling out all details unless they're obvious to the average
undergraduate student in mathematics. Clear explanations make a paper more easily accessible to those in neighboring fields of expertise.
4. Paul Beame, 6:13 PM, November 14, 2008
J Michael Steele has a survey with "six or more proofs" of it and related problems. Highly recommended. He attributes maybe the cutest proof to Hammersley (which you don't cite) and the last one
you mention as your favorite to Seidenberg (1959). This last one is the version that seems the most generalizable.
5. Paul Beame, 1:47 PM, November 15, 2008
For those who don't look at the Steele survey, here is that other proof: View the numbers as being written on cards to be stacked in piles ordered from left to right. A card can be placed on top
of another card if it has a smaller value. Place each card in turn on top of the leftmost pile where it fits. If a card's value is smaller than the top cards in all preceding piles then start a
new pile to the right of the existing piles and record a pointer from that card to the top card of the immediately preceding pile.
By the pigeonhole principle either there is a pile of height n+1, which is a decreasing sequence of length n+1, or there are n+1 piles in which case following back the pointers to the first pile
selects an increasing sequence of length n+1.
6. Paul Beame, 3:38 PM, November 15, 2008
Correction: for each card, record a pointer to the top card of the immediately preceding pile, not just when new piles are created.
7. Thanks Paul - I like that proof as it is constructive (and efficient to implement).
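Here is a short, runnable rendering of that constructive pile argument (my own illustration, not from the comment thread); given n²+1 distinct values it returns a monotone subsequence of length n+1:

import bisect

def inc_or_dec(seq, n):
    """Given n*n + 1 distinct numbers, return ('dec', s) or ('inc', s),
    where s is a monotone subsequence of length at least n + 1."""
    tops, piles, back = [], [], {}
    for c in seq:
        i = bisect.bisect_right(tops, c)   # leftmost pile whose top card > c
        if i == len(piles):
            piles.append([])               # start a new pile on the right
        if i > 0:
            back[c] = piles[i - 1][-1]     # pointer to top of preceding pile
        piles[i].append(c)
        tops[i:i + 1] = [c]                # c becomes the new top of pile i
        if len(piles[i]) == n + 1:         # a tall pile is a decreasing subseq
            return 'dec', piles[i]
    # Otherwise there are at least n + 1 piles; following the pointers back
    # from the top of the last pile selects an increasing subsequence.
    chain, c = [], piles[-1][-1]
    while True:
        chain.append(c)
        if c not in back:
            break
        c = back[c]
    return 'inc', chain[::-1]

# e.g. inc_or_dec([4, 1, 5, 2, 6, 3, 7, 8, 9, 10], 3)
#      -> ('inc', [1, 2, 3, 7, 8, 9, 10])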
8. Yes, the book "Extremal Combinatorics" says that this proof is due to Seidenberg. And it is indeed beautiful.
9. Proofs from the Book gives an argument that makes the claim immediately intuitive, while the shorter alternative proof does not make it immediately obvious why the specified mapping is 1-1.
10. Amen to Anonymous 1. It is easy to show that the phrase "it is easy to show" is often false. | {"url":"http://blog.computationalcomplexity.org/2008/11/monotone-sequence-theorem-literature.html","timestamp":"2014-04-20T03:11:08Z","content_type":null,"content_length":"174997","record_id":"<urn:uuid:84bb284b-e7fe-442c-a8db-27310aa87ff8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
Seahurst Statistics Tutor
...I am uniquely qualified to tutor geometry, with a PhD in Aeronautical and Astronautical Engineering from the University of Washington and more than 40 years of project experience in science and
engineering. The coursework for my Ph.D. included an extensive amount of mathematics, including calcul...
21 Subjects: including statistics, chemistry, physics, English
...I also volunteered with the Pullman, WA YMCA after school tutoring for over a year while earning my degree at WSU. During this time I also volunteered with the YMCA Special Olympics program and
am very comfortable working with special needs children.I am qualified to tutor Study Skills due to my...
25 Subjects: including statistics, chemistry, physics, geometry
...If your looking for a fun, creative, and EFFECTIVE way to improve your math skills- contact me for a tutoring session and you won't be disappointed. To give you an example of my creative
methods of teaching - I once taught math in an inner city New York 2nd grade class room. I took a class of 15 students that didn't know how to multiply.
17 Subjects: including statistics, calculus, geometry, algebra 2
...I became a tutorial instructor, because I am excited to see people learn. I was a tutor, through college. I always studied for depth, with the intent to teach what I learned.
62 Subjects: including statistics, English, chemistry, physics
...Have extensive IT industry experience and have been actively tutoring for 2 years. I excel in helping people learn to compute fast with or without calculators, and prepare for standardized tests. Handle all levels of math through undergraduate levels.
43 Subjects: including statistics, chemistry, calculus, physics | {"url":"http://www.purplemath.com/seahurst_wa_statistics_tutors.php","timestamp":"2014-04-18T14:14:49Z","content_type":null,"content_length":"23839","record_id":"<urn:uuid:a1b549e4-4f55-4ea2-97ce-68e4d8fdc9fe>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
As a first example, we take the overdetermined system of two equations in one dependent variable f(x), and two constants a and b.
Call rifsimp for a single case only (the default).
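(The Maple input is not preserved in this excerpt. The following sketch is my reconstruction from the a = b^2 constraint discussed under case 2 below, so the exact system should be treated as an assumption:)

with(DEtools):
sys := [diff(f(x), x, x) = a*f(x), diff(f(x), x) = b*f(x)];
rifsimp(sys);   # single-case mode, the default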
We see that under the given assumptions for the form of a and b (from Pivots), the only solution is given as f(x)=0 (from Solved). Now, run the system in multiple case mode using casesplit.
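(Continuing the sketch above, multiple-case mode is requested with the casesplit option:)

r2 := rifsimp(sys, casesplit);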
We see that we have four cases. All cases except case 2 have f(x)=0.
Looking at case 2 in detail, we see that under the constraint a = b^2 (from Solved) and b <> 0 from Pivots, the solution to the system will be given by the remaining ODE in f(x) (in Solved). Note
here that the constraint on the constants a and b, together with the assumption b <> 0, imply that a <> 0, so this constraint is not present in the Pivots entry due to simplification. It is still
present in the Case entry because Case describes the decisions made in the algorithm, not their simplified result. Also, case 4 has no Pivots entry. This is because no assumptions of the form p <> 0 were used for this case.
One could look at the caseplot with the command:
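(Using the result name from the sketch above; caseplot is the genuine DEtools command for this:)

caseplot(r2);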
As a final demonstration involving this system, suppose that we are only interested in nontrivial cases where f(x) is not identically zero. We can simply include this assumption in the input system,
and rifsimp will take it into account.
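(A reconstructed call; rifsimp does accept inequations such as f(x) <> 0 in its input list:)

rifsimp([op(sys), f(x) <> 0], casesplit);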
We see that the answer is returned in a single case with two false split Case entries. This means the computation discovered that the two excluded cases lead to contradictions, so the entries in the Case list are labelled as false splits, and the alternatives for the binary case splittings are not present.
For the next example, we have a simple inconsistent system:
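The system itself was not preserved in this excerpt; any pair of incompatible ODEs illustrates the behavior (a stand-in example of my own):

rifsimp([diff(u(x), x) = u(x), diff(u(x), x) = u(x) + 1]);

rifsimp detects the contradiction and reports an inconsistent status in its output table.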
So there is no solution u(x) to the above system of equations.
The next example demonstrates the UnSolve list, while also warning about leaving indeterminates in unsolved form.
So we run rifsimp, but only solve for f(x), leaving g(x) in unsolved form. Unfortunately, the resulting system is inconsistent, but this is not recognized because equations containing only g(x) are
left unsolved. As discussed earlier in the page, these equations come out in the UnSolve list.
When equations are present in the UnSolve list, they must be manually examined.
Here is a nonlinear example.
By default rifsimp spawns the nonlinear equation to obtain a leading linear equation, and performs any required simplifications; the end result has only one consistent case. Attempting to perform this calculation with the spawn=false option shows that, by disabling spawning, the system is not left in fully simplified form (as indicated by the presence of the DiffConstraint entry), and we do not obtain full information about the solutions of the system.
John McAfee is a Heinlein hero
December 10, 2012
By Andrew
(This article was originally published at Statistical Modeling, Causal Inference, and Social Science, and syndicated at StatsBlogs.)
“A small group of mathematicians”
Jenny Davidson points to this article by Krugman on Asimov’s Foundation Trilogy. Given the silliness of the topic, Krugman’s piece is disappointingly serious (“Maybe the first thing to say about
Foundation is that it’s not exactly science fiction – not really. Yes, it’s set in the future, there’s interstellar travel, people shoot each other with blasters instead of pistols and so on. But
these are superficial details . . . the story can sound arid and didactic. . . . you’ll also be disappointed if you’re looking for shoot-em-up action scenes, in which Han Solo and Luke Skywalker
destroy the Death Star in the nick of time. . . .”). What really jumped out at me from Krugman’s piece, though, was this line:
In Foundation, we learn that a small group of mathematicians have developed “psychohistory”, the aforementioned rigorous science of society.
Like Davidson (and Krugman), I read the Foundation books as a child. I remember the “psychohistory” part, of course, but not that it was invented by mathematicians. That seems so retro! Back in the
day, there were only a few sorts of technical academic fields, and one of these was mathematics. Thus you had Mandelbrot inventing fractals, Turing inventing computer science, and Ulam inventing the
Nowadays, I think of mathematicians as a sort of eccentric band of specialists, working for decades on problems that only they care about, while earning money teaching intro calc and training
graduate students to work for Steven A. Cohen. I’m not saying that’s a fair impression—it would be just as correct for a mathematician to describe statisticians as an eccentric band of mathematical
plodders who make a virtue of their mediocrity and call it practicality—but it’s the impression I get. If I were writing a novel about an exciting new science, I might have it be invented by a
biologist or a computer scientist or even a rogue economist, but I probably wouldn’t think that something so applied would come out of the minds of a band of mathematicians.
A modern Heinlein hero
Perhaps Krugman will next write something on Robert Heinlein, whose writings, like Asimov’s, provide endless retro amusement (for those of us who are amused by such things), with the characteristic
Heinlein hero being someone like a garage mechanic who develops a faster-than-light space drive in his basement workshop. Updating that to the present day, we’d end up with someone like John McAfee,
that internet zillionaire who turned up in Guatemala the other day. Actually, McAfee sounds like a perfect Heinlein hero: a super-rich retired businessman with a fascination with airplanes, guns, and
drugs, and a 20-year-old girlfriend.
Of course, if we were really living in a Heinlein story, McAfee would actually have a time machine in his backyard, and that girlfriend would be a reincarnation of McAfee’s cat.
P.S. At the very end of the Krugman article:
The Foundation Trilogy by Isaac Asimov, introduced by Nobel Prize-winning economist Paul Krugman, is published by The Folio Society priced £75.00.
I know the British economy hasn’t been doing so well lately, but £75 is still a good chunka change, no? What I wonder is, how many people who buy this book will really want to read it all the way
through. Reading about the Foundation trilogy can be fun, but I can’t imagine the book itself can be very easy or pleasant to read at this point. I just feel that at this point I’ve read so many
smooth works of fiction and journalism over the years, that it might be difficult to read something that wooden in style. (Heinlein would be much more readable, I’d think.)
koszul duality and algebras over operads
Given a pair of Koszul dual algebras, say $S^*(V)$ and $\bigwedge^*(V^*)$ for some vector space $V$, one obtains a triangulated equivalence between their bounded derived categories of
finitely-generated graded modules.
Given a pair of Koszul dual operads, say the Lie and commutative operads, what is the precise analogue of a derived equivalence between their categories of algebras?
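For orientation, a standard fact not stated in the question: over a field of characteristic 0, the Lie–Com duality is realized by Quillen's pair of functors between DG Lie algebras and conilpotent cocommutative DG coalgebras,

$$\mathcal{C}:\ \mathfrak{g} \longmapsto \big(S^{c}(\mathfrak{g}[1]),\, d_{CE}\big), \qquad \mathcal{L}:\ C \longmapsto \big(\mathrm{FreeLie}(C[-1]),\, d\big),$$

whose unit and counit are quasi-isomorphisms, so they induce inverse equivalences at the level of homotopy categories.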
Tags: homological-algebra, operads, koszul-duality
1 Answer
The situation for graded modules over a pair of Koszul dual algebras is more complicated, actually. What the question says is true for Koszul algebras $A$ and $A^!$ provided that $A$ is
Noetherian and $A^!$ is finite-dimensional (including the case of the symmetric and exterior algebras) but not otherwise. In general one can say that the unbounded derived categories of
positively graded modules with finite-dimensional components over $A$ and $A^!$ are anti-equivalent. The subcategories of complexes of positively graded modules bounded separately in
every grading in these unbounded derived categories are also anti-equivalent.
One can replace the contravariant anti-equivalence with a covariant equivalence by considering positively graded modules over one of the algebras and negatively graded modules over the
other one (both algebras being considered as positively graded). In this case one does not have to require the components of the modules to be finite-dimensional.
With algebras over operads, the analogue of the equivalence for graded modules involves DG-algebras with an additional positive grading (there being only the ground field $k$ in the
additional grading $0$ and nothing in the negative additional grading), with the additional grading preserved by the differential. The Koszul duality is an anti-equivalence between the
localizations of the categories of DG-algebras of this kind, with every component of fixed additional grading being a bounded complex of finite-dimensional vector spaces, by quasi-isomorphisms. For some operads (e.g., for Lie and Com) one has to assume the field $k$ to have characteristic $0$, while for some others (e.g., Ass) one doesn't.
If one wishes to replace the contravariant anti-equivalence with a covariant equivalence in the case of algebras over operads, one has to consider algebras on one side of the equivalence
and coalgebras on the other side. Then the boundedness and finite-dimensionality requirements can be dropped.
What I've described above is the homogeneous Koszul duality; the nonhomogeneous case (with ungraded modules or algebras without the additional grading) is more complicated, though also
possible. See my answer to the question linked to from the question above.
References: 1. Beilinson, Ginzburg, Soergel "Koszul duality patterns in representation theory", 2. My preprint "Two kinds of derived categories, ...", arXiv:0905.2621, Appendix A.
thanks for the great answer! – Harold Williams Sep 26 '10 at 1:25
Does a free-falling charge radiate?
Quote:
you might try and use it just to approximate the orbit of 2 bodies when their distance is very large wrt their radii and call them geodesic orbits, but that won't get you a realistic approximation for the gravitational radiation of that system; for that you need strong-field numerical relativity.
This makes sense, and I agree it might indeed be what Gralla was trying to say. Basically this would mean that, since the orbital period of the binary pulsar is much shorter than the characteristic
time scale for gravitational radiation from that system, the post-Newtonian method would give approximate geodesic orbits for time scales short compared to the radiation time scale, but if you tried
to model the long-term behavior of the system that way you would have to adjust the orbital parameters every so often as gravitational radiation was emitted to keep the approximation close enough. To
actually predict the long-term changes in the orbital parameters, you would need to do the strong field numerical simulation.
Quote:
if you want to attribute geodesic motion to all orbiting bodies, extended or test particles, regardless of the intensity of the radiation (gravitational or EM) they are emitting, you are basically saying that all test particle and extended body worldlines following some kind of orbit, no matter how unstable, are following geodesic motion, which I don't think is true.
I'm not trying to say this. I agree there would be little point in the concept of a geodesic if it didn't give you a way to pick out some meaningful subset of all possible worldlines.
Quote:
if orbiting bodies emitting radiation, regardless of the intensity (strong-field case), didn't see their geodesic motion affected, then first: what could actually ever affect a geodesic path? and second: how do we expect that radiation to affect distant bodies' detectors if it isn't capable of altering the geodesic path of the emitting body in the least (as long as we still consider Newton's third law as valid, of course)?
Geodesic paths can "change" and still be geodesic paths because of changes in the metric that defines what a geodesic path is. Consider the detector scenario; suppose we have a GW detector that uses
interferometry, like LIGO or LISA. When a gravitational wave passes through the detector, it shows interference fringes; but the individual mirrors that reflect the laser light that shows the fringes
(because of small changes in the proper length between the mirrors) are in free fall the whole time. They are following geodesics, but geodesics of a time-varying metric.
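In symbols (a standard statement added here for reference, not a quote from the thread): the mirrors obey the geodesic equation of the full time-dependent metric $g_{\mu\nu}$,

$$\frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\frac{dx^{\beta}}{d\tau} = 0,$$

where the Christoffel symbols $\Gamma^{\mu}_{\alpha\beta}$ are built from $g_{\mu\nu}$ and so vary as the wave passes; the worldlines change while remaining geodesics throughout.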
Similarly, the two neutron stars in the binary pulsar could be following geodesics, but still have their orbital parameters change, because the metric is changing. In fact, that's probably the wrong
way to think about it, though, because the changes in the orbital parameters, at least to a first approximation, *are* the changes in the metric. The stars themselves don't change, considered in
isolation; what changes is their relationship. The overall metric of the system as a whole includes the relationship between the stars, so if that changes, the metric changes, even if each star
remains exactly the same internally. This doesn't mean the stars don't travel on geodesics; it means that there is a single self-consistent solution realized by Nature (which we can only approximate
at our current level of knowledge) that has each star (more precisely, each star's center of mass) traveling on a geodesic of the full, time-dependent metric that is ultimately due to the two stars
acting together as sources.
But *why* would the orbital parameters change, if the stars themselves are not changing internally? AFAIK the answer to this involves the light-speed time delay in the propagation of gravity, as
outlined, for example, in a paper by Carlip.
Of course it's possible that the full, self-consistent solution realized by Nature does not have the stars traveling on exact geodesics of the full, time-dependent metric; that's what Gralla seems to
think, for example. We won't know for sure until we can construct such solutions and make more precise observations. But I don't think we can rule out the possibility that, at least for systems like
the binary pulsar, gravitational waves can be emitted without requiring any deviation from geodesic motion to explain them. | {"url":"http://www.physicsforums.com/showthread.php?p=4280268","timestamp":"2014-04-18T03:16:55Z","content_type":null,"content_length":"97059","record_id":"<urn:uuid:2ad1badf-268e-44d8-8065-d638aa3cd6b5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Horseshoe Lake, WA Math Tutor
Find a Horseshoe Lake, WA Math Tutor
...I have taught all subjects of the SAT for over 10 years. For the Writing section, I split the time between the essay and the multiple choice. For the essay, I make sure students are ready for
whatever prompt they might receive and I target specifically what each student needs to improve.
32 Subjects: including prealgebra, LSAT, algebra 1, algebra 2
...My experience tutoring includes helping my peers throughout middle school, high school, and college. Some examples would be teaching Trigonometry to my classmates in high school as a
substitute teacher, assisting a 10 year old with all homework including math for a year, assisting a college stud...
7 Subjects: including calculus, linear algebra, algebra 1, algebra 2
...My first large technical document was my Ph.D. Dissertation. I am highly qualified to tutor for the GED test.
21 Subjects: including algebra 1, algebra 2, calculus, chemistry
I've always excelled at taking standardized testing and I really love helping others improve their scores. I specialize in helping students with the GRE, ACT, SAT, and ASVAB. I scored perfect on
my first ASVAB exam and I've been able to score perfect on repeat exams of the GRE, ACT, and SAT.
15 Subjects: including SAT math, GRE, ACT Math, prealgebra
...I mainly tutored high school algebra. I am good at teaching students how to work with fractions. I have a Bachelor's Degree in Mechanical Engineering.
5 Subjects: including precalculus, algebra 1, algebra 2, geometry
Related Horseshoe Lake, WA Tutors
Horseshoe Lake, WA Accounting Tutors
Horseshoe Lake, WA ACT Tutors
Horseshoe Lake, WA Algebra Tutors
Horseshoe Lake, WA Algebra 2 Tutors
Horseshoe Lake, WA Calculus Tutors
Horseshoe Lake, WA Geometry Tutors
Horseshoe Lake, WA Math Tutors
Horseshoe Lake, WA Prealgebra Tutors
Horseshoe Lake, WA Precalculus Tutors
Horseshoe Lake, WA SAT Tutors
Horseshoe Lake, WA SAT Math Tutors
Horseshoe Lake, WA Science Tutors
Horseshoe Lake, WA Statistics Tutors
Horseshoe Lake, WA Trigonometry Tutors
Nearby Cities With Math Tutor
Colby, WA Math Tutors
Colchester, WA Math Tutors
Forest City, WA Math Tutors
Grnd Lke Town, OK Math Tutors
Herron Island, WA Math Tutors
Lake Holiday, WA Math Tutors
Long Lake, WA Math Tutors
Orchard Heights, WA Math Tutors
Overlook, WA Math Tutors
Parkwood, WA Math Tutors
Raft Island, WA Math Tutors
Retsil Math Tutors
South Park Village, WA Math Tutors
View Park, WA Math Tutors
Waterman, WA Math Tutors | {"url":"http://www.purplemath.com/Horseshoe_Lake_WA_Math_tutors.php","timestamp":"2014-04-19T23:50:15Z","content_type":null,"content_length":"23893","record_id":"<urn:uuid:031ae225-79d5-496a-b31b-11ff7f35445a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Characterizing determinacy in Kleene algebras
2003. Cited by 42 (29 self).
We propose Kleene algebra with domain (KAD), an extension of Kleene algebra with two equational axioms for a domain and a codomain operation, respectively. KAD considerably augments the
expressibility of Kleene algebra, in particular for the specification and analysis of state transition systems. We develop the basic calculus, discuss some related theories and present the most
important models of KAD. We demonstrate applicability by two examples: First, an algebraic reconstruction of Noethericity and well-foundedness. Second, an algebraic reconstruction of propositional
Hoare logic.
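As background for readers (Kozen's standard axiomatization, supplied here rather than taken from the abstract): a Kleene algebra is an idempotent semiring with a star operation satisfying

$$1 + x x^{*} \le x^{*}, \qquad 1 + x^{*} x \le x^{*}, \qquad b + a x \le x \Rightarrow a^{*} b \le x, \qquad b + x a \le x \Rightarrow b\, a^{*} \le x,$$

where $x \le y$ abbreviates $x + y = y$; KAD then adds the domain and codomain operators on top of this structure.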
JOURNAL ON LOGIC AND ALGEBRAIC PROGRAMMING, SPECIAL ISSUE ON RELATION ALGEBRA AND KLEENE ALGEBRA, 2004. Cited by 7 (5 self).
In relational semantics, the input-output semantics of a program is a relation on its set of states. We generalize this in considering elements of Kleene algebras as semantical values. In a
nondeterministic context, the demonic semantics is calculated by considering the worst behavior of the program. In this paper, we concentrate on while loops. Calculating the semantics of a loop is
difficult, but showing the correctness of any candidate abstraction is much easier. For deterministic programs, Mills has described a checking method known as the while statement verification rule. A
ALGEBRAIC METHODOLOGY AND SOFTWARE TECHNOLOGY (AMAST 2006), LNCS 4019, 2006. Cited by 6 (5 self).
We propose an algebraic semantics for the temporal logic CTL∗ and simplify it for its sublogics CTL and LTL. We abstractly represent state and path formulas over transition systems in Boolean left
quantales. These are complete lattices with a multiplication that preserves arbitrary joins in its left argument and is isotone in its right argument. Over these quantales, the semantics of CTL∗
formulas can be encoded via finite and infinite iteration operators; the CTL and LTL operators can be related to domain operators. This yields interesting new connections between representations as
known from the modal µ-calculus and Kleene/ω-algebra.
2010. Cited by 5 (4 self).
We present an algebraic approach to separation logic. In particular, we give an algebraic characterisation for assertions of separation logic, discuss different classes of assertions and prove
abstract laws fully algebraically. After that, we use our algebraic framework to give a relational semantics of the commands of the simple programming language associated with separation logic. On
this basis we prove the frame rule in an abstract and concise way. We also propose a more general version of separating conjunction which leads to a frame rule that is easier to prove. In particular,
we show how to algebraically formulate the requirement that a command does not change certain variables; this is also expressed more conveniently using the generalised separating conjunction. The
algebraic view does not only yield new insights on separation logic but also shortens proofs due to a point free representation. It is largely first-order and hence enables the use of off-the-shelf
automated theorem provers for verifying properties at a more abstract level.
2005.
Abstract. We give a compendium of algebraic calculation rules for the operations of residuation and detachment in semirings.
"... Abstract. In this paper we present an abstract representation of pointer structures in Kleene algebras and the properties of a particular selective update function. These can be used as
prerequisites for the definition of in-situ pointer updates and a general framework to derive in-situ pointer algo ..."
Cited by 1 (1 self)
Add to MetaCart
Abstract. In this paper we present an abstract representation of pointer structures in Kleene algebras and the properties of a particular selective update function. These can be used as prerequisites
for the definition of in-situ pointer updates and a general framework to derive in-situ pointer algorithms from their specification.
"... We generalise the designs of Unifying Theories of Programming (UTP) by defining them as matrices over semirings with ideals. This clarifies the algebraic structure of designs and considerably
simplifies reasoning about them, e.g., forming a Kleene and omega algebra of designs. Moreover, we prove a g ..."
Add to MetaCart
We generalise the designs of Unifying Theories of Programming (UTP) by defining them as matrices over semirings with ideals. This clarifies the algebraic structure of designs and considerably
simplifies reasoning about them, e.g., forming a Kleene and omega algebra of designs. Moreover, we prove a generalised fixpoint theorem for isotone functions on designs. We apply our framework to
investigate symmetric linear recursion and its relation to tail-recursion; this substantially involves Kleene and omega algebra as well as additional algebraic formulations of determinacy,
invariants, domain, pre-image, convergence and noetherity. Due to the uncovered algebraic structure of UTP designs, all our general results also directly apply to UTP.
2005.
In 1996 Zhou and Hansen proposed a first-order interval logic called Neighbourhood Logic (NL) for specifying liveness and fairness of computing systems and also defining notions of real analysis in
terms of expanding modalities. After that, Roy and Zhou presented a sound and relatively complete Duration Calculus as an extension of NL. We present an embedding of NL into an idempotent semiring of
intervals. This embedding allows us to extend NL from single intervals to sets of intervals as well as to extend the approach to arbitrary idempotent semirings. We show that most of the required
properties follow directly from Galois connections, hence we get the properties for free. As one important result we get that some of the axioms which were postulated for NL can be dropped since they
are theorems in our generalisation. Furthermore, we present some possible interpretations for neighbours beyond intervals. Here we discuss for example reachability in graphs and applications to
hybrid systems. At the end of the paper we add finite and infinite iteration to NL and extend idempotent semirings to Kleene algebras and ω algebras. These extensions are useful for formulating
repetitive properties and procedures like loops.
"... This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the
authors institution and sharing with colleagues. Other uses, including reproduction and distribution, or sel ..."
Add to MetaCart
This article appeared in a journal published by Elsevier. The attached copy is furnished to the author for internal non-commercial research and education use, including for instruction at the authors
institution and sharing with colleagues. Other uses, including reproduction and distribution, or selling or licensing copies, or posting to personal, institutional or third party websites are
prohibited. In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further
information regarding Elsevier’s archiving and manuscript policies are encouraged to visit: | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=220194","timestamp":"2014-04-18T01:31:51Z","content_type":null,"content_length":"32730","record_id":"<urn:uuid:dd0518a4-cc78-486c-9c8e-aec4f415f20f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonradial pulsation of the Delta Scuti variable 4 CVn
Delta Scuti Star Newsletter
Issue 1, October 1989
Nonradial pulsation of the Delta Scuti variable 4 CVn
A multisite campaign undertaken during the years 1983 and 1984 has resulted in a solution for the complex variable 4 CVn = AI CVn = HR 4715. M. Breger, B. J. McNamara, F. Kerschbaum, Huang Lin, Jiang Shi-yang, Guo Zi-he, and E. Poretti made multisite observations at four observatories on three continents: McDonald and Tortugas (USA), Xing-Long (China), and Merate (Italy).
These two campaigns cover 136 hours and avoid the serious one-cycle per day aliasing present in single-observatory data.
Multiple-frequency least-squares and single-frequency Fourier techniques reveal five frequencies with constant amplitudes for the two years. A sixth frequency near 5.5 c/d (actually predicted by the
nonradial pattern found below) is also present, but was not included into the solution because its exact value was ambiguous.
A clue to the pulsation of 4 CVn is given by the observed frequency differences of 0.40 and 0.80 cycles per day, respectively. The factor of two suggests that we are dealing with rotational splitting
and modes with different m values. Consideration of first and second-order rotational splitting as well as the theoretical values and ratios of the pulsation constants, Q, given by Fitch, allows an
identification of the modes. All the frequencies are matched well with the same l value of 2 or 3. The identification given below may not be unique (within the observational uncertainties of
determining the radius and Q values), but gives the best fit among the different combinations modelled.
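The splitting pattern invoked above is the standard first-order (Ledoux) rotational splitting, quoted here for context (a textbook formula, not taken from the newsletter):

$$f_{k,l,m} \approx f_{k,l,0} + m\,\Omega\,(1 - C_{k,l}),$$

so modes differing by one or two units of m are separated by one or two times the same basic splitting, matching the observed 0.40 and 0.80 c/d differences.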
Pulsation frequencies and p modes for l = 2

Frequency   Q value   Amplitude   k    m
  (c/d)     (days)      (mag)
  8.5950     0.016      0.013     3    0
  6.9763     0.020      0.006     2    0
  7.3778     0.019      0.005     2   -1
  5.0475     0.028      0.028     1    1
  5.8508     0.024      0.008     1   -1
The mode identification is consistent with the observed rotational frequency of 20.31 revs/d and predicts an inclination, i, of the rotational axis to the line of sight of 42 degrees.
A comparison with published and unpublished data for earlier years shows that over 18 years the amplitudes of pulsation are slowly variable on a time scale of years. | {"url":"http://www.univie.ac.at/tops/CoAst/archive/DSSN1/News4CVn.html","timestamp":"2014-04-16T19:07:11Z","content_type":null,"content_length":"3577","record_id":"<urn:uuid:80ea0638-26ff-4f1a-8f1c-50f217c739a9>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Limit of ln(x+1) as x->infinity
August 13th 2008, 11:18 PM
Limit of ln(x+1) as x->infinity
Hi, I'm a bit rusty on my math since it is summer...
but i have to do this summer assignment.
The original question is which of the following functions has a horizontal asymptote at y=-1? and it lists like 5 equations, one which is y=ln(x+1)
I'm thinking that you have to find the limit as x->infinity and negative infinity, and if the answer is -1, that is my equation. If that's wrong, point it out please.
And no graphing calculator, and i really don't want to hand draw these graphs (i have to show work)
August 13th 2008, 11:24 PM
$\lim_{x \to \infty} \ln(x+1) = \infty$
and $\ln(x+1)$ is undefined in the reals for $x\le -1$
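For contrast (an illustrative function of my own, not necessarily one of the five listed in the problem): a function that does have a horizontal asymptote at $y=-1$ is $y = e^{-x}-1$, since $\lim_{x \to \infty} \left(e^{-x}-1\right) = -1$.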
August 13th 2008, 11:30 PM
You don't need a calculator, there are many options if you have access to a computer. There should be a built in calculator with the operating system, or (not many people know this) just type the
calculation into Google and Google calculator will give the answer. Not to mention the myriad of online function/graph plotters that you will find if you google for online graph plotter, the
first hit is here.
August 14th 2008, 09:19 AM
Actually, it's not that I don't have a graphing calculator, I'm just not allowed to use one for that problem. Do you think that just writing that the limit goes to infinity is sufficient?
August 14th 2008, 10:14 AM
It depends on what you are supposed to know. If you know that $\ln(x)$ increases without bound as $x$ goes to infinity you need just say so.
August 14th 2008, 12:40 PM
This depends on your teacher, but I would assume that it is sufficient. One thing you can do is rewrite the limit to make the point explicit. Let $y = \ln(x+1)$, so that $e^{y} = x+1$ and

$\lim_{x \rightarrow \infty} e^{y} = \lim_{x \rightarrow \infty} (x+1) = \infty$

For $a^x$ to go to infinity, the base must be greater than 1 and the exponent must go to infinity; since the base here is $e > 1$, the exponent $y = \ln(x+1)$ must itself go to infinity:

$\lim_{x \rightarrow \infty} \ln(x+1) = \infty$
Calculus II???
Is this a good place to get help with Calculus II problems?
OK MathGuru!! I bow to your knowledge! Determine whether the triangle with vertices P(1,0,-1), Q(2,1,0), and R(0,0,3) is a right triangle. Like I have a CLUE???
I would do it this way: Find the distance between P&Q, Q&R, and R&P (this should be easy enough; let me know if you need help, there is a formula for the distance between points in 3 dimensions). Now you have 3 sides of the triangle. According to Euclid's SSS proof there is only one possible triangle given three sides. Now you can use sin or cos to figure out the angle PQR, which is the one most likely to be a right angle. Let me know if you need further help.
What happens as a result of an increase in the intensity of a sound wave?
- the frequency of the sound wave increases
- the velocity of the sound wave decreases
- the energy of the sound wave increases
- the amplitude of the sound wave decreases
I think it's C.
Intensity is proportional to amplitude squared: \[I \propto A^{2}\] So since the intensity increases, we can say that the amplitude must increase too. Intensity is power per unit area (that is, energy per unit area per unit time), so a larger intensity means the wave delivers more energy. The frequency of the wave is unchanged, since the period remains the same, and the propagation speed is set by the medium rather than by the amplitude. So the only feasible answer is that the energy of the sound wave increases.
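For reference, the standard plane-wave relation behind the \[I \propto A^{2}\] claim (a textbook formula, not from the thread) is \[I = \tfrac{1}{2}\,\rho\, v\, \omega^{2} A^{2}\] with $\rho$ the density of the medium, $v$ the speed of sound, $\omega$ the angular frequency, and $A$ the displacement amplitude.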
I meant to say: since \[KE \propto v^{2}\], if the kinetic energy increases by a factor of x then the particle speed increases by a factor of \[\sqrt{x}\].
Weighted blow up of a Toric Variety
Let $X$ be a Toric Variety and let $x\in X$ be a point (not necessarily smooth). Then the blow up $Bl_{x}X$ of $X$ in $x$ is Toric. Let $Y:=WBl_{x}X$ be the weighted blow up of $X$ in $x$ with
weights $a_{1},...,a_{n}$.
Is $Y$ Toric ?
In particular, is a weighted blow up of a weighted projective space Toric ?
For the blow up to be toric, $x$ must be torus invariant – Jesus Martinez Garcia Aug 4 '11 at 18:19
If you are blowing up a torus-invariant ideal, the resulting object should still be toric. – Karl Schwede Aug 4 '11 at 18:53
1 Answer
(Essentially reposting Jesus Martinez Garcia and Karl Schwede's comments as an answer)
If you are blowing up a torus-invariant sheaf of ideals, then the Rees algebra (the thing you take Proj of to get the blow-up) has grading by characters of the torus, so the blowup has an
action of the torus, so it is toric (since it also contains a dense copy of the torus that acts on it). Thus, if your weighted blow-up is weighted "along the coordinate hyperplanes", it
will be toric.
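In symbols (the standard construction, supplied here for reference): for an ideal sheaf $\mathcal{I}$ on $X$, the blow-up is

$$\mathrm{Bl}_{\mathcal{I}}\,X \;=\; \mathrm{Proj}\,\bigoplus_{d \ge 0} \mathcal{I}^{d},$$

and when $\mathcal{I}$ is torus-invariant the Rees algebra $\bigoplus_d \mathcal{I}^d$ carries the character grading mentioned above.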
If you blow up a non-invariant sheaf of ideals, you do not get something toric in general. If you blow up a non-invariant point of $\mathbb A^2$, then the blow-up map already cannot be
toric, even though both varieties are toric. Blowing up two different points of $\mathbb A^2$ gives a total space which has no toric structure at all: the two (-1)-curves must be
torus-invariant and blowing them down gives a toric map to $\mathbb A^2$, but any toric blow-up of $\mathbb A^2$ must have the exceptional locus lying over a single point.
A caveat: the blow-up of a normal toric variety at a torus-invariant ideal need not be normal. For example, the blow-up of $\langle x^2, y^2 \rangle$ in the plane is non-normal. So, the answer also depends on what you take to be a toric variety. – Dustin Cartwright Aug 5 '11 at 13:26
Knot Physics
Spacetime is assumed to be a branched 4-dimensional manifold embedded in a 6-dimensional Minkowski space. The branches allow quantum interference; each individual branch is a history in the sum-over-histories. An n-manifold embedded in an (n+2)-space can be knotted. The metric on the spacetime manifold is inherited from the Minkowski space and only allows a particular variety of knots. We show how those knots correspond to the observed particles with corresponding properties. We derive a Lagrangian. The Lagrangian combined with the geometry of the manifold produces gravity, electromagnetism, the weak force, and the strong force.
Introduction to knot physics
Introduction to quantum in knot physics
Conference presentation
Frequently asked questions page
Knot physics: Spacetime in co-dimension 2
First version 2004. Recent changes:
(Corrections in QFT and Ricci flatness. 1/15/2013)
(Minor changes for clarity. 4/14/2013)
(Section IX D Gravity changed for clarity. 4/16/2013)
(Minor changes. 4/3/2014)
Knot physics: Neutrino helicity
(Neutrino helicity separated as new paper. 11/3/2011)
(Minor changes for clarity. 4/14/2013)
(Minor changes. 4/3/2014)
Knot physics: Deriving the fine structure constant
(First version. 4/14/2013)
(Significant changes and revisions. 4/3/2014)
Mathematica notebooks:
(First version. 4/14/2013)
(Expanded function verification. 4/25/2013)
(Significant changes and revisions. Function verification separated. 4/2/2014)
(First version. 4/2/2014) | {"url":"http://www.knotphysics.net/","timestamp":"2014-04-18T14:46:15Z","content_type":null,"content_length":"5903","record_id":"<urn:uuid:9ffd487c-4b25-4000-99a2-d10ad5cd7663>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Re: st: How to loop over non-integers?
Re: Re: st: How to loop over non-integers?
From "Scott Merryman" <scott.merryman@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: Re: st: How to loop over non-integers?
Date Sun, 10 Jun 2007 15:57:53 -0500
On 6/10/07, PatrickT@umac.mo <PatrickT@umac.mo> wrote:
The code below nearly gets there. From the plot of the Wald statistic
against alpha, the smallest element of alpha (on this coarse grid) is
alpha_hat = 29. I could
not work out how to use the minimum function to extract this value. I tried
things like --- scalar alpha_hat = min(alpha) --- but that won't work.
What's the trick
You could try:
sort W
scalar alpha_hat = alpha[1]
sum W, meanonly
if W == r(min) {
scalar alpha_hat = alpha
}
| {"url":"http://www.stata.com/statalist/archive/2007-06/msg00296.html","timestamp":"2014-04-20T21:24:43Z","content_type":null,"content_length":"6652","record_id":"<urn:uuid:381891a4-ca2b-491c-8d9d-a207ab26813f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Consequences of the Axiom of Choice Project
Consequences of the Axiom of Choice Project Homepage
The book Consequences of the Axiom of Choice by Paul Howard and Jean E. Rubin is volume 59 in the series Mathematical Surveys and Monographs
published by the American Mathematical Society in 1998. This book is a survey of research done during the last 100 years on the axiom of choice and its consequences. (Connect to The AMS Bookstore for
ordering information.)
The Consequences of the Axiom of Choice Project is a continuation of the research that produced the book. The authors would appreciate learning of any corrections or additions that should be made to
the project. (phoward@emunix.emich.edu, jer@math.purdue.edu)
On this page you will find:
• Changes and additions to the data base that have occurred since publication of the book
• A TeX version of the implication table, Table 1 which may be downloaded and printed. (Hold down the shift key and click on the file name to download.)
• A TeX version of the auxillary table, Table 2 which may be downloaded and printed (Hold down the shift key and click on the file name to download.)
• A view of the implication table, Table I, restricted to any subset of 10 or fewer forms of the axiom of choice. (Fill out the form below.)
• A listing of all models known to satisfy any specified set of conditions. (Complete the form below. You may have to submit the data more than once.)
To see a list of all models with specified characteristics:
1. Enter the numbers of the forms that are to be true in the model separated by spaces or commas below. (Ten forms maximum)
The list of forms to be TRUE in the model(s)
2. Enter the numbers of the forms that are to be false in the model separated by spaces or commas below. (Ten forms maximum)
The list of forms to be FALSE in the model(s)
| {"url":"http://www.math.purdue.edu/~hrubin/JeanRubin/Papers/conseq.html","timestamp":"2014-04-18T05:56:14Z","content_type":null,"content_length":"4042","record_id":"<urn:uuid:c4943792-78b4-49a7-aff8-1ccc1bd70d99>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
The miraculous universal distribution
Results 1 - 10 of 17
, 1994
"... The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general
problem in model selection: that of learning the best model granularity. The performance of a model depends ..."
Cited by 20 (8 self)
Add to MetaCart
The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in
model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high
precision generally involves modeling of accidental noise and too low precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the
best model is the one that most compresses a two-part code of the data set: this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL
coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and
orientation. Base...
- M B , 1993
"... The theory of algorithmic complexity (commonly known as Kolmogorov complexity) or algorithmic information theory is a novel mathematical approach combining the theory of computation with
information theory. It is the theory that finally formalizes the elusive notion of the amount of information in i ..."
Cited by 15 (10 self)
Add to MetaCart
The theory of algorithmic complexity (commonly known as Kolmogorov complexity) or algorithmic information theory is a novel mathematical approach combining the theory of computation with information
theory. It is the theory that finally formalizes the elusive notion of the amount of information in individual objects, in contrast to entropy that is a statistical notion of average code word length
to transmit a message from a random source. This powerful new theory has successfully resolved ancient questions about the nature of randomness of individual objects, inductive reasoning and
prediction, and has applications in mathematics, computer science, physics, biology, and other sciences, including social and behavioral sciences.
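As a concrete illustration of the compression-based approximation that several of the works listed here build on (a minimal sketch of my own, not code from any cited paper), compressed length can serve as a computable stand-in for Kolmogorov complexity:

# Minimal sketch: compressed length as a computable proxy for K(x).
import random
import zlib

def approx_k(data: bytes) -> int:
    # Smaller compressed size suggests lower algorithmic complexity.
    return len(zlib.compress(data, 9))

random.seed(0)
structured = b"ab" * 500                                   # highly regular
noise = bytes(random.getrandbits(8) for _ in range(1000))  # incompressible
print(approx_k(structured), approx_k(noise))  # structured is far smaller

As some of the abstracts note, this proxy works poorly for very short strings, which is what motivates the frequency-based alternatives studied in several of these papers.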
, 2007
"... A drawback to Kolmogorov-Chaitin complexity (K) is that it is uncomputable in general, and that limits its range of applicability. Another critique concerns the dependence of K on a particular
universal Turing machine U for which predictions for short sequences-shorter for example than typical compi ..."
Cited by 7 (7 self)
Add to MetaCart
A drawback to Kolmogorov-Chaitin complexity (K) is that it is uncomputable in general, and that limits its range of applicability. Another critique concerns the dependence of K on a particular
universal Turing machine U for which predictions for short sequences-shorter for example than typical compiler lengths- can be arbitrary. In practice one can approximate it by computable compression
methods. However, such compression methods do not provide a good approximation for short sequences. Herein is suggested an empirical approach to overcome the problem that compression approximations
do not work well for short sequences. Additionally, our results demonstrate that there is a strong correlation in terms of sequence frequencies across the output of several systems including such
abstract systems as cellular automata and Turing machines, as well as repositories containing a sample of real-world information such as images and human DNA fragments. Our results suggest
- In Genetic Programming, Proceedings of EuroGP 2003 , 2003
"... This paper has three aims. Firstly, to clarify the poorly understood No Free Lunch Theorem (NFL) which states all search algorithms perform equally. Secondly, search algorithms are often applied
to program induction and it is suggested that NFL does not hold due to the universal nature of the ma ..."
Cited by 6 (3 self)
Add to MetaCart
This paper has three aims. Firstly, to clarify the poorly understood No Free Lunch Theorem (NFL) which states all search algorithms perform equally. Secondly, search algorithms are often applied to
program induction and it is suggested that NFL does not hold due to the universal nature of the mapping between program space and functionality space. Finally, NFL and combinatorial problems are
, 2011
"... Intelligent design advocate William Dembski has introduced a measure of information called “complex specified information”, or CSI. He claims that CSI is a reliable marker of design by
intelligent agents. He puts forth a “Law of Conservation of Information” which states that chance and natural laws ..."
Cited by 5 (0 self)
Add to MetaCart
Intelligent design advocate William Dembski has introduced a measure of information called “complex specified information”, or CSI. He claims that CSI is a reliable marker of design by intelligent
agents. He puts forth a “Law of Conservation of Information” which states that chance and natural laws are incapable of generating CSI. In particular, CSI cannot be generated by evolutionary
computation. Dembski asserts that CSI is present in intelligent causes and in the flagellum of Escherichia coli, and concludes that neither have natural explanations. In this paper we examine
Dembski’s claims, point out significant errors in his reasoning, and conclude that there is no reason to accept his assertions.
- The Mathematical Association of America, Monthly , 2002
"... there laws of randomness? These old and deep philosophical questions still stir controversy today. Some scholars have suggested that our difficulty in dealing with notions of randomness could be
gauged by the comparatively late development of probability theory, which had a ..."
Cited by 4 (1 self)
Add to MetaCart
Are there laws of randomness? These old and deep philosophical questions still stir controversy today. Some scholars have suggested that our difficulty in dealing with notions of randomness could be
gauged by the comparatively late development of probability theory, which had a
"... We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin’s universal distribution to identify evidence in support of or in contravention of the
claim that the world is algorithmic in nature. To this end we have undertaken a statistical comparison of the ..."
Cited by 4 (4 self)
Add to MetaCart
We propose a test based on the theory of algorithmic complexity and an experimental evaluation of Levin’s universal distribution to identify evidence in support of or in contravention of the claim
that the world is algorithmic in nature. To this end we have undertaken a statistical comparison of the frequency distributions of data from physical sources on the one hand– repositories of
information such as images, data stored in a hard drive, computer programs and DNA sequences–and the frequency distributions computing devices such as Turing machines, cellular automata and Post Tag
systems. Statistical correlations were found and their significance measured.
- S.E.E.D. Journal
"... A model is one of the most fundamental concepts: it is a formal and generalized explanation of a phenomenon. Only with models we can bridge the particulars and predict the unknown. Virtually all
our intellectual work turns around finding models, evaluating models, using models. Because models are so ..."
Cited by 1 (1 self)
Add to MetaCart
A model is one of the most fundamental concepts: it is a formal and generalized explanation of a phenomenon. Only with models we can bridge the particulars and predict the unknown. Virtually all our
intellectual work turns around finding models, evaluating models, using models. Because models are so pervasive, it makes sense to take a look at modelling itself. We will approach this problem, of
course, by
- Proceedings of the Fourth IEEE International Workshop on Information Assurance (IWIA’06), 2006
"... Zero-day attacks, new (anomalous) attacks exploiting previously unknown system vulnerabilities, are a serious threat. Defending against them is no easy task, however. Having identified “degree
of system knowledge” as one difference between legitimate and illegitimate users, theorists have drawn on i ..."
Cited by 1 (0 self)
Add to MetaCart
Zero-day attacks, new (anomalous) attacks exploiting previously unknown system vulnerabilities, are a serious threat. Defending against them is no easy task, however. Having identified “degree of
system knowledge” as one difference between legitimate and illegitimate users, theorists have drawn on information theory as a basis for intrusion detection. In particular, Kolmogorov complexity (K)
has been used successfully. In this work, we consider information distance (Observed K − Expected K) as a method of detecting system scans. Observed K is computed directly, Expected K is taken from
compression tests shared herein. Results are encouraging. Observed scan traffic has an information distance at least an order of magnitude greater than the threshold value we determined for normal
Internet traffic. With 320 KB packet blocks, separation between distributions appears to exceed 4σ. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=553113","timestamp":"2014-04-19T00:11:41Z","content_type":null,"content_length":"35563","record_id":"<urn:uuid:6bad6d7a-6024-4c54-a971-28124185b2c2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00551-ip-10-147-4-33.ec2.internal.warc.gz"} |
Experimental mathematics and mathematical physics
Bailey, David H.; Borwein, Jonathan M.; Broadhurst, David and Zudilin, Wadim (2010). Experimental mathematics and mathematical physics. In: Amdeberhan, Tewodros; Medina, Luis A. and Moll, Victor H.
eds. Gems in Experimental Mathematics. Contemporary Mathematics, 517. Providence, RI, USA: American Mathematical Society, pp. 41–58.
Full text available as:
PDF (Accepted Manuscript) - Download (605Kb)
One of the most effective techniques of experimental mathematics is to compute mathematical entities such as integrals, series or limits to high precision, then attempt to recognize the resulting
numerical values. Recently these techniques have been applied with great success to problems in mathematical physics. Notable among these applications are the identification of some key
multi-dimensional integrals that arise in Ising theory, quantum field theory and in magnetic spin theory.
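A toy version of the technique the abstract describes (my own illustration, not taken from the chapter; it assumes Python with the mpmath library) computes an integral to high precision and then recognizes the value:

# Evaluate an integral to 40 digits, then try to recognize the constant.
from mpmath import mp, quad, log, pi, nstr

mp.dps = 40
val = quad(lambda x: log(x)**2 / (1 + x**2), [0, 1])
print(nstr(val / pi**3, 15))  # prints 0.0625 = 1/16, i.e. val = pi^3/16

The hard multi-dimensional Ising and QFT integrals mentioned above are identified the same way in spirit, only with far higher precision and integer-relation (PSLQ) searches over larger bases of candidate constants.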
Item Type: Book Chapter
Copyright Holders: 2010 American Mathematical Society
ISBN: 0-8218-4869-0, 978-0-8218-4869-2
Academic Unit/Department: Science > Physical Sciences
Related URLs:
Item ID: 24730
Depositing User: Colin Smith
Date Deposited: 18 Nov 2010 09:34
Last Modified: 25 Nov 2012 03:37
URI: http://oro.open.ac.uk/id/eprint/24730
| {"url":"http://oro.open.ac.uk/24730/","timestamp":"2014-04-20T08:56:27Z","content_type":null,"content_length":"27433","record_id":"<urn:uuid:f3fe2497-ba81-41a2-b115-8bddc8ffbdef>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
In H5 the parser don't "like" my formulas. What's wrong with them: vin*(1-1/sqrt(1+2K*RS*(VIN-VT)))) and: (vIN-VT)/RS + (1/K*RS^2)*(1-sqrt(1+2K*RS*(vIN-VT)))
• one year ago
| {"url":"http://openstudy.com/updates/5079710be4b0ed1dac512ebd","timestamp":"2014-04-20T03:21:07Z","content_type":null,"content_length":"34781","record_id":"<urn:uuid:1fb22747-0c9a-4106-a80a-27fc20e4b865>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Have Students Make and Test Conjectures
It is through the process of having students make and test conjectures that higher levels of reasoning and more complex learning will occur.
Suggestions for using the Make and Test Conjecture Method
Grab a student's attention by presenting them with a thought provoking research question.
• Engage the students by having them make a prediction(s) about possible outcomes to this question and explain and share their reasoning.
• Have students collect, access, or simulate data to answer the research question.
• Have students analyze the data to see possible data-based answers to the research question.
• Create disequilibrium by having students compare their prediction(s) with actual outcomes.
• Promote discussions that encourage students to come up with explanations for the predicted and actual outcomes, in order to strengthen associations between concepts and develop the students' reasoning abilities.
Engaging Students
Adding the "make and test conjectures" method to an activity can be as simple as asking students to first think about a question before examining the data. In this way students become engaged with
reasoning about the data and develop an interest in seeing the resulting data. It is important to have the students explain their reasons for the conjectures (often predictions) and then later try to
verbally explain why they turned out to be correct or incorrect.
Allan Rossman and Beth Chance (1998) utilized the method of making and testing conjectures in their book "Workshop Statistics: Discovery with Data and Minitab".
In one activity they ask the students to guess the number of different states a typical student at their school may have visited. The students are also to guess which states would be visited least
and which states would be visited most. They are then to guess the proportion of students at their school who have been to Europe. The students record their own personal data on these questions.
Finally the students collect the actual data for these questions from their classmates. The students are then asked to compare their estimates with the actual class data and write a sentence or two
comparing the two distributions.
In this example, students are engaged in reasoning about data and have a reason to be interested in examining the graphs of data produced by the class.
In another example Rossman and Chance (1998) ask students to guess the number of people per television set in the United States, China and Haiti for 1990. The students then have to make a prediction
about which countries will have few people per television, which will tend to have longer life expectancies, shorter life expectancies, or if there will be no relationship between televisions and
life expectancy. Then students are given the data to analyze and use to test their conjectures.
Confronting Misconceptions
Chance, delMas and Garfield (2004) use this method to develop student reasoning about sampling distributions. For example, they found it beneficial to have students confront the limitations of their
knowledge so they could correct their misconceptions and construct strong, correct connections about sampling variability and distributions. They used technology to generate simulations to test
students' predictions about how samples and sampling distributions behave; confronting some of the strong misconceptions students have about sampling and helping them construct an understanding of the
Central Limit Theorem. They used the predict/test/evaluate method to create disequilibrium in the students and then used discussions to make sure the students fully integrated the information about
the concept into their schemes.
Another example of using this method to confront misconceptions involves learning about confidence intervals. Many students do not understand what the 95% means (in a 95% confidence interval), and
there is a frequent misconception that a 95% confidence interval means that 95% of the data are in the interval. Students can be asked to predict what percentage of the data from a sample are in the
confidence interval, and what percentage of the data in a population are within a particular interval. They can then run simulations using a web applet (e.g., Sampling Words at RossmanChance.com) to
test these conjectures.
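A minimal simulation in the spirit of that activity (my own sketch in Python, not the applet itself) makes the distinction concrete:

# What the "95%" means: about 95% of the *intervals* cover the true mean;
# it is not a statement about how much of the data lies in the interval.
import random
import statistics

random.seed(1)
mu, sigma, n, trials = 50.0, 10.0, 30, 10_000
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    if m - 1.96 * se <= mu <= m + 1.96 * se:
        covered += 1
print(covered / trials)  # close to 0.95 (slightly below, since 1.96 ignores the t correction)

Each interval is only a few standard errors wide, so far fewer than 95% of the individual data values fall inside it, which is exactly the misconception the prediction step is designed to expose.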
Developing Reasoning
Cobb and McClain (2004) also used this method for developing the statistical reasoning abilities of elementary school children. For example, they had students make predictions about the effectiveness
of a new AIDS treatment. The students then examined two different sets of data related to the treatment, compared their intuitions with the real world data, and then discussed and wrote up their
analyses. Their goal was to make sure the students would view data not merely as numbers but as measures of an aspect of a situation that was relevant to the question under investigation.
Chance, B., delMas, R., & Garfield, J. (2004). Reasoning About Sampling Distributions. In D. Ben-Zvi & J. Garfield (Eds.), The Challenge of Developing Statistical Literacy, Reasoning, and Thinking.
Kluwer Academic Publishers; Dordrecht, The Netherlands.
Cobb, P., & McClain, K. (2004). Principles of Instructional Design for Supporting the Development of Student Statistical Reasoning. In D. Ben-Zvi & J. Garfield (Eds.), The Challenge of Developing
Statistical Literacy, Reasoning, and Thinking. Kluwer Academic Publishers; Dordrecht, The Netherlands.
This chapter proposes design principles for developing statistical reasoning in students.
Rossman, A. J., & Chance, B. L. (1998). The Workshop Mathematics Project: Workshop Statistics: Discovery with Data and Minitab. Springer-Verlag: New York. | {"url":"http://serc.carleton.edu/sp/cause/conjecture/how.html","timestamp":"2014-04-19T12:00:02Z","content_type":null,"content_length":"29081","record_id":"<urn:uuid:a1cf0817-fdd8-4c2d-9362-182ee83e4a18>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
derivative of log function
April 25th 2010, 07:58 PM
derivative of log function
just want to make sure i'm doing this problem right, any help is appreciated.
find derivative:
= log7 + logx^2-1
= lnx^2-1 / ln7
=ln(x+1)(x-1) / ln7
= 1/ln7 + (1/x+1 * 1/x-1)
April 25th 2010, 08:03 PM
Prove It
Please use brackets where they're needed, this is too hard to read.
Is the function
$y = \ln{(7x^2)} - 1$?
April 25th 2010, 08:04 PM
Here is the way to find the derivative of log functions:
$\frac{\text{derivative of the inside of the log}}{\text{inside of the log}}$
Is your function $\log(7x^2-1)$?
If yes, from the above rule, $\frac{\frac{d}{dx}(7x^2-1)}{7x^2-1}=\frac{14x}{7x^2-1}$
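(One caveat worth adding here: the rule as stated is for the natural log. If the log has some other base $b$, it picks up a constant factor, $\frac{d}{dx}\log_b u = \frac{u'}{u\ln b}$, which is exactly where the $\ln 7$ in the original poster's working comes from.)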
April 27th 2010, 05:30 PM
ah i knew i forgot brackets somewhere.
the original problem was | {"url":"http://mathhelpforum.com/calculus/141420-derivative-log-function-print.html","timestamp":"2014-04-21T16:34:58Z","content_type":null,"content_length":"5985","record_id":"<urn:uuid:d48937cb-41a5-47ec-954e-3d8a3a0ababf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Arizona City Statistics Tutor
Find an Arizona City Statistics Tutor
...I have a Master's degree in mathematics and 14 years of experience teaching algebra and trigonometry. I have tutored both high school and college students with success. I am a highly qualified
and state certified high school math teacher.
15 Subjects: including statistics, calculus, geometry, algebra 1
...I have had experience tutoring all of those classes with the exception of statistics; however, I am confident in my teaching abilities. I realize everyone learns differently and am very good at
explaining things multiple ways. I am also a college football player and track athlete, and have tons...
28 Subjects: including statistics, chemistry, reading, calculus
...When I began tutoring I found out that a majority of the students weren’t performing well on Arizona’s standardized test. The main reason I thought students weren’t performing well was due to a
paucity of tutors. I began to tutor students in math, reading, and writing to improve their test scores.
67 Subjects: including statistics, chemistry, English, Spanish
...I am also proficient in Calculus. I have a passion for Math and am an enthusiastic tutor with a lot of patience. I understand that Math is difficult for many students, but truly believe that I
can help anyone master this subject. (Location: NorthWest Tucson/Oro Valley) I was born and raised in Israel, so naturally, Hebrew is my first language.
13 Subjects: including statistics, calculus, algebra 2, algebra 1
...I was born and raised in Tucson, Arizona and I've spent the past 4 years tutoring local students in Math. I've taken an SAT preparation course from the Princeton Review and, I'm proud to say,
received a perfect score of 800 on the Math portion. I took every AP math course available to me in hig...
42 Subjects: including statistics, physics, calculus, geometry | {"url":"http://www.purplemath.com/arizona_city_az_statistics_tutors.php","timestamp":"2014-04-16T05:04:51Z","content_type":null,"content_length":"24109","record_id":"<urn:uuid:9a326b6f-acdc-40a2-becc-d27125d6ba9a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
[BioC] pamr Error: each class must have >1 sample
Kasper Daniel Hansen k.hansen at biostat.ku.dk
Wed Jul 28 22:17:17 CEST 2004
Dick Beyer <dbeyer at u.washington.edu> writes:
> I am having trouble with pamr.train and subsequently pamr.cv.
> In the pamr documentation, the following works:
> set.seed(120)
> x <- matrix(rnorm(1000*20),ncol=20)
> y <- sample(c(1:4),size=20,replace=TRUE)
> mydata <- list(x=x,y=y)
> mytrain <- pamr.train(mydata)
> mycv <- pamr.cv(mytrain,mydata)
> But if you change the seed, it doesn't:
> set.seed(1123)
> x <- matrix(rnorm(1000*20),ncol=20)
> y <- sample(c(1:4),size=20,replace=TRUE)
> mydata <- list(x=x,y=y)
> mytrain <- pamr.train(mydata)
> Error in nsc(data$x[gene.subset, sample.subset], y = y, proby = proby, :
> Error: each class must have >1 sample
> There is discussion in the documents (http://www-stat.stanford.edu/~tibs/PAM/Rdist/doc/readme.html) about "fragile" functions, but I have not been able to understand how to make this error go away. If anyone has had this problem or has some advice, I would be eternally grateful.
If you look at the y-vector you will notice it looks like this:
> table(y)
Hence there is only 1 sample with a class of "1". Of course this
happens when you sample 20 times from a set of 4 values. From the error
message it seems that the method requires at least two samples from
every class.
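A quick numerical check of how often this bites (my own Python illustration, not part of the original reply; the class counts mirror the example above):

# Estimate P(some class has fewer than 2 samples) when drawing 20 labels
# uniformly at random from 4 classes, as in sample(c(1:4), size=20, ...).
import random
from collections import Counter

random.seed(0)
trials, bad = 100_000, 0
for _ in range(trials):
    counts = Counter(random.choices(range(4), k=20))
    if len(counts) < 4 or min(counts.values()) < 2:
        bad += 1
print(bad / trials)  # roughly 0.1, so about one run in ten hits the error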
Possible solutions (quick solutions, I am not too familiar with pamr):
- increase the sample size, so that a class with only one sample is very unlikely
- fit the data, disregarding the single sample and using only 3 classes
Kasper Daniel Hansen, Research Assistant
Department of Biostatistics, University of Copenhagen
| {"url":"https://stat.ethz.ch/pipermail/bioconductor/2004-July/005519.html","timestamp":"2014-04-20T17:09:44Z","content_type":null,"content_length":"4815","record_id":"<urn:uuid:2ab2967b-ece8-4131-abb7-28f22abbfc89>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Martin-Steel theorem
Stephen G Simpson simpson at math.psu.edu
Wed Sep 9 13:18:12 EDT 1998
Joseph Shoenfield writes:
> By a strictly pi-0-n+1 sentence I mean one which is not pi-0-n or
> sigma-0-n.
I still don't understand. By adding dummy quantifiers, it is trivial
to convert a Pi^0_1 sentence to a logically equivalent Pi^0_{n+1}
sentence which is not Pi^0_n or Sigma^0_n.
You could avoid this difficulty by talking about sentences up to
equivalence over ZFC or something of the sort, but I still don't see
the relevance of this.
> I stated this conjecture only because you demanded one,
> but I would really rather return to my original general statement:
OK, let's drop the discussion of your conjecture, which makes no sense
to me anyway.
> one should look for a result which relates the position of an
> undecidable statement in the arithmetical or analytic hierarchy and
> the number and kind of large cardinals needed to prove it.
Why? I don't see that such a relationship would have a bearing on any
important f.o.m. issue. It seems *much* more important to find good
examples of finite combinatorial statements that require large
cardinals to prove them.
> I have always been puzzled as to why you considered the particular
> result of Harvey such a key result in the completeness program.
The reason I view Harvey's independence result as key is that it is
the state of the art vis a vis the program that I mentioned, i.e. to
extend the incompleteness phenomenon into finite combinatorics, or
more specifically, to find finite combinatorial statements which are
independent of ZFC, or ZFC plus large cardinals. By state of the art
I mean the best result that is known at the present time.
> I find your challenge to explain why I value the Steel-Martin
> theorem so highly not only reasonable but welcome, since it will
> afford me a chance to express some thoughts on judging mathematical
> results which I have had recently. I hope to meet the challenge
> sometime soon.
I'm looking forward to it. I wonder how you are going to express your
thoughts on these matters without using any informal concepts.
-- Steve
| {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-September/002089.html","timestamp":"2014-04-17T12:32:05Z","content_type":null,"content_length":"4439","record_id":"<urn:uuid:292a7960-e3dd-4125-bcaa-fca0a526141a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
1996 Joint Mathematics Meetings
AMS Meeting Program by Day
Current as of Tuesday, April 12, 2005 15:09:27
1996 Joint Mathematics Meetings
Orlando, FL, January 10-13, 1996
Meeting #908
Associate secretaries:
Lance W Small, AMS lwsmall@ucsd.edu
Donovan H Van Osdol, MAA dv@christa.unh.edu
Thursday January 11, 1996
• Thursday January 11, 1996, 8:00 a.m.-12:00 p.m.
AMS-MAA Special Session on Research in Undergraduate Mathematics Education, II
Annie Selden, Tennessee Technological University
John Selden, MERC
• Thursday January 11, 1996, 8:00 a.m.-12:00 p.m.
AMS Special Session on Knot Theory, II
Tim D. Cochran, Rice University
□ 8:00 a.m.
Stability of lower central series of compact $3$-manifold groups.
Tim D. Cochran*, Rice University
Kent Orr, Indiana University, Bloomington
□ 8:30 a.m.
Three manifold invariants and finite dimensional Hopf algebras.
Louis H. Kauffman*, University of Illinois, Chicago
□ 9:00 a.m.
Multiple strong fusions of boundary links.
Paul A. Bellis*, Rice University
□ 9:30 a.m.
□ 10:10 a.m.
The Kauffman-Radford-Hennings invariant for a general quantum group.
Stephen F. Sawin*, Massachusetts Institute of Technology
□ 10:40 a.m.
Links, Bousfield-Kan completions and closures of groups.
Tim D. Cochran, Rice University
Kent Orr*, Indiana University, Bloomington
□ 11:10 a.m.
Link homotopy and the relative slice problem.
Vjacheslav S. Krushkal*, University of California at San Diego, La Jolla
□ 11:40 a.m.
Vassiliev invariants of two component links and the Casson-Walker invariant.
Paul A. Kirk*, Indiana University, Bloomington
Charles Livingston, Indiana University, Bloomington
• Thursday January 11, 1996, 8:00 a.m.-11:50 a.m.
AMS Special Session on Algebraic Groups and Invariant Theory, II
Amassa C. Fauntleroy, North Carolina State University
Aloysius G. Helminck, North Carolina State University
□ 8:00 a.m.
Rings of covariants for finite coregular groups.
Harold E.A. Campbell, Queen's University
Ian Hughes, Queen's University
Robert James Shank, Queen's University
David L. Wehlau*, Royal Military College
□ 8:30 a.m.
Centralizers of locally nilpotent derivations.
David R. Finston*, New Mexico State University, Las Cruces
Sebastian Walcher, Technical University of Munich, Germany
□ 9:00 a.m.
Quantization and invariant theory of nilpotent orbits.
Ranee Kathryn Brylinski*, Pennsylvania State University, University Park
Bertram Kostant, Massachusetts Institute of Technology
□ 9:30 a.m.
□ 10:00 a.m.
Generalized Capelli identities.
Friedrich Knop*, Rutgers University, New Brunswick
Siddhartha Sahi, Rutgers University, New Brunswick
□ 10:30 a.m.
□ 11:00 a.m.
Variation of geometric invariant theory quotients.
Yi Hu*, University of Michigan, Ann Arbor
□ 11:30 a.m.
Dual to the McKay correspondence.
Jean-Luc Brylinski*, Pennsylvania State University, University Park
• Thursday January 11, 1996, 8:00 a.m.-12:00 p.m.
AMS Special Session on Multidimensional Complex Dynamics, III
John Hamal Hubbard, Cornell University
Ralph W. Oberste-Vorth, University of South Florida
□ 8:00 a.m.
Remarks on multidimensional algebraic complex dynamics.
Peter Papadopol*, Grand Canyon University
□ 8:50 a.m.
The topology of generalized H\'enon mappings.
John Hamal Hubbard*, Cornell University
□ 9:40 a.m.
On ``M\"obius'' transformations of the $2 \times 2$ Siegel upper half plane.
Shmuel Friedland, University of Illinois, Chicago
Pedro Freitas*, University of Illinois, Chicago
□ 10:30 a.m.
On some problems of complex dynamics in several variables.
Shmuel Friedland*, University of Illinois, Chicago
□ 11:20 a.m.
• Thursday January 11, 1996, 8:00 a.m.-11:50 a.m.
AMS Special Session on Nonselfadjoint Operator Algebras and Their Applications, II
Timothy D. Hudson, East Carolina University
Elias G. Katsoulis, East Carolina University
• Thursday January 11, 1996, 8:00 a.m.-10:50 a.m.
AMS Special Session on Commutative Algebra, III
Craig L. Huneke, Purdue University, West Lafayette, and University of Michigan, Ann Arbor
Gennady Lyubeznik, University of Minnesota, Minneapolis
• Thursday January 11, 1996, 8:00 a.m.-12:00 p.m.
AMS Special Session on Computational Harmonic Analysis and Approximation Theory, III
N. K. Govil, Auburn University, Auburn
Richard A. Zalik, Auburn University, Auburn
• Thursday January 11, 1996, 8:00 a.m.-10:00 a.m.
MAA Minicourse \#11: Part B
Earth math: Applications of precalculus mathematics to environmental issues.
Christopher Schaufele, Kennesaw College
Nancy Zumoff, Kennesaw College
• Thursday January 11, 1996, 8:00 a.m.-10:00 a.m.
MAA Minicourse \#12: Part A
The use of symbolic computation in probability and statistics.
Zaven Karian, Denison University
Elliot Tanis, Hope College
• Thursday January 11, 1996, 8:00 a.m.-11:55 a.m.
MAA Session on Planning Reformed Calculus Programs: Experiences and Advice, I
Martin E. Flashman, Humboldt State University
• Thursday January 11, 1996, 8:00 a.m.-11:50 a.m.
MAA Session on Creating an Active Learning Environment: Preparing Pre-service Teachers, I
Hubert J. Ludwig, Ball State University
Kay Meeks Roebuck, Ball State University
□ 8:00 a.m.
Welcome; Introductory Remarks; Organizational Comments
□ 8:05 a.m.
Computer applications in the teaching of mathematics--Version $L$.
Hubert J. Ludwig*, Ball State University
□ 8:20 a.m.
Patterns, functions, and recursion for the middle school teacher.
Mary M. Sullivan*, Curry College
Connie M. Yarema, East Texas State University
□ 8:35 a.m.
Carousel numbers---a lead-in to number theory.
Gary B. Klatt*, University of Wisconsin, Whitewater
□ 8:50 a.m.
MathKit lessons in a ``teaching methods'' course.
Louise McNertney Berard*, Wilkes University
□ 9:05 a.m.
Using technology to teach mathematics: A course for pre-service middle and secondary school teachers.
Michael B. Fiske*, Saint Cloud State University
□ 9:20 a.m.
Creating an active learning environment: Preparing pre-service teachers with technology and discovery lessons at North Park College.
Leona L. Mirza*, North Park College
□ 9:30 a.m.
Technology training of pre-service teachers.
Cathleen Maria Zucco*, LeMoyne College
□ 9:50 a.m.
Open-ended logo projects---A natural active learning environment.
Susann M. Mathews*, Wright State University, Dayton
□ 10:10 a.m.
Teaching and learning mathematics with spreadsheets.
Kay I. Meeks Roebuck*, Ball State University
□ 10:25 a.m.
A course in mathematics and technology for prospective teachers.
Vincent P. Schielack, Jr.*, Texas A & M University, College Station
□ 10:40 a.m.
A teacher in-service course on technology.
Carl R. Spitznagel*, John Carroll University
□ 10:55 a.m.
$NUMBERS$---an interactive tutorial program.
Kenneth E. Thomas*, Andrews University
□ 11:10 a.m.
Technology as a tool: Modeling for preservice teachers.
Blake Ellis Peterson*, Oregon State University
□ 11:25 a.m.
Creating an active learning environment: Preparing pre-service teachers using faculty development workshops.
William P. Fox*, United States Military Academy
□ 11:40 a.m.
Ideas and idea sources for creating active learning environments for prospective elementary teachers.
Dale R. Oliver*, Humboldt State University
Phyllis Z. Chinn, Humboldt State University
• Thursday January 11, 1996, 8:00 a.m.-11:55 a.m.
MAA Session on The Scholarship of Humanistic Mathematics, I
Joan Countryman, The Lincoln School, Providence, Rhode Island
Harald M. Ness, University of Wisconsin Centers-Fond du Lac
Alvin M. White, Harvey Mudd College
• Thursday January 11, 1996, 8:30 a.m.-11:20 a.m.
AMS Special Session on Recursive and Feasible Mathematics, III
Douglas Cenzer, University of Florida
Jeffrey B. Remmel, University of California at San Diego
• Thursday January 11, 1996, 8:30 a.m.-11:50 a.m.
AMS Special Session on Differential Geometry and Mathematical Relativity, II
Gregory J. Galloway, University of Miami, Coral Gables
• Thursday January 11, 1996, 9:00 a.m.-9:50 a.m.
AWM Emmy Noether Lecture
On some homogenization problems for differential operators.
Olga A. Oleinik*, Moscow State University, Moscow, Russia
• Thursday January 11, 1996, 9:00 a.m.-11:50 a.m.
AMS Special Session on Analytic Methods in Several Complex Variables, II
F. Michael Christ, University of California, Los Angeles
• Thursday January 11, 1996, 9:00 a.m.-10:00 a.m.
JPBM Session
Taking advantage of Math Awareness Week (MAW).
Gerald J. Porter, University of Pennsylvania
Richard H. Herman, Joint Policy Board for Mathematics
Kathleen Holmay, Joint Policy Board for Mathematics
• Thursday January 11, 1996, 9:00 a.m.-10:00 a.m.
AMS Electronic Products and Services Presentation
e-MATH on the World Wide Web.
Wendy A. Bucci, American Mathematical Society
Ralph E. Youngen, American Mathematical Society
• Thursday January 11, 1996, 10:05 a.m.-10:55 a.m.
MAA Invited Address
Creating opportunities for minorities in mathematics.
Etta Z. Falconer*, Spelman College
• Thursday January 11, 1996, 11:10 a.m.-12:00 p.m.
AMS Invited Address
Ordinary differential equations which generate all knots and links.
Philip John Holmes*, Princeton University
Robert W. Ghrist, Cornell University, Ithaca
• Thursday January 11, 1996, 1:00 p.m.-2:00 p.m.
AMS Colloquium Lectures: Lecture II
Modular forms, elliptic curves and Galois representations.
Andrew J. Wiles*, Princeton University
• Thursday January 11, 1996, 2:15 p.m.-3:05 p.m.
MAA Invited Address
Vector fields, flows and invariant sets.
Krystyna M. Kuperberg*, Auburn University, Auburn
• Thursday January 11, 1996, 2:15 p.m.-4:05 p.m.
AMS Special Session on Analytic Methods in Several Complex Variables, III
F. Michael Christ, University of California, Los Angeles
• Thursday January 11, 1996, 2:15 p.m.-4:10 p.m.
AMS Special Session on Knot Theory, III
Tim D. Cochran, Rice University
• Thursday January 11, 1996, 2:15 p.m.-4:05 p.m.
AMS Special Session on Diophantine Problems From Different Perspectives, I
Henri Rene Darmon, McGill University
Andrew J. Granville, University of Georgia
• Thursday January 11, 1996, 2:15 p.m.-4:15 p.m.
AMS Special Session on Algebraic Groups and Invariant Theory, III
Amassa C. Fauntleroy, North Carolina State University
Aloysius G. Helminck, North Carolina State University
• Thursday January 11, 1996, 2:15 p.m.-4:05 p.m.
AMS Special Session on Differential Geometry and Mathematical Relativity, III
Gregory J. Galloway, University of Miami, Coral Gables
• Thursday January 11, 1996, 2:15 p.m.-4:05 p.m.
AMS Special Session on Nonselfadjoint Operator Algebras and Their Applications, III
Timothy D. Hudson, East Carolina University
Elias G. Katsoulis, East Carolina University
• Thursday January 11, 1996, 2:15 p.m.-4:15 p.m.
MAA Minicourse \#10: Part B
Mathematical algorithms, models, and graphic representations using spreadsheets.
Deane E. Arganbright, University of Papua New Guinea
Erich Neuwirth, University of Vienna
Robert S. Smith, Miami University, Oxford
• Thursday January 11, 1996, 2:15 p.m.-4:15 p.m.
MAA Minicourse \#5: Part B
Business calculus: A new real-data/model-building approach.
Iris B. Feta, Clemson University
John L. Kenelly, Clemson University
Donald LaTorre, Clemson University
• Thursday January 11, 1996, 2:15 p.m.-4:15 p.m.
MAA Minicourse \#7: Part B
The historical development of the foundations of mathematics.
Robert L. Brabenec, Wheaton College
• Thursday January 11, 1996, 2:15 p.m.-4:15 p.m.
MAA Minicourse \#9: Part B
Calculus for the 21st century.
Lawrence C. Moore, Duke University
David A. Smith, Duke University
• Thursday January 11, 1996, 2:15 p.m.-3:55 p.m.
MAA Session on Standards for Introductory College Mathematics Courses Before Calculus, II
Gregory D. Foley, Sam Houston State University
Jon Wilkin, Northern Virginia Community College
• Thursday January 11, 1996, 2:15 p.m.-4:10 p.m.
MAA Session on Chaotic Dynamics and Fractal Geometry, II
Denny Gulick, University of Maryland, College Park
Jon W. Scott, Montgomery College
• Thursday January 11, 1996, 2:15 p.m.-4:10 p.m.
MAA Session on Constructivism Across the Curriculum, II
David M. Mathews, Central Michigan University
Keith E. Schwingendorf, Purdue University, North Central
• Thursday January 11, 1996, 2:15 p.m.-4:10 p.m.
MAA Session on Active Learning Strategies for Statistics and Probability, II
Mary R. Parker, Austin Community College
Allan J. Rossman, Dickinson College
• Thursday January 11, 1996, 2:15 p.m.-3:45 p.m.
JPBM Session
What the media look for in a math story.
Carol S. Wood, Wesleyan University
Richard H. Herman, Joint Policy Board for Mathematics
Kathleen Holmay, Joint Policy Board for Mathematics
• Thursday January 11, 1996, 2:15 p.m.-4:10 p.m.
MAA Committee on the Teaching of Undergraduate Mathematics Panel Discussion
Making teaching more public.
Linda H. Boyd, DeKalb Community College
Steven Dunbar, University of Nebraska, Lincoln
Bonnie Gold, Wabash College
James R. C. Leitzel, University of Nebraska, Lincoln
Miriam Leiva, University of North Carolina, Charlotte
Eli Passow, Temple University, Philadelphia
• Thursday January 11, 1996, 2:15 p.m.-3:15 p.m.
Young Mathematicians Network Panel Discussion
The training of teaching assistants.
Kevin E. Charlwood, University of Wisconsin, Milwaukee
Edward F. Aboufadel, Southern Connecticut State University
Suzanne M. Lenhart, University of Tennessee-Knoxville
Michael J. McAsey, Bradley University
• Thursday January 11, 1996, 3:00 p.m.-4:00 p.m.
AMS Electronic Products and Services Presentation
The Internet and the World Wide Web.
Wendy A. Bucci, American Mathematical Society
Ralph E. Youngen, American Mathematical Society
• Thursday January 11, 1996, 3:15 p.m.-4:10 p.m.
MAA Panel Discussion
Case studies in effective undergraduate mathematics programs.
Alan C. Tucker, State University of New York, Stony Brook
Linda H. Boyd, DeKalb Community College
David J. Lutzer, College of William & Mary
• Thursday January 11, 1996, 3:20 p.m.-4:10 p.m.
AMS Invited Address
Geometric graphs.
Janos Pach*, Hungarian Academy of Science, Hungary
• Thursday January 11, 1996, 7:00 p.m.-10:00 p.m.
MAA Session on Innovations in Teaching Linear Algebra, II
Donald R. LaTorre, Clemson University
David C. Lay, University of Maryland, College Park
Steven J. Leon, University of Massachusetts, Dartmouth
• Thursday January 11, 1996, 7:00 p.m.-10:00 p.m.
MAA Session on The Uses of History in the Teaching of Mathematics
Florence D. Fasanelli, Mathematical Association of America
Victor J. Katz, University of District of Columbia
V. Frederick Rickey, Bowling Green State University
□ 7:00 p.m.
Introduction by V. Frederick Rickey
□ 7:05 p.m.
History and the reform of undergraduate mathematics.
Alphonse Buccino*, Contemporary Communications, Inc.
□ 7:25 p.m.
Using original sources in class.
Gary S. Stoudt*, Indiana University of Pennsylvania
□ 7:40 p.m.
Galileo and Maria Gaetana Agnesi examine infinitesimals.
Fredric J. Zerla*, University of South Florida
□ 8:00 p.m.
De Beaune's proof of the Pythagorean Theorem.
C. Edward Sandifer*, Western Connecticut State University
Cynthia B. Gubitose, Western Connecticut State University
□ 8:20 p.m.
Euler's discovery of the gamma function in a math history course.
Stacy G. Langton*, University of San Diego
□ 8:40 p.m.
Problems and puzzles in history of mathematics research.
Kim Plofker*, Brown University
□ 8:55 p.m.
Benjamin Banneker and double position.
Leon M. Cohen*, Hampden-Sydney College
Kristen Alynne Haring, University of North Carolina, Chapel Hill
Joyce Janiga, Paradise Valley Community College
Kenneth L. Jones, American University
Laura B. Smith, North Carolina Central University
Marian W. Smith, Florida A&M University
□ 9:15 p.m.
The report of the Committee of Fifteen in $1912$: How it affected the secondary curriculum.
Steve Butcher, University of Central Arkansas
Charles B. Pierre, Clark Atlanta University
Harry B. Coonce*, Mankato State University
□ 9:30 p.m.
The historical development of mathematics examinations.
Lilian Metlitzky, California Polytechnic State University
Agnes Tuska*, California State University, Fresno
□ 9:45 p.m.
The early history of the Cornell University Mathematics Department.
Gary G. Cochell*, Culver-Stockton College
• Thursday January 11, 1996, 7:00 p.m.-9:00 p.m.
MAA Committee on Participation of Women Presentations
Are we there yet? Encouraging women in mathematics.
Carole B. Lacampagne, U.S. Department of Education
• Thursday January 11, 1996, 7:00 p.m.-10:00 p.m.
MAA Poster Session
Projects under the Instrumentation and Laboratory Improvement-Leadership in Laboratory Development programs.
Earl Fife, Calvin College, Grand Rapids, Michigan
• Thursday January 11, 1996, 7:00 p.m.-9:00 p.m.
Reunion for Calculus Reform Workshop Participants
Donald B. Small, U. S. Military Academy
• Thursday January 11, 1996, 7:30 p.m.-9:00 p.m.
MAA Presentation
Department chairs session: Encouraging departmental change.
James R.C. Leitzel, University of Nebraska, Lincoln
Inquiries: meet@ams.org | {"url":"http://jointmathematicsmeetings.org/meetings/national/jmm/1908_program_thursday.html","timestamp":"2014-04-16T07:36:03Z","content_type":null,"content_length":"85133","record_id":"<urn:uuid:1f17b762-f4c1-4015-98b4-5207c3096b65>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Frank Quinn
Most of my work is concerned with topology of manifolds, and topics in algebra, algebraic topology, and controlled topology needed for this. A particular current interest is the Farrell-Jones
conjecture in K-theory. See the arXiv for papers, or my Mathematics page for a partial selection.
For essays on mathematics education, computer testing, and related topics see my Education page.
I am the primary developer of materials for Math 1206c, the computer-tested form of the second semester of the first-year calculus for science and engineering.
I am the organizer of the EduTeX Working Group of the TeX Users Group. This is concerned with educational testing software.
Other Materials
Essays and other materials from an involvement in electronic publication in the mid 1990s can be found on the page Electronic Publication.
Materials on the history and structure of mathematics are on the page History and Nature of Mathematics.
Contact Information
Department of Mathematics
476 McBryde Hall
Blacksburg, VA 24061-0123
(540) 231-5960 (Fax)
E-Mail: quinn@math.vt.edu
Last Date Modified - November 10, 2010 | {"url":"http://www.math.vt.edu/people/quinn/","timestamp":"2014-04-20T15:51:30Z","content_type":null,"content_length":"2526","record_id":"<urn:uuid:59134fbe-3536-42fa-a3ed-2a59c65acf4f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00231-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chevy Chs Vlg, MD
Find a Chevy Chs Vlg, MD Math Tutor
...I look forward to working with you! Note: If you would like tutoring for a standardized test, I would prefer that you first buy a study guide that includes practice tests. I have a Master's
degree in Chemistry from American University, and I provided independent tutoring for chemistry for years t...
11 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2
...I have both classroom teaching and private tutoring experiences. Between 2002 and 2006, I was a lecturer at the Ethiopian Civil Service University and during that time I taught more than 12
different engineering courses for undergraduate urban engineering students. Between 2006 and 2011 I was a...
14 Subjects: including statistics, GED, algebra 1, algebra 2
...I have tutored math for eight years. I have worked with middle school and early high school students in prealgebra. I am very comfortable working on conceptual issues, as well as personal
challenges in learning math.
11 Subjects: including algebra 1, algebra 2, prealgebra, public speaking
My name is Bekah and I graduated from BYU with a degree in Math Education. While I was in college, I was a professor's assistant for 3 years in a calculus class, which included me lecturing twice
a week, and working one-on-one with students. After graduating, I taught high school math for one year...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have taught classes at Cornell University, the State University of New York, New York University, Columbia University, the University of Maryland, and now at Marymount University in
Arlington, Virginia.I have studied Algebra as an undergraduate at the Massachusetts Institute of Technology, and ...
39 Subjects: including algebra 1, algebra 2, calculus, chemistry
Related Chevy Chs Vlg, MD Tutors
Chevy Chs Vlg, MD Accounting Tutors
Chevy Chs Vlg, MD ACT Tutors
Chevy Chs Vlg, MD Algebra Tutors
Chevy Chs Vlg, MD Algebra 2 Tutors
Chevy Chs Vlg, MD Calculus Tutors
Chevy Chs Vlg, MD Geometry Tutors
Chevy Chs Vlg, MD Math Tutors
Chevy Chs Vlg, MD Prealgebra Tutors
Chevy Chs Vlg, MD Precalculus Tutors
Chevy Chs Vlg, MD SAT Tutors
Chevy Chs Vlg, MD SAT Math Tutors
Chevy Chs Vlg, MD Science Tutors
Chevy Chs Vlg, MD Statistics Tutors
Chevy Chs Vlg, MD Trigonometry Tutors
Nearby Cities With Math Tutor
Bethesda, MD Math Tutors
Brentwood, MD Math Tutors
Cabin John Math Tutors
Chevy Chase Math Tutors
Chevy Chase Village, MD Math Tutors
Colmar Manor, MD Math Tutors
Cottage City, MD Math Tutors
Fort Myer, VA Math Tutors
Garrett Park Math Tutors
Kensington, MD Math Tutors
Martins Add, MD Math Tutors
Martins Additions, MD Math Tutors
N Chevy Chase, MD Math Tutors
Somerset, MD Math Tutors
West Mclean Math Tutors | {"url":"http://www.purplemath.com/Chevy_Chs_Vlg_MD_Math_tutors.php","timestamp":"2014-04-16T13:34:44Z","content_type":null,"content_length":"24210","record_id":"<urn:uuid:d2241bf9-2a66-4f70-b6f7-35f929aaf7e2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
38 loaders - I need your input
April 22, 2006, 11:37 AM
Hey guys... last week I reloaded my first 200 rounds of .45 ACP on my 550.
This lead free .38 load is more complicated and there is very little data out there for lead free stuff, so please take a look and tell me what you think.
Let me get this part out of the way - I'm going to Thunder Ranch in about a month. They mandate lead free ammo. This is not an option, and factory lead free .38 is a unicorn.
That said, the formula -
Mixed .38 brass, all nickel plated.
105 Gr Sinterfire lead free bullets - Note length is .676"
Titegroup - 3.5 gr - selected b/c it is supposed to work well with LF primers & burn clean
CCI primers - #500 small pistol Lead Free
Ammo is loaded to 1.53" OAL.
The Sinterfire bullets have a "shoulder". From the base to the drop-off of the .357" diameter is .276" in length. It drops to .347" and tapers to the HP from there.
So, the length of the .38 cases (avg) 1.155" plus the .676" 105 gr minus the distance to the shoulder drop off = 1.55", but I couldn't get it to that exactly... the test run of 24 rounds ammo ranges
from 1.528"-1.533".
FWIW frangiblebullets.com suggests charges based on a formula that takes the actual weight of the bullet (105 gr), adds the weight of the equivalent-length lead bullet (147 gr), and divides by two, giving 126, so I pulled some 125 gr data and 3.5 gr seemed like a good starting point.
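That averaging rule is trivial to sanity-check (a throwaway sketch of my own, not from the thread or the vendor):

# Surrogate-weight rule for picking load data for light frangible bullets:
# average the actual weight with the weight of an equal-length lead bullet.
def surrogate_weight(actual_gr: float, equiv_length_lead_gr: float) -> float:
    return (actual_gr + equiv_length_lead_gr) / 2

print(surrogate_weight(105, 147))  # 126.0 -> start from ~125 gr load data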
VERY little crimp as the lead free ammo will crack if you give it too much crimp. Crimped just enough to get rid of the bell.
I'll be shooting these out of a Ruger GP100 5" and a S&W M13 .357 4".
Some Q&A intel from the frangible bullets site:
Q. In your typical JHP load in 38 SPL, what happens when you drop in a SinterFire bullet?
A. When using 4.5 gr. of Bullseye the 110 JHP velocity goes from 606 fps to 813 fps and using a 125 JHP the change is from 714 fps to 928 fps. This is because of the built-in bullet lubricant and
the reduced case capacity increasing velocity.
Q. You have loading data on your site(NOTE - NOT ANYMORE THAT I CAN FIND:(), but what do I do to develop my own loads?
A. Look at the data you use now. We have a light bullet that has a heavy bullet profile and case capacity. Use the intermediate bullet weight data you already have available. For example, in 308 and
30-06 using your favorite 150gr load, drop in our 125gr bullet. In 223, use your 55gr bullet load and drop in our 42gr bullet. In 9mm and 40 S&W, since they are high pressure rounds, our bullet is
30% lighter and therefore takes up 30% more case capacity. Go with your lighter loads using W231, Bullseye, AA#2, #5 or another fast pistol powder. The 45ACP, 38 SPL and 357 all have larger case
capacity and a little slower powder loading will work OK. Just be sure that the load will snap the 45 well enough to cycle the action.
I'm planning on trying this at the range tomorrow afternoon, so if there is any chance I'm going to, say, blow my hand off due to detonation, a warning would be appreciated :uhoh: | {"url":"http://www.thehighroad.org/archive/index.php/t-196466.html","timestamp":"2014-04-17T21:43:55Z","content_type":null,"content_length":"14308","record_id":"<urn:uuid:a1da72bc-b8da-4eec-be34-5e5566c89b42>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00372-ip-10-147-4-33.ec2.internal.warc.gz"}
| {"url":"http://openstudy.com/users/abdul_shabeer/medals","timestamp":"2014-04-18T18:55:47Z","content_type":null,"content_length":"94819","record_id":"<urn:uuid:f09ade9d-793b-4e96-9db7-42fd6df93090>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"}
Pleasant Hill, CA Trigonometry Tutor
Find a Pleasant Hill, CA Trigonometry Tutor
...I can help students understand the basic trig functions, their reciprocal functions, inverse functions, graphing sinusoids, solving triangles, proving trig identities and solving equations
involving trigonometric functions. I teach basic trigonometry at the end of the Algebra 2 classes I teach t...
13 Subjects: including trigonometry, calculus, ASVAB, geometry
...I could explain the main points precisely and concisely (Algebra is my PhD area). All have improved in their understanding and, to varying degrees, in their grades. I provide extra practice and drill
problems. I have substantial experience tutoring Calculus (including AP, AB/BC) and Multivariate Calculus.
15 Subjects: including trigonometry, calculus, GRE, algebra 1
With my unique background and education, I can work successfully with a wide variety of students who have a wide variety of "issues" with learning math. I understand where the problems are, and
how best to get past them and onto a confident path to math success. My undergraduate degree is in mathematics, and I have worked as a computer professional, as well as a math tutor.
20 Subjects: including trigonometry, calculus, statistics, geometry
...If you are completely new to Excel, I will teach you the Excel environment and elementary concepts and neat features of Excel, including entering data versus formulas, absolute versus relative
cell referencing, populating cells easily, formatting tricks, using multiple sheets, etc. Beyond the el...
23 Subjects: including trigonometry, calculus, geometry, algebra 2
...I am an expert on math standardized testing, as stated in my reviews from previous students. I have worked on thousands of these types of problems and can show your student how to do every
single one, which will dramatically increase their test scores! I can help your student ace the following standardized math tests: SAT, ACT, GED, SSAT, PSAT, ASVAB, TEAS, and more.
59 Subjects: including trigonometry, reading, chemistry, English | {"url":"http://www.purplemath.com/Pleasant_Hill_CA_Trigonometry_tutors.php","timestamp":"2014-04-20T04:11:35Z","content_type":null,"content_length":"24630","record_id":"<urn:uuid:6143c456-75dd-4ca0-980a-3153ef17375f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Analysis of Breathing in Swimming-Nerd
Warning: This is really nerdy!
One of the most confusing aspects of swimming to athletes that are not used to the water is breathing. In most sports, breathing is something we take for granted. Air is always around for us and
there for the taking. Not so when your face is planted in a swimming pool or lake! In the water, you must actively work to breathe, and this work will take energy and slow you down. World class 50
meter swimmers are well aware of this fact. Many of them take 2 breaths or less for their entire race!
It is true that triathlon is a different sport than 50 meter sprint swimming. Taking too few breaths in the open water will reduce your oxygen supply, which can cause you to tire rapidly. Of course,
taking too many breaths causes you to slow down. How do you know if you are breathing too little or too much? This experiment seeks to broaden understanding of how much time is lost with every
breath. With this information, you can decide an appropriate breathing pattern for yourself. As with all of my experiments, the results are only as valid as the assumptions and experimental setup,
and those are listed below. If you just want to see the conclusions and recommendations, skip to the bottom of this article.
Assumptions in the Experiment to determine time loss while breathing:
1. The metric in this experiment is time lost per breath taken over a swimming distance of 25 yards. The 25 yard distance was selected because it is short enough to do a large volume of tests within
the same study. A series of lengths were swum where the number of strokes per breath was controlled. Two, four and eight strokes per breath were investigated.
2. Freestyle was the stroke utilized throughout this experiment. The athlete was free to determine their own pulling and flutter kicking technique. Technique was held consistent throughout the experiment.
3. The response variable in this experiment is raw sprint time. Time was selected because the expected loss per breath is expected to be small (less than 0.10 seconds per breath). Such a small
differential requires a precise measurement system, which can be obtained easier by a stopwatch compared to counting heart beats per minute.
4. Each 25 yard data point was swum as fast as possible with the prescribed number of strokes per breath. An assumption made is that the time lost to breathe at high speeds (sprinting) is equivalent
to time lost at slower speeds (in a lake). This assumption is reasonable, as the act of breathing is independent of arm or leg exertion. In other words, you work just as hard to turn your head
to breathe if you are swimming fast or slow. However, this assumption will not be validated in this experiment.
5. For each number of strokes per breath, a data set was collected when breathing just to the left side and a separate data set was collected when breathing just to the right side.
6. The differences between left and right side breathing were determined by statistical methods. If it could be concluded that the left – right difference is not relevant to the experiment, the two
data sets for each breathing requirement (left & right) would be merged and analyzed together.
7. The various strokes per breath (2, 4, and 8) were evaluated completely on both sides of breathing (left/right). Eight repetitions were taken for each of these situations. Thus, the total number
of experimental runs was 3 x 2 x 8 = 48.
8. The tests were completed in a randomized order. Prior to the start of each test, the recorder announced the breathing pattern (breathe every 2, 4 or 8) and which side to breathe on. The athlete
did not receive any further advance notice of what to do.
9. Times were obtained by the recorder on the pool deck with a stopwatch to a resolution of 0.00 seconds. The recorder started the watch when the athlete assumed a still in-water starting position
and stopped the watch when the athlete touched the opposite wall.
10. Once started, the athlete took quantity-4 underwater dolphin kicks before surfacing. This equated to approximately 5 yards of underwater swimming. This distance was sufficient to maintain speed
off the start, but no so long as to interfere with the breathing results.
11. The number of strokes taken over each length was collected. This was to ensure that the athlete was swimming in a consistent manner regardless of breathing pattern.
12. All experimental tests were performed in the same lane, same pool and same direction of length. This was done to ensure equivalent environmental conditions for each test.
13. Data was collected over a two day period where the days were consecutive. Exactly half of the experiment was completed on each day. It would have been ideal to collect all data in the same day
and session, but it was too difficult for the athlete to perform the required number of sprints all at once. Since data was collected over consecutive days at the same facility, it is not likely
to adversely affect results.
14. A single athlete (Duane Dobko) was used as the test subject for the study. The assumption made in this test is that body type and swimming ability are independent of time lost per breath. Since
different swimmers were not evaluated, this assumption could not be confirmed. The benefit to using an experienced swimmer is improved consistency, resulting in a more statistically valid result.
15. The athlete wore the same equipment (brief style swimsuit) throughout the test. Wetsuits or full-body suits were not worn at any time.
16. Between repeats, adequate time was allowed for complete rest and recovery. Such time was at the discretion of the athlete.
Overall Results
Figure 1: Results – 25 yard sprint times with various breath holding patterns
Figure 1 demonstrates that the time lost per breath is smaller than expected (0.0441 seconds per breath versus an expected 0.1 seconds). This time loss can be used to estimate the effects of various
breathing patterns in open water swimming. Results are shown in Figure 2.
Figure 2:
The values in Figure 2 were calculated assuming that the average swimmer takes 15 strokes for every 20 yards of swimming in open water, compounded over the distance in yards of each open water swim
(for example, a 2.4 mile swim is 4,224 yards), and using the amount of time lost per breath (0.0441 seconds).
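For anyone who wants to replicate these projections, the Figure 2 arithmetic reduces to a few lines (this is my own reconstruction of the stated method, not the author's actual spreadsheet):

```python
# Projected time lost to breathing over an open-water swim.
# Assumes the stated 15 strokes per 20 yards and 0.0441 s per breath.
TIME_LOST_PER_BREATH = 0.0441   # seconds, from Figure 1
STROKES_PER_YARD = 15 / 20

def projected_time_lost(distance_yards, strokes_per_breath):
    strokes = distance_yards * STROKES_PER_YARD
    breaths = strokes / strokes_per_breath
    return breaths * TIME_LOST_PER_BREATH   # total seconds lost

# Iron-distance swim: 2.4 miles is about 4,224 yards.
for spb in (2, 4, 8):
    print(spb, "strokes/breath:", round(projected_time_lost(4224, spb), 1), "s")
```

Run as written, the gap between the 2-strokes-per-breath and 8-strokes-per-breath rows comes out to about 52 seconds, matching the Iron-distance comparison below.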
Figures 1 and 2 illustrate that the time lost by taking a few extra breaths is very small. For example, if you switched from 2 strokes per breath to 8 strokes per breath for an entire 2.4 mile Iron
distance swim, your projected time would only improve by 52 seconds. For most of us, this gain would be more than offset by the increased fatigue due to lack of oxygen! The result suggests that
breathing as much as possible is the way to go, even breathing every stroke. However, a more detailed analysis suggests otherwise. The trade-off to breathing a lot lies in the corresponding increase
in variation.
Analysis of Variation
The overall consistency of swim times was evaluated with different breathing rates. The intent is to determine if different breathing patterns create more or less variable results. The variation in
strokes per breath is summarized in the Box/Whisker plot in Figure 3.
Figure 3 – Box / Whisker plots (definition of Box / Whisker on right)
Figure 3 illustrates how the variation in sprinting speed increased when the test subject increased breathing rate. The standard deviation of each data set was evaluated and shown in Figure 4.
Figure 4 – Scatter Plot, Standard Deviation of Data
As shown in Figure 4, the standard deviation (a measure of variation) increased a whopping 80% when the test subject increased the number of breaths per length from 2 to 9. The increase when the
athlete doubled the number of breaths from 2 per length to 4 per length was 32%. Important to note is that the test athlete had significant swimming experience. It is likely (though not verified in
this experiment) that the average triathlete would have an even greater increase in variation.
The large differences in variation suggest that a higher breathing frequency increases swimming speed variation. This makes sense, as head movement will tend to disrupt body positioning. This is why
most swimmers find it easier to swim evenly with a snorkel.
Any increase in variation of swimming speed carries negative consequences for the triathlete in open water. As the athlete speeds up and slows down, their body position in the water must also rise
and fall, forcing more water movement and greater energy expenditure to attain the same speed compared to a swimmer who swims more evenly and stays higher on average in the water. In other words,
when you breathe more, you must work harder and use more energy to go the same speed. This hypothesis makes logical sense. However, it could not be confirmed in this experiment.
Number of strokes taken per length
The total number of strokes taken in each experiment was counted and documented. It is important that stroke count is reasonably consistent in order to accurately compare different breathing patterns.
In other words, if the stroke count is varying in excess of 4 strokes per length, it would create a difference in the total number of breaths taken.
The maximum number of strokes taken in any 25 yard swim was 21, and minimum was 19. The average was 20 and the standard deviation was 0.6. These numbers make it sufficient to assume that the number of
strokes per length is consistent. The following relation of "strokes per breath" and "breaths per length" was established. This relationship is used throughout the experiment.
│ Stroke per Breath │ Equivalent Breaths per length │
│ 2 │ 9 │
│ 4 │ 4 │
│ 8 │ 2 │
Left Breathing versus Right Breathing
A more detailed Box / Whisker plot (Figure 5) was established in which the various breathing patterns were analyzed when the test subject breathed to the left side and to the right side.
Figure 5: Analysis of Left vs. Right breathing
Figure 5 illustrates that there is no difference in times or variation when the test subject breathed to the right side versus the left side. This result is to be expected for experienced swimmers.
However, the result may differ for athletes with less experience. The data analysis is shown in Figure 6.
Figure 6
In figure 6, the p-values and confidence intervals were calculated assuming that the left data and right data are independent variable sets (no paired comparison).
As shown in Figure 6, none of the p-values showed any significant differences (to 95% confidence). The confidence intervals ranged from plus/minus 0.15 seconds at 2 breaths per length versus plus/
minus 0.25 seconds at 9 breaths per length. These confidence intervals are sufficiently small to conclude that there is no relevant difference in time when the athlete breathes on the left side
versus the right side.
From this data, it can be concluded that it is valid to merge all left and right data together into a single data set.
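As a rough sketch of the comparison just described — an independent two-sample test with no pairing (the author does not say what software was used, and the variable names here are made up):

```python
from scipy import stats

def compare_sides(times_left, times_right, alpha=0.05):
    """times_left / times_right: the eight 25-yard sprint times per side."""
    t_stat, p_value = stats.ttest_ind(times_left, times_right)
    # p_value >= alpha means no significant left/right difference
    return p_value, p_value >= alpha
```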
Statistical Analysis of Data
The data was analyzed in order to determine if the calculated time lost per breath taken in swimming freestyle (0.0441 seconds) was estimated from statistically valid data. The summary information is
shown in Figure 7. The results show that the data gathered for this experiment is sufficient to make conclusions regarding time lost per breath.
The test for normality (Shapiro-Wilk method used) shows the p-values all in excess of 0.05. With this result, it can be concluded that the data is insufficient to prove non-normality. A much larger
sample size would be required to prove normality. Thus, the conclusion that the data is normal is not definitive but is sufficient for this analysis.
The Kurtosis value in each data set is also reasonable. In a normal distribution, the excess Kurtosis is zero. Since all excess Kurtosis values are negative and small in this experiment, it can be
concluded that the data is slightly flat, meaning that the characteristic peak associated with the middle of the bell curve is not quite as high as it should be. However, the Kurtosis values are
considered small enough that the normal distribution is assumed for all data sets in this experiment.
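In code, the two checks in this section amount to roughly the following (a sketch assuming SciPy; the original analysis tool is not named):

```python
from scipy import stats

def normality_summary(times):
    w_stat, p_value = stats.shapiro(times)   # p > 0.05: cannot reject normality
    excess_kurtosis = stats.kurtosis(times)  # ~0 for a normal distribution
    return p_value, excess_kurtosis
```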
An analysis of means may now be performed with the standard formulas associated with normality. The combined (left/right) data sets of 2 breaths per length were compared to 4 and 9 breaths per
length, respectively. All of these comparisons were performed assuming independent data sets (no paired comparison).
The results of analysis of means (See Figure 7) showed a statistically significant difference between all data comparisons (to 95% confidence). Thus, a difference between 2 breaths per length versus
4 breaths per length, and 4 breaths per length versus 9 breaths per length can be considered conclusive. The confidence intervals range from approximately plus/minus 0.2 seconds up to plus/minus 0.35
seconds. The range of confidence intervals occurred due to the increase in variability when more breaths were taken in a swimming length. Based on this analysis, it can be concluded that an average
loss of 0.0441 seconds per breath is a valid assessment.
Figure 7 – Statistical analysis summary
Overall Testing Conclusions
The experiment revealed two key findings as described in the following paragraphs.
The experiment established that there is a time loss associated with every breath you take when swimming freestyle. This loss in time is quite small, measuring only 0.0441 seconds per breath. When
considered alone, this small value of time loss suggests that triathletes should not consider the number of breaths they take in a race, and may breathe as often as possible in order to maintain
adequate oxygen supply.
However, the details of the experimental results cast doubt on this narrowly focused conclusion. The variation in the test subject's maximum swimming ability increased by 80% when breathing every two
strokes versus breathing every 8 strokes. This suggests a loss of efficiency as breathing frequency increases. It can be projected that the inefficiency will add up over the long distances of open
water swims in triathlon, increasing fatigue and decreasing total speed. As this experiment was narrowly focused on evaluating maximum speed over 25 yards, it was not possible to confirm this hypothesis.
So, what can a triathlete take away from this experiment? In my coaching clinics, I always recommend to my clients to take as many breaths as they need, but not to take any more. If you can get away
with breathing every four strokes, then that is what you should do. If you start to get oxygen depleted, it is okay to breathe every 2 strokes. You will not lose a lot of time with this decision.
However, be aware that your entire stroke will be less efficient when you breathe more. You must determine for yourself if this decrease in efficiency is going to be made up by the increase in oxygen
intake. If you feel like you are drowning or suffocating, chances are that breathing more often is going to work well for you. If there is enough air in the tank, then charge ahead and don’t look up.
Leave a Comment | {"url":"http://www.dobkanize.com/gear-testing/analysis-of-breathing-in-swimming/","timestamp":"2014-04-20T13:46:08Z","content_type":null,"content_length":"46999","record_id":"<urn:uuid:1c05fb4f-c248-4206-b4f6-8c163a937ab5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00606-ip-10-147-4-33.ec2.internal.warc.gz"} |
Centralizers of Nilpotent Elements in Semisimple Lie Algebras
Let $G$ be a connected, simply-connected, complex, semisimple Lie group with Lie algebra $\frak{g}$, and let $\xi\in\frak{g}$ be a nilpotent element. I am interested in understanding the structure of
$C_{\frak{g}}(\xi)=\{\eta\in\frak{g}:[\xi,\eta]=0\}$, $C_G(\xi)=\{g\in G:Ad_g(\xi)=\xi\}$, and $\pi_0(C_G(\xi))=C_G(\xi)/C_G(\xi)_0$. I would appreciate any references you suspect would give useful
structural information. Also, I would welcome any advice and suggestions.
rt.representation-theory lie-groups lie-algebras algebraic-groups
1 Answer
This determination of component groups goes back to Elashvili and Alexeevskii, but has been improved somewhat in a 1998 IMRN paper by Eric Sommers and a later joint paper by him and
George McNinch here. Your set-up is essentially equivalent to studying the same problem for a semisimple algebraic group and its Lie algebra in arbitrary characteristic, but good
characteristic (including 0) is essential for getting uniform results.
In particular, the situation for nilpotent elements of the Lie algebra and unipotent elements of the group is essentially the same, by Springer's equivariant isomorphism between the two
settings. The classes/orbits and centralizers correspond nicely in good characteristic.
P.S. Concerning structural information on the centralizers, you can also consult Roger Carter's 1985 book on characters of finite groups of Lie type. There he includes a lot of details
about the classes and centralizers in your question over an algebraically closed field. Since there are only finitely many unipotent classes or nilpotent orbits (the same number in good
characteristic), his tables provide a clear overview. There is less detail about exceptional types in the book by Collingwood-McGovern on nilpotent orbits, but it provides the full
Dynkin-Kostant theory over $\mathbb{C}$. Fine points of structure are also treated extensively in the newer AMS book by Martin Liebeck and Gary Seitz, in arbitrary characteristic
(including good and bad primes).
Not the answer you're looking for? Browse other questions tagged rt.representation-theory lie-groups lie-algebras algebraic-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/127410/centralizers-of-nilpotent-elements-in-semisimple-lie-algebras","timestamp":"2014-04-18T18:40:58Z","content_type":null,"content_length":"52459","record_id":"<urn:uuid:d6b0a1cf-370e-449c-91b4-2f645eb554aa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determine whether the following series converges absolutely, converges conditionally, or diverges - WyzAnt Answers
Determine whether the following series converges absolutely, converges conditionally, or diverges
Determine whether the following series converges absolutely, converges conditionally, or diverges, and by what test: $\sum_{n=1}^{\infty} \frac{(-1)^n \ln(n)}{(2.5)^n}$
Use the ratio test:
$\lim_{n\to\infty} \frac{\ln(n+1)/2.5^{\,n+1}}{\ln(n)/2.5^{\,n}} = \lim_{n\to\infty} \frac{\ln(n+1)/\ln(n)}{2.5} = \frac{1}{2.5} < 1$
Therefore, the series converges absolutely. | {"url":"http://www.wyzant.com/resources/answers/16371/determine_whether_the_following_series_is_an_absolute_convergence_conditional_convergence_or_divergent","timestamp":"2014-04-23T20:46:12Z","content_type":null,"content_length":"44636","record_id":"<urn:uuid:20a742a1-36eb-48f1-be89-a6f63efc9f90>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Long Division Animation
Re: Long Division Animation
I think I may redo it, ignoring the remainder. Then show another method where you put the remainder as the difference. That way I will have three alternatives!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=76411","timestamp":"2014-04-21T02:04:50Z","content_type":null,"content_length":"10598","record_id":"<urn:uuid:139c5610-ce50-49b4-b575-55f698d41ef2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why Webmath?
The "classic" after school math homework scenario...
A math student is stuck on a homework problem, which is part of an assignment due tomorrow. This student could be you, your child, your brother or sister, or anyone studying math. What can the
stuck student do?
□ Ask mom or dad for help. But they're working late, will come home tired, and most likely have forgotten the math they haven't studied since high school or college.
□ Ask an older brother or sister for help. But they have their own schedules and other things to do.
□ Ask a friend for help. Admitting you don't understand your homework is sometimes hard to do. Perhaps your friends are no better at math than you are.
□ Get a math tutor. Private tutors are very expensive, and usually appointments must be made. This assignment is due tomorrow, and you need help now, not "next Thursday, between 2:30 and 3:30
at the local library."
□ Try to figure it out yourself. But you have been trying! You've read over the examples and explanations in the book several times, and it just doesn't help. Your class notes don't help much
□ Don't do the assignment. NO! You want to do it! You are sitting down trying to study, and using your free time to work on homework. You want to be a good student, but you just can't figure out
how to do the homework [hey, it happens!]
Frustration and discouragement set in rather quickly. Maybe you'll take a break, watch some TV, go outside, get a snack, or go surf around on the internet. But soon it's late in the night and you
still can't figure out your math homework. What are you going to do?
Getting math help over the internet. Now.
The internet is an exciting place, filled with information. We put ourselves in the shoes of a frustrated student, and tried to find help on a 10th grade Algebra problem. We used the most popular
internet search engines, and here's what type of math help sites we were able to come up with:
□ A question and answer math data base. A site like this is a catalog of math questions asked by other students, answered by one or more math experts or tutors, and are then, in turn, posted
online in an organized question/answer format for the benefit of all. Some sites like this have grown quite large and inclusive, containing questions and answers from many branches of mathematics.
Typically there is a very reasonable one to two day turnaround on answers to questions posted.
□ A site that is able to solve one or two particular problems. In these cases, the site is the result of the authors first experiment at internet programming, and is their first CGI script or
Java applet. Most of them are very nice and quite original, but somewhat limited in what types of problems they solve.. We found sites like this that might give the factors of a polynomial,
solve an equation, or graph a sine-wave, exploiting the most up-to-date internet technologies.
□ A math tutorial site. These sites are almost like taking an online test. They have a database of questions and possible answers, usually in a multiple-choice format, that the student may select,
and even be graded on. Many hints, and multimedia clues may be included along the way.
□ Specific math sites. The internet search engines typically also find sites pertaining to a particular math class, at a particular school, like Math 152 at Anywhere State College.
□ Math reference site. Sites like this contain many math definitions and outline fundamental relationships upon with mathematics is build. They read very much like math tables or a "math
dictionary." Some are very inclusive and very well organized.
That's it! In other words, we didn't find any immediate help with the exact problem on which we are working. The motivation that spawned the idea for Webmath was the passing of knowledge using
the accessibility and immediacy of the internet. Wouldn't it be nice if a frustrated student could
1. Connect to the internet from their home,
2. type in the problem they are having trouble with,
3. instantly receive hints or even a solution to their problem,
4. log off, then continue on with their homework with a "push" in the right direction, if not the answer they're looking for?
The paragraph above describes the apparent void Webmath tries to fill:
to give a student immediate help over the internet with the particular math problem they are on.
Webmath is not a database of questions and answers, or an online math testing site. Webmath is a math-help web site that generates answers to specific math questions, as entered by a user of the
site, at any particular moment. In fact, currently, Webmath is of little use unless a particular math problem to solve is typed into it.
The math answers are generated and produced real-time, at the moment a web user types in their math problem and clicks "solve." In addition to the answers, Webmath also attempts to show to the
student how to arrive at the answer as well. For example, if the user wants to know how to square the quantity (x+2), Webmath does not just display the answer x^2+4x+4, but a step-by-step
solution as well.
Behind the scenes, Webmath does not contain link after link of static web-pages containing information on mathematics. It contains a sophisticated computer math "engine" that is actually able to
recognize and "do the math" on a particular problem it is presented with. For this reason, Webmath is a very dynamic website, in that most of the replies a user will encounter are created the
instant they are sent to the user.
We acknowledge though, that programming a computer to solve math problems "on the fly" is not an easy task, and Webmath is not a complete math solution system. Many computer methods and
algorithms related to having a computer solve a math problem remain research-level topics at computer science labs around the world. This difficulty in computer math programming remains at the
heart of the limited offerings of this site, and shortcomings therein. Nevertheless, the developmental goal of this site is to slowly and steadily increase and improve upon its math solving
capabilities, so frustrated students have a place to at least try to find the answer to their math problem, at the time they need it. | {"url":"http://webmath.com/about.html","timestamp":"2014-04-16T16:22:44Z","content_type":null,"content_length":"136580","record_id":"<urn:uuid:3182cad5-fe30-4860-acfb-af025c8f4aab>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00653-ip-10-147-4-33.ec2.internal.warc.gz"} |
Range of the Radon Transform
Let us consider the Radon transform in two dimensions:
$$\tag{1}Rf(r,\theta):=\int\limits_{-\infty}^{\infty} f(r\cos\theta-t\sin\theta,r\sin\theta+t\cos\theta) dt,$$
where $r\in\mathbb{R}$ and $0\leq\theta\leq \pi$. There is a well known theorem about the range of the transform.
Theorem. A function $g(r,\theta)$ can be represented as a Radon transform of some function $f(x,y)$ (i.e. $g=R[f]$) if and only if for all integers $n\geq0$ $$\int\limits_{-\infty}^{\infty} r^ng(r,\
theta) dr$$ is a homogeneous polynomial of $\cos\theta$ and $\sin\theta$.
Obviously, if $g(r,\theta)$ belongs to the range of the Radon transform then the inverse Radon transform of the function $g(r,\theta)$ is $f(x,y)$.
Now let us consider a function which DOES NOT belong to the range of the transform.
QUESTION: What would we receive if we applied the inverse Radon transform to a function not in the range of the transform?
For example, consider function $g(r,\theta):= e^{-r^2}$ if $0\leq\theta\leq \pi/2$ and $g(r,\theta):= e^{-r^2(1-\cos\theta\sin\theta)}$ if $\pi/2\leq\theta\leq\pi$. This function does not belong to
the range of the Radon transform. Then, on the one hand, there is no function $f$ such that $g=R[f]$. On the other hand, $g= R[ R^{-1}g ]$.
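(A quick check of this claim — my addition, not part of the original post: for $n=0$ the theorem would force the moment to be a degree-$0$ homogeneous polynomial, i.e. a constant, but here

$$\int\limits_{-\infty}^{\infty} g(r,\theta)\,dr = \begin{cases} \sqrt{\pi}, & 0\leq\theta\leq\pi/2,\\ \sqrt{\pi/(1-\cos\theta\sin\theta)}, & \pi/2\leq\theta\leq\pi, \end{cases}$$

which is not constant in $\theta$.)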
What's wrong with this paradox?
UPDATE: Let us notice that $R[R^{-1}g]$ is defined correctly, but it is not equal to $g$.
Indeed, if $g=R[R^{-1}g]$, then
$$\int\limits_{-\infty}^{\infty} r^ng(r,\theta) dr = \int\limits_{-\infty}^{\infty} r^n R[R^{-1}g] (r,\theta) dr=$$
$$=\int\int r^n [R^{-1}g] (r\cos\theta-t\sin\theta,r\sin\theta+t\cos\theta)drdt=$$
$$=\int\int (u\cos\theta+v\sin\theta)^n [R^{-1}g] (u,v)dudv,$$
which is a homogeneous polynomial of $\cos\theta$ and $\sin\theta$ (we just have to expand the brackets). On the other hand it is NOT a homogeneous polynomial (by assumption). Therefore $g\neq R[R^{-1}g]$.
fa.functional-analysis fourier-analysis ca.analysis-and-odes
Without specifying the domain of the Radon transform, it does not make sense to talk about its range. My feeling is that this is the root of your supposed paradox – Yemon Choi Oct 31 '10 at 19:41
Yemon, I don't quite understand your comment. f(x,y) is defined on $\mathbb{R}^2$ and Radon transform, $R[f](r,\theta)$ is defined on $\mathbb{R}\times [0;\pi]$. – Oleg Oct 31 '10 at 20:07
Actually, no. If you define the domain to be all functions for which the integral (1) converges, your "well known theorem" is false. The Radon and inverse Radon transforms establish a bijection
between Schwarz functions on R^2 and Schwarz functions on $S^1 \times R$ that satisfy the homogeneous polynomial condition on its moments. See chapter 1 of Helgason's book www-math.mit.edu/
~helgason/Radonbook.pdf See also Dirk's answer below. So most likely when you take the inverse transform of your function, you get something that decays only slightly faster than $|x|^{-2}$. –
Willie Wong Nov 1 '10 at 19:42
No, if $g$ is such that $R^{-1}$ is well defined and $R^{-1}g$ is such that $RR^{-1}g$ is well defined, $RR^{-1}g = g$ by definition. The problem is that $R: \mathcal{S}(\mathbb{R}^2) \to \mathcal{S}(\mathbb{P})$
with the image being functions satisfying the homogeneous polynomial condition, while $R$ may still send a bigger space (say, a space of functions for which the trace on all lines
are defined and absolutely integrable) to something else. For comparison, think of the Fourier transform. It is a bijection of Schwarz functions, but it also is a bijection of $L^2$ function with
itself. – Willie Wong Nov 1 '10 at 23:36
The last integral does not converge. – Willie Wong Nov 2 '10 at 21:44
1 Answer
Probably you refer to some theorem in "Mathematics of Computerized Tomography" by Frank Natterer (e.g. Theorem 4.2)? Then you are assuming that the domain is $\mathcal{S}$ and, if I
remember correctly, in that book this denotes the Schwartz space of rapidly decaying $C^\infty$-functions. Hence your paradox is resolved by the fact that $R^{-1} g$ is not a Schwartz
function.
Hello Dirk! Thanks for this explanation. But anyway, it is easy to see that one can apply the Radon transform to the function $R^{-1}g$ and $RR^{-1}g$ is defined correctly. What is your
opinion, is $RR^{-1}g$ equal to $g$ (see update of the question)? Thanks! – Oleg Nov 2 '10 at 20:52
Not the answer you're looking for? Browse other questions tagged fa.functional-analysis fourier-analysis ca.analysis-and-odes or ask your own question. | {"url":"http://mathoverflow.net/questions/44359/range-of-the-radon-transform/44470","timestamp":"2014-04-16T14:02:59Z","content_type":null,"content_length":"59590","record_id":"<urn:uuid:16d645ac-4606-4155-b8fc-7237b5d7206d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Desai, SS and Viswanathan, S and Ramaswamy, MA (1977) Numerical evaluation of transonic wave drag. Journal of Aircraft, 14 (2). pp. 215-217. ISSN 0021-8669
Under consideration is the linear-approximation formula given by Liepmann and Roshko (1960) for the wave drag coefficient of a pointed slender body of given length, in terms of its cross-sectional
area distribution. The right-hand side consists of a double integral with logarithmic singularity, plus some terms which vanish when the S'(1) is zero, where S is the cross-sectional area
distribution function in a dimensionless variable. A method for numerical integration of the double integral is proposed, which does not require values of S''(x) as inputs and is consequently
insensitive to numerical differentiation errors. The scheme is applied to three transonic examples, including two where S'(1) does not equal zero. Although the formula is strictly speaking invalid in
this case, it was used to see if one can obtain reasonable estimates for engineering purposes. Comparison with other schemes is made.
Item Type: Journal Article
Additional Copyright for this article belongs to AIAA
Uncontrolled Mathematical models;Cross sections;Drag (hindrance);Waves; Coefficients;Singularities;Errors;Numerical integration; Dimensions;Aircraft;Aerodynamic coefficients;Aerodynamic drag;
Keywords: Numerical integration;Slender bodies;Transonic flow; Wave drag;Singular integral equations
Subjects: MATHEMATICAL AND COMPUTER SCIENCES > Numerical Analysis
AERONAUTICS > Aerodynamics
Division/ Aerodynamics
Depositing User: M/S ICAST NAL
Date Deposited: 03 Mar 2010
Last Modified: 24 May 2010 09:54
URI: http://nal-ir.nal.res.in/id/eprint/3660
Actions (login required) | {"url":"http://nal-ir.nal.res.in/3660/","timestamp":"2014-04-18T23:21:53Z","content_type":null,"content_length":"16524","record_id":"<urn:uuid:30c16353-5922-4a86-9069-034bb7c78e01>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
atom self capacitance
Like a capacitor, an atom stores energy in electric fields, and I suppose one can calculate an "equivalent capacitance". I'm not sure there's much physical insight to be gained here, as you're not
going to plug one into a circuit.
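For a rough sense of scale (my own back-of-the-envelope number, not from the thread): treating the atom as a conducting sphere of Bohr radius $a_0$, the self-capacitance would be

$$C = 4\pi\varepsilon_0 a_0 \approx (1.11\times 10^{-10}\ \mathrm{F/m})(5.29\times 10^{-11}\ \mathrm{m}) \approx 5.9\times 10^{-21}\ \mathrm{F},$$

i.e. a few zeptofarads — which gives an idea of what an "equivalent capacitance" for an atom would even mean, though, as noted, it carries little circuit-level insight.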
Like an inductor, some atoms also store energy in magnetic fields, and I suppose one can calculate an "equivalent inductance". Here, though, you've gone astray and assumed all of the energy is stored
in the magnetic field. That's not the case.
An LC circuit moves energy back and forth between the capacitor and the inductor. This is not what happens in the atom. The reason why you got the Rydberg constant out was that you put it in, in the
form of the Bohr radius.
I have to agree with you that the coincidence is an artifact introduced in the "model".
The physical sense of the capacitance is exactly what I was trying to say in my first post.
Regarding the LC, in my opinion, although the analogy is not very useful, the Bohr model obtains those numbers by assuming that the speed and potential energy are equilibrated at fixed levels. This can be
seen as a current-voltage exchange, which essentially is what you see in the LC circuit. But I insist that this is a quite artificial point of view and all the numbers are there, as Vanadium50 correctly said, so it
is not surprising that it "works". | {"url":"http://www.physicsforums.com/showthread.php?t=276515","timestamp":"2014-04-21T02:06:30Z","content_type":null,"content_length":"71861","record_id":"<urn:uuid:d6d0bcca-2658-4cdf-978c-d341b3e53c02>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH M120 4532 Brief Survey of Calculus II
Mathematics | Brief Survey of Calculus II
M120 | 4532 | Tba
P: M119. A continuation of M119 covering topics in elementary
differential equations, calculus of functions of several variables and
infinite series. Intended for non-physical science students. Credit not
given for both M212 or M216 and M120. | {"url":"http://www.indiana.edu/~deanfac/blsu202/math/math_m120_4532.html","timestamp":"2014-04-19T14:31:07Z","content_type":null,"content_length":"792","record_id":"<urn:uuid:1db73f89-133a-45be-a5e1-4e174be5b3f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
- User Profile for: nerdyboi
User Profile for: nerdyboi
UserID: 571655
Name: mathnerd4life
Registered: 2/24/09
Occupation: Cereal Killer
Location: Hannah, Montana
Biography: I hate CEREAL! math rules!!
Total Posts: 7
Show all user messages | {"url":"http://mathforum.org/kb/profile.jspa?userID=571655","timestamp":"2014-04-21T13:20:01Z","content_type":null,"content_length":"13813","record_id":"<urn:uuid:caf4d39b-2aaf-4e09-bd47-7e571b345222>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00365-ip-10-147-4-33.ec2.internal.warc.gz"} |
A trigonometric integral. Please help
May 26th 2007, 02:46 AM #1
May 2007
A trigonometric integral. Please help
Hey if anyone could solve this integral for me it would be much appreciated:
Note: I have only done 3 unit mathematics (Australian HSC) and I don't start Integral calculus until Semester 2, so there are several integral properties I know nothing about. In fact I'm not
sure if this can be solved (looking on wikipedia I was unable to find a formula):
This has the limits and the integration sign left out. I have left out the limits because I'm still working on defining them so if anyone who has the time to solve this can leave the integrated
function in the form [f(x)] I'll know that I need to sub the limits in.
If the above is a little difficult to read, once constants (L, m and g) are taken out the integral is
dr/((cos(r/L))^.5) with the denominator being the square root of cos(r/L).
Hey if anyone could solve this integral for me it would be much appreciated:
Note: I have only done 3 unit mathematics (Australian HSC) and I don't start Integral calculus until Semester 2, so there are several integral properties I know nothing about. In fact I'm not
sure if this can be solved (looking on wikipedia I was unable to find a formula):
This has the limits and the integration sign left out. I have left out the limits because I'm still working on defining them so if anyone who has the time to solve this can leave the integrated
function in the form [f(x)] I'll know that I need to sub the limits in.
If the above is a little difficult to read, once constants (L, m and g) are taken out the integral is
dr/((cos(r/L))^.5) with the denominator being the square root of cos(r/L).
$\int dr \, \frac{\sqrt{L}}{L \sqrt{mgr \, cos \left ( \frac{r}{L} \right ) }}$
Let $\theta = \frac{r}{L}$. Then $d \theta = \frac{dr}{L}$. So...
$\int dr \, \frac{\sqrt{L}}{L \sqrt{mgr \, cos \left ( \frac{r}{L} \right ) }} = \int d \theta L \cdot \, \frac{\sqrt{L}}{L \sqrt{mg \theta L \, cos( \theta) }}$ = $\frac{1}{\sqrt{mg}} \int \frac
{d \theta}{\sqrt{\theta \, cos( \theta )}}$
I can't think of a way to integrate this.
Where did this integral come from anyway?
Sorry I must not have been clear. The denominator is simply L(mgcos(r/L) to the power of a half) i.e. in your last step there is no theta outside of the cos(theta).
To answer your question I...uh
Anyways I guess 10 weeks into advanced physics isn't long enough to be arrogant enough to think you can come up with an equation of periodic motion for a pendulum huh?
Anyways I ended up with that integral and I'm really hoping to get a final equation, even if its incorrect (I'm a tad stubborn). However if this integral is unsolvable chances are I went wrong
somewhere huh.
Oh and I'm really hoping that you can integrate it in terms of dr and not dtheta as I spent a while converting it into that form (as it involved integrating a function of theta by dt so I had to
find an alternate form of dt).
If you are interested in my method I'll be happy to tell you (not sure if its sound though; as I said only 10 weeks in
Edit: I know it would be a simple matter to switch it back after the dtheta integral but I'd like to see the working out for dr, that's all. However since you're the one helping me, then whatever
works for you
Hey if anyone could solve this integral for me it would be much appreciated:
Note: I have only done 3 unit mathematics (Australian HSC) and I don't start Integral calculus until Semester 2, so there are several integral properties I know nothing about. In fact I'm not
sure if this can be solved (looking on wikipedia I was unable to find a formula):
This has the limits and the integration sign left out. I have left out the limits because I'm still working on defining them so if anyone who has the time to solve this can leave the integrated
function in the form [f(x)] I'll know that I need to sub the limits in.
If the above is a little difficult to read, once constants (L, m and g) are taken out the integral is
dr/((cos(r/L))^.5) with the denominator being the square root of cos(r/L).
It does have a closed form in terms of a hypergeometric function and
elementary functions, but that can be of no interest to you as yet.
But if you want to see what Mathematica gives try it on the QuickMath integrator.
Sorry I must not have been clear. The denominator is simply L(mgcos(r/L) to the power of a half) i.e. in your last step there is no theta outside of the cos(theta).
To answer your question I...uh
Anyways I guess 10 weeks into advanced physics isn't long enough to be arrogant enough to think you can come up with an equation of periodic motion for a pendulum huh?
Anyways I ended up with that integral and I'm really hoping to get a final equation, even if its incorrect (I'm a tad stubborn). However if this integral is unsolvable chances are I went wrong
somewhere huh.
Oh and I'm really hoping that you can integrate it in terms of dr and not dtheta as I spent a while converting it into that form (as it involved integrating a function of theta by dt so I had to
find an alternate form of dt).
If you are interested in my method I'll be happy to tell you (not sure if its sound though; as I said only 10 weeks in
Edit: I know it would be a simple matter to switch it back after the dtheta integral but I'd like to see the working out for dr, that's all. However since you're the one helping me, then whatever
works for you
The pendulum equation works out to be:
$\theta^{ \prime \prime } + \frac{g}{L} sin( \theta ) = 0$
where L is the length of the pendulum and $\theta$ is the angle the string makes with the vertical.
The simple harmonic oscillator equation is:
$x^{ \prime \prime } + \omega ^2 x = 0$
If the initial angular displacement of the pendulum is small enough we may approximate
$sin( \theta ) \approx \theta$
and the pendulum equation becomes the simple harmonic oscillator equation. (So even a simple pendulum really doesn't even follow simple harmonic motion.)
The pendulum equation can be solved in terms of elliptic integrals, which can only be approximated. I haven't seen it done, but CaptainBlack is likely correct that we can write the solution in
terms of hypergeometric functions. (At the very least the solution can certainly be expanded into a series of hypergeometric functions.) In Physics the usual method is to use elliptic integrals
and expand an approximation from there.
Yeah we did actually prove the period of a pendulum undergoing simple harmonic motion at small angles where sin(theta)~theta. We then proved it under different kinds of damping, when it has a
driving force at natural frequency, and then started all over again with a physical pendulum.
The reason I started all this nonsense was I was a little bored and I wanted to prove the period of the pendulum without using assumptions such as it moves in SHM such that $\omega = \sqrt{k/m}$ or
$f = \omega/(2\pi)$.
As you can see I failed abysmally.
Anyway I thank you both for taking the time to help me out and have just one more request. I haven't done hypergeometric functions (as captainblack guessed) so I can't understand the integrated result.
Therefore I was wondering if the integral can be represented in terms of elementary functions only if I gave you the limits ((theta)L, zero) or if real numbered constants are required (pi/18),0
(where theta = 10 degrees in radians and L = 1 metre).
The integral is multiplied by the constant 4.
Actually I just remember Captainblack's calculator so I plugged it into that, and came up with... a negative period.
Does the differential equation for the pendulum have a solution in terms of elliptic integrals?
Because I think it does. For the most part it is just not necessary.
huh, well I am waaaaaaaay out of my league here but looking at my notes it shows that we did prove the period of a pendulum where $\sin\theta \approx \theta$ and showed via an infinite series that it holds
for a maximum of 15 degrees (the infinite series being $1 + \frac{1^2}{2^2}\sin^2\frac{\Theta}{2} + \frac{1^2\cdot 3^2}{2^2\cdot 4^2}\sin^4\frac{\Theta}{2} + \cdots$), or rather, for $\Theta =$
15 degrees the true period is longer than that given by the approximation by less than .05%.
And it is here that my insubstantial contributions end, until I've done about 3 more years of intensive maths
By the way how do you guys write out your working neatly on these web pages? I have to use ^ for power of and you guys have integrals and stuff.
I clicked on one of the equations you had and it came up with a bunch of code.
Yes, the problem can be solved "exactly" using elliptic integrals. We can then use an approximation of the integral to form approximate solutions to the pendulum equation. Obviously you can
approximate solutions to the differential equation without going through all that. I have never heard why the elliptic integral version is used instead of one of these. (Unless it was merely a
good excuse to teach us what they are...)
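For reference, a standard way to make this precise (my addition — this is the textbook elliptic-integral result, not something derived in this thread): the exact period is

$$T = 4\sqrt{\frac{L}{g}}\,K(k), \qquad k = \sin\frac{\Theta}{2}, \qquad K(k) = \int_0^{\pi/2}\frac{d\phi}{\sqrt{1-k^2\sin^2\phi}},$$

whose expansion reproduces the series quoted a few posts up:

$$T = 2\pi\sqrt{\frac{L}{g}}\left(1 + \frac{1^2}{2^2}\sin^2\frac{\Theta}{2} + \frac{1^2\cdot 3^2}{2^2\cdot 4^2}\sin^4\frac{\Theta}{2} + \cdots\right).$$

For the numbers mentioned earlier ($\Theta = 10$ degrees, $L = 1$ metre, taking $g \approx 9.8\ \mathrm{m/s^2}$) this gives $T \approx 2.011$ s, only about $0.2\%$ above the small-angle value $2\pi\sqrt{L/g} \approx 2.007$ s.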
May 26th 2007, 06:46 PM #10 | {"url":"http://mathhelpforum.com/calculus/15366-trigonmetric-integral-please-help.html","timestamp":"2014-04-16T07:45:26Z","content_type":null,"content_length":"72943","record_id":"<urn:uuid:b82ef783-bc14-47c1-81c2-7f6ad5b2195d>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Answer is...Maybe!
The Answer is...Maybe!
As, undoubtedly, you are busy extracting and computing data for the 2009-2010 Perkins Report, I thought I might share an insight gained after a conversation with one of our partners from Mid-Plains
Community College, Tad Pfeifer. We had a discussion about the 2P1 Numerator and the 4P1 Denominator. More specifically, we pondered whether these two items, in fact, represent the same populations.
After some discussion and running of numbers, we concluded the answer is … maybe!
It is conceivable that these two populations could
be the same. However, there is a key distinction between the two. For our purposes in Nebraska, the 2P1 Indicator is measuring the percentage of CTE Concentrators that graduate with a diploma,
degree, certificate, or credential. The 2P1 numerator is calculated as a subclass, or subpopulation of the 2P1 denominator; we refine the 2P1 denominator population to calculate the population
comprising the 2P1 numerator.
The 4P1 Indicator measures the percentage of CTE Concentrator graduates employed in work, military or apprenticeships. The 4P1 denominator represents the number of CTE concentrators from the previous
reporting year that left postsecondary education with a credential, certificate, degree or diploma during the previous reporting year. On the surface, this sounds eerily similar to the 2P1 numerator
definition. However, there is a difference - a subtle difference – which is found within the computation methodology of the indicators.
2P1 requires the removal of those students attempting a CTE course during the current reporting year (see step 11 from the 2P1 denominator calculation process, page 14 of the Perkins Postsecondary
Data Manual). 4P1 does not make such exclusion.
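In set terms, the distinction looks something like the sketch below (purely illustrative — the field names are mine, not the actual Perkins data layout):

```python
def indicator_populations(prior_year_concentrators, completers,
                          attempted_cte_this_year):
    """Each argument is a set of student IDs."""
    # 4P1 denominator: prior-year CTE Concentrators who left with a
    # degree, diploma, certificate, or credential.
    p4_denominator = prior_year_concentrators & completers

    # 2P1 numerator: the same completers, minus anyone attempting a
    # CTE course during the current reporting year (the step 11 exclusion).
    p2_numerator = p4_denominator - attempted_cte_this_year

    return p2_numerator, p4_denominator
```

Whenever `attempted_cte_this_year` is disjoint from that completer population, the two sets coincide — hence the "maybe."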
So, let's revisit my earlier answer of "maybe". I noted that these two populations could
be the same. That is because if
none of the CTE Concentrators from the previous year attempt a CTE course during the current reporting year, the 2P1 Numerator population will, in fact, be identical to the 4P1 Denominator population.
However, this is conditional on whether or not any CTE Concentrator from the previous reporting year attempts a CTE course during the current reporting year – thus, the unambiguous answer … “maybe”.
Until next week…
<< Home | {"url":"http://perkinspundit.blogspot.com/2010/09/september-7-2010.html","timestamp":"2014-04-16T13:16:11Z","content_type":null,"content_length":"17701","record_id":"<urn:uuid:fbc73be7-30aa-40e2-9256-bfdf0719d984>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kemah Algebra 2 Tutor
Find a Kemah Algebra 2 Tutor
...Being a younger tutor, I certainly can relate to the kids and teenagers better. I am quite nice and very respectful. I already work for two tutoring companies now.
8 Subjects: including algebra 2, chemistry, geometry, piano
...My approach in working with you on algebra 1 and algebra 2 is first to assess your familiarity and comfort with basic concepts, and explain and clarify the ones you need some improvement on;
and then to work on the specific areas of your assignments, such as solving equations with radicals or gra...
20 Subjects: including algebra 2, writing, algebra 1, logic
...My unique methods have made my students scholars! Whether it's an upcoming test, that one class you just cannot seem to get a grasp on, or refreshing what you have learned, I am willing to assist.
When my students fail, so do I.
6 Subjects: including algebra 2, reading, algebra 1, geometry
...I've also taught SAT and ACT prep. I've tutored students in English, reading, and writing. I enjoy teaching a great deal and am well versed in teaching in different styles to fit the student.
34 Subjects: including algebra 2, chemistry, reading, English
I have been a private math tutor for over ten (10) years and am a certified secondary math instructor in the state of Texas. I have taught middle and high-school math for over ten (10) years. I am
available to travel all over the greater Houston area, including as far south as Pearland, as far north as Spring, as far west as Katy and as far east as the Galena Park/Pasadena area.
9 Subjects: including algebra 2, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/Kemah_Algebra_2_tutors.php","timestamp":"2014-04-21T04:34:22Z","content_type":null,"content_length":"23650","record_id":"<urn:uuid:222ffae9-4065-4ee3-84dd-77e58727083e>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00104-ip-10-147-4-33.ec2.internal.warc.gz"} |
Identifying independence in Bayesian networks
Results 1 - 10 of 100
"... Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for
propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination ..."
Cited by 278 (62 self)
Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for
propositional satisfiability, adaptive-consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for
combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These
include: belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the
induced width of the problem's interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called "conditioning search" require only linear
space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms
are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and
deterministic reasoning tasks and show how conditioning search can be augmented to systematically trade space for time.
, 1996
"... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks.
Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..."
Cited by 172 (0 self)
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks.
Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical
statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the
structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified
examples. Keywords--- Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery. I. Introduction Probabilistic networks or
probabilistic gra...
- Games and Economic Behavior , 2001
"... The traditional representations of games using the extensive form or the strategic (normal) form obscure much of the structure that is present in real-world games. In this paper, we propose a
new representation language for general multiplayer games — multi-agent influence diagrams (MAIDs). This rep ..."
Cited by 157 (2 self)
The traditional representations of games using the extensive form or the strategic (normal) form obscure much of the structure that is present in real-world games. In this paper, we propose a new
representation language for general multiplayer games — multi-agent influence diagrams (MAIDs). This representation extends graphical models for probability distributions to a multi-agent
decision-making context. MAIDs explicitly encode structure involving the dependence relationships among variables. As a consequence, we can define a notion of strategic relevance of one decision
variable to another: a decision variable D′ is strategically relevant to a decision variable D if, to optimize the decision rule at D, the decision maker needs to take into consideration the decision rule at D′. We provide a sound and
complete graphical criterion for determining strategic relevance. We then show how strategic relevance can be used to detect structure in games, allowing a large game to be broken up into a set of
interacting smaller games, which can be solved in sequence. We show that this decomposition can lead to substantial savings in the computational cost of finding Nash equilibria in these games. 1
- International Journal of Approximate Reasoning , 1996
"... Belief networks are popular tools for encoding uncertainty in expert systems. These networks rely on inference algorithms to compute beliefs in the context of observed evidence. One established
method for exact inference onbelief networks is the Probability Propagation in Trees of Clusters (PPTC) al ..."
Cited by 149 (6 self)
Belief networks are popular tools for encoding uncertainty in expert systems. These networks rely on inference algorithms to compute beliefs in the context of observed evidence. One established
method for exact inference on belief networks is the Probability Propagation in Trees of Clusters (PPTC) algorithm, as developed by Lauritzen and Spiegelhalter and refined by Jensen et al. [1, 2, 3]
PPTC converts the belief network into a secondary structure, then computes probabilities by manipulating the secondary structure. In this document, we provide a self-contained, procedural guide to
understanding and implementing PPTC. We synthesize various optimizations to PPTC that are scattered throughout the literature. We articulate undocumented "open secrets" that are vital to
producing a robust and efficient implementation of PPTC. We hope that this document makes probabilistic inference more accessible and affordable to those without extensive prior exposure.
- In Intl. Joint Conf. on Artificial Intelligence (IJCAI , 2003
"... Simultaneous Localization and Mapping (SLAM) is a fundamental problem in mobile robotics: while a robot navigates in an unknown environment, it must incrementally build a map of its surroundings
and localize itself within that map. Traditional approaches to the problem are based upon Kalman filters, ..."
Cited by 126 (1 self)
Simultaneous Localization and Mapping (SLAM) is a fundamental problem in mobile robotics: while a robot navigates in an unknown environment, it must incrementally build a map of its surroundings and
localize itself within that map. Traditional approaches to the problem are based upon Kalman filters, but suffer from complexity issues: the size of the belief state and the time complexity of the
filtering operation grow quadratically in the size of the map. This paper presents a filtering technique that maintains a tractable approximation of the filtered belief state as a thin junction tree.
The junction tree grows under measurement and motion updates and is periodically "thinned" to remain tractable via efficient maximum likelihood projections. When applied to the SLAM problem, these
thin junction tree filters have a linear-space belief state representation, and use a linear-time filtering operation. Further approximation can yield a constant-time filtering operation, at the
expense of delaying the incorporation of observations into the majority of the map. Experiments on a suite of SLAM problems validate the approach.
- Journal of Artificial Intelligence Research , 1999
"... We show how to find a minimum loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in Pearl's method of conditioning for inference. Our random
algorithm for finding a loop cutset, called RepeatedWGuessI, outputs a minimum loop cutset, after O(c ..."
Cited by 81 (2 self)
We show how to find a minimum loop cutset in a Bayesian network with high probability. Finding such a loop cutset is the first step in Pearl's method of conditioning for inference. Our random
algorithm for finding a loop cutset, called RepeatedWGuessI, outputs a minimum loop cutset after $O(c \cdot 6^k kn)$ steps, with probability at least $1 - (1 - \frac{1}{6^k})^{c \cdot 6^k}$, where $c > 1$ is
a constant specified by the user, k is the size of a minimum weight loop cutset, and n is the number of vertices. We also show empirically that a variant of this algorithm, called WRA, often finds a
loop cutset that is closer to the minimum loop cutset than the ones found by the best deterministic algorithms known.
- Proc. of the Eighth Conference on Uncertainty in Artificial Intelligence , 1992
"... In a previous paper [8] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all
minimal causal models consistent with the data. In this paper we address the question of deciding whether t ..."
Cited by 60 (1 self)
In a previous paper [8] we presented an algorithm for extracting causal influences from independence information, where a causal influence was defined as the existence of a directed arc in all
minimal causal models consistent with the data. In this paper we address the question of deciding whether there exists a causal model that explains ALL the observed dependencies and independencies.
Formally, given a list M of conditional independence statements, it is required to decide whether there exists a directed acyclic graph D that is perfectly consistent with M, namely, every statement
in M, and no other, is reflected via d-separation in D. We present and analyze an effective algorithm that tests for the existence of such a dag, and produces one, if it exists. Key words: Causal
modeling, graphoids, conditional independence. 1. Introduction. Directed acyclic graphs (dags) have been widely used for modeling statistical data. Starting with the pioneering work of Sewall Wright
- Artificial Intelligence , 1996
"... This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z
constant." The axiomization of causal irrelevance is contrasted with the axiomization of informational irr ..."
Cited by 54 (15 self)
This paper develops axioms and formal semantics for statements of the form "X is causally irrelevant to Y in context Z," which we interpret to mean "Changing X will not affect Y if we hold Z
constant." The axiomization of causal irrelevance is contrasted with the axiomization of informational irrelevance, as in "Learning X will not alter our belief in Y , once we know Z." Two versions of
causal irrelevance are analyzed, probabilistic and deterministic. We show that, unless stability is assumed, the probabilistic definition yields a very loose structure, that is governed by just two
trivial axioms. Under the stability assumption, probabilistic causal irrelevance is isomorphic to path interception in cyclic graphs. Under the deterministic definition, causal irrelevance complies
with all of the axioms of path interception in cyclic graphs, with the exception of transitivity. We compare our formalism to that of [Lewis, 1973], and offer a graphical method of proving theorems
- In Proc. Tenth Conference on Uncertainty in AI , 1994
"... In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loopcutset conditioning.
We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a speci ..."
Cited by 46 (0 self)
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loopcutset conditioning. We
show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al (1990a; 1990b). | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=86756","timestamp":"2014-04-18T05:58:44Z","content_type":null,"content_length":"38968","record_id":"<urn:uuid:3328f669-3ce6-4f68-924a-64c6035432d1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Crank approaches to transcendence
Replies: 3 Last Post: Apr 13, 2013 9:59 PM
Paul
Posted: Apr 13, 2013 5:04 PM
Posts: 385
Registered: 7/12/10
I'm surprised that there don't seem to be many crank "proofs" that constants whose rationality is unknown (such as Euler's constant) are irrational or transcendental.
I would have thought that "proofs" that Euler's constant is irrational would be a massive crank magnet for the following reasons: 1) It's an easy result to state and understand. 2) Crank proofs would seem easy to generate. All you do is write that the constant = p/q with p and q integers, then do several pages of algebraic manipulations, make a mistake in the computations half way through the argument, then derive a contradiction based on the mistake.
I have found very few crank transcendence proofs. Are they rare, or am I just looking in the wrong places?
Paul Epstein
Date Subject Author
4/13/13 Crank approaches to transcendence Paul
4/13/13 Re: Crank approaches to transcendence quasi
4/13/13 Re: Crank approaches to transcendence Paul
4/13/13 Re: Crank approaches to transcendence Newberry | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2447296&messageID=8893647","timestamp":"2014-04-19T21:02:44Z","content_type":null,"content_length":"20016","record_id":"<urn:uuid:55d9afea-a983-4ec8-92da-e5883c7c5abd>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00008-ip-10-147-4-33.ec2.internal.warc.gz"} |
semisimplicity of automorphic Galois representations
Is it known that the Galois representation constructed by Harris and Taylor in their book is semisimple? I can't see this proven in the book, but on the other hand, everywhere else the representation
is taken to be semisimple... Are they considering its semisimplification?
Sorry for the simple question.
shimura-varieties galois-representations langlands-conjectures
1 Answer
Do you mean the global Galois representations or the local Galois representations?
The global Galois representations they are constructing correspond to cuspidal automorphic representations of GL(n). They are expected to be always irreducible, though I'm not sure
when this is known exactly. But it is known in the case that Harris and Taylor consider (when the automorphic representation is square integrable at a finite place), cf corollary 1.3
of the article "Compatibility of local and global Langlands correspondences" by Taylor and Yoshida.
The local Galois representations are not expected to be semi-simple in general. They are expected to be Frobenius semi-simple (ie, the Frobenius elements are supposed to act
semi-simply), but this is not known for $n\geq 3$. So, if you mean the local representations, then yes, very often people are just taking the Frobenius semi-simplifications of the
representations that appear in the cohomology of Shimura varieties.
Thanks for your answer. I meant the global Galois representation. – Nicolás Apr 18 '12 at 21:21
| {"url":"http://mathoverflow.net/questions/94001/semisimplicity-of-automorphic-galois-representations?answertab=oldest","timestamp":"2014-04-18T06:00:23Z","content_type":null,"content_length":"52351","record_id":"<urn:uuid:6d705413-23a1-4600-b53f-340499bfb3c8>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
During the last decade, airborne laser scanning (ALS) data have been established as a standard data source for high precision topographic data acquisition and have also been used for estimation of
forest variables [1]. For forestry applications, the most commonly used method is to derive measures from the ALS data in raster cells approximately the size of a field plot, 100–200 m^2, and use the
measures as independent variables in regression models to estimate forest variables such as mean tree height and stem volume [2–4].
The measures derived from the ALS data may be height percentiles and measures of the density of the vegetation as the fraction of ALS reflections from vegetation relative to the total amount of ALS
reflections [5]. In that case the regression models are based on an assumption that the stem volume is proportional to one or several height measures (e.g., percentiles of the height above the
ground) multiplied by a density measure (e.g., the fraction of ALS data above a threshold height above the ground). Usually a log-log regression model is used. The regression model is often selected
with best subset regression that selects the set of independent variables that result in the highest correlation with the dependent variable or with stepwise regression where independent variables
are included or excluded in the regression model depending on their significance. The estimation of the regression model parameters is based on reference data from one study area [6]. Another
approach for stem volume or biomass estimation is to use a model based on the structure of the forest by calculating the canopy volumes for different height layers and using those measures as
independent variables in a linear regression model [7]. As defined in [7], the canopy volume is the entire volume between the canopy and the terrain surface. Furthermore, the different canopy height
layers account for height-dependent differences in canopy structure. The forest canopy can either be described by the first echoes directly or by a rasterized digital surface model (DSM) calculated
from the first echoes. As the input for the canopy volume estimation, the canopy height normalization with respect to the digital terrain model (DTM) of the first echoes and the DSM, respectively, is
required. For the calibration of the linear regression model, reference data is needed, for example, from a forest inventory. Depending on its sampling design (e.g., angle count sampling, fully
callipered sample plot area, stand-based), the spatial unit used to extract the ALS-based measures can vary.
If the ALS data are dense enough, individual tree crowns may be identified from the data [8–12]. The identification is usually done by deriving a normalized digital surface model (nDSM) from the ALS
data and defining local maxima in the nDSM as treetops. The nDSM is calculated by subtracting the DTM from the DSM and commonly has a pixel size of 0.5 m to 1.0 m. As a second step, segmentation of
the nDSM around the local maxima can be done to derive more information about the tree crowns. Commonly used raster-based segmentation methods are, for example, the watershed segmentation [13], the
multi resolution segmentation [14] or an edge-based segmentation [15]. A common problem with identification of individual trees is that there is an underestimation in the result, especially for
smaller trees below the dominant tree layer [16].
The analyses based on nDSMs are faster and more robust than those based directly on ALS returns. However, the nDSMs still provide information about local variations in the forest that are related to
individual trees. As demonstrated in [17], the canopy volume regression model can also be applied to the rasterized ALS data. Derivation of measures from ALS data in smaller raster cells (e.g., 0.5 m
to 1.0 m) could also be a way to compensate for the varying density of the ALS data [18]. The density may vary, for example, due to the pattern of the laser scanner or overlapping strips. If measures
are derived from the ALS from a 100–200 m^2 raster cell, these measures are largely influenced by the parts of the raster cell with the highest pulse density.
The purpose of this study is to compare methods to estimate stem volume, stem number and basal area. The first comparison is between measures derived from ALS data in 0.5 m raster cells and variables
derived from larger raster cells corresponding to the size of the field plots for estimation of forest variables. The second comparison is between the canopy volume model [7] and a model based on
height percentiles and density of ALS data [3] for estimation of stem volume in hemi-boreal forest. The third comparison is between area-based regression models and individual tree-based models for
estimation of stem number and basal area.
The study area is located in the southwest of Sweden (Lat. 58°N, Long. 13°E). The most common tree species are Norway spruce (Picea abies) (38.5% of basal area), Scots pine (Pinus sylvestris) (28.0%
of basal area), birch (Betula pendula and Betula pubescens) (18.0% of basal area), oak (Quercus robur) (6.0% of basal area), and other broadleaved trees (9.5% of basal area).
In total sixty-eight circular field plots with 12 m radius were allocated during July and August 2009 (Figure 1). The positions of the center of the field plots were measured using a DGPS with a few
dm accuracy after post-processing. Within the field plots, the diameter at breast height (DBH) of all trees with DBH ≥ 40 mm was measured using a caliper and the tree species were recorded. For a
sub-sample of trees, the heights were also measured using a hypsometer. The sub-sample was randomly selected with inclusion probability proportional to the basal area of the trees.
The stem number N in each field plot was calculated as the number of trees divided by the area A of the field plot. The stem volume of each tree in the sub-sample, where the height was measured, was
calculated with specific functions for pine, spruce [19] and oak [20]. For other species, the function for birch was used [19]. To estimate the stem volume of all trees, species specific log-linear
regression models were created for pine, spruce, oak, and other species based on the subsample of trees where the height was measured in all field plots simultaneously (Equation (1)).

$\ln(V_j) = \alpha_0 + \alpha_1 DBH_j + \alpha_2 \ln(DBH_j) + \epsilon_j$ (1)
The root mean square error (RMSE) at tree level of the regression models was 137 dm^3 (19.1%) for pine, 102 dm^3 (15.9%) for spruce, 389 dm^3 (7.7%) for oak, and 90 dm^3 (25.9%) for other species.
The stem volume of all trees was estimated with the respective regression models.
The stem volume V in each field plot was calculated as the sum of the stem volume of all trees in the field plot divided by the area A of the field plot. The basal area BA was calculated in each
field plot using Equation (2):

$BA = \frac{1}{A} \sum_{j=1}^{m} BA_j = \frac{1}{A} \sum_{j=1}^{m} \frac{DBH_j^2 \, \pi}{4}$ (2)
The ALS data were acquired on 4 September 2008 using a TopEye MKII ALS system with a wavelength of 1064 nm carried by a helicopter. The flying altitude was 250 m above ground and the average emitted
pulse density was 7 m^−2. The first and last returns were saved for each laser pulse and the average return density was 11 m^−2 (Figure 2).
ALS returns were classified as ground or non-ground using the progressive Triangular Irregular Network (TIN) densification method [21] in the TerraScan software [22]. A DTM was derived as the mean
value of the ground returns in 0.5 m raster cells. TIN interpolation was used for raster cells with no data.
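A minimal sketch (NumPy assumed; the function name and gridding details are illustrative and not the TerraScan workflow) of rasterizing classified ground returns to cell means:

```python
import numpy as np

def grid_mean(x, y, z, cell=0.5):
    """Rasterize points to per-cell mean heights (used here for the DTM
    from ground returns); empty cells stay NaN, to be TIN-interpolated."""
    ci = ((x - x.min()) / cell).astype(int)
    ri = ((y - y.min()) / cell).astype(int)
    s = np.zeros((ri.max() + 1, ci.max() + 1))
    cnt = np.zeros_like(s)
    np.add.at(s, (ri, ci), z)     # accumulate heights per cell
    np.add.at(cnt, (ri, ci), 1)   # count returns per cell
    with np.errstate(invalid="ignore"):
        return np.where(cnt > 0, s / cnt, np.nan)
```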
The z-values of the ALS returns were normalized with respect to the DTM (Equation (3)).

$z_{normal} = z - z_{DTM}$ (3)
The following measures were derived from the ALS returns in each circular field plot with 12 m radius.
The 10th, 20th, ..., 100th percentiles of the normalized z-values from the ALS returns ≥ 2 m above the DTM in each field plot: p[10], p[20], ..., p[100].
The total number of ALS returns: N[tot].
The number of ALS returns in intervals I[1], I[2], I[3], and I[4]: N[1], N[2], N[3], and N[4].
The number of ALS returns ≥ 2 m and < 34 m above the DTM: N[veg].
The total number of first ALS returns: N[f,tot].
The number of first ALS returns in intervals I[1], I[2], I[3], and I[4]: N[f,1], N[f,2], N[f,3], and N[f,4].
where I[1] was 2 ≤ z < 10, I[2] was 10 ≤ z < 18, I[3] was 18 ≤ z < 26, and I[4] was 26 ≤ z < 34 m above the DTM.
The vegetation ratio in each field plot was calculated as r[veg] = N[veg]/N[tot]. The fractions of ALS returns in intervals I[j] were calculated as r[hj]=N[j]/N[tot].
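As a hedged illustration (hypothetical function and variable names), the plot-level height percentiles and density ratios defined above could be computed along these lines:

```python
import numpy as np

def plot_metrics(z, h_lo=2.0, h_hi=34.0):
    """Percentiles and density ratios for one field plot.
    z: normalized heights of all returns in the plot (assumes the
    plot contains at least one vegetation return)."""
    veg = z[(z >= h_lo) & (z < h_hi)]                  # vegetation returns
    p = {q: np.percentile(veg, q) for q in range(10, 101, 10)}
    r_veg = veg.size / z.size                          # vegetation ratio
    edges = [2, 10, 18, 26, 34]                        # intervals I1..I4
    r_h = [np.mean((z >= lo) & (z < hi)) for lo, hi in zip(edges[:-1], edges[1:])]
    return p, r_veg, r_h
```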
The following measures were derived from the ALS returns in raster cells of 0.5 m.
The mean normalized z-value of all ALS returns ≥ 2 m and < 34 m above the DTM: z[mean]. The mean of this value in all raster cells inside each field plot: h[mean].
The mean normalized z-value of all first ALS returns in intervals I[1], I[2], I[3], and I[4]: z[f,mn,1], z[f,mn,2], z[f,mn,3], and z[f,mn,4]. The mean of this value in all raster cells inside each
field plot: h[f,mn,1], h[f,mn,2], h[f,mn,3], and h[f,mn,4].
The maximum normalized z-value of all first ALS returns < 34 m above the DTM (i.e., a first return nDSM): z[f,max].
The maximum normalized z-value of all ALS returns < 34 m above the DTM (i.e., an nDSM): z[max]. The 99th percentile of this value in all raster cells inside each field plot: h[99].
To calculate the canopy volume for each interval I[j], the relative proportion (between 0 and 1) of first return DSM raster cells, whose heights fell within the interval, was used: N[f,j,raster]/N
[f,tot,raster]. The maximum height of 34 m was chosen based on the maximum tree height in the field data and on the observation that ALS returns ≥ 34 m above the DTM were all erroneous returns high
above the tree tops, found in a few field plots. Raster cells without ALS returns were excluded when calculating mean values and percentiles.
The canopy volume was calculated for four different height classes j = 1, 2, 3 and 4 using Equation (4) [7]:

$V_{hj} = h_{f,mn,j} \times a_j$ (4)

where a[j] = A × N[f,j]/N[f,tot] and A is the total area of each
field plot. For the calculation of the canopy volume, it is assumed that A is represented by the total number of first echoes N[f,tot]. The canopy volume was also calculated for rasterized ALS data
with a[j,raster]=A × N[f,j,raster]/N[f,tot,raster].
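A minimal sketch of the canopy volume computation on rasterized data, following Equation (4); the interval edges come from I[1] to I[4] above, and the handling of empty height classes is an assumption of mine:

```python
import numpy as np

def canopy_volume(z_dsm_first, plot_area):
    """Canopy volume per height class from a plot's first-return DSM
    cell heights: V_hj = h_f,mn,j * a_j, with a_j the plot area times
    the fraction of first-return cells falling in class j."""
    edges = [2, 10, 18, 26, 34]
    n_tot = z_dsm_first.size
    v = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_class = z_dsm_first[(z_dsm_first >= lo) & (z_dsm_first < hi)]
        h_mean = in_class.mean() if in_class.size else 0.0
        a_j = plot_area * in_class.size / n_tot
        v.append(h_mean * a_j)
    return v  # [V_h1, V_h2, V_h3, V_h4]
```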
Local maxima detection was used to find individual tree tops in the raster of z[max]. Raster cells without ALS data were iteratively filled with the mean value of the eight surrounding raster cells.
Before the local maxima detection was done, different filtering approaches were applied to the raster of z[max] to remove small variations in the surface model. Three different approaches were
tested: in the first case, an m × m mean filter was applied to all raster cells, in the second case, the filter was applied only if h[99] ≥ h[lim], and in the third case, the filter was applied only
for local z[max] ≥ h[lim], otherwise z[max] was used without mean filtering. This was done for h[lim] = 15 and 20 m and for filter sizes m × m = 3 × 3, 5 × 5, and 7 × 7 (Figure 3). For the local
maxima detection, a 3 × 3 max filter was applied to the original and the filtered raster, respectively. Local maxima were defined where the raster values were equal in the raster before and after max
filtering. Those raster cells represent the local maxima in the 3 × 3 windows. If several adjacent raster cells fulfilled the criterion, only the midmost raster cell was used as a local maximum. For
each detected local maximum, the height of the corresponding raster cell was extracted: h[loc].
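A simplified SciPy sketch of this local maxima detection (the midmost-cell tie-breaking and the iterative gap filling described above are omitted, and the mode names are mine):

```python
import numpy as np
from scipy import ndimage

def tree_tops(z_max, h99, size=3, h_lim=15.0, mode="h99"):
    """Local-maxima tree-top detection on the z_max raster (0.5 m cells).
    mode "always": mean filter everywhere
    mode "h99"   : mean filter only if the plot's h99 >= h_lim
    mode "local" : mean filter only where the local z_max >= h_lim"""
    smoothed = ndimage.uniform_filter(z_max, size=size)
    if mode == "always":
        surf = smoothed
    elif mode == "h99":
        surf = smoothed if h99 >= h_lim else z_max
    else:
        surf = np.where(z_max >= h_lim, smoothed, z_max)
    # a cell is a local maximum if it equals the 3x3 max-filtered surface
    is_max = surf == ndimage.maximum_filter(surf, size=3)
    rows, cols = np.nonzero(is_max)
    return rows, cols, surf[rows, cols]  # candidate tree tops and h_loc
```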
The stem volume was estimated with five different regression models. The independent variables of the regression models were derived from the ALS returns in each circular field plot with 12 m radius
in two cases (Equations (5) and (8)) and from the ALS returns in raster cells of 0.5 m in the other cases (Equations (6), (7) and (9)).

$\ln(V_i) = \alpha_0 + \alpha_1 \ln(p_{60,i}) + \alpha_2 \ln(p_{90,i}) + \alpha_3 \ln(r_{veg,i}) + \epsilon_i$ (5)

$\ln(V_i) = \alpha_0 + \alpha_1 \ln(r_{h3,i}) + \alpha_2 \ln(r_{veg,i}) + \alpha_3 \ln(h_{mean,i}) + \epsilon_i$ (6)

$\ln(V_i) = \alpha_0 + \alpha_1 \ln(r_{h3,i}) + \alpha_2 \ln(r_{veg,i}) + \alpha_3 \ln(h_{99,i}) + \epsilon_i$ (7)

$V_i = \alpha_0 + \alpha_1 V_{h1,i} + \alpha_2 V_{h2,i} + \alpha_3 V_{h3,i} + \alpha_4 V_{h4,i} + \epsilon_i$ (8)

$V_{i,raster} = \alpha_0 + \alpha_1 V_{h1,i,raster} + \alpha_2 V_{h2,i,raster} + \alpha_3 V_{h3,i,raster} + \alpha_4 V_{h4,i,raster} + \epsilon_i$ (9)
The models in Equations (5–7) were selected with best subset regression. For the models in Equations (5–7), the stem volume was calculated as the exponential function of the estimated values. This
introduces a bias (e.g., [23,24]). Due to this, the estimates were corrected for logarithmic bias by multiplying the result with the mean value of the stem volumes from the dataset on which the
regression models were based, divided by the mean value of the stem volume estimates using the dataset on which the regression models were based [25].
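The ratio-type correction for logarithmic bias described above can be written compactly (a sketch; the argument names are illustrative):

```python
import numpy as np

def log_bias_correction(v_field, lnv_hat_train, lnv_hat_new):
    """Ratio correction for back-transformed log-log estimates [25]:
    scale exp(new estimates) by mean(field volumes) divided by
    mean(exp(fitted values on the training data))."""
    c = v_field.mean() / np.exp(lnv_hat_train).mean()
    return c * np.exp(lnv_hat_new)
```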
The stem number and the basal area were estimated with two methods: (i) an area-based approach and (ii) an individual tree-based approach.
In the area-based approach, the stem number (Equations (10) and (11)) and the basal area (Equations (12) and (13)) were estimated with different regression models. The independent variables of the
regression models were derived from the ALS returns in each circular field plot with 12 m radius in two cases (Equations (10) and (12)) and from the ALS returns in raster cells of 0.5 m in two cases
(Equations (11) and (13)).

$N_i = \alpha_0 + \alpha_1 p_{30,i} + \alpha_2 p_{50,i} + \alpha_3 p_{80,i} + \alpha_4 r_{veg,i} + \epsilon_i$ (10)

$N_i = \alpha_0 + \alpha_1 r_{h3,i} + \alpha_2 r_{veg,i} + \alpha_3 h_{99,i} + \epsilon_i$ (11)

$\ln(BA_i) = \alpha_0 + \alpha_1 \ln(p_{10,i}) + \alpha_2 \ln(p_{60,i}) + \alpha_3 \ln(r_{veg,i}) + \epsilon_i$ (12)

$\ln(BA_i) = \alpha_0 + \alpha_1 \ln(r_{h3,i}) + \alpha_2 \ln(r_{veg,i}) + \alpha_3 \ln(h_{99,i}) + \epsilon_i$ (13)
The models were selected with best subset regression. The models in Equations (12) and (13) were corrected for logarithmic bias [25].
In the individual tree-based approach, values of DBH were calculated using a relationship between DBH and tree height based on a regression model for the subsample of trees where the heights were
measured (Equation (14)):

$h_j = \beta_0 + \beta_1 \ln(DBH_j) + \epsilon_j$ (14)

where DBH[j] is the DBH of tree j and h[j] is the height of tree j, and assuming that the heights of the local maxima h[loc] were the
tree heights. The regression model was based on all tree species since the tree species was not determined from the ALS data. The stem number was derived as the number of local maxima in a field plot
divided by the area, and the basal area was calculated from the estimated DBH values.
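A sketch of the individual tree-based estimates, inverting Equation (14) to obtain DBH from the detected local-maxima heights (the DBH units follow the field protocol and are an assumption here):

```python
import numpy as np

def stems_and_basal_area(h_loc, beta0, beta1, area):
    """Stem number and basal area from tree-top heights, inverting
    Equation (14): h = beta0 + beta1 * ln(DBH)."""
    dbh = np.exp((h_loc - beta0) / beta1)        # estimated DBH per tree
    n = h_loc.size / area                        # stems per unit area
    ba = np.sum(np.pi * dbh ** 2 / 4.0) / area   # basal area per unit area
    return n, ba
```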
The accuracy of the estimates from ALS data was validated using leave-one-out cross-validation for one field plot at a time: one field plot was excluded; the parameters of the models were estimated
based on the remaining field plots and then applied to the excluded field plot to estimate forest variables. The accuracy was validated with the field-measured values using the RMSE and the bias
(Equations (15) and (16)).

$RMSE = \sqrt{\frac{\sum_{i=1}^{n} (\hat{Y}_i - Y_i)^2}{n}}$ (15)

$bias = \frac{\sum_{i=1}^{n} (\hat{Y}_i - Y_i)}{n}$ (16)

where Y[i] is the stem volume, stem number or basal area in plot i, and n is the number of field
plots. Furthermore, scatter plots were generated. The validation was done by both including all field plots as well as excluding field plots with > 80% basal area from oak, which were five field
plots out of 68.
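The leave-one-out validation with RMSE and bias (Equations (15) and (16)) can be sketched as follows; `fit` and `predict` stand in for whichever regression model is being validated:

```python
import numpy as np

def loocv(X, y, fit, predict):
    """Leave-one-out cross-validation returning RMSE and bias."""
    n = len(y)
    y_hat = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        model = fit(X[keep], y[keep])        # refit without plot i
        y_hat[i] = predict(model, X[i:i + 1])
    err = y_hat - y
    return np.sqrt(np.mean(err ** 2)), np.mean(err)
```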
The RMSE of the estimated stem volume was largest for the regression model in Equation (5) and smallest for the regression model in Equation (7) (Table 1). The bias was less than 2% for all
regression models. For larger field-measured values, the deviation between estimated and field-measured values was larger and a few outliers were observed (Figure 4).
The RMSE of the estimated stem volume was smaller when excluding field plots with > 80% basal area from oak (Table 2). The regression model in Equation (7) had the smallest RMSE also in this case.
The bias was still less than 2%. The estimated values showed fewer obvious outliers (Figure 5).
The RMSE of the estimated stem number was smallest for the regression model in Equation (11) and second smallest for the regression model in Equation (10). The bias was close to zero in both cases
(Table 3). When tree tops were identified from local maxima in the nDSM, the RMSE of the estimated stem number was in general larger for larger filter sizes and smaller for conditions when z[max] was
filtered more often (i.e., always filtered or lower h[lim]). The bias was in general lower the more z[max] was filtered (i.e., larger filter sizes or filtered more often). All cases showed outliers
for large field-measured values and low estimated values (Figure 6).
The RMSE of the estimated stem number was slightly larger for the regression model when excluding field plots with > 80% basal area from oak (Table 4 and Figure 7). When tree tops were identified
from local maxima in the nDSM, the RMSE was larger when z[max] was always mean filtered or mean filtered if h[99] ≥ 15 m, and smaller for the other cases. The bias changed in a negative direction for
all cases. The relative order of the RMSE and bias was the same as when all field plots were included.
The RMSE and bias of the estimated basal area was smallest for the regression model in Equation (13) and Equation (12) (Table 5). When the basal area was calculated from the DBH derived from the
local maxima, the RMSE of the estimated basal area was in general smaller for larger filter sizes and for conditions when z[max] was filtered more often (i.e., always filtered or lower h[lim]). The
bias was in general lower the more z[max] was filtered (i.e., larger filter sizes or filtered more often). The estimated values deviated more from the field-measured values for the basal area
calculated from the DBH derived from the local maxima than for the regression model (Figure 8). In the first case, the basal area was overestimated for larger field-measured values.
The RMSE of the estimated basal area was smaller for the regression model in Equations (12) and (13) when excluding field plots with > 80% basal area from oak (Table 6). When the basal area was
calculated from the DBH derived from the local maxima, the RMSE was larger and the bias was higher. The relative order of the RMSE and bias was the same as when all field plots were included. The
estimated values showed a similar pattern as when all field plots were included (Figure 9).
The most accurate estimates of stem volume, stem number and basal area were achieved with regression models that used rasterized (0.5 m raster cells) ALS data as input instead of 3D point cloud data
directly. This suggests that the raster cells can compensate for the varying density of the ALS data and the variability of the forest properties within the field plots. For the two stem volume
models that used input measures calculated at plot level from the normalized 3D point cloud directly, the canopy volume regression model was more accurate than the log-log regression model including
the vegetation ratio and measures of the height of the ALS returns. However, the most accurate estimate was achieved with a log-log regression model including the vegetation ratio and a measure of
the maximum height of the ALS returns derived from 0.5 m raster cells. Apart from the canopy volume models, the final models were selected with best subset regression, which means that the selection
of independent variables was based on the reference data. Since the parameters of the regression model are also estimated based on the reference data, the model can be fitted very well to the
reference data. However, it requires that the local reference dataset is large enough to base the models on. The canopy volume model is stable in the sense that the independent variables are not
selected based on the local reference data, which might have advantages for estimation of stem volume for large areas. The stem volume used as ground truth was estimated with regression models with a
comparatively high RMSE, which was around 20% for most of the trees. This makes the validation more uncertain. Excluding field plots with > 80% basal area from oak resulted in fewer outliers since
most of the outliers were oak dominated field plots. Previous studies have reported larger errors for estimation of stem volume and basal area in mixed forest than in coniferous dominated forest
[2,17,26] since field plots with different properties are included in the same model. The stem volume of oak is generally higher than that of most other tree species having the same tree height. In
this study, only five out of 68 field plots were oak dominated. This means that the models where all field plots were included were mainly based on forest with a smaller fraction of oak and resulted
in large errors when they were applied to oak dominated forest.
The estimation of the basal area showed fewer outliers for large field-measured values than the estimation of stem volume. The reason may be that the outliers for stem volume were mostly oak
dominated field plots and that the relationship between DBH and tree height is more similar for oak and other tree species than the relationship between stem volume and tree height. The RMSE of the
regression estimates decreased slightly when excluding oak dominated field plots in the same way as for the estimation of stem volume. However, the RMSE and the bias of the basal area derived from
local maxima increased when excluding oak dominated field plots. This may be because the regression model used for calculating DBH from the heights of the local maxima underestimated DBH for tall
trees and the oaks were taller than the average tree. This negative contribution to the bias disappeared when excluding the oak dominated field plots and the result was a higher bias.
The bias of the estimated stem number changed in the negative direction when excluding oak dominated field plots. The reason was that the excluded field plots were outliers with an overestimated stem
number. This may be due to the canopy of oak having more small variations than other species, which gives rise to several local maxima within the same tree crown. A few other field plots were
outliers with an underestimated stem number for large field-measured values. This is expected when identifying trees from local maxima since trees below the dominant tree layer are not visible in the nDSM.
The estimates of stem number and basal area were more accurate for the regression models than when identifying tree tops from local maxima in the nDSM. The advantage of the latter is that a list of
DBH estimates is produced at the same time. Distributions of DBH have previously been estimated from height and density measures from ALS data and theoretical diameter distribution models [27] and as
percentiles of DBH [28]. The advantage of using local maxima in the nDSM is that they also describe the horizontal distribution of the ALS data that percentiles and density do not.
When tree tops were identified from local maxima in the nDSM, the RMSE of the estimated stem number was smallest when the nDSM was mean filtered for local z[max] ≥ 15 m and largest when the nDSM was
mean filtered for local z[max] ≥ 20 m. The bias was large and positive when the nDSM was mean filtered for local z[max] ≥ 20 m, and closer to zero when the nDSM was mean filtered for local z[max] ≥
15 m. The large positive bias in the first case was probably caused by small variations that gave rise to local maxima then identified as tree tops since the nDSM was filtered less often than for the
other cases. The nDSM was filtered more often with the condition h[99] ≥ 15 m than local z[max] ≥ 15 m and the result was a lower bias in the first case. The same effect was visible for h[99] ≥ 20 m
and local z[max] ≥ 20 m. The accuracy was similar when the nDSM was always mean filtered and when the nDSM was mean filtered if h[99] ≥ 15 m. This is probably because h[99] was rarely below 15 m, so
in the second case the nDSM was almost always filtered.
For the estimated basal area derived from local maxima, the RMSE was smallest when the nDSM was always mean filtered or mean filtered if h[99] ≥ 15 m, and largest when the nDSM was mean filtered for
local z[max] ≥ 20 m. The bias was lowest in the first case and highest in the second case. An adaptive filtering may improve the identification of local maxima corresponding to tree tops but the
method may also be very sensitive to parameter settings. Additionally, the filter sizes are limited to odd multiples of the size of the raster cells. This means that the conditions for setting
different parameters must be chosen carefully. In future work, definition of the conditions that can be applied to different forest types will be needed.
The RMSE of the estimated stem number was smallest when the nDSM was mean filtered with a 3 × 3 filter and the bias was highest. The RMSE of the estimated stem number was larger when a 5 × 5 and 7 ×
7 filter was used and the bias was lower. This suggests that the larger filter sizes removed small variations in the nDSM that would otherwise have given rise to local maxima. However, the RMSE of
the estimated basal area was smallest when a 7 × 7 filter was used and the bias was lowest. The bias was large and positive for the filter size 3 × 3 (i.e., the basal area was overestimated) and the
basal area was overestimated for larger field-measured values for all filter sizes. Since the basal area was calculated from the DBH and the DBH was derived from the heights of the local maxima in
the nDSM, this suggests that the heights of the local maxima overestimated the tree heights. The reason may be that not all local maxima corresponded to tree tops. Tree tops below the dominant tree
layer do not give rise to local maxima in the nDSM. This means that the stem number will be underestimated if the nDSM is filtered so that only tree tops give rise to local maxima. If the number of
local maxima is equal to the stem number, some of the local maxima do not correspond to tree tops and the heights of the local maxima overestimate the heights of the trees below the dominant tree
layer. This may explain why the basal area was overestimated with a 3 × 3 filter size even though the estimate of the stem number was most accurate. | {"url":"http://www.mdpi.com/2072-4292/4/4/1004/xml","timestamp":"2014-04-21T15:53:19Z","content_type":null,"content_length":"113360","record_id":"<urn:uuid:7a1a6754-f002-44d8-b192-6fa0afc78d0b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00601-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gerzon Nested MIMO Allpass
An interesting generalization of the single-input, single-output Schroeder allpass filter (defined in §2.8.1) was proposed by Gerzon [158] for use in artificial reverberation systems.
The starting point can be the first-order allpass of Fig. 2.31a, or the allpass made from two comb-filters depicted in Fig. 2.30. In either case,
let $\underline{x}(n)$ and $\underline{y}(n)$ denote vectors of $N$ input and output signals, with z transforms $\underline{X}(z)$ and $\underline{Y}(z)$. Denoting by $\mathbf{U}(z)$ the $N\times N$ matrix transfer function that replaces the delay line $z^{-M}$ of the scalar allpass, the difference equation becomes, in the frequency domain (cf. Eq. (2.15)),

$$\underline{Y}(z) = g\,\underline{X}(z) + \mathbf{U}(z)\left[\underline{X}(z) - g\,\underline{Y}(z)\right],$$

which leads to the matrix transfer function

$$\mathbf{H}(z) = \left[\mathbf{I} + g\,\mathbf{U}(z)\right]^{-1}\left[g\,\mathbf{I} + \mathbf{U}(z)\right],$$

where $\mathbf{I}$ denotes the $N\times N$ identity matrix, and $\mathbf{U}(z)$ is a paraunitary matrix transfer function [502], [452, Appendix C].
Note that $\mathbf{U}(z)$ appears in both the feedforward and feedback paths; to avoid implementing it twice, the filter can be realized in direct-form II, viz.,

$$\underline{V}(z) = \underline{X}(z) - g\,\mathbf{U}(z)\,\underline{V}(z), \qquad \underline{Y}(z) = g\,\underline{V}(z) + \mathbf{U}(z)\,\underline{V}(z).$$
To avoid a delay-free loop, the paraunitary matrix $\mathbf{U}(z)$ must include at least one pure delay in every row, i.e., $\mathbf{U}(z) = z^{-1}\tilde{\mathbf{U}}(z)$ with $\tilde{\mathbf{U}}(z)$ causal.
In [158], Gerzon suggested using

$$\mathbf{U}(z) = \mathbf{D}(z)\,\mathbf{Q},$$

where $\mathbf{Q}$ is an $N\times N$ orthogonal matrix, and

$$\mathbf{D}(z) = \mathrm{diag}\left(z^{-M_1},\, z^{-M_2},\, \ldots,\, z^{-M_N}\right)$$

is a diagonal matrix of pure delays, with the delay lengths $M_i$ chosen to be mutually prime (as suggested by Schroeder [420,421] for a series combination of Schroeder allpass sections). This structure is very close to that of typical feedback delay networks (
FDN), but unlike FDNs, which are "vector feedback comb filters," the vectorized Schroeder allpass is a true multi-input, multi-output (MIMO) allpass filter.
Gerzon further suggested replacing the feedback and feedforward gains $g$ by digital filters $G(z)$ having an amplitude response bounded by 1. In principle, this allows the network to be arbitrarily different at each frequency.
Gerzon's vector Schroeder allpass is used in the IRCAM Spatialisateur [219].
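Based on the reconstruction above, a time-domain sketch of the vector Schroeder allpass with $\mathbf{U}(z)=\mathbf{D}(z)\mathbf{Q}$ (the delay lengths, the random orthogonal mixing matrix, and the gain value below are illustrative assumptions, not Gerzon's published parameters):

```python
import numpy as np

def gerzon_vector_allpass(x, g=0.7, delays=(113, 127, 149, 151), seed=0):
    """Vector Schroeder allpass in direct-form II:
        v(n) = x(n) - g*u(n),   y(n) = g*v(n) + u(n),
    where u is v filtered by U(z) = D(z) Q: orthogonal mixing Q
    followed by per-channel pure delays. x has shape (samples, N)."""
    T, N = x.shape
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # orthogonal mixer
    lines = [np.zeros(d) for d in delays]             # circular delay lines
    ptr = [0] * N
    y = np.zeros_like(x)
    for n in range(T):
        u = np.array([lines[k][ptr[k]] for k in range(N)])  # oldest samples
        v = x[n] - g * u
        y[n] = g * v + u
        w = Q @ v                                     # mix, then delay
        for k in range(N):
            lines[k][ptr[k]] = w[k]
            ptr[k] = (ptr[k] + 1) % delays[k]
    return y
```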
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email] | {"url":"https://ccrma.stanford.edu/~jos/pasp/Gerzon_Nested_MIMO_Allpass.html","timestamp":"2014-04-19T04:50:36Z","content_type":null,"content_length":"21634","record_id":"<urn:uuid:4cdaff0c-dfc4-4d9b-a160-5c717bf2722a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
ISRN Computational Mathematics
Volume 2012 (2012), Article ID 981501, 4 pages
Research Article
Stochastic Signatures of Phase Space Decomposition
^1Depaul University, College of Digital Media and Computing, 243 South Wabash Avenue, Chicago, IL 60604-2301, USA
^2Department of Chemistry and Seaver Chemistry Laboratory, Pomona College, Claremont, CA 91711, USA
Received 28 July 2011; Accepted 15 September 2011
Academic Editors: M.-B. Hu and O. Kuksenok
Copyright © 2012 John J. Kozak and Roberto A. Garza-López. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We explore the consequences of metrically decomposing a finite phase space, modeled as a d-dimensional lattice, into disjoint subspaces (lattices). Ergodic flows of a test particle undergoing an
unbiased random walk are characterized by implementing the theory of finite Markov processes. Insights drawn from number theory are used to design the sublattices, the roles of lattice symmetry and
system dimensionality are separately considered, and new lattice invariance relations are derived to corroborate the numerical accuracy of the calculated results. We find that the reaction efficiency
in a finite system is strongly dependent not only on whether the system is compartmentalized, but also on whether the overall reaction space of the microreactor is further partitioned into separable reactors. The sensitivity of kinetic processes in nanoassemblies to the dimensionality of compartmentalized reaction spaces is quantified.
1. Introduction
To provide a physical motivation for the present study, understanding the factors influencing self-assembly in nanophase materials is a major experimental and theoretical challenge [1, 2]. Modeling
even the early stages of self-assembly already introduces fundamental questions, for example, whether the process is reducible to a sequence of elementary steps occurring under equilibrium conditions
[3, 4]. The role of different system geometries in influencing the efficiency of reactions in a finite, compartmentalized system [5] is an ongoing problem, one aspect of which is explored
quantitatively here.
Before discussing the technical aspects of our work, we illustrate the main idea with a simple example. Consider a d = 2 dimensional lattice of 100 sites. One possible representation of such a lattice would be
a 10 × 10 system (see Figure 1). But, another representation that comprises 100 sites would be the disjoint pair of lattices, a 6 × 6 lattice with 36 sites, and an 8 × 8 lattice with 64 sites, linked at a common corner site. To
monitor ergodic flows on the assigned phase space(s), we place a sink (trap) at one of the corner vertices of the 10 × 10 lattice and at the common vertex of the two sublattices. The transit time of a test
particle undergoing unbiased, random displacements until, eventually, it is (irreversibly) localized at the trap can be determined by formulating and then solving numerically the stochastic master
equation [6] for the problem. By extracting the first moment of the underlying probability distribution function, specified by the set of site-specific mean walklengths ⟨n(i)⟩, the overall mean number of
steps before localization, ⟨n⟩, a signature of the mean transit time, can be obtained.
One fully expects to get different values of ⟨n⟩ for these two lattice representations, but the question is "how different?" Also, the problem was illustrated above using (three) N × N lattices. How do the
results change if one or more of the disjoint spaces are N × M lattices, that is, not approximately "square?" Is the difference in reaction efficiency magnified or suppressed? And, further, do these
differences persist if one considers phase spaces of higher dimension? It is to the above questions that the present contribution is directed.
Fundamental results in number theory can be used to motivate the choices of lattices for which the metric decomposability of phase space can be studied. For dimension d = 2, one can take advantage of
Fermat's last theorem, namely, that one can find three positive integers x, y, and z that satisfy the equation x^n + y^n = z^n for n = 2 for a given choice of z. In d = 2, the decomposition is not unique; for any given z, there may
be (and usually is) more than one choice of (x, y) that satisfies the theorem.
In three dimensions, however, the possibility of finding multiple lattices with such a degeneracy is much more limited, in fact, just the set of Hardy-Ramanujan numbers. The first is 1729 = 1^3 + 12^3 = 9^3 + 10^3; the second
"taxi cab" number is 4104 = 2^3 + 16^3 = 9^3 + 15^3. These lattices are, practically speaking, as far as one can go. The next smallest Hardy-Ramanujan number is 13832 = 2^3 + 24^3 = 18^3 + 20^3, the last factorization one of several possible. Given the
computational demands in implementing the theory of finite Markov processes, these lattices are just too large; hence, our results for dimension d = 3 presented in Section 3 are (much) more limited than
for d = 2.
The following section specifies the lattice statistical parameters needed to develop the Markovian theory. Section 3 presents the results of a series of calculations for d = 2 and d = 3 dimensional lattices. In
Section 4, site-specific walklength data are used to deduce two new invariance relations, valid for finite lattices, which must be satisfied exactly to confirm that the computational results are
numerically exact. In the final section, we comment on the relevance of our results to self-assembly in nanosystems.
2. Formulation
We consider finite planar lattices, each site of which is of valence (connectivity) v = 4, and finite d = 3 dimensional lattices, each site of which is of valence v = 6. At any interior site of the lattice, the test
particle can move with equal a priori probability in any one of v directions. If the particle initiates its motion at an edge site on a d = 2 lattice, it can move in one of three directions on the lattice,
but if it attempts to step "out of" the lattice, it is reset at the starting position. If the particle happens to be at a corner site, it can move to two adjacent lattice sites, remaining on the
lattice, or is "reset" (stalls) at the corner site twice, rather than leaving the lattice. A similar protocol is implemented on d = 3 lattices, where a particle on an interior site can move in six directions,
on a face site in five directions with one reset, on an edge site in four directions with two resets, and on a corner site in three directions with three resets.
In all calculations here, the sink is placed at one corner site. Previous studies [5] have documented that moving the trap away from a centrosymmetric location will increase the walklength. However,
the parent lattice or the lattices that result when one deconstructs a given initial lattice may not have a unique centrosymmetric site; for definiteness, we anchor the trap at a corner site. All
calculations were carried out for nearest-neighbor displacements only and in the absence of any governing potentials; both restrictions can be lifted in further work.
All results reported here, Tables 1–3, are numerically exact (see Section 4). Once the problem is formulated, no further approximations are made in carrying out the computational program.
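To make the Markovian formulation concrete, here is a minimal sketch (the lattice indexing and the self-loop treatment of boundary resets are simplifications of my own) that computes the site-specific mean walklengths on a square-planar lattice with one corner trap by solving the linear system (I - Q) n = 1 over the non-trap sites:

```python
import numpy as np

def mean_walklengths(L, trap=(0, 0)):
    """Site-specific mean walklengths <n(i)> on an L x L lattice with one
    deep trap; boundary resets are self-loops, so every site keeps
    effective valence 4."""
    N = L * L
    idx = lambda r, c: r * L + c
    P = np.zeros((N, N))
    for r in range(L):
        for c in range(L):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                j = idx(rr, cc) if 0 <= rr < L and 0 <= cc < L else idx(r, c)
                P[idx(r, c), j] += 0.25          # reset maps back to the site
    t = idx(*trap)
    keep = [i for i in range(N) if i != t]
    Q = P[np.ix_(keep, keep)]
    n = np.linalg.solve(np.eye(N - 1) - Q, np.ones(N - 1))
    return n, n.mean()   # walklengths and overall <n> over non-trap sites
```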
3. Results
Displayed in Table 1 are representative results for the d = 2 dimensional case. In every instance, one expects and finds that the results for ⟨n⟩ are systematically larger for the "parent" lattice than for the
corresponding two disjoint lattices. The more “symmetric” and “similar” are the pair of disjoint lattices, the more efficient is the underlying ergodic process. Computing the percent difference
between results for the composite lattice versus two disjoint (but linked) lattices, one finds that the difference is always greater than 10%, and frequently much greater (in the range from 10%–30%,
depending on lattice symmetry).
In Table 2, we find that the difference in results for d = 3 dimensions is much less severe. For d = 3, the percentage is less than 4%. Hence, dimensionality appears to play the critical role in cases studied
here. Note that there is a companion, d = 2 dimensional result in Table 2 which was obtained by taking advantage of the decomposition of the lattice into the average of the greatest member in each pair of Brown
numbers, (5, 4), (11, 5), and (71, 7). Once again, the difference between the (here) three disjoint lattices and the parent lattices is ~15%.
We present in the following section results for two new lattice invariance relations that can be used to check the accuracy of the site-specific values that underlie the data given in Tables 1 and 2.
4. Invariance Relations
Over 40 years ago, Montroll and Weiss [7] reported for lattices subject to periodic boundary conditions an exact, analytic result for the average number of steps taken by a random walker from a site
which is a first nearest-neighbor to a (deep) trap. Further, they discussed how one would proceed to derive similar analytic expressions starting from sites second, third, ..., kth nearest neighbor(s) to
the trap, emphasizing that the derived expressions will depend on the structure of the lattice.
In previous work, we pursued the Montroll-Weiss program and obtained additional invariance relations for hexagonal, square planar, and cubic lattices subject to periodic boundary conditions, each
having a single, centro-symmetric, deep trap [8–10]. Since satisfaction of lattice invariance relations is an "acid test" on the accuracy of numerical calculations of the site-specific ⟨n(i)⟩ (and overall
mean ⟨n⟩), we used the methods described in [8–10] to develop invariance relations for finite lattices.
The corresponding first Montroll-Weiss invariance relation for finite lattices holds for a trap positioned either at an interior site or a surface site. The second invariance relation obtained for
finite lattices is for d = 2 lattices with a trap at a corner site and for d = 3 lattices with a trap at a corner site.
It is important to stress that the right-hand side of each of the above expressions is an integer. Hence, calculating the site-specific walklengths ⟨n(i)⟩ in integer format, constructing the two invariance sums, and comparing the results obtained
with the (integer) results predicted by the right-hand side of the exact formulae above confirms (or not) the numerical accuracy of our calculations. Table 3 gives the results for both invariance sums for all the
lattices considered in this study and provides the desired confirmation.
5. Conclusions
In this contribution we have studied ergodic flows of a random walker undergoing unbiased displacements in a positional phase space represented by a host lattice, and characterized quantitatively the
consequences of different metric decompositions of a given, finite parent lattice. As a model for studying the early stages of self-assembly of nanoparticles, the results here complement those
reported in [3, 4] where the phase (reaction) space was defined by a host, periodic lattice. Specifically, by breaking the translational symmetry of the reaction space, we find that the reaction
efficiency in a finite system is strongly dependent not only on whether the system is compartmentalized, but also on whether the overall reaction space of the microreactor is further decoupled into
separable reactors.
Finally, the influence of system dimensionality on the efficiency of kinetic processes carried out in compartmentalized microreactors is quantified. We find that more significant differences are
encountered when a d = 2 reaction space is deconstructed than a d = 3 reaction space. Phrased in the language of experimental studies on nanoparticle kinetics, our results suggest that reactions carried out on
articulated surfaces of a compartmentalized nanoparticle are likely to influence the efficiency of reaction to a greater extent than if the same reaction occurs in the partitioned interior of a nanoparticle.
One of the authors (J. J. Kozak) gratefully appreciates conversations with and the technical assistance of Amelia E. Pawlak, DePaul University. The work of Professor R. A. Garza-López was supported
by the Hirsh Research Initiation Grant, the Howard Hughes Medical Institute Research Program, and the Summer Undergraduate Research Program from the Pomona College.
1. A. N. Goldstein, Handbook of Nanophase Materials, Dekker, New York, NY, USA, 1977.
2. P. Jensen, “Growth of nanostructures by cluster deposition : a review,” Reviews of Modern Physics, vol. 71, no. 5, pp. 1695–1735, 1999.
3. J. J. Kozak, C. Nicolis, and G. Nicolis, "Modeling the early stages of self-assembly in nanophase materials," Journal of Chemical Physics, vol. 126, no. 15, Article ID 154701, 2007.
4. J. J. Kozak and G. Nicolis, "Modeling the early stages of self-assembly in nanophase materials. II. Role of symmetry and dimensionality," Journal of Chemical Physics, vol. 134, no. 6, Article ID 064701, 8 pages, 2011.
5. J. J. Kozak, "Chemical reactions and reaction efficiency in compartmentalized systems," Advances in Chemical Physics, vol. 115, pp. 245–406, 2000.
6. R. A. Garza-López, P. Bouchard, G. Nicolis, M. Sleutel, J. Brzezinski, and J. J. Kozak, "Kinetics of docking in postnucleation stages of self-assembly," Journal of Chemical Physics, vol. 128, no. 11, Article ID 114701, 2008.
7. E. W. Montroll and G. H. Weiss, "Random walks on lattices. II," Journal of Mathematical Physics, vol. 6, no. 2, pp. 167–181, 1965.
8. R. A. Garza-López and J. J. Kozak, "Invariance relations for random walks on hexagonal lattices," Chemical Physics Letters, vol. 371, no. 3-4, pp. 365–370, 2003.
9. R. A. Garza-López and J. J. Kozak, "Invariance relations for random walks on square-planar lattices," Chemical Physics Letters, vol. 406, no. 1–3, pp. 38–43, 2005.
10. R. A. Garza-López, A. Linares, A. Yoo, G. Evans, and J. J. Kozak, "Invariance relations for random walks on simple cubic lattices," Chemical Physics Letters, vol. 421, no. 1–3, pp. 287–294, 2006. | {"url":"http://www.hindawi.com/journals/isrn.computational.mathematics/2012/981501/","timestamp":"2014-04-19T00:07:17Z","content_type":null,"content_length":"104131","record_id":"<urn:uuid:1f82ec8a-d061-4ae3-a028-f9a3ac5c5c3f>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Compiled by Greg Fasshauer
Nicht etwa, daß bei größerer Verbreitung des Einblickes in die Methode der Mathematik notwendigerweise viel mehr Kluges gesagt würde als heute, aber es würde sicher viel weniger Unkluges gesagt.
- Karl Menger
[Not that, if one were to spread the insight into the methods of mathematics more widely, this would necessarily result in many more intelligent things being said than today, but certainly many fewer
unintelligent things would be said.]
Principal Dates
1902 born in Vienna
1920-1924 studied at the University of Vienna; Ph.D. in Mathematics
1925-1927 docent at the University of Amsterdam
1927-1936 professor of geometry at the University of Vienna
1930-1931 visiting lecturer at Harvard University and The Rice Institute
1931-1937 founder of the Ergebnisse eines Mathematischen Kolloquiums
1937-1946 founder of Reports of a Mathematical Colloquium, 2nd series, and Notre Dame Mathematical Lectures [1944-1][1944-6]
1946-1971 professor of mathematics, Illinois Institute of Technology, Chicago
1951 visiting lecturer at the Sorbonne, Paris
1961 guest lecturer in several European universities
1964 visiting professor, University of Arizona and Ford Institute, Vienna
1968 visiting professor, Middle East Technical University, Ankara
1971 professor emeritus in Chicago
1975 Austrian Cross of Honor for Science and Art First Class
1983 Doctorate of Humane Letters and Sciences, Illinois Institute of Technology
1985 died in Highland Park near Chicago
Karl Menger was one of the leading mathematicians of the 20th century. Mario Bunge, in the review of a collection of Menger's papers [Sel], says:
The author's publications span a half-century period and they cover an amazing variety of fields, from logic to set theory to geometry to analysis to the didactics of mathematics to economics to
Gulliver's interest in mathematics... This selection should be of interest to foundational workers, mathematics teachers, philosophers, and all of those who suffer from nostalgia for the times when
mathematicians, like philosophers, were interested in all conceptual problems.
Karl Menger was born on January 13, 1902 in Vienna. His father was the famous Austrian economist Carl Menger (1840-1921) who was one of the founders of marginal utility theory [Gru]. His mother
Hermione Andermann, who was thirty years younger than her husband, was a noted novelist who was also interested in music.
As a young man Karl explored his talents for literature. In fact, in 1920 Menger was in his last year at the Döblinger Gymnasium in Vienna where he was a classmate of Heinrich Schnitzler's whose
father Arthur was a well-known dramatist (two other classmates were the future Nobel Laureates Wolfgang Pauli and Richard Kuhn). Between the years 1920 and 1931 there are several entries concerning
Menger in Arthur Schnitzler's diary. None of the remarks concerning Menger's writing are very favorable. In fact, from 1920 until 1923 Menger was working on a drama about the apocryphal Pope Joan.
Schnitzler refers to Menger's writing as unsuccessful, not favorable, and remarks that Menger has no literary ambitions other than wanting to complete this one drama. Schnitzler, besides noting
Menger's genius and talents (for mathematics and physics), refers to Menger as a person who is not quite normal, a strange fellow, a megalomaniac, and suicidal.
Menger entered the University of Vienna in the fall of 1920 to study physics. However, when he heard Hans Hahn lecture on Neueres über den Kurvenbegriff (What's new concerning the concept of a curve)
in March of 1921 his interests were redirected towards mathematics. In the aforementioned lecture Hahn pointed out that there was at that time no satisfactory definition of a curve. In particular he
presented the failed attempts of Georg Cantor, Camille Jordan and Giuseppe Peano. Moreover, such authorities as Felix Hausdorff and Ludwig Bieberbach had declared this problem to be virtually
unsolvable. Within a few days Menger, an undergraduate student with no other mathematical background information about the problem other than what Hahn had told him, came up with a solution, and
presented it to Hahn. Menger's interest in this topic led to his work in curve and dimension theory.
In a lecture given at the University of Chicago in 1976, Menger recalled these events.
While working late at night and in damp rooms, Menger fell severely ill with tuberculosis. He was sent to a sanatorium in Aflenz (Styria), in the Austrian Alps, from the fall of 1921 to April 1923, and ended up
doing much of his fundamental work while recovering there. During this time both of his parents passed away.
At the same time Menger was formulating his ideas, the young Russian mathematician Pavel Urysohn (born in 1898 and died in a drowning accident in 1924) also constructed a theory of dimension in the
years 1921 and 1922. According to George Temple [100]:
Menger's definition is undoubtedly simpler and more general than Urysohn's, and the question of priority is of minor importance.
After Menger returned from the sanatorium he continued his studies under Hahn, and received his doctorate in 1924.
In March 1925 Menger went to Amsterdam to work with the eminent mathematician L.E.J. Brouwer. In Amsterdam Menger continued to work on curve theory and dimension theory, and also gained insight into
logic and the foundations of mathematics. In particular, he was exposed to Brouwer's intuitionistic interpretation of science and mathematics. Witold Hurewicz, who had worked on dimension theory
under Menger in Vienna, followed him to Amsterdam in the spring of 1926. After earning his habilitation in the fall of 1926 Menger returned to the University of Vienna (after two somewhat quarrelsome
years with Brouwer) and was appointed Extraordinarius für Geometrie in February 1927 (replacing Kurt Reidemeister, who had left to take a chair in Königsberg).
In the fall of 1927 Karl Menger became a member of the famous Vienna Circle. This group of about three dozen philosophers, logicians, mathematicians, as well as natural and social scientists was
started by Moritz Schlick, Otto Neurath and Hans Hahn and became known to the public in 1929 with the publication of a manifesto entitled The Scientific World View. The Vienna Circle. The meetings of
the Circle took place in a rather dingy room on the ground floor of the building that housed the mathematical and physical institutes, in the Boltzmanngasse. The original members included
• Rudolf Carnap (philosopher, left Vienna in 1931 to go to Prague, and then emigrated to the United States in 1936, where he started the Chicago Circle at the University of Chicago),
• Herbert Feigl (philosopher, student of Schlick's, emigrated to the United States in 1931, founded the Minnesota Center for Philosophy of Science),
• Philipp Frank (mathematician, went to Harvard in 1938),
• Kurt Gödel (mathematician, a student of Hahn's, went to Princeton in 1938),
• Hans Hahn (professor of mathematics at the University of Vienna, and Menger's Ph.D. thesis advisor; died of complications related to cancer surgery on July 24, 1934, at the age of 54 in Vienna),
• Victor Kraft (philosopher, made contributions toward establishing ethics as a science),
• Karl Menger,
• Otto Neurath (philosopher with interests in logic, optics, the economic theory of value),
• Theodor Radakovic (assistant professor of mathematics at the Polytechnical Institute in Vienna, former student of Hahn's),
• Kurt Reidemeister (professor of geometry at the University of Vienna; departed in 1927)
• Moritz Schlick (professor of philosophy at the University of Vienna, one of the main forces behind the Vienna Circle; was shot dead by a student in 1936),
• Friedrich Waismann (studied mathematics and philosophy under Schlick).
The Circle also had a number of more or less regular guests and benefitted from frequent contacts with other scholars of the period.
Menger's more elaborate comments about some of the above listed members can be found in [Rem, pp. 54-73].
The work done by the members of the Vienna Circle may be considered among the most important and most influential thought of the twentieth century. However, the intellectual success was severely hampered by the political situation of the time. In fact, the Vienna Circle found a tragic end with the Anschluss in March of 1938.
More information on the Vienna Circle is available from the Institute Vienna Circle.
In parallel to the Vienna Circle Menger started a Mathematical Colloquium at the University of Vienna in 1928. Some of the participants and frequent lecturers included
• Franz Alt (one of Menger's Ph.D. students; emigrated to the United States in 1938; was a founding member of the Association for Computing Machinery (ACM) in 1947; later deputy director of
American Institute of Physics in the 1970s),
• Hans Hornich (specialized in complex analysis and potential theory, became a professor at the Technical University of Vienna),
• John von Neumann,
• Georg Nöbeling (student of Menger's; went to Erlangen, Germany, in 1933, where he became a professor in 1942; president of the German Mathematical Society (Deutsche Mathematiker-Vereinigung) in
the 1950s; passed away in 2008 shortly after his 100th birthday)
• R. G. Putnam,
• Karl Schlesinger (economist and banker),
• Olga Taussky-Todd (went first to Göttingen, where she helped publish David Hilbert's Collected Works; then moved to England, where she married the mathematician John Todd; finally ended up with a
most distinguished mathematical career in the United States),
• Abraham Wald (student of Menger's who obtained his Ph.D. in 1930 - after taking only three courses; started in geometry, but then turned to mathematical economics working with Oscar Morgenstern;
emigrated to the United States in 1938 with the help of Menger and studied statistics; 1944 professor at Columbia University; one of the founders of sequential analysis and statistical decision
theory; died in a plane crash in India in 1950),
The proceedings of the Colloquium (from 1928/29 through 1935/36) were edited and published by Menger (with the help of Kurt Gödel, Georg Nöbeling, Abraham Wald and Franz Alt) in the Ergebnisse eines
Mathematischen Kolloquiums. This collection of papers contains path-breaking papers by Menger, Gödel, Tarski, Wald, Wiener, John von Neumann, and many others. In particular, the field of mathematical
economics was profoundly influenced by the discussion of equilibrium equations by Schlesinger and the response by Wald in March of 1934 [Erg, pp.280-288], which as Menger said [translated from Erg,
p.290] marked
an end of the period in which economists simply formulate equations, without worrying about existence or uniqueness of their solutions.
Another paper of fundamental importance was contributed by John von Neumann [Erg, p.453], who presented a generalization of Brouwer's Fixed Point Theorem.
Karl Menger spent the academic year 1930/31 in the United States visiting Harvard University and the Rice Institute in Houston, Texas. During this visit he met many prominent American mathematicians
and philosophers of the time. In [Rem, Ch. XIII] he recalls his meetings with George David Birkhoff, Marston Morse, Henry Maurice Sheffer, Paul Weiss, Norbert Wiener, Edward Huntington, Alfred
Whitehead, Willard Van Orman Quine, Percy Bridgman, Josef Schumpeter (an economist), Charles W. Morris, Lester R. Ford, and, in Chicago, Eliakim Hasting Moore. Menger went for long hikes with Henry
Maurice Sheffer, as well as with Norbert Wiener, during which they had long discussions. Among the philosophers he was most impressed by Percy Bridgman, whom he referred to as a modern reincarnation
of [Ernst] Mach [Rem, p.166]. Karl Menger also had great respect for E. H. Moore of whom he says [Rem, p.170]
If there has been a father of American mathematics, then it certainly was Moore, the teacher of Birkhoff, Oswald Veblen, Robert Lee Moore, and most American mathematicians of their age who later
became prominent.
Menger kept in touch with his students and the Kolloquium in Vienna through Georg Nöbeling, who wrote him in early 1931 of the groundbreaking work of Kurt Gödel [Göd]. Menger promptly interrupted the
lecture series he was giving at the Rice Institute to report about Gödel's discovery. Menger says [Rem, p.203]
Thus the mathematicians at Rice Institute were probably the first group in America to marvel at this turning-point of logic and mathematics.
In 1936 Menger attended the International Congress of Mathematics in Oslo, and was elected one of its vice presidents. He described the deteriorating situation in Vienna to friends and colleagues.
Soon thereafter he was offered a position at the University of Notre Dame, Indiana.
In 1937 Menger went to the United States and accepted the position at Notre Dame. Initially Menger had only asked for an extended leave of absence from the University in Vienna, but after the war he
was not invited back. Karl Sigmund writes about the situation:
After the war, the reconstruction of the bombed-out State Opera was accorded highest priority by democratic new Austria. Men like ... Menger, however, were politely told that the University of Vienna
had no place for them.
Gödel visited Menger at Notre Dame, but Menger was not able to convince him to stay. At Notre Dame Menger started the Ph.D. program in the mathematics department (together with Arthur Milgram, Paul
Pepper, John Kelley, and Emil Artin, see [Hop]), and he organized a series of Notre Dame Mathematical Lectures (the 2nd volume of which, Emil Artin's Galois Theory, was rather well-known). Menger
also started a Mathematical Colloquium (modeled after the one in Vienna), and published the related Reports of a Mathematical Colloquium, Second Series, which appeared between 1938 and 1946. However,
WWII affected academic life in the United States, and the success of the Mathematical Colloquium was limited.
In 1935 Karl Menger married Hilda Axamit, an actuarial student. They had four children: Karl Jr., born in 1936; Rosemary and Fred, twins born in 1937; and Eve, born in 1942.
In 1946 Karl Menger was invited to join the newly founded Illinois Institute of Technology by the chairman of the mathematics department Lester R. Ford, who had been at the Rice Institute at the time
of Menger's visit there in 1931. Just a few years earlier Eduard Helly had also been called to IIT - but he died shortly after he accepted his position in 1943.
Rudolf Carnap and others had started a Chicago Circle at the University of Chicago, and Menger tried to participate as often as possible, even while still at Notre Dame in South Bend, Indiana. Karl
Menger spent the rest of his life in Chicago.
During the war years Menger had been heavily involved in the teaching of Calculus to Naval cadets. This was one of the reasons that much of his work in the 50s and 60s was concerned with mathematics
education. Among the things he wrote was a Calculus textbook in which he proposed a number of major changes in mathematical formalism and notation aimed at facilitating the teaching of basic calculus.
His booklet "You Will Like Geometry" was used as a guide to the IIT geometry exhibit at the Museum of Science and Industry in the early 1950s and included Menger's famous "Taxicab Geometry"
explanation. In the introduction to "You Will Like Geometry", Menger wrote,
'Impossible,' you say, 'Geometry is a bore. It has been dead and petrified for centuries.' But you are wrong. Geometry is amazing and ingenious and beautiful and profound; and most important, it is
alive and growing. Just follow the growth of the geometric world of plane figures through the ages.
Also in the 1950s, Menger appeared on local TV and radio programs to talk about mathematics; appeared several times in the Chicago Tribune as an expert on such topics as why students find math
difficult ("Johnny Is Puzzled By the X: That's Why He Hates Math, Expert Says"), and lectured to local high school science teachers on geometry.
In 1951/52 Menger spent a sabbatical at the Sorbonne in Paris, and in 1963 he returned to Austria for the first time since he had left in 1937. In 1968 he was a visiting professor at the Middle East
Technical University in Ankara, Turkey. In 1971 he was elected a corresponding member of the Austrian Academy of Sciences.
In 1971 Karl Menger became professor emeritus at IIT.
On June 2, 1975, in a ceremony at IIT, the Austrian Consul in Chicago presented the Austrian Cross of Honor for Science and Art First Class to Karl Menger (by then Professor Emeritus). This pleased
Menger immensely, since his father had received the same honor many years before.
IIT honored Karl Menger with a Doctorate of Humane Letters and Sciences in December of 1983.
Karl Menger loved music and modern architecture. He collected decorative tiles from all over the world. He disliked wine, but enjoyed sweet liqueurs. Though not a vegetarian, he ate meat sparingly,
particularly in his last years. But he was always glad to sample cuisines, from Cuban to Ethiopian, that were new to him. He liked baked apples. Menger liked to take long walks, and sometimes he
invited doctoral students for early morning walks along Lake Michigan. Menger liked America. He even liked the Marx Brothers.
Karl Menger died in his sleep on October 5, 1985 at the home of his daughter Rosemary and son-in-law Richard Gilmore in Highland Park, Illinois.
A short biography of Karl Menger can also be found at the History of Mathematics server at the University of St. Andrews, Scotland.
A very interesting article about Karl Menger was written by Seymour Kass and published by the American Mathematical Society [Kas].
An excellent article about Karl Menger and his Mathematisches Kolloquium in Vienna was written by Louise Golland (an IIT alumna) and Karl Sigmund.
Wikipedia: Karl Menger
Karl Menger's 100th Birthday Celebration
To commemorate the 100th anniversary of Karl Menger's birth, a conference was held on April 11-12, 2002 in Vienna: Mengerfest (organized by Karl Sigmund and the Austrian Mathematical Society), with
presentations by
• Georg Winckler (Rektor Universität Wien),
• Peter Schuster (Vizepräsident Österreichische Akademie der Wissenschaften),
• F.R. McMorris (Chair, Department of Applied Mathematics, IIT),
• Hans Kaiser (Vizerektor, Technische Universität Wien, Vorsitzender ÖMG-Landessektion Wien),
• Harald Rindler (Vorstand, Institut für Mathematik, Universität Wien),
• Walter Benz (Hamburg),
• Abe Sklar (IIT, Chicago),
• Tony Crilly (University of Middlesex),
• Alan Moran (University of Middlesex),
• Ludwig Reich (Universität Graz),
• Karl Sigmund (Universität Wien),
• Lester Senechal (Mount Holyoke),
• Ioan James (University of Oxford),
• Dirk van Dalen (University of Utrecht),
• Bert Schweizer (University of Massachusetts).
The Menger Sponge
Probably the most popular creation of Karl Menger is the so-called Menger sponge (sometimes wrongly referred to as Sierpinski's sponge). It can be considered as the three-dimensional analog of the
Cantor set (1D) and the Sierpinski carpet (2D). The Menger sponge appears in many modern books on fractals (e.g., [Man]).
Construction of the Menger sponge
Take a cube, divide it into 27 = 3 x 3 x 3 smaller cubes of equal size and remove the cube in the center along with the six cubes that share faces with it. You are left with the eight small corner
cubes and twelve small edge cubes holding them together. Now, imagine repeating this process on each of the remaining 20 cubes. Repeat again. And again...
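The removal rule translates directly into a few lines of code. The Python sketch below is written here for illustration (it is not from the original page); it keeps a sub-cube exactly when fewer than two of its three grid offsets equal 1, which preserves the 8 corner and 12 edge cubes and discards the center cube and its 6 face neighbors:

```python
def menger_cells(depth):
    """Coordinates of the unit sub-cubes that survive `depth` rounds of the
    removal rule, on a grid of side 3**depth."""
    cells = {(0, 0, 0)}
    for _ in range(depth):
        new = set()
        for (x, y, z) in cells:
            for dx in range(3):
                for dy in range(3):
                    for dz in range(3):
                        # A sub-cube is removed exactly when at least two of
                        # its offsets equal 1: the center (three 1s) and the
                        # six face-center cubes (two 1s) -- 7 of the 27.
                        if (dx == 1) + (dy == 1) + (dz == 1) < 2:
                            new.add((3 * x + dx, 3 * y + dy, 3 * z + dz))
        cells = new
    return cells

# Each level keeps 20 of 27 sub-cubes, so depth d leaves 20**d cells
# (and the fractal dimension is log 20 / log 3, about 2.7268).
for d in range(4):
    print(d, len(menger_cells(d)))  # 1, 20, 400, 8000
```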
In a more abstract setting the Menger sponge is also referred to as the Menger universal curve. This comes from Menger's work in dimension theory.
A Menger Sponge, made from business cards.
The animation below shows the Menger sponge of depth 3 created in Maple.
Karl Menger's Fields of Research
Curve and Dimension Theory
In his book Dimensionstheorie (published in 1928) Menger gave the following recursive definition of the dimension of an abstract set [Rem, p.50]:
A set S in a Cartesian space (of any dimension) is n-dimensional if,
1. each point of S is contained in neighborhoods as small as may be desired with whose frontiers S has at most (n-1)-dimensional intersections; and
2. for at least one point of S the frontier of each sufficiently small neighborhood has at least an (n-1)-dimensional intersection, [and, to start the process] the empty set [is assigned] the
dimension -1.
One of Menger's theorems in this area states that
Every n-dimensional separable metric space is topologically equivalent to part of a certain universal n-dimensional space, which can in turn be realized as a compact set in (2n+1)-dimensional
Euclidean space.
The universal one-dimensional curve (or - as a compact set in three-dimensional space - the Menger sponge) is shown above.
This theorem was generalized by Georg Nöbeling and is known as the "Menger-Nöbeling Embedding Theorem".
In 1932 Menger published Kurventheorie, which contains the famous n-Arc Theorem:
Let G be a graph with A and B two disjoint n-tuples of vertices. Then either G contains n pairwise disjoint AB-paths (each connecting a point of A and a point of B), or there exists a set of fewer
than n vertices that separates A and B.
This theorem was referred to as one of the fundamental theorems in graph theory by Frank Harary; modern graph theorists call it Menger's Theorem. The history of this theorem was presented by Menger
in [n-Arc] in 1981.
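For the special case in which A and B are single vertices s and t, the theorem can be verified by brute force on a small graph. The self-contained Python sketch below is purely illustrative (the six-vertex graph is a made-up example); it computes the minimum s-t vertex cut and the largest family of internally disjoint s-t paths, and the two numbers agree, as Menger's Theorem demands:

```python
from itertools import combinations

# A small undirected graph (a made-up example), stored as adjacency sets.
G = {1: {2, 3}, 2: {1, 4}, 3: {1, 4, 5}, 4: {2, 3, 6}, 5: {3, 6}, 6: {4, 5}}
s, t = 1, 6

def connected(removed):
    """True if t is still reachable from s after deleting `removed` vertices."""
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in G[u] - removed - seen:
            seen.add(v)
            stack.append(v)
    return False

internal = sorted(set(G) - {s, t})

# Smallest set of internal vertices whose deletion separates s from t.
cut = min((set(c) for k in range(len(internal) + 1)
           for c in combinations(internal, k)
           if not connected(set(c))), key=len)

def simple_paths(u, visited):
    """All simple paths from the current endpoint u to t."""
    if u == t:
        yield (u,)
        return
    for v in G[u] - visited:
        for p in simple_paths(v, visited | {v}):
            yield (u,) + p

paths = list(simple_paths(s, {s}))

# Largest family of s-t paths whose interiors are pairwise disjoint.
best = max(k for k in range(1, len(paths) + 1)
           for fam in combinations(paths, k)
           if all(set(p[1:-1]).isdisjoint(q[1:-1])
                  for p, q in combinations(fam, 2)))

print(len(cut), best)  # both equal 2 for this graph
```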
Geometry of General Metric Spaces
Menger was the first to introduce Fréchet's definition of metric spaces in geometry. The result was called metric geometry by Menger, and it included theories of betweenness, geodesic lines and
curvature of curves and of surfaces in abstract metric spaces.
One of Menger's theorems on isometric imbeddings is
A metric space R can be imbedded in Hilbert space H if and only if R is separable and every set of n+1 (n=2,3,4,...) distinct points of R can be imbedded in R^n.
Schoenberg showed how this theorem is at the basis of the notion of positive definite functions in abstract metric spaces [Schoe].
A General Theory of Length and the Calculus of Variations
Coordinate-Free Treatment of Curvature
In the 1930s Menger formulated a definition of curvature of an arc A without referring to an underlying coordinate system.
Let A be an arc in a compact convex metric space. Consider a triple (q,r,s) of points of A. The Menger curvature of the triple is given by the reciprocal of the radius of the circle circumscribing the three points. The curvature is zero if and only if one of the points is between the other two. The curvature of A at a point p is then given by the limit of the Menger curvature as the three points (q,r,s) become arbitrarily close to p (see [Stu, p.320]).
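The reciprocal-circumradius recipe is easy to compute directly. Here is a minimal Python sketch for points in the plane (an illustration written for this page, not code from the source), using the classical relation 1/R = 4K/(abc), where K is the area of the triangle and a, b, c are its side lengths:

```python
import math

def menger_curvature(q, r, s):
    """Reciprocal of the circumradius of the triangle with vertices q, r, s
    (points in the plane, given as (x, y) pairs). Collinear triples give 0,
    matching Menger's definition; coincident points are not handled."""
    a = math.dist(q, r)
    b = math.dist(r, s)
    c = math.dist(q, s)
    # Twice the triangle's area, from the cross product of two edge vectors.
    area2 = abs((r[0] - q[0]) * (s[1] - q[1]) - (r[1] - q[1]) * (s[0] - q[0]))
    return 2.0 * area2 / (a * b * c)   # 1/R = 4K/(abc) with K = area2 / 2

# Three nearby points on the unit circle: the curvature should approach 1.
pts = [(math.cos(t), math.sin(t)) for t in (0.10, 0.20, 0.30)]
print(menger_curvature(*pts))  # ~1.0
```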
These ideas were extended later to higher-dimensional manifolds primarily by Menger's student Abraham Wald. Of this work Menger says (see [Wald]):
This result should make geometers realize that (contrary to the traditional view) the fundamental notion of curvature does not depend on coordinates, equations, parametrizations, or differentiability
assumptions. The essence of curvature lies in the general notion of convex metric space and a quadruple of points in such a space.
Probabilistic Metric
The idea of introducing probabilistic notions into geometry was also one that occupied Menger's thoughts. His motivation came from the idea that positions, distances, areas, volumes, etc., all are
subject to variations in measurement in practice. And, as, e.g., quantum mechanics implies, even in theory some measurements are necessarily inexact. In 1942 Menger published a note entitled
Statistical Metrics [Sta]. In this note he explained how to replace the numerical distance between two points p and q by a function F_pq whose value F_pq(x) at the real number x is interpreted as
the probability that the distance between p and q is less than x. Originally Menger had planned to collaborate with his former student Abraham Wald on this subject, but Wald was killed in a plane
crash in India in 1950. However, others, such as Berthold Schweizer (a former student of Menger's) and Abe Sklar (a colleague of Menger's, and now Professor Emeritus of mathematics at IIT), took up
the work and developed what is now called the theory of probabilistic metric spaces [Pro].
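As a toy illustration of the idea (the Gaussian noise model, the parameters, and the function name below are choices made for this example, not anything prescribed in [Sta]), one can build F_pq empirically by treating the distance between p and q as a quantity observed with measurement error:

```python
import random

def F_pq(x, mu=1.0, sigma=0.1, n=100_000, seed=0):
    """Empirical F_pq(x): the fraction of noisy distance measurements that
    fall below x, with the 'true' distance mu observed under Gaussian noise."""
    rng = random.Random(seed)
    return sum(rng.gauss(mu, sigma) < x for _ in range(n)) / n

# The probability that the measured distance is below 0.9, 1.0, and 1.1:
print(F_pq(0.9), F_pq(1.0), F_pq(1.1))  # roughly 0.16, 0.50, 0.84
```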
Algebra of Geometry
Menger's so-called algebra of geometry (see e.g. [AG]) played an important role in John von Neumann's mathematical foundations of quantum mechanics. Menger was one of the first to investigate lattice theory.
Hyperbolic Geometry
In hyperbolic geometry Menger formulated an axiomatic foundation which was independent of, and simpler than, any possible one for Euclidean geometry. However, this work did not attract as much
attention as his work in curve and dimension theory did.
Algebra of Functions
Menger's book Algebra of Analysis was a direct result of his experience with the existing literature for teaching calculus. Menger found that the foundations of analysis needed to be systematized and
clarified. In particular, he was bothered by the fact that many textbooks did not emphasize the role of functions appropriately. There was no clear conceptual distinction between a function f and its
value f(x). And other functions, such as the identity function, played no explicit role at all. In particular, Menger introduced a property of multivariate functions called superassociativity. Today
algebraic structures in which this property holds are referred to as Menger algebras. Menger algebras have found applications, among other areas, in logic, and, to Menger's delight, in geometry.
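Ordinary composition of multivariate functions satisfies the superassociative law, commonly written O(O(f; g1,...,gn); h1,...,hn) = O(f; O(g1; h1,...,hn), ..., O(gn; h1,...,hn)). The short Python check below (written for illustration, with arbitrarily chosen sample functions) makes the identity concrete:

```python
def superpose(f, gs):
    """The n-ary composition O(f; g1, ..., gn): substitute the gs into f."""
    return lambda *xs: f(*(g(*xs) for g in gs))

# Arbitrary binary functions on the integers, chosen only for the test.
f  = lambda a, b: a - 2 * b
g1 = lambda a, b: a * b
g2 = lambda a, b: a + b
h1 = lambda a, b: a ** 2
h2 = lambda a, b: b - a

lhs = superpose(superpose(f, (g1, g2)), (h1, h2))
rhs = superpose(f, (superpose(g1, (h1, h2)), superpose(g2, (h1, h2))))

# The superassociative law holds for genuine function composition.
print(all(lhs(a, b) == rhs(a, b) for a in range(-3, 4) for b in range(-3, 4)))
```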
A New Approach to Calculus
Karl Menger enjoyed teaching undergraduates. He believed that, when properly done, it stimulated research. In the late 1950s he lectured to high school students on the subject What is x? [1956-5].
During WWII Menger taught calculus to future navy officers. This experience led him to rethink the foundations of calculus. He published the book Algebra of Analysis, and also wrote his own calculus
book: Calculus. A Modern Approach. One of the reasons for writing the calculus book was the fact that Menger felt that the traditional way of presenting the subject was deeply flawed. Menger sent a
copy of the book to Einstein, who liked it and applauded the attempt to clarify notation. However, he advised against too much "housecleaning". Even though viewed favorably by some, the book was seen
as too radical a reform of the subject by most people, and the project failed. This was a great disappointment to Menger.
One of the most striking (and often overlooked) pedagogical innovations in the book is the development of a complete "miniature calculus" which discusses all of the basic features, including the
Fundamental Theorem, without introducing limits.
Menger's calculus book was republished in 2007 in a Dover edition.
Analysis of the Idea of Variables
Analytic Functions
Foundations of Mathematics and Logical Tolerance
In 1951 Menger was the first to introduce the idea of fuzzy sets (which he called hazy sets). The fundamental idea behind a hazy set is to replace the element-set relation by the probability of an
element belonging to a set. This concept was later "rediscovered" [BKZ], and then attracted a lot of attention.
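A tiny illustration of the idea follows (the membership numbers and the independence-based union rule are modeling choices made for this example, not Menger's prescription):

```python
# A toy hazy set: each element carries a probability of membership rather
# than a crisp in/out status.
tall = {"Ann": 0.9, "Bob": 0.4}
fast = {"Bob": 0.7, "Cem": 0.2}

def hazy_union(a, b):
    """Membership in the union, treating memberships as independent events."""
    return {x: 1 - (1 - a.get(x, 0.0)) * (1 - b.get(x, 0.0))
            for x in a.keys() | b.keys()}

# Bob: 1 - (1 - 0.4) * (1 - 0.7) = 0.82
print(hazy_union(tall, fast))
```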
The prominent economist Oscar Morgenstern claims (see [Kas]) that Menger's 1934 paper Das Unsicherheitsmoment in der Wertlehre played a primary role in persuading John von Neumann to undertake a
formal treatment of utility [KT]. The collection [Sel] contains an entire chapter on economics. In particular, various aspects of the Petersburg Game are studied by Menger. This game goes as follows
(see [Sel, p.260]):
There are two players, A and B. The game begins with B placing a certain bet with A. Then a coin is flipped. If it shows heads, then B receives $1 from A, and the game is over. If it shows tails, it
is flipped again. If the second toss turns up heads, then B receives $2 from A, and the game ends. The coin is flipped until heads occurs for the first time. If this happens at the nth toss (i.e., if the first n-1 tosses all come up tails, but the nth toss is heads), then B receives $2^(n-1) from A and the game ends. Depending on whether n=1,2,3,4,..., B wins $2^0 = $1, $2^1 = $2, $2^2 = $4, $2^3 =
$8, ... What bet should B be willing to make?
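The paradox is easy to reproduce numerically. In the Python sketch below (written for illustration; the trial counts and seed are arbitrary), the sample mean of the payoff refuses to settle down, reflecting the fact that every term of the expected-value series sum over n of (1/2)^n * 2^(n-1) contributes 1/2, so the series diverges:

```python
import random

def petersburg(rng):
    """One play: toss a fair coin until heads; if heads first appears on
    toss n, the payoff is 2**(n - 1) dollars."""
    n = 1
    while rng.random() < 0.5:  # tails, so keep tossing
        n += 1
    return 2 ** (n - 1)

rng = random.Random(2024)
for trials in (100, 10_000, 1_000_000):
    avg = sum(petersburg(rng) for _ in range(trials)) / trials
    print(trials, round(avg, 2))  # the running average keeps drifting upward
```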
During the politically tense 1930s, Menger found it difficult to concentrate on mathematics. He attempted to develop a formal concept of ethics which had the same relation to traditional
ethics as formal logic had to traditional logic. For more on Menger's excursions into ethics see [Leo].
In 1934 Menger wrote a little book Moral, Wille und Weltgestaltung (Morality, Decision, and Social Organization) on the application of simple mathematical notions to ethical problems which he
translated into English in 1974 [Mor]. Menger's last paper [Soc] was an extension of this work.
Karl Popper said about this book:
This is one of a few books in which the author attempts to depart from the stupid talk in ethics.
Oscar Morgenstern later used Menger's book as a starting point for his work on game theory together with John von Neumann (see the section on economics).
Also see Karl Menger's page at The Mathematics Genealogy Project.
Listed below are the references cited above. Some of this material was used in the compilation of the information presented here.
A complete bibliography of Menger's work is available separately.
[100] George Temple, 100 Years of Mathematics, Springer, 1981.
[BKZ] R. Bellman, R. Kalaba and L. Zadeh, Abstraction and pattern classification, J. Math. Anal. Appl. (13) 1966, 1-7.
[Erg] E. Dierker and K. Sigmund (eds.), Karl Menger: Ergebnisse eines Mathematischen Kolloquiums, Springer, Vienna, 1998. See also E. Dierker and K. Sigmund, Exact Thought in a Demented Time: Karl Menger and his Viennese Mathematical Colloquium, The Mathematical Intelligencer Vol.22 (1), 2000.
[Göd] Kurt Gödel, Über Vollständigkeit und Widerspruchsfreiheit, [Erg] 3 (1931/32).
[Gru] Carl Menger, Grundsätze der Volkswirtschaftslehre, 1871.
[Hop] Arthur J. Hope, The Story of Notre Dame: Notre Dame - One Hundred Years, Notre Dame University, 1999.
[Kas] Seymour Kass, Karl Menger, Notices of the Amer. Math. Soc. (43) 1996, 558-561.
[KT] H.W. Kuhn and A.W. Tucker, John von Neumann's work on the theory of games, Bull. Amer. Math. Soc. (64) 1958.
[Leo] R.J. Leonard, Ethics and the Excluded Middle: Karl Menger and Social Science in Interwar Vienna, Isis (89), 1998, 1-26.
[Man] Benoit B. Mandelbrot, The Fractal Geometry of Nature, W.H.Freeman, 1982.
[Pro] Berthold Schweizer and Abe Sklar, Probabilistic Metric Spaces, North Holland, 1983.
[Rem] Karl Menger: Reminiscences of the Vienna Circle and the Mathematical Colloquium, Louise Golland, Brian McGuinness and Abe Sklar (eds.), Vienna Circle Collection Vol.20, Kluwer, 1994.
[Schoe] Isaac J. Schoenberg, On certain metric spaces arising from Euclidean spaces by a change of metric and their imbedding in Hilbert space, Annals of Math. (38), 1937, 787-793.
[Sig] Karl Sigmund, Musil, Perutz, Broch: Wiener Literaten und ihre Neigung zur Mathematik, Neue Zürcher Zeitung (international edition), 8/9 March 1997, p. 49.
[Wien2001] Karl Sigmund, "Kühler Abschied von Europa" - Wien 1938 und der Exodus der Mathematik, Ausstellungskatalog, Sept.17-Oct.20, 2001, University of Vienna. | {"url":"http://science.iit.edu/applied-mathematics/about/about-karl-menger","timestamp":"2014-04-20T05:49:07Z","content_type":null,"content_length":"107075","record_id":"<urn:uuid:cb6d1d29-b2c0-4717-bfd8-019b4a4dcb07>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meeting Details
For more information about this meeting, contact Robert Vaughan.
Title: Explicit bases of modular symbols over function fields
Seminar: Algebra and Number Theory Seminar
Speaker: Cécile Armana, University of Muenster, visiting Brown
Modular symbols over the rationals are a useful theoretical and computational tool for computing modular forms. In 1992 J. Teitelbaum introduced a similar notion of modular symbols over the function field $\mathbf{F}_{q}(T)$ of positive characteristic. They are related to various objects (automorphic forms, abelian varieties, Drinfeld modular forms) defined over this field. The group of such
modular symbols has a finite presentation (Manin has given a similar presentation for classical modular symbols). Both presentations are extensively used in algorithms for computing modular symbols.
Room Reservation Information
Room Number: MB106
Date: 03 / 29 / 2012
Time: 11:15am - 12:05pm | {"url":"http://www.math.psu.edu/calendars/meeting.php?id=12378","timestamp":"2014-04-20T16:17:50Z","content_type":null,"content_length":"3653","record_id":"<urn:uuid:e27ec0c7-21b1-448e-8e0d-6ce54b6db5b8>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pi Day
Physics Photo of the Week
March 14, 2008
Two Pi's from one Pie
An angle measures exactly one radian when the length of the arc segment is exactly one radius. The drawing at left is a segment of a circle that is exactly one radian. The radius of the segment is 8 units, and the curve on the outside of the segment is also 8 units, so the angle in the wedge is exactly one radian - the natural unit for angles. In the more familiar units of degrees, one radian is equivalent to the rather "unnatural" 57.2958... degrees.
Back to the complete circle: since the circumference of a circle is exactly 2 Pi times the radius, the angle of a complete circle is the whole circumference divided by the radius. That leaves 2 Pi radians.
In dividing a pie among a family of six, each share is 1/6 of 2 Pi, or Pi/3. If Mom asks you how much pie you would like, merely say, "I would like Pi/3 radians please". The chocolate cream pie
featured above is marked for easy cutting into Pi/3 radian sectors to serve a family of six.
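For readers who want to check the arithmetic, a few lines of Python (purely illustrative) reproduce the numbers above:

```python
import math

slices = 6
share = 2 * math.pi / slices      # each family member's sector, in radians
print(share)                      # Pi/3 = 1.0471...
print(math.degrees(share))        # 60 degrees per slice
print(math.degrees(1.0))          # one radian = 57.29577... degrees
```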
Many thanks to Vicki for making the pie!
There will be no Physics Photo of the Week next week, March 21, due to spring break at Warren Wilson College. The next Physics Photo of the Week will be published on March 27, 2008 when we will
celebrate the Equinox.
Physics Photo of the Week is published weekly during the academic year on Fridays by the Warren Wilson College Physics Department. These photos feature interesting phenomena in the world around us. Students, faculty, and others are invited to submit digital (or film) photographs for publication and explanation. Atmospheric phenomena are especially welcome. Please send any photos to dcollins@warren-wilson.edu.
An archive of past Physics Photos of the Week is available.
In 2008 NPS commissioned a peer comparison study from CES Consultants to evaluate the overall institution against a carefully selected group of peer institutions. The intent of the peer analysis was
to provide quantitative information on the range and magnitude of a number of key performance indicators to help NPS identify strengths and weaknesses, and specific areas for improvement. Fifteen
specific peer institutions (CES peers) were selected for comparison:
California Institute of Technology
Carnegie Mellon University
Claremont Graduate University
Duke University
Georgia Institute of Technology
Illinois Institute of Technology
Massachusetts Institute of Technology
North Carolina State University
Rensselaer Polytechnic Institute
Rice University
Stanford University
Stevens Institute of Technology
University of California, Santa Barbara
University of Illinois, Urbana-Champaign
University of Southern California
A wide variety of specific data was collected for all of the institutions, and then averages and rankings in various categories were computed. The final report was issued in January 2009, and we
shall refer to the results of the CES peer comparison in various parts of this self-study for purposes of assessment.
In 2010, NPS commissioned another study from Academic Analytics, LLC. The purpose of this study, which we shall refer to as AA2010, was to generate quantitative measures of faculty scholarly
productivity at NPS.
The mission of the Department of Applied Mathematics is to provide an exceptional mathematical education focused on the unique needs of our students, to engage in relevant research, and to provide
quality service to the community. We are deeply committed to maintenance of a well-designed curriculum and a supportive environment for our students.
Because mathematics is the language of science, it is fundamental to every quantitative science and technology curriculum on campus. The primary role of the department is one of service to the
various technical curricula at NPS. This includes a very high proportion of remedial undergraduate level coursework that is necessary for students who are transitioning to graduate study after a long
period away from university. The Department of Applied Mathematics strives to provide a solid mathematical foundation for all students as they make the transition into graduate curricula. We provide
high-quality instruction in all courses, giving emphasis to relevant and modern mathematical techniques in our advanced courses. And we encourage students to develop and utilize skills in analysis,
reasoning, creativity, and exposition as they acquire knowledge of mathematics and its applications. We regularly engage with our client curricula to ensure that our service courses remain up to date
and meet their needs.
There are also a very small number of students who are seeking degrees in Applied Mathematics. The primary sponsor for students seeking a degree in Applied Mathematics is the Department of
Mathematics at the United States Military Academy (USMA). These students come here for graduate study as preparation for an assignment to teach mathematics at USMA. Prior to the cancellation of the
380 curriculum in early 2001, we also had a contingent of Naval officers who were preparing to teach mathematics at the United States Naval Academy (USNA) as well as one or two Marine Corps officers
(to be similarly detailed). We also support a number of students who decide to pursue a dual degree program while at NPS. We have found this to be an excellent way to enhance our interdisciplinary
activities across the campus.
In addition, we maintain active research programs, making a special effort to respond to the needs of the NPS, DoN and DoD communities. By adhering to the most stringent standards of scholarship, we
ensure that the department continues to hold the respect of the community of scholars worldwide. We serve our profession, not only through scholarship, but also by our involvement in professional
organizations and by our editorial and administrative contributions to the growing body of mathematical knowledge. We also serve the NPS community with our active role in the governance of the
The Department of Applied Mathematics has the longest history of any department at the Naval Postgraduate School and traces its origins to the appointment of Ensign Guy K. Calhoun as Professor of
Mathematics in 1910. He would later become the first faculty member assigned to the Postgraduate Department that was created at Annapolis in 1912 to function as a preparatory school whose students
would complete their graduate studies at a civilian institution after a year of study at Annapolis. The department was headed by a single officer with a small office staff and the assistance of a
single civilian engineer. At its inception, the department had no regular faculty but relied on the cooperation of civilian institutions as well as regular faculty from the academic departments at
USNA. Prof. Ralph E. Root, who had originally joined the faculty in the Mathematics Department at Annapolis in 1913, quickly became involved in the fledgling Postgraduate Department, and in 1914 he
became the first civilian faculty member of the new department when he was appointed as its Professor of Mathematics and Mechanics. He had earned his Ph.D. at the University of Chicago in 1911 under
the direction of the illustrious Prof. E.H. Moore whose list of doctoral students includes such notables as George Birkhoff, Leonard Dickson, Theophil Hildebrandt, R.L. Moore, and Oswald Veblen.
Indeed, Root’s dissertation work was so fundamental to the early development of the concept of ‘neighborhood’ that it encompasses an entire section in Aull and Lowen’s Handbook of the History of
General Topology.
By 1931 the Postgraduate Department had evolved into a Postgraduate School that had fifteen faculty – four in Mathematics and Mechanics (C.C. Bramble, W.R. Church, C.H. Rawlins, and R.E. Root), three
in Mechanical Engineering, three in Electrical Engineering, two in Metallurgy and Chemistry, one in Physics, one in Radio, and one in Modern Languages.
In 1946 Captain Herman A. Spanagel (appointed Head of the Postgraduate School in April, 1944) instituted a major reorganization of the school that created, for the first time, traditional academic
departments. The Department of Mathematics and Mechanics was one of the seven original academic departments created in this reorganization, and Prof. W.R. Church was appointed as chairman. Professor
Church, who had spent the war years on active duty in the Navy, was a keen student of new developments in applied mathematics. The applications of statistics to strategy in anti-submarine warfare
had led to many other applications in the analysis of naval operations. As this new area of science continued to grow after the end of the war, Professor Church and the Department of Mathematics and
Mechanics were leaders in the development of the new curriculum in operations analysis, which began in 1951-52. Professors Torrance from Mathematics and Cunningham from Physics taught the initial
courses in this discipline. They were soon assisted by Professor Tom Oberbeck, who joined the Mathematics faculty in 1951. After a period of growth and development, during which several
statisticians were added to the faculty to handle the gradual shift in emphasis from physical science to statistical analysis as the curriculum adjusted to the needs of the Navy, the School created
the Department of Operations Research with Oberbeck as Chairman in 1962. He was succeeded, three years later, by Jack Borsting, who was also from the Department of Mathematics and Mechanics.
In 1966, the separation of Operations Research from Mathematics was completed with the transfer of statistics to the Department of Operations Research along with five of the professors who covered
this subject. At this time the offspring to which Mathematics had given birth had nineteen professors and was still growing. This is the same year that the Department of Mathematics and Mechanics
changed its name to the Department of Mathematics and R.E. Gaskell took over as chairman following twenty years of impeccable leadership by W.R. Church.
Professor Church was also keenly interested in the development of the computer world, started in the war years by the Harvard Mark I and by the University of Pennsylvania Moore School Computer.
Indeed, the Department of Mathematics and Mechanics had already played a pivotal role in the history of computing by that point, as it was our own Prof. C.C. Bramble who had recommended the
development of the Harvard Mark II for the Naval Proving Ground at Dahlgren. Howard Aiken, computer pioneer and developer of the Mark II, recalls it this way in an interview in February, 1973:
“Mark II was built for the Naval Proving Ground at Dahlgren on the recommendation of Professor Clinton Bramble of the United States Naval Academy, who was a mathematician and who was on duty as a
Naval Officer. And Bramble was able to foresee that they had to quit this hand stuff in the making of range tables. That's why we built the computer. And Albert Worthheimer found the money for it and
signed the contract.
In November of 1944, the Bureau of Ordnance requested the Computation Laboratory, then operating as a naval activity, undertake the design and construction of an automatic digital calculator for
installation at the Naval Proving Ground."
It is interesting to note the Prof. Bramble’s first contacts with Dahlgren were in 1924. He notes in a January 1977 interview that:
“In those days, there was no bridge across the Potomac. I used to call up, and they'd send a boat over to Morgantown, Maryland, for me. When I came down, it was just for general interest in ordnance
problems while I was teaching ordnance courses at the Naval Postgraduate School. The courses included ballistics and gun design, both exterior and interior ballistics.
Naturally I was interested in the current problems in those areas, so periodically I would get in touch with Dr. Thompson, who was at that time the Senior Scientist at Dahlgren. It was a very
informal contact, but that was my way of maintaining a live interest in current ordnance problems and the research that was going on. I also did the same sort of thing with the Army Proving Ground at
When the national emergency [World War II] came on and the decision was made to move the ballistics work out of Washington from the Bureau of Ordnance to Dahlgren, the Postgraduate School was
requested to transfer me to Dahlgren, but the Head of the Postgraduate School wouldn't agree, so they compromised by sending me to Dahlgren 4 days a week. That was the beginning of the ballistic work
and the beginning of the Computation Laboratory because, at that time, there were only two mathematicians employed at Dahlgren. They were at about a GS-7 or GS-9 level. That was back about 1942, and
there were also a couple of women at the GS-5 level.”
Although he split his time between the Postgraduate School and Dahlgren for several years at that point, Prof. Bramble eventually moved to Dahlgren full-time in 1947 when he was appointed Head of the
Computation and Ballistics Department. In 1951 he was selected as Dahlgren’s first Director of Research, a position he held until his retirement in January 1954.
Although Professor W. E. Bleick, 1946, and B. J. Lockhart, 1948, had also been involved with these computer developments before coming to the Department of Mathematics, it was Professor Church who
led the movement to obtain the first electronic automatic digital computer. And so it was that in 1953, an NCR 102A was hoisted by a crane through a second-floor window in Root Hall and installed in
the Mathematics Department. This precursor machine, as well as the development of its use in instruction and research, resulted in the acquisition in 1960 of the world’s first all solid-state
computer – the CDC 1604 Model 1, Serial No. 1 – which was designed, built, and personally certified in the lobby of Spanagel Hall by the legendary Seymour Cray. This was the first of ten such
machines, ordered by the Navy’s Bureau of Ships for its Operational Control Centers. The installation of the CDC 1604 coincided with the formation of the School’s Computer Center, now named in honor
of Professor Church.
Computer courses quickly became standard in almost every curriculum in the School, and the use of the computer in research work increased rapidly at the School. However, it was not until 1967 that
the school established the Computer Science courses and began adding faculty in this area to the Department of Mathematics. Two years later, Gary Kildall joined the department as an instructor of
mathematics to fulfill his draft obligation to the US Navy. His pioneering work during his years as part of the department fundamentally changed the nature of computing, particularly his creation of
PL/M (the first high-level language developed for microprocessors) and CP/M (the first operating system for microcomputers).
Eventually, the existence of a group of computing specialists within the Department of Mathematics and their interaction with faculty in other departments (chiefly electrical engineering) who worked
with computers led to formation of the Computer Science Group in 1973; however, the professors involved maintained their status in the Department of Mathematics until 1976 when the Department of
Computer Science was formed. At that time, five faculty members moved from Mathematics to the new department.
Thus, in about thirty years, the Department of Mathematics had seen two sub-disciplines emerge and develop into thriving departments, each with its own cadre of graduate students, student thesis
effort, and sponsored research.
Following the separation of Computer Science from Mathematics, the department saw a ten year period where it was functioning once again almost exclusively as a service department. The Mathematics
curriculum (380), which had been established in 1956, was officially disestablished in 1976. The department maintained its degree granting authority, but without an official curriculum, only a
handful of students received the MS in Mathematics between 1976 and 1987; most of these were dual majors with Operations Research.
The 380 curriculum was reestablished in 1987, and this initiated a period of growth in the department. More than half of the current faculty were recruited in the seven year period following the
reestablishment of the curriculum. Throughout the 1990s, the department graduated an average of roughly six students per year with a steady mix of inputs from the Navy, Army, and Marine Corps.
In early 2001 the superintendent, RADM Ellison, officially closed the 380 curriculum, ostensibly due to low Navy enrollments. Although the department maintained its degree-granting authority, RADM Ellison did not allow other services to matriculate students in Applied Mathematics during the remainder of his term, and this led to the loss of student inputs from the Army and Marine Corps. Upon his
departure, the new superintendent, RADM Wells, officially changed the enrollment policy and allowed other services to matriculate in Applied Mathematics. Unfortunately, he was not able to get the 380
curriculum officially reinstated, and hence Navy students are still unable to enroll in the curriculum. In spite of this, the department has been able to somewhat rebuild our program, and in the last
two years we have started to produce a small but steady stream of graduates.
· 1957-8: Prof. William Edmund Milne
· 1981-2: Prof. Garrett Birkhoff
Although there were no official academic departments at NPS prior to 1946, Ralph Root is generally considered to be our first chairman as his original appointment to NPS was as head of mathematics
and mechanics. He held that post from 1914 until his retirement in 1946, and we honor him by placing him first in our historical list of the chairmen of the department.
· 1914-1945: Ralph E. Root
· 1946-1965: W. Randolph Church
· 1966-1971: Robert E. Gaskell
· 1972-1973: W. Max Woods
· 1974-1975: Ladis D. Kovach
· 1976-1983: Carroll O. Wilde
· 1984-1986: Gordon E. Latta
· 1986-1992: Harold M. Fredricksen
· 1993-1996: Richard H. Franke
· 1996: Guillermo Owen
· 1997-1998: W. Max Woods
· 1999-2002: Michael A. Morgan
· 2003-2008: Clyde L. Scandrett
· 2009-Present: Carlos F. Borges
At the present time the department has sixteen tenured and tenure-track faculty, one of whom has been on an extended leave of absence since 2006. This number of faculty actually matches the number
onboard back in 2001, although there have been four new hires in that period to replace retiring faculty members. There is good diversity among the tenure track faculty represented by three females
and three Hispanics. This distribution of 19% in each category is well above the NPS averages (from the peer comparison study) of 16% and 11% respectively. And although we are under the peer
institution median of 30% female, we are well above the peer institution median of 13% for underrepresented minorities. Indeed, we clearly have the most diverse faculty of any department in the
School of Engineering and Applied Sciences. The chart below shows the age and years of service distribution for the tenure-track faculty.
A complete listing of the faculty as well as links to brief vitae and personal web pages can be found in Appendix A. Several major school wide research and teaching awards have been won by our
current tenure-track faculty, including three Menneken Awards and one Schieffelin Award. A summary of major awards and prizes won by our faculty is available in Appendix B.
The department currently has two non-tenure track research faculty members. Distinguished Visiting Professor Art Krener has been resident with us since 2005. In 2009 Professor Margaret Cheney joined
us as a Research Professor. Although her primary focus is her regular appointment as Professor of Mathematical Sciences at the Rensselaer Polytechnic Institute, she has ongoing interdisciplinary
collaborations here at NPS, and her appointment is facilitating further collaborations both inside and outside of the department.
The department employs two lecturers and one senior lecturer. Although their main focus is teaching the remedial undergraduate mathematics courses that are essential to the transition of NPS students
to the various science and technology curricula, all of them have a range of abilities well beyond that (one routinely teaches a graduate level dynamics class for the Meteorology department). The
quality of teaching from this group of faculty is uniformly excellent and, indeed, one of them is a Schieffelin Award winner. All three are retired O-5 military officers (one each from the Army,
Navy, and Air Force) and, as a result, are much attuned to the particular needs of our students.
In addition to classroom teaching, many are involved in curriculum reform efforts (course development, review sessions, textbook and lab-manual writing etc.).
The departmental policy on instructional workload follows from the fact that NPS is obligated, by contract, to pay all tenure-track faculty for ten months each year. Since the school operates year-round
and there are four academic quarters, each tenure-track faculty member has one quarter each year (called their inter-sessional quarter) during which they are paid only one month of direct salary and
must either find external funding or take leave without pay for the other two months. During the three quarters in which they are on a full-time pay status, each faculty member is required to teach a
total of five sections as assigned by the chair.
Beginning in academic year 2009, the school offered another option for faculty. In particular, a faculty member who asks for only nine months of direct funding from the school is required to teach
only four sections during that time. Faculty exercising this nine-month option are expected to secure external funding or take leave without pay for the entire three months of their inter-sessional
Faculty choosing the nine-month option may ‘buy out’ of additional teaching duties by securing more external funding. In particular, a faculty member may, with the permission of the chair, buy out of
one additional class (reducing his/her annual load to three) by securing thirty-three days of additional external funding within the nine month window.
Lecturers are expected to teach at least two sections in each quarter that they are being paid from direct funds. There is no guarantee of employment in any quarter, and the number of days they are
paid in any quarter may vary depending on workload.
Faculty who engage in certain paid administrative duties have reduced teaching loads (e.g., the chair does not have a teaching obligation while serving as chair).
The department has a formal mentoring process for assistant professors to help them on their academic journey. Upon arrival at NPS and after a brief settling in period, the Chair assigns each
assistant professor a two-person mentoring team. The mentoring team always has at least one full professor as the senior member. The mentoring team is tasked with helping the mentee develop the basic
framework for a rich and rewarding academic career. This includes helping them:
· Adapt to the unique teaching demands of the school
· Develop a strong and relevant research program
· Understand the role of NPS in the Navy and DoD
· Balance the three pillars of teaching, scholarship, and service
The mentor team generates a mentor report each year after the mentee has submitted his or her faculty activity report. The mentor report summarizes the progress that the candidate is making toward
promotion and tenure and is used by the Chair and the department as part of the annual renewal review.
The Graduate Program in Applied Mathematics is designed to meet the needs of the Department of Defense for graduates who are skilled in applying concepts from advanced mathematics to real world
military problems. A typical follow-on assignment for graduates is to be an instructor in mathematics at the U.S. Naval Academy at Annapolis or the U.S. Military Academy at West Point. Program
requirements are based on the premise that graduate students should have broad exposure to graduate level mathematics combined with hands-on experience, either through research activities conducted
within the department or through coordinated experiences with other departments and other partners in industry, government labs, and national research institutes. To achieve this goal, we offer a
spectrum of courses aimed at providing breadth of training, in addition to a depth of knowledge in one of the areas of specialization represented within our research ranks.
In addition to the Master of Science and Doctor of Philosophy programs in Applied Mathematics, the department offers individually tailored minor programs for many of the school's doctoral students.
In order to enter a program leading to the degree Master of Science in Applied Mathematics, the prospective student is strongly advised to possess either a bachelor’s degree with a major in
mathematics or a bachelor’s degree in another discipline with a strong mathematical orientation.
Any program that leads to the degree Master of Science in Applied Mathematics for a student who has met the entrance criteria must contain a minimum of 32 quarter-hours of graduate-level (3000-4000
numbered) courses with a minimum QPR of 3.0. The program specifications must be approved by both the department Chairman and the Academic Associate. The program is subject to the general conditions
specified in the Academic Council Policy Manual as well as the following:
A student must complete or validate the four-course 1000-level calculus sequence and the introductory courses in linear algebra and discrete mathematics.
The program must include at least 16 hours in 3000-level mathematics courses and 16 hours of approved 4000-level mathematics courses.
Courses in Ordinary Differential Equations, Real Analysis, and upper division Discrete Mathematics are specifically required, and those at the 3000 level or above may be applied toward the
requirement above.
An acceptable thesis is required. The Department of Applied Mathematics permits any student pursuing a dual degree to write a single thesis meeting the requirements of both departments, subject to
the approval of the Chairmen and Academic Associates of both departments.
The Department of Applied Mathematics offers the degree Doctor of Philosophy in Applied Mathematics. Areas of specialization will be determined by the department on a case by case basis. Requirements
for the degree include course work followed by an examination in both major and minor fields of study, and research culminating in an approved dissertation. It may be possible for the dissertation
research to be conducted off-campus in the candidate's sponsoring organization.
Entrance into the program will ordinarily require a master’s degree, although exceptionally well-prepared students with a bachelor’s degree in mathematics may be admitted. A preliminary examination
may be required to show evidence of acceptability as a doctoral student. Prospective students are directed to contact the Chairman of the Applied Mathematics Department or the Academic Associate for
further guidance.
In recent years the department has created two certificate programs wherein students can earn a certificate by completing a prescribed program of coursework. The first of these was the Mathematics of
Secure Communications Certificate (curriculum 280) which was created in 2003. This certificate program has become very popular with students from a variety of curricula. Second was the Scientific
Computation Certificate (curriculum 283) which was launched in 2009. We are currently moving our first cohort of students through this program and have high hopes for its future success, particularly
as a coherent minor program for doctoral students in the various engineering programs on campus.
Our production of graduates has been severely impacted by the official elimination of the 380 curriculum in 2001. Although the curriculum is still officially inactive for Navy students, we reopened
the master’s degree program for student inputs in 2006 at the direction of then President RADM Wells. This allowed us to begin admitting Army students and since that time our output has increased
dramatically and we have graduated an average of more than seven students per year (master’s, dual master’s, and doctorate) since 2007.
A detailed list of graduates for the past five years along with their thesis titles appears in Appendix D.
The department currently enrolls more than 1,200 students in roughly 80 sections of mathematics courses per year. Fewer than two percent of these enrollments come from our own mathematics graduate students. Thus the magnitude of our service role to the university is huge, and we take our service mission very seriously. Technical majors, either engineering or science, usually take four or more mathematics classes.
In sum, we support 30 separate curricula at NPS:
· 356 - Information Systems & Operations
· 360 - Operations Analysis
· 361 - Joint Operational Logistics
· 362 - Human Systems Integration
· 363 - Systems Analysis (DL)
· 365 - Joint Cmd, Cntrl, Comm, Comp/Intel (C4I) Sys
· 366 - Space Systems Operations
· 368 - Computer Science
· 370 - Information Systems & Technology
· 372 - Meteorology
· 373 - Meteorology and Oceanography (METOC)
· 380 - Applied Mathematics
· 399 - Modeling, Virtual Environments & Simulation
· 525 - Undersea Warfare
· 533 - Combat Systems Sciences & Technology
· 570 - Naval/Mechanical Engineering
· 580 - Systems Engineering
· 590 - Electronic Systems Engineering
· 591 - Space Systems Engineering
· 595 - Information Warfare
· 596 - Electronic Warfare Systems International
· 814 - Transportation Management
· 815 - Acquisitions & Contract Management
· 816 - Systems Acquisition Management
· 819 - Supply Chain Management
· 820 - Resource Planning/Mgmt for International Defense
· 827 - Material Logistics Support Management
· 837 - Financial Management
· 870 - Information Systems Management MBA
· 999 - Staff (Non-Degree)
Our core teaching load can roughly be split into three tracks:
Calculus Refresher Track – This consists of four classes (MA1113, MA1114, MA1115, and MA1116) covering single-variable, multi-variable, and vector calculus (although there are some variations). These
classes are generally taught in a highly accelerated mode so that they can be completed in the first two quarters. This track is taken by students in most technical curricula (i.e., a significant
proportion of all students from GSEAS and GSOIS) although those with sufficiently strong backgrounds may validate some or all of these classes. Annual enrollments in this group during 2009 were 550
as compared with 622 in 2006. This decrease of more than 10% is very significant and reflects the large decrease in enrollments in the Engineering school over the past three years.
Core Analysis Track - This consists of the core undergraduate analysis classes that are essential in physical sciences and engineering. The basic courses are Ordinary Differential Equations (MA2121),
Partial Differential Equations (MA3132 or MA3139), Linear Algebra (MA2043 and MA3046), and Numerical Analysis (MA3232). Courses from this track are generally taken by students from GSEAS. Enrollments
in this track are highly variable and in decline due to collapsing enrollments in GSEAS curricula.
Core Discrete Mathematics Track – This consists of core classes in mathematics essential in computer science and operations research. The courses in this track are Discrete Mathematics (MA1025,
MA2025, and MA3025) and Linear Algebra (MA3042). Courses in this track are generally pursued by students from GSOIS. Enrollments in this track have been relatively stable.
The nature of the teaching effort within the department is significantly different than that of other technical departments on campus both within GSEAS and GSOIS. To better understand this, consider
the following table which summarizes the distribution of resident sections by course content level for the 2009 academic year. The table is separated into two parts, the GSEAS departments and then
the GSOIS departments (which include OR and CS, both of which were at one time part of MA).
│Level │EC │MA│MAE│MR│OC│PH│SE│CS │DA│IS│OR │
│ 4000 │42 │7 │30 │13│12│24│14│64 │42│36│56 │
│ 3000 │39 │24│32 │17│15│30│21│63 │47│29│59 │
│ 2000 │30 │10│15 │2 │2 │13│2 │ 7 │7 │1 │ 8 │
│ 1000 │ 4 │36│ 1 │ │ │11│2 │ │ │ │ 2 │
│ Total│115│77│78 │32│29│78│39│136│96│66│125│
It is very clear from the data that the teaching experience for faculty in Applied Mathematics is very different than it is in other technical departments. A few things bear special mention. First of
all, we teach a tremendous number of sections at the first year undergraduate level. Indeed, we taught 36 of 56 total such sections on campus in 2009, nearly 65% of all sections at that level.
Second, no technical department on the entire campus teaches fewer sections of graduate-level material. Indeed, every other technical department taught a minimum of 30% of their sections at the 4000 level, compared to just 9% in Mathematics. Even departments teaching half as many total sections as we do still teach nearly twice as many sections at the 4000 level. The following bar graph
summarizes this by showing the percentage distribution of teaching efforts at the graduate (4000), upper-division undergraduate (3000), and lower-division undergraduate (1000 and 2000) levels.
Another important factor which impacts the teaching profile is the size of class sections. Consider the following bar chart which summarizes the distribution of resident class section sizes for all
GSEAS departments for 2009.
This chart shows how very different the teaching loads are in the department. We teach many more large classes, which is extremely demanding in an institution where there are no teaching assistants,
homework graders, etc. To get another perspective on this issue we can examine teaching in AY2009 by weighted teaching credit (WTC). Weighted teaching credit is computed by multiplying the number of
students in a section by the number of lecture hours and then summing over all sections. The following chart summarizes teaching in AY2009 by weighted teaching credit for resident sections only
across all seven GSEAS departments.
The number to the right of each department indicates the number of tenure-track faculty in that department for AY2009. The much higher real teaching loads in mathematics are readily apparent. Indeed,
we teach nearly twice as much as MAE with the same number of tenure-track faculty.
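Since the weighted-teaching-credit measure recurs throughout the budget discussion later in this report, a minimal sketch of the computation may be helpful. The Python below simply implements the definition just given; the section data is invented for illustration and is not drawn from the AY2009 records.

# Hedged sketch of the weighted teaching credit (WTC) computation:
# WTC = sum over all sections of (enrollment x lecture hours).
# The sections listed here are hypothetical examples.
sections = [
    {"enrollment": 45, "lecture_hours": 4},  # e.g., a large calculus refresher section
    {"enrollment": 12, "lecture_hours": 4},  # e.g., a small 3000-level course
    {"enrollment": 8,  "lecture_hours": 3},
]
wtc = sum(s["enrollment"] * s["lecture_hours"] for s in sections)
four_unit_wtc = wtc / 4  # the 4-WTC measure used in the budget discussion below
print(f"WTC = {wtc}, 4-WTC = {four_unit_wtc}")  # WTC = 252, 4-WTC = 63.0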
The chart below shows the combined effect of large class sizes and low class level. It summarizes the distribution of teaching by weighted teaching credit at the various levels.
The contrast in teaching profiles is both clear and critical. The Department of Applied Mathematics is unique in that we do more than 70% of our teaching at the first and second year undergraduate
levels (1000 and 2000). This impacts faculty careers in several ways. First, the extremely limited graduate level teaching makes it far more difficult to develop and maintain active research
programs. Second, the incredible demands of fast-paced remedial undergraduate-level teaching make classroom excellence a far more critical issue in promotion and tenure decisions than in other departments. Third, the time demands of teaching large sections of remedial mathematics to students just returning to university studies leave little time to pursue research during teaching quarters
(it is common for tenure-track faculty in mathematics to log more weighted teaching credit in a single quarter than tenure-track faculty in other departments log in an entire year). All of these
issues are amplified by the widely varying levels of preparation of incoming students (including, direct entry students who often come straight into multi-variable calculus without a refresher
quarter in which to relearn basic single variable calculus).
One final note is that none of the preceding includes any counting or consideration of reading classes. This is another burden on the faculty since the department generally offers six to ten such
classes each year and they are not accounted for in any way (i.e. no funding or other credit is given).
The core campus process for evaluating instruction is the Student Opinion Form (SOF). This is a Likert scale survey instrument that is administered at the end of every quarter in every class section.
The survey is administered electronically and the data is centrally collected for use in a wide variety of evaluation and assessment processes. Although there are sixteen questions on the survey, two
are of particular importance. Question twelve (Q12) asks “Overall, I would rate this instructor:” and allows for a rating from one to five with one indicating “Lowest 10%” and five indicating “Top
10%”. Question thirteen (Q13) asks “Overall, I would rate this course:” with the same rating scale.
One method of evaluating the department’s instructional performance is to examine the results from these two questions in comparison with the rest of the school. Because the data is a bit noisy, one
gets a clearer picture by considering a moving average. The tables below show three-year moving averages of the data, comparing the median performance of the department to the school-wide median.
Not only is our median rating uniformly far above the campus-wide median, it is nearly always comfortably above the campus-wide first quartile. This reflects the strong emphasis placed on individual
instructional performance within the department. What makes this even more noteworthy is the fact that it is done while teaching a high proportion of large, accelerated-pace, remedial undergraduate classes.
This graph also shows excellent performance by MA. We have made a concerted effort in the last several years to tighten up our syllabi and improve our offerings. The table above clearly shows that
those efforts have paid off in increased ratings of our courses. One should note that NPS as a whole has remained relatively stagnant in this measure for about eight years.
We maintain an ongoing dialogue with other client departments to make sure our courses continue to serve their needs. In addition, the Mathematics Department is heavily involved with the accrediting
process for Engineering through ABET (the Accreditation Board for Engineering and Technology). This process involves examining our course syllabi, textbooks, and sample exams in our engineering
mathematics courses.
Mathematics faculty are also heavily involved as advisors and co-advisors for master’s and Ph.D. students all across campus. This further strengthens our relationships with these departments and has
even led to several joint appointments over the years.
The department's research efforts can be grouped into three broad areas, as delineated below. These areas have considerable overlap and several faculty consider themselves associated with more than
one group. Beyond the areas listed below, a number of researchers from the department have major interdisciplinary connections to researchers from other departments across the campus. Indeed there
are very prominent collaborations with the departments of Computer Science, Defense Analysis, Electrical and Computer Engineering, Mechanical and Aerospace Engineering, Meteorology, Oceanography,
Operations Research, and Physics.
Applied analysis is concerned with the interface between fundamental mathematical structures which rely on continuity and their use in the physical and social sciences. This research group has
diverse interests that include asymptotic analysis, control theory, mechanics (fluid and orbital), and game theory. There are significant overlaps with the research group in Numerical Analysis/
Scientific Computing.
Regular Faculty Members in the Applied Analysis Group
· Don Danielson
· Chris Frenzen
· Wei Kang
· Art Krener
· Guillermo Owen
· Clyde Scandrett
Current Postdoctoral Members in the Applied Analysis Group
· Cesar Aguilar
Numerical Analysis/Scientific Computing is the study of theories, computational methods, numerical algorithms, and other tools required to practically solve mathematical models of problems from
science and engineering in a fast, accurate, and efficient manner. The primary goal is the development of novel techniques and approaches to approximation and efficient computation that are at the
heart of modern science. This research group is primarily focused on the numerical solution of partial and ordinary differential equations, numerical linear algebra, and approximation theory. There
are very significant overlaps with the research group in Applied Analysis.
Regular Faculty Members in the Numerical Analysis / Scientific Computing Group
· Carlos Borges
· Fariba Fahroo
· Frank Giraldo
· Bill Gragg
· Beny Neta
· Hong Zhou
Current Postdoctoral Members in the Numerical Analysis / Scientific Computing Group
· Jim Kelly
· Shiva Gopalakrishnan
· Eric Choate
Discrete mathematics, sometimes called finite mathematics, is the study of mathematical structures that are fundamentally discrete, in the sense of not supporting or requiring the notion of
continuity. Discrete mathematics is extensively used in a variety of critical applications such as cryptography, coding theory, combinatorics, network analysis, and search algorithms.
Regular Faculty Members in the Discrete Mathematics Group
· David Canright
· Hal Fredricksen
· Ralucca Gera
· Craig Rasmussen
· Pante Stanica
Department research output in terms of peer-reviewed publications is excellent. A comprehensive list of our peer-reviewed publications for the years 2005 to 2009 can be found in Appendix C. The table
below summarizes the total annual output of peer-reviewed publications by our tenure-track faculty over the same period.
We can paint a more useful overall picture of published research output by looking at data from the recent peer comparison studies. Of particular note in the CES study is the fact that NPS, as a
school, ranked dead last in terms of the number of journal articles produced, which indicates low aggregate productivity in this regard. However, when one breaks out the results of the AA2010 peer
comparison study to the departmental level, we see that the Department of Applied Mathematics fares rather well against its peers. Note that there are generally two comparisons, first against all
similar sized departments (same number of faculty ±5) from a database of over 300 other institutions, and then against the fifteen CES peers. The tables below show the most salient features in
regards to publications and citations. Note that this data is collected by looking at publications from a selected set of sources over the three year period 2006-2008, and, hence, the total number of
publications listed for each department is generally undercounted. On the other hand, the metric is consistent from institution to institution; hence, the comparisons are on the whole valid measures
of the relative publication statistics. First of all we look at the percentage of faculty with a publication during the comparison period.
One can more readily compare by looking at the ratio of the NPS percentage to the CES and similar sized peer percentages as shown in this chart.
In light of the substantial teaching loads our relative performance in this category is quite good, particularly in comparison with other NPS technical departments. The only two GSEAS departments
that outperform us are small, have minimal teaching loads, and have large numbers of research faculty. Next we look at publications per author from the same study.
In this critical category we are ranked higher than any other department on campus, both in absolute terms and in comparison to our peers. This can be seen more readily by examining the relative
publication rate (ratio of NPS average to peer group average) which is displayed below.
Finally, it is noteworthy that the work of math faculty is well cited in comparison to our peers. This can be seen in the two charts that follow. The first displays the ratio between the percentage of
our faculty with a citation and the same percentage for the peer groups.
Note that we are clearly in the first tier of technical departments in this regard. The second chart shows the ratio of the number of citations per cited author between NPS and the peer groups. In
some sense, the citation rate indicates the impact of a particular author’s work and hence this is a critical measure of the importance of our work to the wider community of scientists.
Once again, in this critical area we are far stronger than the other technical departments on campus.
Collectively, the departmental research expenditures in 2009 total roughly $550,000 from external research funding, most of which comes from ONR and AFOSR, with lesser amounts from other DoN/DoD
sources. Approximately one-third of the tenured and tenure-track faculty have direct federally funded grant support. This is slightly above average for mathematics faculty as documented in the
Science and Engineering Indicators: 2010 (Table 5.12), which notes that only 29.7% of full-time mathematics faculty who had held doctoral degrees for at least four years received federal research support in 2009. Although this average is much lower than in most other fields of science, it has been relatively stable in the field of mathematics for many years. It is also worth noting that several additional faculty are involved in externally funded research projects with principal investigators from other NPS departments. External funding expended by these faculty is not listed in the table.
Unfortunately, efforts to attract external research funding are hampered by the lack of a sustainable graduate program and the effects of the high load of remedial undergraduate-level teaching.
The department maintains an active post-doctoral program. We currently have four National Research Council post-docs residing in the department. Two are working with Prof. Giraldo in the area of
Scientific Computing, one is working with Prof. Krener in the area of Applied Analysis, and one is working with Prof. Zhou in the area of Scientific Computing.
The departmental policy on scholarly activity has been crafted to support the mission of the Naval Postgraduate School, and in view of that supportive posture, our approach differs substantially from
that of most civilian research universities. More specifically we view research and scholarship as a means rather than as an end in itself. In light of that, it is our policy that all tenure-track
faculty shall be engaged in meaningful scholarly activity as part of their regular duties, and, furthermore, that such activity shall in some way support the mission of the school. As a department,
we recognize that the unique nature of the Naval Postgraduate School carries with it distinctive forms of meaningful scholarly activity, and we acknowledge that this can be very difficult to assess.
Moreover, we acknowledge that choosing simplistic numerical metrics would hinder our ability to contribute and could adversely affect the quality of our work. We strongly agree with the position of
the American Mathematical Society and their published 2006 statement regarding this issue – “When judging the work of most mathematicians, the key measure of value for a research program is the
quality of publications rather than the rate.”
The department places a high premium on collegiality, and this is reflected in our internal structure and governance. The Chair is elected by the department faculty and serves for a term of three
years after being appointed by the school’s President upon the recommendation of the Provost. The department is strongly committed to a system of shared collegial governance. There are several
standing committees, as outlined below, which recommend policies and actions on issues such as the curriculum, hiring, and admissions. All critical strategic decisions are made with the full
participation of the faculty after study and recommendation by the appropriate committee (or a specially appointed committee if the issue does not naturally fall in the purview of one of the standing
committees). Significant effort is made to constitute committees that are representative of the various groupings within the department (professorial rank, research area, etc.). Once major strategic
decisions have been made by the department, the Chair is responsible for implementing them.
In addition to committee input, the Chair convenes general faculty meetings once or twice per quarter as necessary, and meets with individual committees on an as-needed basis.
· Chair. Carlos Borges – The Chair plans and administers the educational, personnel, and financial activities of the department. The responsibilities of the Chair include:
o Organizing and supervising the department to carry out the educational policies of the school and to accomplish the objectives of the various curricula
o Planning and supervising research programs in the department to support the mission of the school
o Planning the academic program for the department
o Representing the department in academic and administrative matters, including the annual Promotion and Tenure (P&T) activities
o Recruiting qualified academic personnel for the department, within authorized allowances, and recommending their appointment
o Recommending faculty for promotion, tenure, and merit pay raises
o Providing professional evaluation of academic personnel and performance ratings of civil service personnel assigned to the department
o Maintaining familiarity with related activities at civilian educational institutions and technical and industrial organizations, so that curricula and courses are kept abreast of educational and
technical advances
o Managing the departmental budget, and representing the department in school-wide budgeting processes
o Overseeing the mentoring program for faculty.
o Designating and supervising Associate Chairs to assist with departmental administrative duties
o Working with the Program Officers in maintaining liaison with sponsors, developing new programs, and in the sponsor evaluation and modification of programs
· Associate Chair for Instruction. Bard Mansager - Appointed by and reports to the Chair; Primary responsibilities include:
o Serves as Academic Associate
o Oversees student admissions
o Designs and oversees the course matrices of all Math majors
o Oversees the department's teaching mission, including interface with client disciplines
o Focal point for curriculum reform efforts of the department
· Associate Chair for Research. Frank Giraldo - Appointed by and reports to the Chair; Primary responsibilities include:
o Oversees the department’s research programs
o Represents the department to the Campus Research Board
o Coordinates research activities within the department
· Associate Chair for Computing. David Canright - Appointed by and reports to the Chair; Primary responsibilities include:
o Oversees the department’s computing facilities
o Represents the department on the Campus Computing Advisory Board
o Coordinates and oversees software licenses and other related issues
· Colloquium Coordinator. Art Krener – Appointed by the Chair
o Oversees the departmental colloquium series
· Library Liaison. Hong Zhou – Appointed by the Chair
o Serves as a liaison between the department and the Dudley Knox Library
· Webmaster. Ralucca Gera – Appointed by the Chair
o Oversees the maintenance of the department web pages
· Planning Committee. Makes recommendations on long-range planning issues such as hiring, coordinated research efforts, and new initiatives. Responsible for creating and updating the department’s
strategic plan. The Associate Chair for Research is an ex officio member. Reports to the Chair.
· Course and Curriculum Committee. Assigns course coordinators to individual courses. Reviews class syllabi, curriculum materials, and other issues related to the department’s teaching mission.
Coordinates with client curricula to ensure that our course offerings continue to satisfy their requirements. Reports to the Associate Chair for Instruction.
· Computing Committee. Oversees computing issues within the department including the selection of instructional software. Reports to the Associate Chair for Computing.
· Doctoral Committee. Oversees all aspects of the doctorate program to include admissions, curriculum, selection of dissertation committees, and administration of qualifying examinations. Reports to
the Chair.
There are currently two staff members:
· Administrative Support Assistant (ASA): Bea Champaco - supervises office staff and is in charge of the department's financial operations.
· Office Automation Assistant (OA): Stephanie Muntean - assists department ASA with departmental operations and provides support to faculty.
The Department of Applied Mathematics computing infrastructure consists primarily of individual Windows machines in faculty and graduate student offices. These are all connected to the campus ITACS infrastructure, which provides nearly all of our software and hardware support. The department replaces individual PCs on a three- to four-year cycle, although funding for this requirement is a continuing problem.
Below is a summary of the workstations and servers that comprise the department's computing infrastructure.
· Approximately 30 PCs ranging from new to four years old
· Approximately 6 laptop computers
· One LCD projector
· One faculty member, Frank Giraldo, has a 4-node, 32-core Apple Xserve cluster.
· High Performance Computing Center. This group promotes scientific computing at NPS by providing support to researchers and departments who wish to engage in scientific computing, and aims to
establish NPS as a nationally recognized HPC "Center of Excellence." The group’s high-performance computing facility provides a powerful baseline of computation and storage infrastructure, including
scientific workstations, supercomputer systems, high speed networks, special purpose and experimental systems, the new generation of large scale parallel systems, and application and systems software
with all components well integrated and linked over a high speed network.
· ITACS. The Information Technology and Communications Services (ITACS) name reflects the incorporation of all communication services, telephone support, and network support into the core computing
functions that have been provided by the Naval Postgraduate School since 1953.
All faculty and staff are evaluated annually for purposes of determining merit raises and to maintain an ongoing dialogue regarding their personal goals and their role in the development of the
department. In addition, our curriculum, certificates, and courses are overseen by the appropriate committee and/or administrative member of the department. Below is a summary of how the faculty,
staff, and programs are evaluated and assessed.
Each faculty member is required to submit quarterly workload forms outlining their planned activities at the beginning of each quarter. At the end of the calendar year each faculty member compiles
and submits an annual Faculty Activity Report (FAR) which summarizes their accomplishments in the prior year. The Chair is in charge of evaluating the faculty on the basis of the quarterly workload
forms, the annual Faculty Activity Reports, and other information that may be pertinent for the year (e.g. student opinion form data). These evaluations take into consideration the contributions made
to teaching, research, and service. Teaching evaluation includes classroom performance, as well as the impact of any course development or reform efforts (either within the department or elsewhere).
The effectiveness of classroom performance is largely determined by reviewing data from the student opinion form although classroom visits by the Chair or other faculty (e.g. members of the mentoring
team for Assistant Professors) may also be used. For research, the emphasis is on determining the impact of the faculty member's work, measured using the guidelines and principles set forth in the
Marto Report and the Powers Report. Although this approach is far more demanding than simply counting papers or adding up grants, it is essential due to the very non-traditional nature of the Naval
Postgraduate School. Service includes departmental committee work, as well as service to the school (e.g. faculty council), service to the profession (e.g. conference organization), and service to
the community (e.g. outreach activities). Certain activities overlap multiple criteria. For example, the directing and mentoring of graduate students contributes to both the teaching and research
missions of the department. As another example, conference organization is a service to the profession, but it also enhances and facilitates the organizer's research program.
Each tenured or tenure-track faculty member is rated on a scale of Meritorious or Unsatisfactory in accordance with the school’s human resources office (HRO) procedure. There are no fixed percentage
weights on the contributions of research, teaching, and service. In addition, junior faculty are not expected to perform much service (though many are involved in conference organization and
outreach activities). Each faculty member discusses his or her evaluation with the Chair annually.
Lecturers and senior lecturers are also evaluated by the Chair. The evaluation criteria include classroom teaching, the impact of course and curriculum development (e.g. developing militarily
relevant example problems or demonstrations), and service (e.g. course coordination, review sessions, help-session coordination).
In early 2010, the current chair created a new internal review process by which the faculty evaluate the chair. This process is in its infancy, but uses an anonymous web-based survey tool that is implemented using Google documents. There are a number of statements which are ranked on a Likert scale (strongly disagree to strongly agree) as well as questions
which allow anonymous written responses and suggestions. Although this process needs additional refinement, initial faculty reaction has been quite positive.
Each year at the end of the campus promotion and tenure cycle, the Chair has individual meetings with all potential promotion and tenure candidates inside the department. For untenured tenure-track
faculty, the meeting is focused on the candidate’s timeline and generally includes a discussion of the mentor report for that year, the level of progress in the case, and specific actions to be taken
in any areas that are deficient or worthy of specific attention. For tenured associate professors, the discussion is meant to determine the candidate’s progress toward promotion to full professor and
the candidate’s interest in putting his or her case forward in the next P&T cycle.
Following these discussions, if there are any cases to be considered for the next cycle, the Chair tasks the candidate with generating a draft of their promotion package and convenes a meeting of the
appropriate faculty (e.g. tenured full professors to consider promotion to full cases) to consider whether the case is at a level that merits further consideration. Once a set of promotion candidates
has been determined, the chair appoints an individual department evaluation committee (DEC) for each candidate consisting of three faculty (two from within the department and one outside member) to
prepare the case for formal consideration. The candidate then prepares a complete documentation package, and the DEC then prepares a report evaluating the candidate and making a positive or negative
recommendation to the department. After the DEC report and the documentation are complete, this is made available to the appropriate group of faculty - the tenured faculty (in the case of promotion
candidates to associate professor) or the professors (in the case of promotion candidates to Professor). The appropriate faculty meet to consider the case. The meeting opens with a straw vote (by
secret ballot) which is tallied and announced to those present. This is followed by a detailed discussion of the case generally led by the DEC chair and ends with a final vote, also by secret ballot
(the Chair tallies but does not participate in the voting). After the final vote, the Chair writes a report noting the outcome of the faculty vote and formulating his or her recommendation on the
case. The full documentation package, the DEC report, and the Chair report are then forwarded to the campus-wide Faculty Promotion Council for consideration by the full school.
The University's Human Resources Office (HRO) oversees the review process of all university staff. HRO mandates that each staff member be given a written annual review along with a face-to-face
meeting with his or her supervisor.
Teaching evaluation within the department is two-pronged. The primary source of information is the Student Opinion Form (SOF) which is administered campus wide by the Registrar. This instructor/
course evaluation instrument incorporates a set of sixteen questions presented, as noted above, on a Likert scale, as well as an area for written commentary by the students. The SOF is administered
electronically and must be submitted by the students to the Registrar prior to the release of course grades (a student’s grade cannot be released until the SOF is submitted, so compliance is 100%).
After the submission of course grades the numerical SOF data is summarized (max, min, average, and standard deviation) and distributed to the individual faculty and to their respective department
chairs. The written comments are returned only to the individual faculty. In addition to the SOF, the Chair (and mentors in the case of junior faculty) will informally visit classes from time to time
to aid in the assessment of teaching. The visits of mentors are generally recorded in the annual mentor reports so that they can be used in the faculty evaluation and promotion process.
As mentioned in the section on departmental operations, we have a standing course and curriculum committee to review and oversee our courses and curriculum, as well as coordinate our offerings with
our various client curricula around the school. In addition, each course in the catalog has an official course coordinator who is charged with day-to-day monitoring of content, deciding on textbooks,
handling course validation requests, etc.
The chair currently conducts an exit briefing with each and every student receiving a degree in Applied Mathematics. This is done in a face-to-face meeting where the students are asked to give
specific input on our graduate programs and their experience in them. Although the discussion is started using generic questions such as “What is your general impression of the program?” it also
involves more specific questions like “What specific changes could we make that would improve the program?” The chair summarizes the responses and uses them as one tool for assessing the program.
First and foremost, it is critical to understand that NPS allocates department budgets in a manner very far removed from the models used in most universities. The process underwent a very large
change in 2009 with the adoption of the “nine month model,” but is still fundamentally one of ‘steering by the wake’ in that the current year’s direct teaching (DT) budget allocation is primarily
based on the number of eligible sections taught in the previous year (an eligible section is one in which there were 7 or more enrolled students). From our perspective the process of determining the
DT budget is basically:
1. There is a base allocation that covers each tenure-track faculty member for 9 months and the Chair for 12 months
2. Count the number of eligible sections (S) taught in the previous year
3. Determine the base capacity (C) of the department by multiplying the number of tenure-track faculty (not counting the Chair) by 4 (the nominal teaching load)
4. The number of exceptional sections, E = S − C, is divided by 8, and that many years of additional salary are added to the base budget (see the sketch below)
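To make the allocation arithmetic above concrete, here is a minimal sketch in Python. All numbers are hypothetical, and leaving E/8 unrounded is our own assumption, since the text does not specify how fractional years of salary are handled.

# Hedged sketch of the direct-teaching (DT) budget allocation described above.
def dt_budget(base_allocation, eligible_sections, tenure_track_faculty,
              annual_salary, nominal_load=4):
    """Estimate the DT budget from the previous year's eligible section count."""
    capacity = tenure_track_faculty * nominal_load      # step 3: C = faculty x 4
    exceptional = max(eligible_sections - capacity, 0)  # step 4: E = S - C
    extra_years = exceptional / 8                       # 8 exceptional sections ~ 1 year of salary
    return base_allocation + extra_years * annual_salary

# Hypothetical example: 17 faculty, 80 eligible sections, $100,000 average salary.
print(dt_budget(base_allocation=2_000_000, eligible_sections=80,
                tenure_track_faculty=17, annual_salary=100_000))  # 2150000.0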
Because of the small enrollments, many required sections fall below the eligible threshold, and hence, even though they must be taught to meet requirements, they are not accounted for in the budget
allocation. This generally leads to serious budget shortfalls in the summer quarter that are particularly difficult in the Department of Applied Mathematics, since this is one of our heaviest
teaching quarters. The table below shows the history of initial faculty labor budget allocations versus actual end of year expenditures exclusive of extramural research funding. It is important to
note two things. First, over the course of every year there are additional transfers for various reasons (e.g. one of our faculty teaches a class for another department). Second, the funding model
underwent a fundamental change in 2009 that made it far more realistic, although there are still serious problems.
│Year│Initial │Expended │
│2005│$1,581,875 │$1,641,824 │
│2006│$1,587,218 │$2,064,705 │
│2007│$1,365,424 │$2,203,826 │
│2008│$1,506,174 │$2,778,515 │
│2009│$2,423,897 │$2,992,863 │
To expand on the current budget picture, we can analyze the situation in 2009 in more detail since this is the only year for which we have data under the new funding model. The initial allocation
left out two important and known issues for the year – funding a full-year sabbatical for the previous chair and funding the work of one senior lecturer in managing an NPS program with Singapore.
Even though both of these requirements were well understood by the academic planning office before the initial allocation, many hours had to be spent by the chair to get budget transfers to fund
these external mandates, and, indeed, the full funding for these two issues did not arrive until August 21, 2009, just weeks before the close of the fiscal year. Had these properly been included in
the initial allocation, we would have started the year with $2,673,897. Over the course of the year, there were additional transfers in and out for various reasons, and the department ended the year
with a faculty labor shortfall of roughly $27,000. This shortfall is more than an order of magnitude smaller than in previous years, when it was not uncommon to run nearly $1,000,000 in the red.
The most serious issue regarding faculty labor budgeting is the failure to properly fund essential teaching requirements that fall below the 7 student enrollment threshold. This introduces
significant problems, as many required classes operate very near the threshold and enrollment variability basically leads to ‘unfunded mandates’ in that the department has to teach required core
classes for which it has not been funded. What is more disturbing is the unwillingness of the administration to provide additional funding in light of the high degree of efficiency of our operations.
Consider the following chart which shows the cost of resident teaching in six of the GSEAS departments (systems engineering is excluded because a high percentage of their teaching is distance
learning and the budgeting process for that is very different). The chart shows two measures of cost. The first is computed by dividing the mission-funded teaching budget for each department by the
total enrollments. The second is computed by dividing the mission-funded teaching budget for each department by the total number of four-unit weighted teaching credits (4-WTC). The four-unit weighted
teaching credit is computed by dividing the weighted teaching credits by four. This measure is highly comparable to enrollments but compensates for some of the practices in other departments (e.g.
splitting a 4-unit class into two 2-unit classes) that create the illusion of more teaching by proliferating sections.
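Restating the two measures just described as formulas (the symbols B, E, and WTC below are our own shorthand, not notation from the budget documents):

\text{cost per enrollment} = \frac{B}{E}, \qquad \text{cost per 4-WTC} = \frac{B}{\mathrm{WTC}/4} = \frac{4B}{\mathrm{WTC}},

where B is a department's mission-funded teaching budget for the year, E is its total enrollments, and WTC is the weighted teaching credit defined earlier.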
It is very clear from the data that our costs are roughly half of those for the other GSEAS departments. Indeed, we are unquestionably the most efficient technical department at NPS. In light of this
fact, it is difficult to understand the difficulty we have had trying to convince the administration to fund some of the smaller graduate-level sections we need to teach for our students and in order
to maintain the professional competence of our faculty.
Staff labor budgets are such a continual problem that we simply operate with an expectation of running a deficit which will eventually be paid from campus funds. The department is authorized staff
support of an Administrative Support Assistant and an Office Automation Assistant; this authorization provides barely sufficient staffing for a department of this size. The full cost of these
positions is well-known in advance, but the initial allocation is consistently less than 60% of the known cost.
The department receives an annual operations (OPTAR) budget of roughly $40,000. This is used to pay virtually all operating costs, including office supplies, electronic equipment (printers, office
computers, etc.), software licenses, department travel, honoraria, etc.
The department has made significant improvements since the last external review in 1985. It is worth noting that the 1985 self-study notes seven issues from the previous review in 1979 and follows up with their status in 1985. Before proceeding, it is worth revisiting those issues one more time. The first five issues have all been solved in the intervening years; they are:
The lack of a unified research effort is regarded as a weakness by members of the department – The 1985 report remarks that no progress had been made since 1979. Happily, in 2010 we have made great
progress on this issue. The department has focused on a few key areas (applied analysis, scientific computation / numerical analysis, and discrete mathematics) and we have good collaborative efforts
within the department as well as fruitful interdisciplinary collaborations.
Average to indifferent performance by faculty on loan from other departments – The 1985 report remarks that there had been progress as the department had been allowed to recruit faculty. It is a
pleasure to note that this is no longer an issue, as we rarely use faculty from other departments to teach classes in mathematics. However, it bears mention that even in recent years, when we have
used faculty from other departments, the quality of instruction has been consistently unacceptable. Moreover, there have been recent discussions of forcing the department to use faculty from other
departments to teach calculus. This is a grave concern.
The department lacks expertise in applied algebra and discrete mathematical structures – This problem had been solved in 1985 and remains solved to this day as we have a very strong group of faculty
in these areas.
The department has a need for increased expertise in numerical analysis, numerical methods for differential equations, and computation – The 1985 report notes that they had begun to address the
problem, but had not yet done so. Happily, in 2010 this is no longer a problem. Indeed these areas are particular strengths for the department at the current time and, in fact, account for the
majority of our external grant funding.
The department needs “new blood” – Although there is a history of long gaps in recruiting, it is not a major issue at the current time as we have recruited four new faculty in the last six years.
The remaining two issues are still very serious concerns:
Lack of graduate students in mathematics – The 1985 report notes that this was still a significant problem in 1985. Soon after that report, the 380 curriculum was reinstated and there followed a
period of growth that showed real promise from 1990 to 2001, graduating an average of 6 students per year. By 2005 the closure of the 380 curriculum had brought the department back to 1985 levels,
and we have struggled, with some success, to recover from there.
The lack of means to combat the teaching of mathematics courses in other departments – The 1985 report notes that this remains a problem and, sadly, the situation remains unchanged to this day. There
is substantial course poaching and duplication across the campus, and repeated attempts to eliminate this needless duplication of effort have borne no fruit. This problem had been documented as early
as 1947, and without intervention from the school administration, there is little hope that this practice will abate in the future.
These two issues are very closely related to the two very specific recommendations made in the external review from 1985. They are:
Encourage and support the growth of a mathematics degree program – The history here has been mixed. Indeed the school did reinstate the 380 curriculum following the 1985 review, and there was a solid
period of support which led to the growth of a small but high quality program. Unfortunately, the ill-advised cancellation of the 380 curriculum in 2001 has done tremendous damage to the program from
which we still have not recovered.
The Naval Postgraduate School should at least double its support for the teaching of advanced mathematics courses – It was specifically noted that the 15 units of graduate level math being offered in
1985 was far from sufficient. Since that time there has not been a significant increase in the support for teaching of advanced mathematics courses. Although the department has taught more than 30
units on numerous occasions, some of these sections have fewer than 7 students, and the department has to teach them “out of hide.” In sum, this issue has not been addressed and remains a very
serious one.
Having now reviewed past issues, we point out our main goals and issues moving forward. Many if not all of them are related to the issues from the last two reviews (1979 and 1985) as seen above.
Indeed, all of them relate to the fact that the teaching profile in the department is excessively weighted toward classes at the 1000 and 2000 level; there are few opportunities to teach at the 4000
level. This issue was discussed above in the section on our instructional program where we noted a clear and critical contrast between our teaching profile and that in other comparable departments at
NPS. The Department of Applied Mathematics is unique in that we do more than 70% of our teaching at the first and second year undergraduate levels (1000 and 2000). This impacts faculty careers in
several negative ways. First, the extremely limited graduate level teaching makes it far more difficult to develop and maintain active research programs. Second, the incredible demands of fast-paced
remedial undergraduate-level teaching make classroom excellence a far more critical issue in promotion and tenure decisions than it is in other departments. Third, the time demands of teaching large sections of remedial mathematics to students just returning to university studies leave little time to pursue research during teaching quarters. All of these issues are amplified by the widely
varying levels of preparation of incoming students (including direct entry students who often come straight into multi-variable calculus without a refresher quarter in which to relearn basic single
variable calculus).
There are three primary causes for this state of affairs: a lack of campus support for graduate-level mathematics courses, a small graduate program, and course poaching. We must address each of them in order to improve the situation.
We believe that a regular and reliable offering of graduate level mathematics would lead to increased enrollments in these very classes. There are increasing numbers of doctoral students in many
technical curricula who would be eager to pursue a minor in mathematics, but often shy away from the option because they cannot be certain that classes they need for the minor will be offered. We
have taken a first step with the creation of two certificate programs which attract both master’s and Ph.D. students from other curricula. But even these are difficult to coordinate since we
generally need to secure a full cohort (at least seven students) before we begin. If, on the other hand, students and program officers knew in advance that certain classes would be offered on a
regular basis (annually or biannually), then we believe we could fill the seats with students seeking certificates, as well as those Ph.D. students in need of a minor. An administration commitment to
fund just twelve sections per year at the 4000 level would likely be enough to put us on a path to self-sufficiency.
The Department of Applied Mathematics is currently the only department without a Navy-sponsored master’s degree program. While we have been able to rebuild a steady, although small, input of students
from the Army and complement this with a few dual master's students, the numbers are simply not sufficient to maintain a high-quality graduate-level department. Although faculty are generally engaged in
thesis advising both inside and outside the department, a larger group of math graduate students would materially improve our opportunities to teach graduate level mathematics as well as maintain
active and relevant research programs. We need to convince the Navy to officially reinstate the 380 curriculum and to begin sending students. We believe there is a real need for a core group of
officers with graduate math degrees to teach at the US Naval Academy. Moreover, we believe the Navy would benefit by having more members of the officer corps with advanced degrees in mathematics.
There is a significant and ongoing problem with course poaching and duplication by other departments. This was clearly noted in the 1985 review and continues to this day. It is worth special mention
that this is a long-standing problem at NPS. Indeed, in 1947 the Heald Report (Report on the Educational Program of the United States Naval Postgraduate School. By an Advisory Committee to the
American Council on Education, Henry T. Heald, Chairman. New York: American Council on Education, June 27, 1947) made special mention of this problem. The report noted that the school’s failure to
use standard individual courses as building blocks resulted in a “complicated array” of very similar courses. And furthermore, that this led to small, uneconomical classes and related inefficiencies.
This practice continues to have a serious negative impact both on the department and the school in several ways.
Above all, it is an inefficient use of school resources since this practice results in multiple versions of essentially the same material being taught in very small sections by a variety of people in
a variety of departments. This practice is particularly prevalent within GSEAS where, for example, nearly half of the departments offer their own course in ocean acoustics, and nearly every
department offers its own version of a numerical methods course. This practice also hurts the students since they are often taking classes that have been watered down or lack academic rigor precisely
because they are being taught outside of the department in which they should be legitimately housed.
More troubling for us is the fact that most of the poaching and duplication occurs at the 3000 and 4000 level, so it is a major contributor to the unhealthy teaching profile in the department. For
example, consider PH3991 – Theoretical Physics. The textbook for this course is Mathematical Methods in the Physical Sciences by Mary Boas, and the course is essentially one on methods of applied
mathematics. This class is generally offered twice a year with a dozen or more students each time. There are many more examples similar to this one all over campus. For a detailed description of
course poaching and duplication issues currently affecting the department see Appendix E.
Faculty Research Interests (recovered from a two-column listing; vita and home-page links omitted):
· Numerical Analysis, Numerical Linear Algebra, Applied Approximation Theory, Orthogonal Polynomials, Floating-Point Computation
· Fluid Dynamics, Materials Processing, Cryptography, Orbital Mechanics, Acoustics, Fractal Geometry
· Inverse Problems in Acoustics and Electromagnetics, Particularly Radar Imaging
· Theory and Software to Model the Dynamics of Fluids and Structures; Analytical and Numerical Techniques for Prediction of Satellite Orbits
· Mathematical Education
· Optimal Control Theory, Numerical Optimal Control Theory, Control of Distributed Parameter Systems, Numerical Analysis
· Application of Combinatorial Techniques to Problems of Digital Communications, Cryptography and Computer Security, Coding and Information Theory
· Asymptotic Analysis, Dynamical Systems, Applied Mathematics
· Graph Theory, Combinatorial Applications, Artificial Intelligence
· Spectral Element and Discontinuous Galerkin Methods, Domain Decomposition Methods and Parallel Computing, Time-Integrators, Adaptive Methods
· Computational Complex Analysis, Numerical Linear Algebra
· Nonlinear Control Theory with Engineering Applications, including Bifurcation Control, Normal Forms and Invariants, H-infinity Control, Formation Control and Their Applications in the Control of Aircraft
· Control and Estimation
· Combat Modeling
· Finite Elements, Orbit Predictions, Parallel Computing
· Game Theory, Terrorism and Low-Intensity Conflict, Voting, Economic Equilibrium
· Graph Theory, Network Design and Optimization, Heuristic Algorithms in Combinatorial Optimization
· Active Materials and the Coupling between Elastic, Fluid, and Piezoelectric Media; Wave Propagation Phenomena; Electromagnetic Wave Propagation; the Biot Theory of Wave Propagation in Porous Media
· Cryptography & Coding Theory, Boolean Functions, Logic and Discrete Mathematics, Number Theory, Graph Theory, Combinatorial Mathematics, Algebra
· Scientific Computation, Mathematical Modeling of Nonlinear Phenomena, Structure and Flow Properties of Complex Polymeric and Nano-Composite Fluids
The Menneken Faculty Award for Excellence in Scientific Research:
The Sigma Xi Carl E. Menneken Research Award:
The Rear Admiral John Jay Schieffelin Award for Excellence in Teaching:
● Carlos Borges
● Gordon Latta (Emeritus)
● Bard Mansager
● Arthur Schoenstadt (Emeritus)
● Maurice Weir (Emeritus)
● Carroll Wilde (Emeritus)
AIAA Fellows:
● Fariba Fahroo (Associate Fellow)
● Beny Neta (Associate Fellow)
ICA Fellows:
● Ralucca Gera (Associate Fellow)
● Craig Rasmussen
● Pante Stanica (Associate Fellow)
IEEE Fellows:
SIAM Fellows:
Guillermo Owen holds the title of Distinguished Professor and is a member of the following:
● Colombian Academy of Exact and Physical Sciences
● Royal Academy of Sciences and Arts of Catalonia
● Third World Academy of Sciences
Bedrossian, N., Bhatt, S., Kang, W. & Ross, I.M. 2009, "Zero-propellant maneuver guidance", IEEE Control Systems Magazine, vol. 29, no. 5, pp. 53-73.
Borges, C.F. 2009, "A full-Newton approach to separable nonlinear least squares problems and its application to discrete least squares rational approximation", Electronic Transactions on Numerical
Analysis, vol. 35, pp. 57-68.
Borm, P., Van den Brink, R., Hendrickx, R. & Owen, G. 2009, "The VL control measure for symmetric networks", Social Networks, pp. 85-91.
Chartrand, G., Okamoto, F., Rasmussen, C.W. & Zhang, P. 2009, "The set chromatic number of a graph", Discussiones Mathematicae Graph Theory, vol. 29, pp. 545-561.
Chun, C., Bae, H.J. & Neta, B. 2009, "New families of nonlinear third-order solvers for finding multiple roots", Computers & Mathematics with Applications, vol. 57, no. 9, pp. 1574-1582.
Chun, C. & Neta, B. 2009, "A third-order modification of Newton's method for multiple roots", Applied Mathematics and Computation, vol. 211, no. 2, pp. 474-479.
Chun, C. & Neta, B. 2009, "Certain improvements of Newton's method with fourth-order convergence", Applied Mathematics and Computation, vol. 215, pp. 821-828.
Cusick, T.W., Li, Y. & Stanica, P. 2009, "On a conjecture of balanced symmetric Boolean functions", J. Mathematical Cryptology, vol. 3, no. 4, pp. 273--290.
Cusick, T.W. & Stanica, P. 2009, Cryptographic Boolean Functions and Applications, Academic Press.
Cusick, T.W. & Stanica, P. 2009, "Sums of the Thue-Morse sequence over arithmetic progressions", Advances and Applications in Discrete Mathematics, vol. 4, no. 2, pp. 127-135.
Dartyge, C., Luca, F. & Stanica, P. 2009, "On digit sums of multiples of an integer", J. Number Theory., vol. 129, no. 11, pp. 2820-2830.
Dea, J.R., Giraldo, F.X. & Neta, B. 2009, "High-order non-reflecting boundary conditions for the linearized 2-D Euler equations: No mean flow case", Wave Motion, vol. 46, no. 3, pp. 210-220.
Gera, R., Hattingh, J.H., Jafari Rad, N., Joubert, E.J. & van der Merwe, L. 2009, "Vertex and edge critical total restrained domination in graphs", The Bulletin of the Institute of Combinatorics and
its Applications, vol. 57, pp. 107-117.
Haegel, N.M., Mills, T.J., Talmadge, M., Scandrett, C.L., Frenzen, C.L., Yoon, H., Fetzer, C.M. & King, R.R. 2009, "Direct imaging of anisotropic minority-carrier diffusion in ordered GaInP", Journal
of Applied Physics, pp. 023711 (5 pp.).
Jangveladze, T., Kiguradze, Z. & Neta, B. 2009, "Large time behavior of solutions to a nonlinear integro-differential system", Journal of Mathematical Analysis and Applications, vol. 351, no. 1.
Jangveladze, T., Kiguradze, Z. & Neta, B. 2009, "Large time behavior of solutions and finite difference scheme to a nonlinear integro-differential equation", Computers & Mathematics with
Applications, vol. 57, no. 5, pp. 799-811.
Jangveladze, T., Kiguradze, Z. & Neta, B. 2009, "Finite difference approximation of a nonlinear integro-differential system", Applied Mathematics and Computation, vol. 215, pp. 615-628.
Kang, W., Ross, I.M., Pham, K. & Gong, Q. 2009, "Autonomous observability of networked multi-satellite systems", AIAA J. of Guidance, Control, & Dynamics, vol. 32, no. 3, pp. 869-877.
Kilic, E. & Stanica, P. 2009, "Generating matrices for weighted sums of second order linear recurrences", J. Integer Seq., vol. 12, no. 2, pp. Article 09.2.7, 11.
Kilic, E. & Stanica, P. 2009, "Factorizations and representations of second order linear recurrences with indices in arithmetic progressions", Bulletin Mex. Math. Soc., vol. 15, pp. 23-36.
Konyagin, S.V., Luca, F. & Stanica, P. 2009, "Sum of divisors of Fibonacci numbers", Unif. Distrib. Theory, vol. 4, no. 1, pp. 1--8.
Luca, F. & Stanica, P. 2009, "Fibonacci numbers of the form p^a ± p^b" in Proceedings of International Conference on Fibonacci Numbers Congressus Numerantium, , pp. 177-183.
Luca, F. & Stanica, P. 2009, "On Machin's formula with powers of the golden section", International Journal of Number Theory, vol. 5, no. 6, pp. 973-979.
Maitra, S., Subba Rao, Y.V., Stanica, P. & Gangopadhyay, S. 2009, "Nontrivial solutions to the cubic sieve congruence problem x^3 = y^2 z (mod p)", Special Issue on Applied Cryptography & Data Security
in Journal of "Computación y Sistemas" (eds. F. Rodriguez-Henriquez, D. Chakraborty), vol. 12, no. 3, pp. 253-266.
McCormick, G.H. & Owen, G. 2009, "Terrorists and their sponsors: An inquiry into trust and double-crossing" in Mathematical Methods in Counterterrorism, ed. J. Farley,.
Owen, G. 2009, "Endogenous Formation of Coalitions", International Game Theory Review, pp. 461-470.
Petrakos, N., Dinolt, G., Michael, B. & Stanica, P. 2009, "Cube-type algebraic attacks on wireless encryption protocols", IEEE Computer, pp. 106-108.
Rasmussen, C.W. & Okamoto, F. 2009, "Set vertex colorings and joins of graphs", Czechoslovak Mathematical Journal, vol. 59, no. 4, pp. 929-941.
Restelli, M. & Giraldo, F.X. 2009, "A conservative semi-implicit discontinuous Galerkin method for the Navier-Stokes equations in nonhydrostatic mesoscale modeling", SIAM Journal on Scientific
Computing, vol. 31, pp. 2231-2257.
Wang, H.Y., Zhou, H. & Forest, M.G. 2009, "Sheared Nematic Liquid Crystal Polymer Monolayers", Discrete and Continuous Dynamical Systems-Series B, vol. 11, no. 2, pp. 497-517.
Canright, D.R. & Batina, L. 2008, "A Very Compact "Perfectly Masked" S-box for AES", Applied Cryptography and Network Security - ACNS 2008. 6th International Conference. Proceedings (Lecture Notes in
Computer Science Vol. 5037), , pp. 446-459.
Chun, C. & Neta, B. 2008, "Some modification of Newton's method by the method of undetermined coefficients", Computers & Mathematics with Applications, vol. 56, no. 10, pp. 2528-2538.
Cusick, T.W., Li, Y. & Stanica, P. 2008, "Balanced symmetric functions over GF(p)", IEEE Trans. Inform. Theory, vol. 54, no. 3, pp. 1304--1307.
Demetriou, M.A. & Fahroo, F. 2008, "Natural observers for a class of second order bilinear infinite dimensional systems", Proceedings of the 46th IEEE Conference on Decision and Control.
Eroh, L. & Gera, R. 2008, "Global alliance partition in trees", Journal of Combinatorial Mathematics and Combinatorial Computing, , no. 66, pp. 161-9.
Fahroo, F. & Ross, I.M. 2008, "Pseudospectral methods for infinite-horizon optimal control problems", Journal of Guidance Control and Dynamics, vol. 31, no. 4, pp. 927-936.
Fahroo, F. & Ross, I.M. 2008, "Convergence of the costates does not imply convergence of the control", Journal of Guidance Control and Dynamics, vol. 31, no. 5, pp. 1492-1497.
Filaseta, M., Luca, F., Stanica, P. & Underwood, R.G. 2008, "Galois groups of polynomials arising from circulant matrices", J. Number Theory, vol. 128, no. 1, pp. 59--70.
Fredricksen, H.M., Ionascu, E.J., Luca, F. & Stanica, P. 2008, "Minimal Niven numbers", Acta Arith., vol. 132, no. 2, pp. 135-159.
Gera, R. & Shen, J. 2008, "Extension of strongly regular graphs", Electronic Journal of Combinatorics, vol. 15, no. 1.
Giraldo, F.X. & Restelli, M. 2008, "A study of spectral element and discontinuous Galerkin methods for the Navier-Stokes equations in nonhydrostatic mesoscale atmospheric modeling: Equation sets and
test cases", Journal of Computational Physics, vol. 227, no. 8, pp. 3849-3877.
Giraldo, F.X. & Warburton, T. 2008, "A high-order triangular discontinuous Galerkin oceanic shallow water model", International Journal for Numerical Methods in Fluids, vol. 56, no. 7, pp. 899-925.
Gomez, D., Gonzalez-Aranguena, E., Manuel, C. & Owen, G. 2008, "A value for generalized probabilistic communication situations", European Journal of Operational Research, vol. 190, no. 2.
Gomez, D., Gonzalez-Aranguena, E., Manuel, C., Owen, G., del Pozo, M. & Saboya, M. 2008, "The cohesiveness of subgroups in social networks: A view from game theory", Annals of Operations Research,
vol. 158, no. 1, pp. 33-46.
Gong, Q., Fahroo, F. & Ross, I.M. 2008, "Spectral algorithm for pseudospectral methods in optimal control", Journal of Guidance Control and Dynamics, vol. 31, no. 3, pp. 460-471.
Gong, Q., Ross, I.M., Kang, W., Fahroo, F. & Mao, J. 2008, "Connections Between the Covector Mapping Theorem and Convergence of Pseudospectral Methods for Optimal Control", J. Computational
Optimization and Applications, vol. 41, no. 3, pp. 307-335.
Kang, W., Ross, I.M. & Gong, Q. 2008, Pseudospectral Optimal Control and Its Convergence Theorems, Springer.
Kim, Y.J., Giraldo, F.X., Flatau, M., Liou, C.S. & Peng, M.S. 2008, "A sensitivity study of the Kelvin wave and the Madden-Julian Oscillation in aquaplanet simulations by the Naval Research
Laboratory Spectral Element Atmospheric Model", Journal of Geophysical Research-Atmospheres, vol. 113, no. D20.
Koster, M., Lindelauf, R., Lindner, I. & Owen, G. 2008, "Mass-mobilization with noisy conditional beliefs", Mathematical Social Sciences, vol. 55, no. 1, pp. 55-77.
Lauter, M., Giraldo, F.X., Handorf, D. & Dethloff, K. 2008, "A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates", Journal of Computational Physics,
vol. 227, no. 24, pp. 10226-10242.
Lindner, I., Grofman, B. & Owen, G. 2008, "Modified power indices for indirect voting" in Power, Freedom, and Voting, eds. M. Braham & F. Steffen, Springer Verlag, , pp. 119-138.
Mao, J. & Kang, W. 2008, "A three tier cooperative control architecture for multi-step semiconductor manufacturing process", Journal of Process Control, vol. 18, pp. 954-960.
Neta, B. 2008, "On Popovski's method for nonlinear equations", Applied Mathematics and Computation, vol. 201, no. 1-2, pp. 710-715.
Neta, B. 2008, "New third order nonlinear solvers for multiple roots", Applied Mathematics and Computation, vol. 202, no. 1, pp. 162-170.
Neta, B. & Johnson, A.N. 2008, "High-order nonlinear solver for multiple roots", Computers & Mathematics with Applications, vol. 55, no. 9, pp. 2012-2017.
Neta, B. & Johnson, A.N. 2008, "High order nonlinear solver", J. Comput. Methods Sci. Eng., vol. 8, no. 4-6, pp. 245--250.
Neta, B., van Joolen, V.J., Dea, J.R. & Givoli, D. 2008, "Application of high-order Higdon non-reflecting boundary conditions to linear shallow water models", Communications in Numerical Methods in
Engineering, vol. 24, no. 11, pp. 1459-1466.
Owen, G. 2008, "Endogenous formation of coalitions", Int. Game Theory Rev., vol. 10, no. 4, pp. 461--470.
Owen, G. & McCormick, G.H. 2008, "Finding a moving fugitive. A game theoretic representation of search", Computers & Operations Research, vol. 35, no. 6, pp. 1944-1962.
Stanica, P. 2008, "On the nonexistence of bent rotation symmetric Boolean functions of degree greater than two" in Proceedings of NATO Advanced Studies Institute (Boolean Functions in Cryptology and
Information Security - Nato Science for Peace and Security) Ed. O.A. Logachev, Berlin, pp. 214--218.
Stanica, P. & Maitra, S. 2008, "Rotation symmetric Boolean functions---count and cryptographic properties", Discrete Appl. Math., vol. 156, no. 10, pp. 1567--1580.
Van den Brink, R., Borm, P., Hendrickx, R. & Owen, G. 2008, "Characterizations of the β- and the degree network power measure", Theory and Decision, vol. 64, no. 4, pp. 519-536.
Wang, H. & Zhou, H. 2008, "Extendability of Equilibria of Nematic Polymers", Abstract and Applied Analysis, .
Wang, H.Y. & Zhou, H. 2008, "Stokes efficiency of molecular motor-cargo systems", Abstract and Applied Analysis, .
Wang, H.Y. & Zhou, H. 2008, "Multiple branches of ordered states of polymer ensembles with the Onsager excluded volume potential", Physics Letters A, vol. 372, no. 19, pp. 3423-3428.
Wang, H. & Zhou, H. 2008, "Exact solution of a constrained optimization problem in thermoelectric cooling", Applied Mathematical Sciences, vol. 2, no. 4, pp. 177-186.
Wilson, L., Zhou, H., Kang, W. & Wang, H. 2008, "Controllability of non-Newtonian fluids under homogeneous extensional flow", Appl. Math. Sci. (Ruse), vol. 2, no. 41-44, pp. 2145--2156.
Zhou, H., Kang, W., Krener, A.J. & Wang, H. 2008, Homogeneous flow field effect on the control of Maxwell materials.
Alonso-Meijide, J.M., Carreras, F., Fiestras-Janeiro, M.G. & Owen, G. 2007, "A comparative axiomatic characterization of the Banzhaf-Owen coalitional value", Decision Support Systems, pp. 701-712.
Cusick, T.W., Fredricksen, H.M., Ionascu, E.J. & Stanica, P. 2007, "Remarks on a sequence of minimal Niven numbers" in Proceedings of SEQUENCES, ed. S.W. Golomb, Springer-Verlag, , pp. 162-168.
Cusick, T.W., Fredricksen, H.M. & Stanica, P. 2007, "On the delta sequence of the Thue-Morse sequence", Australas. J. Combin., vol. 39, pp. 293-300.
Filaseta, M., Luca, F., Stanica, P. & Underwood, R.G. 2007, "Two Diophantine approaches to the irreducibility of certain trinomials", Acta Arith., vol. 128, no. 2, pp. 149--156.
Frenzen, C.L., Ionascu, E.J. & Stanica, P. 2007, "A proof of two conjectures related to the Erdos-Debrunner inequality", JIPAM. J. Inequal. Pure Appl. Math., vol. 8, no. 3, Article 68, 13 pp.
Gera, R. 2007, "On dominator colorings in graphs", Graph Theory Notes N. Y., vol. 52, pp. 25--30.
Gera, R. & Ping, Z. 2007, "Stratified domination in oriented graphs", Journal of Combinatorial Mathematics and Combinatorial Computing, vol. 60, pp. 105-25.
Gong, Q., Ross, I.M. & Kang, W. 2007, "A pseudospectral observer for nonlinear systems", Discrete Contin. Dyn. Syst. Ser. B, vol. 8, no. 3, pp. 589--611 (electronic).
Ionascu, E.J., Luca, F. & Stanica, P. 2007, "Heron triangles with two fixed sides", J. Number Theory, vol. 126, no. 1, pp. 52--67.
Ionascu, E.J. & Stanica, P. 2007, "Extreme values for the area of rectangles with vertices on concentrical circles", Elem. Math., vol. 62, no. 1, pp. 30--39.
Ji, G.H., Wang, Q., Zhang, P.W., Wang, H.Y. & Zhou, H. 2007, "Steady states and their stability of homogeneous, rigid, extended nematic polymers under imposed magnetic fields", Communications in
Mathematical Sciences, vol. 5, no. 4, pp. 917-950.
Kang, W., Gong, Q. & Ross, I.M. 2007, "On the Convergence of Nonlinear Optimal Control using Pseudospectral Methods for Feedback Linearizable Systems", International Journal of Robust and Nonlinear
Control, vol. 17, pp. 1251-1277.
Luca, F. & Stanica, P. 2007, "Linear equations with the Euler totient function", Acta Arith., vol. 128, no. 2, pp. 135--147.
Mao, J. & Kang, W. 2007, "Benchmark study of run-to-run controllers for the lithographic control of the critical dimension", Journal of Micro/Nanolithography, MEMS, and MOEMS, vol. 6, no. 2.
Neta, B. 2007, "P-stable high-order super-implicit and Obrechkoff methods for periodic initial value problems", Computers & Mathematics with Applications, vol. 54, pp. 117-126.
Owen, G. & Lindner, I. 2007, "Cases where the Penrose limit theorem does not hold", Mathematical Social Sciences, vol. 53, no. 3, pp. 232-8.
Rasmussen, C.W. 2007, "On Efficient Construction of Minimum-Sum Vertex Covers", Graph Theory Notes of New York LII, , pp. 45-54.
Stanica, P. 2007, "Graph eigenvalues and Walsh spectrum of Boolean functions" in Combinatorial number theory de Gruyter, Berlin, pp. 431--442.
Wang, H.Y. & Zhou, H. 2007, "Monotonicity of a key function arised in studies of nematic liquid crystal polymers", Abstract and Applied Analysis, .
Zhou, H. & Forest, M.G. 2007, "Nematic liquids in weak capillary Poiseuille flow: structure scaling laws and effective conductivity implications", International Journal of Numerical Analysis and
Modeling, vol. 4, no. 3-4, pp. 460-477.
Zhou, H., Forest, M.G. & Wang, Q. 2007, "Anchoring-induced texture & shear banding of nematic polymers in shear cells", Discrete and Continuous Dynamical Systems-Series B, vol. 8, no. 3, pp. 707-733.
Zhou, H. & Wang, H.Y. 2007, "Elongational perturbations on nematic liquid crystal polymers under a weak shear", Physics of Fluids, vol. 19.
Zhou, H. & Wang, H.Y. 2007, "Steady states and dynamics of 2-D nematic polymers driven by an imposed weak shear", Communications in Mathematical Sciences, vol. 5, pp. 113-132.
Zhou, H., Wang, H.Y. & Wang, Q. 2007, "Nonparallel solutions of extended nematic polymers under an external field", Discrete and Continuous Dynamical Systems-Series B, vol. 7, pp. 907-929.
Zhou, H., Wang, H.Y., Wang, Q. & Forest, M.G. 2007, "Characterization of stable kinetic equilibria of rigid, dipolar rod ensembles for coupled dipole-dipole and Maier-Saupe potentials", Nonlinearity,
vol. 20, no. 2, pp. 277-297.
Zhou, H. & Wang, H. 2007, "Asymptotic study on the extendability of equilibria of nematic polymers", International Journal of Contemporary Mathematical Sciences, vol. 2, no. 21, pp. 1009-1023.
Zhou, H., Wilson, L. & Wang, H.Y. 2007, "On the equilibria of the extended nematic polymers under elongational flow", Abstract and Applied Analysis.
Cui, Z.L., Forest, M.G., Wang, Q. & Zhou, H. 2006, "On weak plane Couette and Poiseuille flows of rigid rod and platelet ensembles", SIAM Journal on Applied Mathematics, vol. 66, no. 4.
Demetriou, M.A. & Fahroo, F. 2006, "Model reference adaptive control of structurally perturbed second-order distributed parameter systems", International Journal of Robust and Nonlinear Control, vol.
16, no. 16, pp. 773-799.
Fahroo, F. & Ito, K. 2006, "Optimal absorption design for damped elastic systems", 2006 American Control Conference (IEEE Cat. No. 06CH37776C), , pp. 5 pp.|CD-ROM.
Fahroo, F. & Ito, K. 2006, "Optimal absorption design and sensitivity of eigenvalues", Proceedings of the 45th IEEE Conference on Decision and Control (IEEE Cat. No. 06CH37770), , pp. 6 pp.|CD-ROM.
Gera, R. & Ping, Z. 2006, "On stratification and domination in graphs", Discussiones Mathematicae Graph Theory, vol. 26, no. 2, pp. 249-72.
Gera, R., Rasmussen, C.W. & Horton, S. 2006, "Dominator colorings and safe clique partitions", Proceedings of the Thirty-Seventh Southeastern International Conference on Combinatorics, Graph Theory
and Computing, vol. 181, pp. 19--32.
Gera, R., Rasmussen, C.W., Stanica, P. & Horton, S. 2006, "Results on the min-sum vertex cover problem", Congressus Numerantium, vol. 178, pp. 161--172.
Giraldo, F.X. 2006, "High-order triangle-based discontinuous Galerkin methods for hyperbolic equations on a rotating sphere", Journal of Computational Physics, vol. 214, no. 2, pp. 447-465.
Giraldo, F.X. 2006, "Hybrid Eulerian-Lagrangian semi-implicit time-integrators", Computers & Mathematics with Applications, vol. 52, no. 8-9, pp. 1325-1342.
Giraldo, F.X. & Taylor, M.A. 2006, "A diagonal-mass-matrix triangular-spectral-element method based on cubature points", Journal of Engineering Mathematics, vol. 56, no. 3, pp. 307-322.
Gong, Q., Kang, W. & Ross, I.M. 2006, "A pseudospectral method for the optimal control of constrained feedback linearizable systems", IEEE Trans. Automat. Control, vol. 51, no. 7, pp. 1115--1129.
Hamzi, B., Krener, A.J. & Kang, W. 2006, "The controlled center dynamics of discrete time control bifurcations", Systems Control Lett., vol. 55, no. 7, pp. 585--596.
Ji, G.H., Wang, Q., Zhang, P.W. & Zhou, H. 2006, "Study of phase transition in homogeneous, rigid extended nematics and magnetic suspensions using an order-reduction method", Physics of Fluids, vol.
18, no. 12.
Kang, W. 2006, "Moving horizon numerical observers of nonlinear control systems", IEEE Trans. Automat. Control, vol. 51, no. 2, pp. 344--350.
Kang, W. & Krener, A.J. 2006, "Normal forms of nonlinear control systems" in Chaos in automatic control CRC Press, Boca Raton, FL, pp. 345--376.
Luca, F. & Stanica, P. 2006, "F_1 F_2 F_3 F_4 F_5 F_6 F_8 F_{10} F_{12} = 11!", Port. Math. (N.S.), vol. 63, no. 3, pp. 251-260.
McCormick, G.H. & Owen, G. 2006, "A game model of counterproliferation, with multiple entrants", Int. Game Theory Rev., vol. 8, no. 3, pp. 339--353.
Melman, A. & Gragg, W.B. 2006, "An optimization framework for polynomial zerofinders", Amer. Math. Monthly, vol. 113, no. 9, pp. 794--804.
Neta, B. 2006, "Variational data assimilation and optimal control - Preface", Computers & Mathematics with Applications, vol. 52, no. 8-9, pp. XIII-XV.
Neta, B. 2006, "Professor Ionel Michael Navon - Dedication", Computers & Mathematics with Applications, vol. 52, no. 8-9, pp. XVII-XXI.
Owen, G. & Grofman, B. 2006, "Two-stage electoral competition in two-party contests: persistent divergence of party positions", Social Choice and Welfare, vol. 26, no. 3, pp. 547-569.
Owen, G., Lindner, I., Feld, S.L., Grofman, B. & Ray, L. 2006, "A simple "market value" bargaining model for weighted voting games: characterization and limit theorems", International Journal of Game
Theory, vol. 35, no. 1, pp. 111-128.
Ross, I.M. & Fahroo, F. 2006, "Issues in the real-time computation of optimal control", Mathematical and Computer Modelling, vol. 43, no. 9-10, pp. 1172-1188.
Zhou, H. & Forest, M.G. 2006, "Anchoring distortions coupled with plane Couette & Poiseuille flows of nematic polymers in viscous solvents: Morphology in molecular orientation, stress & flow",
Discrete and Continuous Dynamical Systems-Series B, vol. 6, no. 2, pp. 407-425.
Ammar, G.S., Gragg, W.B. & He, C. 2005, "An efficient QR algorithm for a Hessenberg submatrix of a unitary matrix" in New Directions and Applications in Control Theory Springer, Berlin, pp. 1--14.
Banks, W.D., Luca, F., Saidak, F. & Stanica, P. 2005, "Compositions with the Euler and Carmichael functions", Abh. Math. Sem. Univ. Hamburg, vol. 75, pp. 215--244.
Canright, D.R. 2005, "A Very Compact S-box for AES", Cryptographic Hardware and Embedded Systems - CHES 2005. 7th International Workshop. Proceedings (Lecture Notes in Computer Science Vol. 3659), ,
pp. 441-455.
Foguel, T. & Stanica, P. 2005, "Almost Hamiltonian groups", Results Math., vol. 48, no. 1-2, pp. 44--49.
Georgescu, C., Joia, C., Nowell, W.O. & Stanica, P. 2005, "Chaotic dynamics of some rational maps", Discrete Contin. Dyn. Syst., vol. 12, no. 2, pp. 363--375.
Gera, R. & Zhang, P. 2005, "On stratified domination in oriented graphs", Congr. Numer., vol. 173, pp. 175--192.
Gera, R. & Zhang, P. 2005, "Realizable triples for stratified domination in graphs", Math. Bohem., vol. 130, no. 2, pp. 185--202.
Giraldo, F.X. 2005, "Semi-implicit time-integrators for a scalable spectral element atmospheric model", Quarterly Journal of the Royal Meteorological Society, vol. 131, no. 610, pp. 2431-2454.
Giraldo, F.X. & Warburton, T. 2005, "A nodal triangle-based spectral element method for the shallow water equations on the sphere", Journal of Computational Physics, vol. 207, no. 1, pp. 129-150.
Hamzi, B., Kang, W. & Krener, A.J. 2005, "The controlled center dynamics", Multiscale Model. Simul., vol. 3, no. 4, pp. 838--852 (electronic).
Kang, W., Song, M. & Xi, N. 2005, "Bifurcation control, manufacturing planning, and formation control", Acta Automatica Sinica, vol. 31, no. 1, pp. 84--91.
Kang, W., Xi, N., Tan, J., Zhao, Y. & Wang, Y. 2005, "Coordinated formation control of multiple nonlinear systems", J. Control Theory Appl., vol. 3, no. 1, pp. 1--19.
Krener, A.J., Kang, W., Hamzi, B. & Tall, I. 2005, "Low codimension control singularities for single input nonlinear systems" in New Directions and Applications in Control Theory Springer, Berlin,
pp. 181--192.
Limaye, N.B., Sarvate, D.G., Stanica, P. & Young, P.T. 2005, "Regular and strongly regular planar graphs", J. Combin. Math. Combin. Comput., vol. 54, pp. 111--127.
Luca, F. & Stanica, P. 2005, "Prime divisors of Lucas sequences and a conjecture of Ska\l ba", Int. J. Number Theory, vol. 1, no. 4, pp. 583--591.
Luca, F. & Stanica, P. 2005, "On a conjecture of Ma", Results Math., vol. 48, no. 1-2, pp. 109--123.
Luca, F. & Stanica, P. 2005, "Fibonacci numbers that are not sums of two prime powers", Proc. Amer. Math. Soc., vol. 133, no. 7, pp. 1887--1890 (electronic).
Neta, B. 2005, "P-stable symmetric super-implicit methods for periodic initial value problems", Computers & Mathematics with Applications, vol. 50, no. 5-6, pp. 701-705.
Saboya, M., Flam, S. & Owen, G. 2005, "The not-quite non-atomic game: Non-emptiness of the core in large production games", Mathematical Social Sciences, vol. 50, no. 3, pp. 279-97.
Stanica, P. 2005, "Cholesky factorizations of matrices associated with r-order recurrent sequences", Integers, vol. 5, no. 2, pp. A16, 11 pp. (electronic).
Umstattd, R.J., Carr, C.G., Frenzen, C.L., Luginsland, J.W. & Lau, Y.Y. 2005, "A simple physical derivation of Child-Langmuir space-charge-limited emission using vacuum capacitance", American Journal
of Physics, vol. 73, pp. 160-163.
van Joolen, V.J., Neta, B. & Givoli, D. 2005, "High-order Higdon-like boundary conditions for exterior transient wave problems", International Journal for Numerical Methods in Engineering, vol. 63,
no. 7, pp. 1041-1068.
Wang, Q., Sircar, S. & Zhou, H. 2005, "Steady state solutions of the Smoluchowski equation for rigid nematic polymers under imposed fields", Communications in Mathematical Sciences, vol. 3, no. 4,
pp. 605-620.
Zhou, H. & Forest, M.G. 2005, "A numerical study of unsteady, thermal, glass fiber drawing processes", Communications in Mathematical Sciences, vol. 3, no. 1, pp. 27-45.
Zhou, H., Forest, M.G., Zheng, X.Y., Wang, Q. & Lipton, R. 2005, "Extension-enhanced conductivity of liquid crystalline polymer nano-composites", Times of Polymers (Macromolecular Symposia Vol. 228),
pp. 81-89.
Zhou, H., Wang, H.Y., Forest, M.G. & Wang, Q. 2005, "A new proof on axisymmetric equilibria of a three-dimensional Smoluchowski equation", Nonlinearity, vol. 18, no. 6, pp. 2815-2825.
Dea, J.R. (2008), High-order non-reflecting boundary conditions for the linearized Euler equations [electronic resource], Naval Postgraduate School, Monterey, Calif.
Phillips, D.D. (2008), Mathematical modeling and optimal control of battlefield information flow [electronic resource], Naval Postgraduate School, Monterey, Calif.
Alevras, D., Simulating tsunamis in the Indian Ocean with real bathymetry by using a high-order triangular discontinuous Galerkin oceanic shallow water model [electronic resource], Naval Postgraduate
School, Monterey, Calif.
Bernotavicius, C.S., Modeling a 400 Hz signal transmission through the South China Sea basin [electronic resource], Naval Postgraduate School, Monterey, Calif.
Geary, A.C., Analysis of a man-in-the-middle attack on the Diffie-Hellman key exchange protocol [electronic resource], Naval Postgraduate School, Monterey, California.
Gibbons, S.L., Impacts of sigma coordinates on the Euler and Navier-Stokes equations using continuous Galerkin methods [electronic resource], Naval Postgraduate School, Monterey, Calif.
Kim, A.M., Simulating full-waveform LIDAR [electronic resource], Naval Postgraduate School, Monterey, California.
Mantzouris, P., Computational algebraic attacks on the Advanced Encryption Standard (AES) [electronic resource], Naval Postgraduate School, Monterey, California.
McNabb, M.E., Optimizing the router configurations within a nominal Air Force base [electronic resource], Naval Postgraduate School, Monterey, California.
Petrakos, N., Cube-type algebraic attacks on wireless encryption protocols [electronic resource], Naval Postgraduate School, Monterey, California.
Smith, W.T., A game theoretic approach to convoy routing [electronic resource], Naval Postgraduate School, Monterey, Calif.
Damalas, K.A., Analysis of analytic models for the effect of insurgency, Naval Postgraduate School, Monterey, Calif.
Fernandez, C.K., Pascal polynomials over GF(2) [electronic resource], Naval Postgraduate School, Monterey, Calif.
Florkowski, S.F., Spectral graph theory of the Hypercube [electronic resource], Naval Postgraduate School, Monterey, Calif.
Giannoulis, G., Efficient implementation of filtering and resampling operations on Field Programmable Gate Arrays (FPGAs) for Software Defined Radio (SDR) [electronic resource], Naval Postgraduate
School, Monterey, Calif.
Pollatos, S., Solving the maximum clique problems on a class of network graphs, with applications to social networks [electronic resource], Naval Postgraduate School, Monterey, Calif.
Shankar, A., Optimal jammer placement to interdict wireless network services [electronic resource], Naval Postgraduate School, Monterey, Calif.
De Luca, T.J., Performance of Hybrid Eulerian-Lagrangian Semi-Implicit time integrators for nonhydrostatic mesoscale atmospheric modeling [electronic resource], Naval Postgraduate School, Monterey, Calif.
Fletcher, D.M., Realizable triples in dominator colorings [electronic resource], Naval Postgraduate School, Monterey, Calif.
Karczewski, N.J., Optimal aircraft routing in a constrained path-dependent environment [electronic resource], Naval Postgraduate School, Monterey, Calif.
Martinsen, T., Refinement composition using doubly labeled transition graphs [electronic resource], Naval Postgraduate School, Monterey, Calif.
Spence, L.J., On the calculation of particle trajectories from sea surface current measurements and their use in satellite sea surface products off the Central California Coast [electronic resource],
Naval Postgraduate School, Monterey, Calif.
Wilson, L.M.Z., Controllability of Non-Newtonian fluids under homogeneous flows [electronic resource], Naval Postgraduate School, Monterey, Calif.
Sopko, J.J., Modeling fluid flow by exploring different flow geometries and effect of weak compressibility [electronic resource], Naval Postgraduate School; Available from National Technical
Information Service, Monterey, Calif; Springfield, Va.
House, J.B., Optimizing the Army's base realignment and closure implementation while transforming and at war [electronic resource], Naval Postgraduate School, Monterey, Calif.
Course poaching and duplication are serious, longstanding issues at NPS, noted as early as 1947 in the Heald Report cited above. That report found that the school’s failure to use standard individual
courses as building blocks resulted in a “complicated array” of very similar courses, and furthermore that this led to small, uneconomical classes and related inefficiencies. The practice continues to
have a serious negative impact on both the department and the school.
We now outline the most serious instances of poaching and duplication currently affecting the department.
PH3991 Theoretical Physics (4-1) Spring/Fall
Discussion of heat flow, electromagnetic waves, elastic waves, and quantum-mechanical waves; applications of orthogonal functions to electromagnetic multipoles, angular momentum in quantum mechanics,
and normal modes in acoustic and electromagnetic systems. Applications of complex analysis to Green functions in quantum mechanics and electromagnetism. Applications of Fourier series and transforms
to resonant systems. Applications of partial differential equation techniques to the equations of physics. Prerequisites: Basic physics, multivariable calculus, vector analysis, Fourier series,
complex numbers, and ordinary differential equations.
Comments: This is essentially a mathematics class and is taught from a mathematics textbook, “Mathematical Methods in the Physical Sciences” by Mary L. Boas. We do not currently teach a version of
this class, but this is a prime example of course poaching. The fact that this 3000-level class has been poached means that the math department has fewer opportunities to teach at that level. PH3991
is offered twice per year (every spring and fall) to about 15 students each time.
MN2039 Basic Quantitative Methods in Management (4-0) Fall/Spring
This course introduces the mathematical basis required for advanced management and cost-benefit analysis. Math topics include algebra, graphs, differential calculus of both single- and
multiple-variable functions, and indefinite and definite integrals. Management concepts include cost-benefit and cost-effectiveness analysis, marginal analysis, unconstrained and constrained
optimization, and welfare analysis. Prerequisite: College algebra or consent of instructor.
Comments: This class is a direct duplication of:
MA2300 Mathematics for Management (5-0) Winter/Spring/Summer
Mathematical basis for modern managerial tools and techniques. Elements of functions and algebra; differential calculus of single- and multi-variable functions; integration (antidifferentiation) of
single-variable functions. Applications of the derivative to rates of change, curve sketching, and optimization, including the method of Lagrange multipliers. Prerequisite: College algebra.
It uses the same textbook (Brief Calculus and Its Applications – Goldstein, Lay, and Schneider) and the syllabus was adapted from the syllabus for MA2300. The students who used to take MA2300 now
take MN2039 instead (MA2300 has not been taught since 2001 as a result of this outright theft of a class). MN2039 is offered once per year (every fall) to about 30 students.
ME3440 Engineering Analysis (4-0) As Required
Rigorous formulation of engineering problems arising in a variety of disciplines. Approximate methods of solution. Finite difference methods. Introduction to finite element methods. Prerequisites:
ME2201, ME2502 or ME2503, and ME3611.
ME3450 Computational Methods in Mechanical Engineering (3-2) Fall/Spring
The course introduces students to the basic methods of numerical modeling for typical physical problems encountered in solid mechanics and the thermal/fluid sciences. Problems that can be solved
analytically will be chosen initially and solutions will be obtained by appropriate discrete methods. Basic concepts in numerical methods, such as convergence, stability and accuracy, will be
introduced. Various computational tools will then be applied to more complex problems, with emphasis on finite element and finite difference methods, finite volume techniques, boundary element
methods and gridless Lagrangian methods. Methods of modeling convective non-linearities, such as upwind differencing and the Simpler method, will be introduced. Discussion and structural mechanics,
internal and external fluid flows, and conduction and convection heat transfer. Steady state, transient and eigenvalue problems will be addressed. Prerequisites: ME3150, ME3201, ME3611.
MR4323 Numerical Air and Ocean Modeling (4-2) Spring/Fall
Numerical models of atmospheric and oceanic phenomena. Finite difference techniques for solving hyperbolic, parabolic, and elliptic equations; linear and nonlinear computational instability. Spectral
and finite element models. Filtered and primitive equation prediction models. Sigma coordinates. Objective analysis and initialization. Moisture and heating as time permits. Prerequisites: MR4322,
OC4211, partial differential equations; MA3232 desirable.
OC4323 Numerical Air and Ocean Modeling (4-2) As Required
Numerical models of atmospheric and oceanic phenomena. Finite difference techniques for solving elliptic and hyperbolic equations, linear and non-linear computational instability. Spectral and finite
element models. Filtered and primitive equation prediction models. Sigma coordinates. Objective analysis and initialization. Moisture and heating as time permits. Prerequisites: MR4322 or OC4211,
partial differential equations; numerical analysis desirable.
PC2911 Introduction to Computational Physics (3-2) As Required
An introduction to the role of computation in physics, with emphasis on the programming of current nonlinear physics problems. Assumes no prior programming experience. Includes a tutorial on the C
programming language and Matlab, as well as an introduction to numerical integration methods. Computer graphics are used to present the results of physics simulations. Prerequisites: None.
SE3030 Quantitative Methods of Systems Engineering (3-2)
This course discusses advanced mathematical and computational techniques that find common application in systems engineering. It also provides an introduction to MATLAB, a computational tool useful
in obtaining quantitative answers to engineering problems. Among the topics addressed in this course are vector analysis, complex analysis, integral transforms, special functions, numerical solution
of differential equations, and numerical analysis. Prerequisites: SE1002, SE3100 or consent of instructor.
These classes all duplicate material from
MA3232 Numerical Analysis (4-0) Spring/Summer/Fall/Winter
Provides the basic numerical tools for understanding more advanced numerical methods. Topics for the course include: Sources and Analysis of Computational Error, Solution of Nonlinear Equations,
Interpolation and Other Techniques for Approximating Functions, Numerical Integration and Differentiation, Numerical Solution of Initial and Boundary Value Problems in Ordinary Differential
Equations, and Influences of Hardware and Software. Prerequisites: MA1115, MA2121 and ability to program in MATLAB and MAPLE.
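To make the duplication concrete, the nonlinear-equation material that these overlapping courses all cover reduces to a handful of lines of code. The following Newton's-method sketch is our own illustrative example (it is not drawn from any of the courses' actual materials):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 by Newton's method: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / fprime(x)
    raise RuntimeError("Newton's method did not converge")

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # 1.4142135623730951
```

Teaching several department-specific versions of material this standard, each to a half-dozen students, is precisely the inefficiency at issue.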
MA3243 Numerical Methods for Partial Differential Equations (4-1) Winter
Course designed to familiarize the student with analytical techniques as well as classical finite difference techniques in the numerical solution of partial differential equations. In addition to
learning applicable algorithms, the student will be required to do programming. Topics covered include: Implicit, Explicit, and Semi-Implicit methods in the solution of Elliptic and Parabolic PDEs;
iterative methods for solving Elliptic PDEs (SOR, Gauss-Seidel, Jacobi); and the Lax-Wendroff and Explicit methods in the solution of first- and second-order Hyperbolic PDEs. Prerequisites: MA3132 and
the ability to program in a high-level language such as Fortran, C, or MATLAB.
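Similarly, the Jacobi iteration named in the MA3243 description amounts to a few lines of array code. Here is a minimal sketch under our own simplifying assumptions (Laplace's equation on a square grid with fixed boundary values); it is illustrative only and not taken from the course:

```python
import numpy as np

def jacobi_laplace(u, tol=1e-6, max_sweeps=10_000):
    """Solve Laplace's equation by Jacobi iteration.

    The array u carries fixed boundary values on its edges; each sweep
    replaces every interior value with the average of its four neighbors.
    """
    u = u.copy()
    for _ in range(max_sweeps):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                  + u[1:-1, :-2] + u[1:-1, 2:])
        if np.max(np.abs(new - u)) < tol:
            return new
        u = new
    return u

# Example: unit square with the top edge held at 1 and the rest at 0.
grid = np.zeros((41, 41))
grid[0, :] = 1.0
solution = jacobi_laplace(grid)
```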
Several of the classes, particularly MR4323 and OC4323, appear to be poor alternatives to a proper class in numerical analysis such as MA3232 or MA3243. Some of the classes are taught regularly and
others are not. In particular:
· ME3440 has not been taught since 1998.
· ME3450 is taught every fall and spring to about a dozen students.
· MR4323 is taught every year in the spring to roughly 8-10 students.
· OC4323 is taught every year in the fall usually to fewer than 6 students.
· PC2911 is taught sporadically, roughly once a year in either winter or summer with about a dozen students.
· SE3030 appears to never have been taught.
Posts by
Total # Posts: 17
One of the most commonly known _____________ which also happens to be a top favorite of mine is ____________. Do I need commas in this sentence?
Thanks. So would 18-carat gold jewelry be the right answer?
In the chemical formula of ammonium sulfate, (NH4)2SO4, how many hydrogen atoms are present? 4
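For reference, the count follows directly from the formula: each NH4 group contributes four hydrogens, and the subscript 2 means the formula contains two such groups, so

$$2 \times 4 = 8 \ \text{hydrogen atoms in } (\mathrm{NH_4})_2\mathrm{SO_4}.$$

The answer is 8, not 4.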
That's why I need help with this problem. The only choices I have are these four: atoms of gold turning into atoms of iron; conversion of methane, which contains carbon and hydrogen, and oxygen into
carbon dioxide and water; the rusting of iron in a humid atmosphere; decomposition ...
I thought those two were also chemical?
the rusting of iron in a humid atmosphere; decomposition of water into hydrogen and oxygen gases
Which of the following events are not examples of a chemical reaction? Atoms of gold turning into atoms of iron; conversion of methane, which contains carbon and hydrogen, and oxygen into carbon
dioxide and water <<<?; the rusting of iron in a humid atmosphere; decomposi...
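For reference: a chemical reaction rearranges atoms without changing their identity. Methane combustion, for instance, is squarely chemical,

$$\mathrm{CH_4} + 2\,\mathrm{O_2} \rightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O},$$

as are rusting and the decomposition of water. Turning gold atoms into iron atoms, by contrast, would change the nuclei themselves — a nuclear transmutation, not a chemical reaction — so that is the choice that does not belong.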
Which of the following is not an example of a physical change? Is this the right answer: accidentally leaving a piece of toast in the toaster until it is completely black?
Which of the following would not be acceptable units for density? Answer choices: kg/L; g/mL3; g/cm3 << my answer
so it would be 45.0 mL O3
Assuming the temperature and pressure are held constant, which of the gas samples has the greatest number of gas particles? Answer choices: 45.0 mL O3; 35.0 mL N2; 15.0 mL of CO2; 25.0 mL of SO3.
How would I go about answering this question?
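For reference, at a fixed temperature and pressure the ideal-gas law makes the particle count proportional to the volume,

$$n = \frac{PV}{RT} \propto V,$$

so the 45.0 mL sample of O3 does indeed contain the greatest number of gas particles.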
If the pressure on a 2.50 mL gas sample were doubled from 0.500 atm to 1.00 atm, what would the gas volume be at the new pressure? My answer: 1.25 mL. If I'm wrong, can you show me how you got your answer?
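For reference, Boyle's law (constant temperature and amount of gas) confirms the 1.25 mL answer:

$$P_1 V_1 = P_2 V_2 \;\Rightarrow\; V_2 = \frac{(0.500\ \mathrm{atm})(2.50\ \mathrm{mL})}{1.00\ \mathrm{atm}} = 1.25\ \mathrm{mL}.$$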
Math conversion
I know!
Math conversion
If one meter is equal to 39.4 inches, how many centimeters are in one foot? Answer: 30.4 cm. How is this answer obtained? I know there are 12 inches in a foot and 100 centimeters in a meter. What happens
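For reference, chaining the two given factors does the job:

$$1\ \mathrm{ft} = 12\ \mathrm{in} \times \frac{1\ \mathrm{m}}{39.4\ \mathrm{in}} \times \frac{100\ \mathrm{cm}}{1\ \mathrm{m}} = \frac{1200}{39.4}\ \mathrm{cm} \approx 30.5\ \mathrm{cm}$$

(30.48 cm with the exact factor of 2.54 cm per inch; the quoted 30.4 cm comes from truncating rather than rounding the last digit).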
The diameter of a lead pipe is measured to be 2.40 cm and you are asked to convert to units of inches. Which of the following conversion factors can you use to solve this problem? 12 in = 1 ft;
100 cm = 1 m; 3 ft = 1 yd; two of these; none of these
math help
Problem #3: I have to graph y = 4, so doesn't this run horizontally at 4? Then it asks: list the y-intercept's coordinates and the slope of the graph. Therefore there is no slope, and for the
y-intercept I do not know what to do or say. Where does the line cross...
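For reference: the graph of y = 4 is indeed a horizontal line. Its slope is 0, a perfectly defined value ("no slope," meaning an undefined slope, describes vertical lines), and it crosses the y-axis at the y-intercept:

$$m = 0, \qquad \text{y-intercept at } (0, 4).$$

It never crosses the x-axis.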