San Gabriel Calculus Tutor
...I am currently a substitute teacher in the Whittier area and am great at explaining topics in a variety of ways to help students succeed. I have completed student teaching and hope to teach in
my own class soon. I am aware of Common Core standards and can assist students in a variety of ways.
7 Subjects: including calculus, chemistry, geometry, algebra 1
...Precalculus is harder than Calculus. Believe it or not, it is true, because in Precalculus students have to cover many different topics and get familiar with all of them, while in Calculus students focus mostly on a couple of topics in depth. For this reason it is important to have a good understanding of Precalculus to be successful in Calculus I, II, and III.
11 Subjects: including calculus, statistics, algebra 2, geometry
...I got a scholarship out of taking the ACT exam. I can help your student excel in the ACT as well as the SAT exam, and learn strategies to improve scores. I will be happy to help your student
46 Subjects: including calculus, physics, algebra 1, geometry
...I know what it's like to have problems with SAT reading. I have worked hard to overcome my own problems with getting distracted and having test anxiety. I work by helping students to build
their vocabulary and to extract the critical information from the reading passages.
42 Subjects: including calculus, reading, Spanish, chemistry
From my experience at the University of Southern California and my previous experience at Santiago High School (Top 100 in California High School, and I was at the top 10% of my graduating class),
I often help my peers with various homework assignments and am asked to explain certain topics of frust...
22 Subjects: including calculus, English, reading, algebra 2
Physics Forums - View Single Post - A toolkit for working with natural units
Natural units make a lot of things easier to calculate, and the Casimir force formula simplifies to
F = (pi^2/240) A/L^4
The constant in front is about 1/24, since pi squared is about ten.
With all these formulas, if you want to work in metric you have to put the hbars and cees and maybe some Gees back in first. In other words, make them messier before you can use them. Oh, and sometimes kays too.
However, the cost of having radically clean formulas with c=G=hbar=k=1 is that the scale is unfamiliar.
For example E38 is a mile and E35 is a pace---a thousandth of a mile, around 1.6 meters---and E30 is 16 microns.
E-50 is about a micronewton of force---more exactly 1.2 micronewton.
So you could say "It is hopeless, I will never be able to learn this new scale----E44 is a million miles and E-50 is a micronewton---it's impossible" But it's actually OK and one does get used to the
scale and the clean simplicity of the formulas is a big plus.
Anyway: think of two plates 1.6 meters on a side, so E35 on a side; the area is E70. And imagine the separation is 16 microns, which is E30.
The force is easy to calculate: just the area (E70) divided by the fourth power of the separation (E120), which gives E-50, that micronewton thing, and you have to include the numerical constant, which is about 1/24.
Maybe I should include that tiny force of 1.2 micronewtons in the dictionary:
E60-----1.7 billion years
E44-----a million miles
E38 ----a mile
E6 ------22 grams
E-50----1.2 micronewtons
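The plate example above can be cross-checked against metric. A small sketch (Planck-unit conventions assumed: lengths in Planck lengths, forces in Planck forces c^4/G; constants are standard SI values, the plate dimensions are the ones from the example):

```python
import math

# SI constants
hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 / (kg s^2)

l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
F_P = c**4 / G                     # Planck force, ~1.2e44 N

side, gap = 1.6, 16e-6             # plates 1.6 m on a side, 16 micron gap

# Natural-units route: F = (pi^2/240) * A / L^4 with lengths in Planck units
A_nat = (side / l_P) ** 2          # ~E70
L_nat = gap / l_P                  # ~E30
F_from_nat = (math.pi**2 / 240) * A_nat / L_nat**4 * F_P   # back to newtons

# Direct SI route: F = (pi^2/240) * hbar * c * A / L^4
F_si = (math.pi**2 / 240) * hbar * c * side**2 / gap**4

print(F_from_nat, F_si)            # both ~5e-8 N, i.e. about (1/24) of 1.2 uN
```

Both routes give the same answer, which is the point: E-50 in natural force units is 1.2 micronewtons, and the pi^2/240 constant knocks that down by roughly a factor of 24.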
Here's the list of constants etc, for review:
the proton compton wavelength----2.103E-16 meter---13E18
the CMB temperature-----2.725 kelvin----------1.93E-32
the Hubble time----4.35E17 seconds------------8.06E60
average sunlight photon----2E-19 joules----------E-28
the distance to the sun----150 million km---------93E44
earth orbit speed------30 kilometer/second--------E-4
solar constant----1380 watts/sq.meter------------E-119
2.701k----3.73E-23 joule/kelvin-------------------2.701
sigma-----5.67E-8 watt/sq.meter kelvin^4--------pi^2/60
a (the aT^4 law)---7.565E-16 joule/cub. meter kelvin^4---pi^2/15
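A couple of the dictionary entries can be checked the same way. A sketch (Planck time and Planck temperature assumed as the natural units of time and temperature):

```python
import math

hbar, c, G, kB = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23

t_P = math.sqrt(hbar * G / c**5)        # Planck time, ~5.39e-44 s
T_P = math.sqrt(hbar * c**5 / G) / kB   # Planck temperature, ~1.42e32 K

hubble_time = 4.35e17 / t_P             # should come out near 8.06E60
cmb_temp = 2.725 / T_P                  # should come out near 1.93E-32
print(f"{hubble_time:.3g} {cmb_temp:.3g}")
```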
I see I haven't yet used the proton compton. Maybe that should be next.
Physics Forums - Interchange of limit operations
1. The problem statement, all variables and given/known data
Find a sequence of continuous functions [tex]f_n: R \rightarrow R[/tex] such that [tex]lim_{x \rightarrow 0}lim_{n \rightarrow \infty}f_n(x)[/tex] and [tex]lim_{n \rightarrow \infty}lim_{x \rightarrow 0}f_n(x)[/tex] exist and are unequal.
2. Relevant equations
3. The attempt at a solution
I think I need a sequence of continuous functions that has a limit function which is continuous at zero but discontinuous at some other point. In that case, the sequence of functions will not be uniformly convergent and we will not have these limits equal. But I don't know what function can fulfill this criterion.
micromass Dec9-10 01:16 PM
Re: Interchange of limit operations
Well, can you give a sequence which is not uniformly convergent?
Re: Interchange of limit operations
Yeah I can but the limits turn out to be equal. For example,
[tex] f_n(x) = (cos(x))^n[/tex]
[tex] lim_{n \rightarrow \infty}lim_{x \rightarrow 0} f_n = 1 [/tex]
[tex] lim_{x \rightarrow 0}lim_{n \rightarrow \infty} f_n = 1 [/tex]
This sequence is not uniformly convergent but the two limits are equal. The question requires them to be UNEQUAL.
Re: Interchange of limit operations
In above post, there is a mistake.
[tex] lim_{x \rightarrow 0} lim_{n \rightarrow \infty} f_n(x) [/tex] doesn't exist. The question requires them to EXIST.
micromass Dec9-10 02:30 PM
Re: Interchange of limit operations
Are you certain that it doesn't exist? I think that the limit does exist and equals zero.
You do have to restrict the function to [-pi,pi], otherwise there is no limit...
Re: Interchange of limit operations
Quote by micromass (Post 3028852)
Are you certain that it doesn't exist? I think that the limit does exist and equals zero.
You do have to restrict the function to [-pi,pi], otherwise there is no limit...
Let's say we restrict the sequence to [-pi,pi] and say it is zero everywhere else (because we have to define it on R and not a subset of it).
Yes it doesn't exist. As you can see, the limit function is the following:
[tex]lim_{n \rightarrow \infty}f_n(x) = 1 , for x=0[/tex]
[tex]lim_{n \rightarrow \infty}f_n(x) = 0 , elsewhere [/tex]
So the limit function is discontinuous at x=0 and [tex] lim_{x \rightarrow 0} f(x) [/tex] doesn't exist. By the way, it is the very first question of the exercise so it should be easy (I think so :confused: ) I don't know where I'm missing the point.
micromass Dec9-10 02:49 PM
Re: Interchange of limit operations
So, the limit function is
[tex]f(0)=1~\text{and}~f(x)=0~\text{if}~x\neq 0[/tex].
Then the limit [tex]\lim_{x\rightarrow 0}{f(x)}[/tex] certainly exists (and it equals 0)! You can easily prove this with the definition of limit...
Re: Interchange of limit operations
Quote by micromass (Post 3028885)
So, the limit function is
[tex]f(0)=1~\text{and}~f(x)=0~\text{if}~x\neq 0[/tex].
Then the limit [tex]\lim_{x\rightarrow 0}{f(x)}[/tex] certainly exists (and it equals 0)! You can easily prove this with the definition of limit...
Don't mind my asking very basic questions, because I think my definition of limit is flawed.
What I know is that the limit is defined, in the above case, when f is defined on C{0}, i.e., R - {0}. In this case, we can say the limit as x approaches 0 is zero, as you described. But, since the function is defined at x=0, shouldn't we take f(0) as the limit? Also, if we take it as a limit as you say, shouldn't the limit be different if we approach it from either side, i.e., (-pi,0) and [0, pi)? So a limit is not really defined here?
micromass Dec9-10 03:10 PM
Re: Interchange of limit operations
No, the limit of f is independent of f(0). The limit of a function is what the function value should be to make the function continuous. In our situation, we have that the function is 0, except in
the point zero. So if f(0)=0 (which is not the case), then the function f would be continuous. This means that the limit of f equals 0.
Re: Interchange of limit operations
Quote by micromass (Post 3028921)
No, the limit of f is independent of f(0). The limit of a function is what the function value should be to make the function continuous. In our situation, we have that the function is 0, except in
the point zero. So if f(0)=0 (which is not the case), then the function f would be continuous. This means that the limit of f equals 0.
I got it now ...and in that case limit will be zero irrespective of the direction we use to approach x=0. Thank you very much micromass it really helped. Thanks again.
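As a quick numerical illustration of the two iterated limits discussed above, with f_n(x) = cos(x)^n:

```python
import math

def f(n, x):
    return math.cos(x) ** n

# n -> infinity first: for any fixed x != 0 near 0, cos(x)^n -> 0,
# so the limit function is 0 away from x = 0, and its limit as x -> 0 is 0.
print(f(10**6, 0.01))                            # ~0

# x -> 0 first: for fixed n, cos(x)^n -> 1, and then 1 -> 1 as n -> infinity.
print([f(10, x) for x in (0.1, 0.01, 0.001)])    # values approaching 1
```

So the two iterated limits both exist (0 and 1) and are unequal, exactly as the problem requires.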
Here's the question you clicked on:
5x-4 = 9x + 3 A) 1/2 B) 2 C) 1/4 D) 4
Patent application title: POSITION CONTROLLING DEVICE
A structure is provided in which a thrust feed forward structure for operating a structure to be driven without vibration and a control structure which simultaneously compensates for positional
deviation caused by the thrust feed forward structure and positional deviation caused by a base displacement are included in a position controlling device (3). Alternatively, a structure is provided
in which an acceleration and deceleration process for realizing response of the position of the structure to be driven and base displacement without vibration and a control structure which determines
a feed forward amount with respect to a position instruction value after the acceleration and deceleration process are provided to the position controlling device.
1. A position controlling device in which a driving system which applies acceleration and deceleration operations to a structure to be driven is supported by and fixed on a base and compensation for force displacement caused in the base is provided by a reaction force of the structure to be driven, and which controls an absolute position of the structure to be driven by detecting a position of the structure to be driven which is driven by a servo motor and calculating a position instruction value after compensation according to a position instruction value from an upper device, the position controlling device comprising: an acceleration and deceleration processor which receives as an input the position instruction value and outputs a position instruction value after acceleration and deceleration process, wherein a third-order derivative with respect to time of the output position instruction value is bounded; an adjustment transfer function block which receives as an input the position instruction value after compensation and outputs a position instruction value for control; a block which calculates a thrust feed forward based on the position instruction value after compensation and adds the thrust feed forward to a driving force of the servo motor; a block which calculates a position deviation compensation amount, which compensates for a position instruction deviation and a base displacement due to the adjustment transfer function, based on a derivative with respect to time of the position instruction value after acceleration and deceleration process; and a block which subtracts the position deviation compensation amount from the position instruction value after acceleration and deceleration process, to obtain the position instruction value after compensation.
2. The position controlling device according to claim 1, wherein the position deviation compensation amount is calculated as an amount of compensation of position instruction deviation due to the adjustment transfer function.
3. A position controlling device in which a driving system which applies acceleration and deceleration operations to a structure to be driven is supported by and fixed on a base and compensation for a force displacement caused in the base is provided by a reaction force of the structure to be driven, and which controls an absolute position of the structure to be driven according to a position instruction value from an upper device by detecting a position of the structure to be driven which is driven by a servo motor, the position controlling device comprising: an acceleration and deceleration processor which receives as an input the position instruction value and outputs a position instruction value after acceleration and deceleration process, wherein a second-order derivative with respect to time of the output position instruction value is bounded; a block which has a notch filter representing, as a transfer function, a relationship between a driving force which is output by the servo motor and a driving position obtained by the driving force and having a transfer pole of the transfer function as a notch angle frequency, and which outputs, as a position instruction value for control, the position instruction value after acceleration and deceleration process which is output from the acceleration and deceleration processor; a block which calculates a thrust feed forward amount which causes the absolute position of the structure to be driven to correspond to the position instruction value for control; a block which calculates a base displacement based on the position instruction value for control and adds the base displacement to the position instruction value for control, to calculate a position instruction value corresponding to the position of the structure to be driven; and a block which differentiates, with respect to time, the position instruction value corresponding to the position of the structure to be driven, to calculate a velocity feed forward amount.
4. The position controlling device according to claim 3, wherein the block which outputs, as the position instruction value for control, the position instruction value after acceleration and deceleration process which is output from the acceleration and deceleration processor has a notch filter representing, as a transfer function, a relationship between the driving force which is output by the servo motor and the driving position obtained by the driving force and having a transfer zero point of the transfer function as the notch angle frequency, and outputs, as the position instruction value for control, the position instruction value after acceleration and deceleration process which is output from the acceleration and deceleration processor.
This application claims priority to Japanese Patent Applications No. 2007-262961 filed on Oct. 9, 2007 and No. 2008-013266 filed on Jan. 24, 2008, which are incorporated herein by reference in their entirety.
BACKGROUND OF THE INVENTION [0002]
1. Field of the Invention
The present invention relates to a position controlling device which is used for shaft control of a numerical control machine.
2. Description of the Related Art
Conventionally, controlling devices are used in which a driving system for accelerating and decelerating a structure to be driven is supported by and fixed to a base of the device, and displacement
force acting on the base is compensated for by a reaction force of the structure to be driven.
FIG. 11 is a model of a driving system schematically showing a mechanism of one shaft of the driving system in a machine tool, which is one type of machine which employs numerical control. The driving system has a structure in which a driving force Fx is imparted to a structure C to be driven by a servo motor (not shown) which moves on a structure B, which also functions as a guiding surface, in a direction xc. Structures A placed on both sides of the structure B support and fix the structure B, and one side of each structure A is rigidly mounted on and fixed to the ground. When the structure C to be driven is accelerated or decelerated in the xc direction, the structure A which is the base receives the reaction force from the structure C to be driven, deforms in a direction xb, and generates vibration. On the structure B, a linear scale (not shown) for detecting the position xc of the structure to be driven is provided.
Next, equations of motion are determined assuming the driving system model of FIG. 11 as a target plant. In this case, as the generalized coordinate system, the position xc of the structure to be driven and the displacement xb of the base may be used, and the following two equations of motion can be obtained:

(Mb+Mc)(d^2xb/dt^2) + Mc(d^2xc/dt^2) + Ra xb = 0 (1)

Mc{(d^2xb/dt^2) + (d^2xc/dt^2)} = Fx (2)

wherein Mb represents a mass Mb of the structure B, Mc represents a mass Mc of the structure C to be driven, and Ra represents a rigidity Ra of the structure A in the direction of xb.
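As an illustration, equations of motion (1) and (2) can be solved simultaneously for the two accelerations and integrated numerically. The sketch below is a hypothetical explicit-Euler discretization (not part of the patent); the parameter values are the ones quoted for the FIG. 14 simulation, and the solved accelerations d^2xb/dt^2 = -(Fx + Ra*xb)/Mb and d^2xc/dt^2 = Fx/Mc - d^2xb/dt^2 follow from the two equations:

```python
# Integrate the target plant of FIG. 11: carriage C (mass Mc) driven by
# thrust Fx on base B/A (mass Mb, stiffness Ra), explicit Euler.
Mb, Mc, Ra = 500.0, 300.0, 19.6e6     # values from the FIG. 14 simulation

def step(state, Fx, dt):
    xb, vb, xc, vc = state
    ab = -(Fx + Ra * xb) / Mb         # d2xb/dt2, from eqs (1) and (2)
    ac = Fx / Mc - ab                 # d2xc/dt2, from eq (2)
    return (xb + vb * dt, vb + ab * dt, xc + vc * dt, vc + ac * dt)

state, dt = (0.0, 0.0, 0.0, 0.0), 1e-5
for _ in range(int(0.1 / dt)):        # 0.1 s of constant thrust
    state = step(state, Mc * 2.0, dt) # Fx sized for ~2 m/s^2 on the carriage
xb, vb, xc, vc = state
# xb oscillates (undamped) about the static deflection -Fx/Ra ~ -3e-5 m
# while xc advances roughly as (1/2) * 2 * t^2 = 0.01 m.
print(xb, xc)
```

This reproduces the qualitative behavior the patent describes: the carriage tracks the commanded acceleration while the base rings at its undamped natural frequency sqrt(Ra/Mb) (about 31 Hz with these values).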
[0007] FIG. 12 is a block diagram showing the equations of motion (1) and (2) for the target plant, and will be described in detail in the description of the preferred embodiments of the present invention to be described later.
[0008] FIG. 13 is a block diagram of a position controlling device of a related art. A position instruction value X which is generated by an upper device (not shown) employing a function is input to an acceleration and deceleration processor 50. For a position instruction value Xc output by the acceleration and deceleration processor 50, a second-order functional acceleration and deceleration process is applied in the acceleration and deceleration processor 50 so that the second-order derivative of Xc with respect to time, d^2Xc/dt^2, is bounded even when the derivative of X with respect to time, dX/dt, is step-shaped. In order to accelerate a position instruction response, the position instruction value Xc is differentiated with respect to time in differentiators 54 and 55 (S is a Laplacian operator), to calculate feed forward amounts Vf and Af of the instruction velocity and the instruction acceleration. A conversion block Cb is a conversion block for determining a feed forward amount of thrust which corresponds to the motor thrust for generating the acceleration Af, and is usually substituted by multiplying the mass Mc of the structure C to be driven by the acceleration Af.
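The second-order functional acceleration and deceleration process can be realized in several ways; one common sketch (a hypothetical moving-average implementation, not necessarily the one used in the patent) turns a step in commanded velocity into a trapezoidal velocity profile whose acceleration stays bounded:

```python
from collections import deque

def accel_decel_filter(positions, window):
    """Moving-average acceleration/deceleration process: a step change in
    the velocity dX/dt of the input becomes a ramp in dXc/dt, so the
    second derivative of the output stays bounded at v/(window*dt)."""
    buf = deque([positions[0]] * window, maxlen=window)
    out = []
    for x in positions:
        buf.append(x)
        out.append(sum(buf) / window)
    return out

dt, v = 1e-3, 0.5
X = [v * i * dt for i in range(1000)]      # velocity step to 0.5 m/s at t=0
Xc = accel_decel_filter(X, window=100)     # 100-sample (0.1 s) ramp
# After the filter, the commanded velocity ramps from 0 to v over 0.1 s,
# with constant acceleration v/(window*dt) = 5 m/s^2 during the ramp.
```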
As the position detection value of the target plant 58, the position xc of the structure to be driven, which is detected by the above-described linear scale, is used. The position xc of the structure to be driven is subtracted from the position instruction value Xc by a subtractor 51, a position deviation output by the subtractor 51 is amplified by a factor of Gp by a position deviation amplifier Gp, and the velocity feed forward amount Vf is added to the output of the position deviation amplifier Gp in an adder 52, to obtain a velocity instruction value V. A subtractor 53 subtracts, from the velocity instruction value V, a velocity v of the structure to be driven which is obtained by differentiating the position xc of the structure to be driven with respect to time by a differentiator 56, and the output of the subtractor 53, which is a velocity deviation, is amplified by a velocity deviation amplifier Gv. The velocity deviation amplifier Gv generally comprises a proportional integration amplifier and various filters for inhibiting high-frequency vibration phenomena generated on the order of a hundred Hz in the target plant. The output of the velocity deviation amplifier Gv and the thrust feed forward amount Ff are added by an adder 57, and the output of the adder becomes the motor generated thrust, that is, the driving force Fx of the structure C to be driven.
[0010] FIG. 14 shows a result of a simulation of a second-order functional acceleration response (maximum acceleration 2 [m/sec^2]) of the position controlling device of the related art of FIG. 13, when the target plant parameters are set to Mb=500 [Kg], Mc=300 [Kg], and Ra=19.6x10^6 [N/m], and the amplifications Gp and Gv which are control parameters are preferably adjusted. The position controlling device 200 in this case attempts to control the absolute position (xb+xc) of the structure to be driven of the target plant according to the position instruction value Xc, as shown in FIG. 11. However, because the position controlling device 200 of FIG. 13 does not consider the displacement xb of the base, a large error in absolute position εo=Xc-(xb+xc) is caused during acceleration as shown in FIG. 14.
[0011] FIG. 15 is a block diagram showing another example structure of a position controlling device of a related art. This device has a structure in which a compensation block for the displacement xb of the base shown in JP 2007-025961 A is added. A structure of the added portion will now be described.
A base vibration monitor correspondent block 59 of FIG. 15 is a block corresponding to a base vibration monitor of JP 2007-025961 A. Because there is no damping component in the base vibration, the operation of this block according to JP 2007-025961 A, which is Xsw = {McS^2/(MbS^2+Ra)}Xc, becomes an unstable transfer function, and, thus, Xsw = (McS^2/Ra)Xc is employed in the exemplified structure, placing more importance on the operation under constant acceleration. Here, Xsw represents an instruction value for base vibration compensation. An adder 60 adds the position instruction value Xc to the base vibration compensation instruction value Xsw, resulting in a position instruction value Xco for control. The base vibration compensation instruction value Xsw is also differentiated with respect to time by differentiators 61 and 63 so that a velocity instruction value Vsw for base vibration compensation and an acceleration instruction value Asw for base vibration compensation are calculated. The velocity instruction value Vsw is added to the velocity feed forward amount Vf in an adder 62, and the acceleration instruction value Asw is multiplied by the mass Mc of the structure to be driven and results in a thrust instruction value Fsw for base vibration compensation, which is in turn added to the thrust feed forward amount Ff in an adder 64.
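Under constant acceleration the compensation Xsw = (McS^2/Ra)Xc reduces to a constant offset equal to the quasi-static base deflection. A one-line sketch with the parameter values used in the simulations:

```python
Mc, Ra = 300.0, 19.6e6       # carriage mass [kg], base stiffness [N/m]
a = 2.0                      # commanded acceleration d^2Xc/dt^2 [m/s^2]
Xsw = Mc * a / Ra            # (Mc S^2 / Ra) Xc evaluated at constant accel
print(Xsw)                   # ~3.1e-5 m of quasi-static base deflection
```

Shifting the command by this amount is what lets the controller hold the absolute position (xb+xc) on target while the base is statically deflected.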
[0013] FIG. 16 shows a result of a simulation of a response when target plant parameters, control parameters, and a second-order functional acceleration process similar to FIG. 14 are applied to the position controlling device of the related art of FIG. 15. Because a control structure which compensates the base displacement is employed, the error εo of the absolute position is reduced. However, because there is no damping component, the response has a remaining vibration at the start and end of acceleration generated by an acceleration derivative instruction value Bc (=d^3Xc/dt^3), with the vibration being enlarged as the instruction value Bc is increased.
[0014] FIG. 17 is a block diagram of another example structure of a position controlling device of a related art. In this example structure, the technique described by Akihiro YAMAMOTO (and four others) in "High-Speed Positioning Control for Linear Motor Driving Table without Base Vibration", Journal of the Japan Society for Precision Engineering, Supplement Contributed Papers, Japan Society for Precision Engineering, 2004, Vol. 70, No. 5, p. 645-650 is used. The thrust feed forward is realized using an inverse transfer function of the target plant and the vibration of the base is inhibited. Next, portions which differ from the position controlling devices of the related art which are already described will be described.

A transfer function P1 indicates a transfer function from the driving force Fx to the position xc of the structure to be driven, and is given by the following Equation 3 based on FIG. 12:

P1 = {(Mb+Mc)S^2+Ra}/{McS^2(MbS^2+Ra)} (3)

Here, because the inverse transfer function P1^-1 of the transfer function P1 is not stable, a transfer function F represented by the following Equation 4 is considered in order to set P1^-1F which has a stable pole (S=-ωo) of a first-order delay component:

F = ωo{(Mb+Mc)S^2+Ra}/{(S+ωo)Ra} (4)

Then, P1^-1F is:

P1^-1F = ωo{McS^2(MbS^2+Ra)}/{(S+ωo)Ra} (5)

A feed forward amount Ff of thrust is calculated with Ff = P1^-1F Xc, and the thrust feed forward amount Ff in FIG. 11 can be calculated because a third-order derivative of the position instruction value Xc with respect to time is bounded.
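As a sketch, the algebraic relationship among Equations 3, 4, and 5, namely P1 * (P1^-1F) = F, can be spot-checked numerically at a few sample values of S (the leading ωo factor, which gives F unity DC gain so the feed forward tracks at steady state, is an assumption of this reconstruction):

```python
# Numeric spot-check of the transfer functions in Equations (3)-(5).
Mb, Mc, Ra, w0 = 500.0, 300.0, 19.6e6, 10000.0   # plant params and omega_o

def P1(S):        # Eq (3): thrust Fx -> position xc of the structure
    return ((Mb + Mc) * S**2 + Ra) / (Mc * S**2 * (Mb * S**2 + Ra))

def F(S):         # Eq (4): shaping filter with stable pole at S = -w0
    return w0 * ((Mb + Mc) * S**2 + Ra) / ((S + w0) * Ra)

def P1inv_F(S):   # Eq (5): the realizable feed-forward transfer function
    return w0 * Mc * S**2 * (Mb * S**2 + Ra) / ((S + w0) * Ra)

print(F(0))       # unity DC gain, so the feed forward tracks at steady state
```

Note that the numerator of Eq 4 cancels the imaginary-axis poles of P1^-1 (the zeros of P1), which is exactly why P1^-1F ends up with the single stable pole S=-ωo.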
[0017] FIG. 18 shows a result of a simulation of a response when target plant parameters, control parameters, and a second-order functional acceleration process similar to FIG. 14 are applied to the position controlling device of the related art of FIG. 17, with the parameter ωo=10000. Fundamentally, because a structure is employed in which the position xc of the structure to be driven matches the position instruction value Xco for control, inhibition of the vibration of the response is achieved. However, when the velocity instruction value Vc is not zero (Vc≠0), an error in absolute position εo remains during shaft operation due to occurrence of a position instruction deviation εc=Xc-Xco.
SUMMARY OF THE INVENTION [0018]
As described, in the position controlling devices of the related art, it has not been possible to accurately control the position of the structure to be driven in consideration of both vibration
caused by rigidity of the base on which the structure to be driven is supported and fixed and generation of the displacement of the base. An advantage of the present invention is that a position
controlling device is provided in which vibration of a structure to be driven can be inhibited, even during acceleration and deceleration, and error of the position of the structure to be driven with
respect to the position instruction can be reduced. Another advantage realized by the present invention is that a position controlling device is provided which realizes prevention of induced
vibration of various parts of a device and inhibition of vibration during change of a device parameter.
The present invention achieves the above-described advantages by adding, to a position controlling device, a thrust feed forward structure for operating the structure to be driven with no vibration and a control structure which simultaneously compensates for a position deviation caused by the thrust feed forward structure and the position deviation caused by the displacement of the base.
According to one aspect of the present invention, there is provided a position controlling device in which a driving system which applies acceleration and deceleration operations to a structure to be
driven is supported by and fixed on a base and compensation for a force displacement caused in the base is provided by a reaction force of the structure to be driven and which controls an absolute
position of the structure to be driven by detecting a position of the structure to be driven which is driven by a servo motor and calculating a position instruction value after compensation according
to a position instruction value from an upper device, the position controlling device comprising an acceleration and deceleration processor which receives as an input the position instruction value
and outputs a position instruction value after acceleration and deceleration process wherein a third-order derivative with respect to time of the output position instruction value is bounded, an
adjustment transfer function block which receives as an input the position instruction value after compensation and outputs a position instruction value for control, a block which calculates a thrust
feed forward based on the position instruction value after compensation and adds the thrust feed forward to a driving force of the servo motor, a block which calculates a position deviation
compensation amount which compensates for a position instruction deviation and a base displacement due to the adjustment transfer function based on a derivative with respect to time of the position
instruction value after the acceleration and deceleration process, and a block which subtracts the position deviation compensation amount from the position instruction value after acceleration and
deceleration process, to obtain the position instruction value after compensation.
According to another aspect of the present invention, it is preferable that, in the position controlling device, the position deviation compensation amount is calculated as an amount of compensation
of position instruction deviation due to the adjustment transfer function.
According to another aspect of the present invention, there is provided a position controlling device in which a driving system which applies acceleration and deceleration operations to a structure
to be driven is supported by and fixed on a base and compensation for a force displacement caused in the base is provided by a reaction force of the structure to be driven and which controls an
absolute position of the structure to be driven according to a position instruction value from an upper device by detecting a position of the structure to be driven which is driven by a servo motor,
the position controlling device comprising an acceleration and deceleration processor which receives as an input the position instruction value and outputs a position instruction value after
acceleration and deceleration process wherein a second-order derivative with respect to time of the output position instruction value is bounded, a block which has a notch filter representing, as a
transfer function, a relationship between a driving force which is output by the servo motor and a driving position obtained by the driving force and having a transfer pole of the transfer function
as a notch angle frequency, and which outputs, as a position instruction value for control, the position instruction value after acceleration and deceleration process which is output from the
acceleration and deceleration processor, a block which calculates a thrust feed forward amount which causes the absolute position of the structure to be driven to correspond to the position
instruction value for control, a block which calculates a base displacement based on the position instruction value for control and adds the base displacement to the position instruction value for
control, to calculate a position instruction value corresponding to the position of the structure to be driven, and a block which differentiates, with respect to time, the position instruction value
corresponding to the position of the structure to be driven, to calculate a velocity feed forward amount.
According to another aspect of the present invention, it is preferable that, in the position controlling device, the block which outputs, as the position instruction value for control, the position
instruction value after acceleration and deceleration process which is output from the acceleration and deceleration processor has a notch filter representing, as a transfer function, a relationship
between the driving force which is output by the servo motor and the driving position obtained by the driving force and having a transfer zero point of the transfer function as the notch angle
frequency, and outputs, as the position instruction value for control, the position instruction value after acceleration and deceleration process which is output from the acceleration and
deceleration processor.
According to the position controlling device of various aspects of the present invention, by including a thrust feedforward structure which controls the structure to be driven according to the
position instruction value for control and a position deviation compensation structure which simultaneously and precisely compensates a position instruction deviation caused by introduction of the
thrust feed forward structure and a position deviation caused by the base displacement, it is possible to inhibit generated vibration and cause the absolute position (x2-x1) of the structure to be driven of the target plant to precisely follow the position instruction value Xc during a shaft operation including acceleration and deceleration. In addition, because the amount of control can be suitably varied according to the size of the acceleration instruction value Ac and the acceleration derivative instruction value Bc, a high control advantage can be obtained regardless of the size of these instruction values.
In addition, the position controlling device of various aspects of the present invention comprises a feed forward structure of thrust and velocity for controlling the structure to be driven according
to the position instruction value and calculates the position instruction value for control by applying acceleration and deceleration processes of a notch filter structure which has a small
introduction impact on the position instruction value. With this structure, the vibrations in various feed forward amounts are removed, and the responses of the position of the structure to be driven
and the base displacement can be controlled without vibration and with a high precision. Furthermore, because vibration in the responses of the position of the structure to be driven and the base
displacement is cancelled, no vibration is induced in the various parts of the device, and highly advantageous vibration inhibition can be maintained even when the device parameters are changed.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the present invention will be described in detail by reference to the drawings, wherein:
FIG. 1 is a block diagram showing a structure of a first preferred embodiment of a position controlling device according to the present invention;
FIG. 2 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 1;
FIG. 3 is a block diagram showing a structure of a second preferred embodiment of a position controlling device according to the present invention;
FIG. 4 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 3;
FIG. 5 is a block diagram showing a structure of a third preferred embodiment of a position controlling device according to the present invention;
FIG. 6 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 5;
FIG. 7 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 5 during a change in a device parameter;
FIG. 8 is a block diagram showing a structure of a position controlling device of a fourth preferred embodiment according to the present invention;
FIG. 9 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 8;
FIG. 10 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 8 during a change in a device parameter;
FIG. 11 is a schematic mechanism diagram of a target plant;
FIG. 12 is a block diagram describing a movement of a target plant of FIG. 11;
FIG. 13 is a block diagram showing a first example structure of a position controlling device of related art;
FIG. 14 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 13;
FIG. 15 is a block diagram showing a second example structure of a position controlling device of related art;
FIG. 16 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 15;
FIG. 17 is a block diagram showing a third example structure of a position controlling device of related art; and
FIG. 18 is an explanatory diagram of an acceleration response of a target plant provided by a position controlling device as illustrated in FIG. 17.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Preferred embodiments of the present invention (hereinafter also referred to as "embodiments") will now be described. A characteristic of the present embodiment is that an adjustment transfer function M(s) is used to inhibit vibrations with a thrust feed forward Ff = P2^-1 M Xc. Fundamentally, because the position of the structure to be driven follows the position instruction value for control, Xco = MXc, the position does not match Xc. Therefore, a position instruction value after compensation Xc* is introduced to set Xco = MXc*, and Ff = P2^-1 M Xc* is determined. Moreover, a form is employed which has a position deviation compensation structure which simultaneously compensates the deviation (Xc*-Xco) on the position instruction caused by M(s) and a position deviation caused by the base displacement x1.
Control is considered in which the absolute position (x2-x1) of the structure to be driven of the target plant is controlled according to the position instruction value Xc. When restrictions for achieving both the vibration inhibition and the position deviation compensation are considered, the following restrictions (a)-(c) can be obtained.
(Restriction a): The adjustment transfer function M(s) is necessary and can be represented with the following Equation 6 using a stable polynomial expression Go(s).
M = {(Mb+Mc)S^2+Ra}/Go (6)
(Restriction b): The thrust feed forward Ff of Equation 7 can be calculated.
Ff = ({McS^2(MbS^2+Ra)}/Go)Xc* (7)
Here, because x2 = ({(Mb+Mc)S^2+Ra}/Go)Xc*, and, based on FIG. 12, x1 = Ff/{MbS^2+Ra}, the base displacement is x1 = (McS^2/Go)Xc*. Thus, the absolute position (x2-x1) of the structure to be driven can be represented by the following Equation 8.
(x2-x1) = {(MbS^2+Ra)/Go}Xc* (8)
Next, a position deviation compensation structure is considered which defines the position deviation compensation amount as α(Xc), a function of the original position instruction value Xc, and a relationship between Xc and Xc* as Xc* = Xc-α(Xc). In this case, the restriction of the position deviation compensation becomes:
Xc-(x2-x1) = {Xc-Xc*}+{Xc*-(x2-x1)} = α(Xc)+{(Go-MbS^2-Ra)/Go}Xc* = α(Xc)+{(Go-MbS^2-Ra)/Go}{Xc-α(Xc)} = 0 (9)
If Equation 9 is solved for α(Xc), the following restriction can be obtained.
(Restriction c): The position deviation compensation amount α(Xc) satisfies Equation 10.
α(Xc) = {(MbS^2+Ra-Go)/(MbS^2+Ra)}Xc (10)
Of these restrictions, the restriction (c) cannot be strictly satisfied because Equation 10 is not a stable rational function, but Go(s) and α(Xc) are determined with the following Equation 11 using an approximation form which can be implemented:
Go = Ra, α(Xc) = (S^2/{S^2+βS+(Ra/Mb)})Xc (11)
Here, β is an arbitrary parameter of a positive real number. When β is set to approach 0 (β→0), the approximation as the position deviation compensation is improved, but the position deviation compensation amount α(Xc) becomes more oscillatory.
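As a rough illustration of this trade-off, the magnitude of the compensation filter S^2/(S^2+βS+(Ra/Mb)) can be evaluated on the jω axis. The sketch below assumes Ra/Mb = ωn^2 with ωn = 200 rad/s (an assumption consistent with the numeric conditions quoted later in this document, not a value given by the patent): the filter has zero DC gain, so α(Xc) adds no steady-state offset, while a smaller β sharpens the resonance near ωn.

```python
# A rough illustration of the β trade-off: magnitude of the compensation
# filter S^2/(S^2 + βS + Ra/Mb) on the jω axis. Ra/Mb = ωn^2 with
# ωn = 200 rad/s is assumed here.
WN2 = 200.0 ** 2  # assumed Ra/Mb

def alpha_gain(w, beta):
    """Magnitude of S^2/(S^2 + beta*S + Ra/Mb) at S = j*w."""
    s = 1j * w
    return abs(s * s / (s * s + beta * s + WN2))

# Zero DC gain: α(Xc) adds no steady-state offset to Xc*.
assert alpha_gain(0.0, 4.0) == 0.0
# Near-unity gain far above ωn: the dynamic term is fully compensated.
assert abs(alpha_gain(1e5, 4.0) - 1.0) < 1e-3
# A smaller β sharpens the resonance at ωn, i.e. the compensation
# amount becomes more oscillatory.
assert alpha_gain(200.0, 0.4) > alpha_gain(200.0, 4.0)
```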
Based on Equations 6, 7, and 11, M, P2^-1 M, and Xc* can be determined as follows:
M = {(Mb+Mc)S^2+Ra}/Ra (12)
P2^-1 M = {McS^2(MbS^2+Ra)}/Ra (13)
Xc* = Xc-(S^2/{S^2+βS+(Ra/Mb)})Xc (14)
FIG. 1 is a block diagram of a position controlling device according to the present embodiment. Portions which differ from the position controlling device of the related art described above will now be described. A position deviation compensation amount α(Xc) is determined with an input of an acceleration instruction value Ac (=d^2Xc/dt^2), based on Equation 11. Based on the above description, the position instruction value Xc* after compensation is calculated by subtracting the position deviation compensation amount α(Xc) from the position instruction value Xc at a subtractor 2. The position instruction value Xco for control is determined, based on the above description, by Xco = MXc*. In addition, the thrust feed forward Ff is determined as Ff = P2^-1 M Xc*.
The actual calculations of various parameters are given by the following Equations 15, 16, and 17:
Ff = P2^-1 M Xc* = ({McS^2(MbS^2+Ra)}/Ra)(Xc-[S^2/{S^2+βS+(Ra/Mb)}]Xc) = McAc+{McMbβS^2/(Ra{S^2+βS+(Ra/Mb)})}Bc (15)
Xco = MXc* = ({(Mb+Mc)S^2+Ra}/Ra)(Xc-(S^2/{S^2+βS+(Ra/Mb)})Xc) = Xc+[{(Mb+Mc)βS+(McRa/Mb)}/(Ra{S^2+βS+(Ra/Mb)})]Ac (16)
Vf = dXco/dt = Vc+[{(Mb+Mc)βS+(McRa/Mb)}/(Ra{S^2+βS+(Ra/Mb)})]Bc (17)
wherein Bc = d^2Vc/dt^2 = d^3Xc/dt^3.
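As a numerical sanity check on the algebra of Equation 15, the factored form P2^-1 M Xc* and the expanded McAc-plus-filtered-Bc form can be compared at an arbitrary Laplace point. The values of Mb, Mc, Ra, and β below are illustrative only, not taken from the patent.

```python
# Numerical consistency check of the Equation 15 identity at an arbitrary
# Laplace point s: the factored form equals the expanded form
# McS^2 + McMbβS^5/(Ra{S^2+βS+Ra/Mb}), both taken per unit Xc
# (Ac = S^2 Xc, Bc = S^3 Xc). Illustrative parameter values only.
MB, MC, RA, BETA = 2.0, 3.0, 5.0, 0.7
s = 0.3 + 1.1j
d = s * s + BETA * s + RA / MB             # common denominator S^2+βS+Ra/Mb

lhs = (MC * s * s * (MB * s * s + RA) / RA) * (1 - s * s / d)   # factored form
rhs = MC * s * s + (MC * MB * BETA * s * s / (RA * d)) * s ** 3  # expanded form

assert abs(lhs - rhs) < 1e-9
```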
Therefore, an acceleration and deceleration processor 1 is a processor which applies a second-order functional acceleration and deceleration process to the position instruction value X so that Bc = d^2Vc/dt^2, which is a second-order derivative with respect to time of the velocity instruction value Vc = dXc/dt, is bounded, and outputs the position instruction value Xc.
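A second-order functional (jerk-limited) acceleration and deceleration process of this kind can be sketched as a piecewise-constant jerk command integrated into acceleration, velocity, and position. The limits J and A_MAX and the phase lengths below are illustrative only, not values from the patent.

```python
# Sketch of a second-order functional acceleration process: the jerk Bc is
# piecewise constant (hence bounded), so Ac ramps linearly and Vc is smooth.
# J, A_MAX, and phase lengths are illustrative values only.
J, A_MAX, DT = 100.0, 10.0, 1e-4
T1 = A_MAX / J            # ramp the acceleration up to A_MAX
T2 = T1 + 0.05            # hold constant acceleration
T3 = T2 + T1              # ramp the acceleration back down to zero

def jerk(t):
    """Bounded jerk command Bc(t); |Bc| never exceeds J by construction."""
    if t < T1:
        return J
    if t < T2:
        return 0.0
    return -J

ac = vc = xc = 0.0
for i in range(int(round(T3 / DT))):
    ac += jerk(i * DT) * DT
    vc += ac * DT
    xc += vc * DT

assert abs(ac) < 0.05     # acceleration returns (approximately) to zero
assert vc > 0.0           # a positive velocity has been built up
assert xc > 0.0           # and the position instruction has advanced
```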
FIG. 2 shows a result of a simulation of a second-order functional acceleration response when a parameter β = 4 is set in the position controlling device of the present embodiment shown in FIG. 1 and target plant parameters and control parameters similar to FIG. 14 are given. For the second-order functional acceleration process, conditions similar to the second-order functional acceleration process of FIGS. 14, 16, and 18 which are already described are chosen. When the expression S^2+βS+(Ra/Mb) is correlated to the standard expression of a second-order system, S^2+2ζωnS+ωn^2, β = 4 corresponds to an attenuation rate ζ = 0.01. As a result, the position controlling device of the present embodiment can inhibit the generation amount and vibration of the error εo of the absolute position during a shaft operation including acceleration and deceleration to very small values.
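The correspondence between β and ζ follows from matching coefficients in the two quadratics: 2ζωn = β, so ζ = β/(2ωn). A minimal check, assuming ωn = 200 rad/s as in the numeric conditions quoted later in this document:

```python
# Matching S^2 + βS + (Ra/Mb) term by term with the standard second-order
# form S^2 + 2ζωnS + ωn^2 gives ωn = sqrt(Ra/Mb) and ζ = β/(2ωn).
# ωn = 200 rad/s is an assumption here, consistent with the later conditions.
WN, BETA = 200.0, 4.0
zeta = BETA / (2.0 * WN)
assert abs(zeta - 0.01) < 1e-12   # β = 4 corresponds to ζ = 0.01
```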
As described, according to the position controlling device of the present embodiment, by having a thrust feed forward structure which controls the structure to be driven according to a position
instruction value for control and a position deviation compensation structure which simultaneously and precisely compensates a position instruction deviation caused by introduction of the thrust feed
forward structure and a position deviation caused by the base displacement, it is possible to inhibit occurrence of vibration and to cause the absolute position (x2-x1) of the structure to be driven of the target plant to follow the position instruction value Xc with high precision during shaft operations including acceleration and deceleration. In addition, because the control amount is suitably varied according to the sizes of the acceleration instruction value Ac and the acceleration derivative instruction value Bc, it is possible to achieve a high control advantage regardless of the size of these parameters.
Next, an example will be described in which the position controlling device according to the present invention is applied to a control shaft targeted to controlling a position of a structure C to be driven on a structure B in FIG. 11. In this case, the parameter to be controlled according to the position instruction value Xc is the position x2 of the structure to be driven of the target plant. Here, the restrictions for achieving the vibration inhibition are the restrictions (a) and (b) which are already described, and a restriction for achieving the position deviation compensation is, based on Xc*-x2:
Xc-x2 = {Xc-Xc*}+{Xc*-x2} = α(Xc)+({Go-(Mb+Mc)S^2-Ra}/Go)Xc* = α(Xc)+({Go-(Mb+Mc)S^2-Ra}/Go){Xc-α(Xc)} = 0 (18)
Equation 18 can be solved for α(Xc) to obtain a restriction:
(Restriction d): The position deviation compensation amount α(Xc) satisfies Equation 19.
α(Xc) = [{(Mb+Mc)S^2+Ra-Go}/{(Mb+Mc)S^2+Ra}]Xc (19)
Similar to the first preferred embodiment, with regard to the restriction (d), the following Equation 20 is utilized to determine Go(s) and α(Xc) through an approximation form.
Go = Ra, α(Xc) = [S^2/(S^2+βS+{Ra/(Mb+Mc)})]Xc (20)
M and P2^-1 M remain as given by Equations 12 and 13, and the position instruction value Xc* after compensation can be represented by the following Equation 21.
Xc* = Xc-[S^2/(S^2+βS+{Ra/(Mb+Mc)})]Xc (21)
FIG. 3 is a block diagram of a position controlling device according to the present embodiment. The structure is similar to the structure of the first preferred embodiment shown in FIG. 1, except that the position instruction value Xc* after compensation is determined by Equation 21. The actual calculations of various parameters are given by Equations 22, 23, and 24.
Ff = P2^-1 M Xc* = ({McS^2(MbS^2+Ra)}/Ra)(Xc-[S^2/{S^2+βS+Ra/(Mb+Mc)}]Xc) = McAc+{(MbMcβS^2-{Mc^2Ra/(Mb+Mc)}S)/(Ra{S^2+βS+Ra/(Mb+Mc)})}Bc (22)
Xco = MXc* = ({(Mb+Mc)S^2+Ra}/Ra)(Xc-(S^2/{S^2+βS+Ra/(Mb+Mc)})Xc) = Xc+{(Mb+Mc)βS/(Ra{S^2+βS+Ra/(Mb+Mc)})}Ac (23)
Vf = dXco/dt = Vc+{(Mb+Mc)βS/(Ra{S^2+βS+Ra/(Mb+Mc)})}Bc (24)
FIG. 4 shows a result of a simulation of a second-order functional acceleration response when a parameter β corresponding to the attenuation rate ζ = 0.01 is set, similar to FIG. 2, in a position controlling device 4 according to the present embodiment shown in FIG. 3 and other conditions similar to FIG. 2 are applied. The result shows that the generation amount and vibration of the positional error Xc-x2 during shaft operations including the acceleration and deceleration are inhibited to very small values, and it can be understood that the position controlling device of the present embodiment is effective when the position of the structure C to be driven on the structure B is controlled, similar to the control of the absolute position of the structure C to be driven.
A characteristic of the present embodiment is that, in order to cancel vibration in the responses of the position of the structure to be driven and the base displacement, a form is employed in which
an acceleration and deceleration process function which has a small introduction impact is applied to the position instruction value after the normal acceleration and deceleration process to cancel
vibration in various feed forward amounts and compensation amounts, to determine the position instruction value for control.
The present embodiment attempts to control the absolute position (x2-x1) of the structure to be driven of the target plant according to the position instruction value Xc. First, an acceleration and deceleration process function H(s) having the position instruction value Xc after the second-order functional acceleration and deceleration process as an input and the position instruction value Xco for control as an output is introduced, and control to achieve Xco = (x2-x1) is considered. The impact of the introduction of the acceleration and deceleration process function H(s) will be described later.
Based on FIG. 12, the relationship between the driving force Fx and the absolute position (x2-x1) of the structure to be driven can be represented by the following Equation 25.
(x2-x1) = {1/(McS^2)}Fx (25)
Therefore, the thrust feed forward amount Ff for controlling Xco = (x2-x1) can be shown with Equation 26.
Ff = McS^2 HXc (26)
The responses of the position x2 of the structure to be driven and the base displacement x1 are shown with Equations 27 and 28.
x2 = HXc+[{1/(MbS^2+Ra)}]McHAc (27)
x1 = {Mc/(MbS^2+Ra)}HAc (28)
Here, P1 represents a transfer function from the driving force Fx to the base displacement x1, and can be represented with Equation 29 based on FIG. 12.
P1 = 1/(MbS^2+Ra) (29)
As the corresponding feed forward structure, Equations 30 and 31 can be considered.
Xco* = HXc+[McS^2/(MbS^2+Ra)]HXc (30)
Vf = dXco*/dt = SHXc+[McS^2/(MbS^2+Ra)]HSXc (31)
The parameter Xco* is a position instruction value corresponding to the position x2 of the structure to be driven.
Here, in order to cancel vibration in the responses of the position x2 of the structure to be driven and the base displacement x1, and to reduce the impact of the introduction, the acceleration and deceleration process function H(s) is defined with Equation 32.
H(s) = (MbS^2+DS+Ra)/(MbS^2+αS+Ra) (32)
Here, α and D are arbitrary parameters of positive real numbers. When α is set to approach 0 (α→0), the introduction impact of H(s) is reduced, but the responses of the position of the structure to be driven and the base displacement become more oscillatory. With regard to the parameter D, if there is a damping component in the structure A, an approximation value is set.
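The notch behavior of H(s) can be illustrated numerically. The sketch below uses assumed values (Mb = 1, Ra = ωn^2 with ωn = 200 rad/s, D = 0, α = 400); none of these are taken from the patent.

```python
import math

# Illustrative parameters: Mb and Ra chosen so the plant pole frequency is
# ωn = sqrt(Ra/Mb) = 200 rad/s; D = 0 gives an exact notch, and α sets the
# damping of the filter denominator. Assumed values, not from the patent.
MB, RA, D, ALPHA = 1.0, 200.0 ** 2, 0.0, 400.0

def H(w):
    """Equation 32, (MbS^2 + DS + Ra)/(MbS^2 + αS + Ra), at S = j*w."""
    s = 1j * w
    return (MB * s * s + D * s + RA) / (MB * s * s + ALPHA * s + RA)

wn = math.sqrt(RA / MB)
assert abs(H(0.0)) == 1.0                    # unity DC gain: no steady offset
assert abs(H(wn)) < 1e-9                     # with D = 0, an exact notch at ωn
assert abs(abs(H(10.0 * wn)) - 1.0) < 0.05   # nearly transparent above the notch
```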
FIG. 5 is a block diagram of a position controlling device 5 according to the present embodiment. Portions which differ from the position controlling devices of the related art which have been described will now be described. The position instruction value Xc which is the output of the acceleration and deceleration processor 50 is input to the acceleration and deceleration process function H(s) shown in Equation 32 and having a notch filter structure with a transfer pole of the target plant 58 as a notch angle frequency. The output of the acceleration and deceleration process function H(s) is the position instruction value Xco for control. An adder 3 adds the first term and the second term of the right side of Equation 30 and outputs the position instruction value Xco* corresponding to the position x2 of the structure to be driven. A differentiator 4 differentiates the position instruction value Xco* and outputs the velocity feed forward amount Vf shown in Equation 31 to an adder 52. Moreover, the position instruction value Xco for control is multiplied by McS^2 so that the thrust feed forward amount Ff shown in Equation 26 is calculated and input to an adder 57.
FIG. 6 shows a result of a simulation of a second-order functional acceleration response when a parameter α = 19810 is set in the position controlling device of the present embodiment shown in FIG. 5 and target plant parameters, control parameters, and second-order functional acceleration conditions similar to FIG. 16 are given. When the polynomial expression in the denominator of H(s), MbS^2+αS+Ra, is correlated to the standard expression of a second-order system, S^2+2ζωnS+ωn^2, α = 19810 corresponds to an attenuation rate ζ of 1 (ζ=1). The result shows that, with the position controlling device of the present embodiment, the control to Xco = (x2-x1) is achieved including the times of acceleration and deceleration (top right drawing in FIG. 6). Because a large value is assigned to the attenuation rate ζ, vibrations in the thrust feed forward amount Ff and the velocity feed forward amount Vf can be removed, and, thus, the vibrations in the driving force Fx (bottom left drawing in FIG. 6) and the base displacement x1 (bottom right drawing in FIG. 6) can be inhibited.
FIG. 7 shows a result of a simulation of a second-order functional acceleration response when only the rigidity Ra of the structure A on the side of the target plant is reduced (-10%) compared to the conditions of FIG. 6. Because the rigidity Ra used in the calculation on the control side is identical to that of FIG. 6, this result simulates a response when the device parameter is changed. Due to the reduction of the rigidity Ra, the base displacement x1 is increased (bottom right drawing in FIG. 7), and the increase causes a control error during acceleration (top right drawing in FIG. 7). However, the vibration inhibition performance is sufficiently high compared to the example control structure of the related art of FIG. 16.
In another preferred embodiment, a position x2 of the structure to be driven of the target plant is controlled according to the position instruction value Xc. In this case also, similar to the third preferred embodiment, first, an acceleration and deceleration process function Hr(s) having the position instruction value Xc after the second-order functional acceleration and deceleration process as an input and the position instruction value Xco for control as an output is introduced, and control to achieve Xco = x2 is considered. The impact of the introduction of the acceleration and deceleration process function Hr(s) will be described later.
Based on FIG. 12, a relationship between the driving force Fx and the position x2 of the structure to be driven can be represented with the following Equation 33.
x2 = [{(Mb+Mc)S^2+Ra}/{McS^2(MbS^2+Ra)}]Fx (33)
Therefore, the thrust feed forward amount Ff for achieving control of Xco = x2 is represented by the following Equation 34.
Ff = [{McS^2(MbS^2+Ra)}/{(Mb+Mc)S^2+Ra}]HrXc (34)
The responses of the position x2 of the structure to be driven and the base displacement x1 to the thrust feed forward amount Ff can be represented by the following Equations 35 and 36.
x2 = P2 Ff = Xco = HrXc (35)
x1 = [McS^2/{(Mb+Mc)S^2+Ra}]HrXc (36)
Therefore, as a corresponding feed forward structure, the following Equations 37 and 38 are considered.
Xco = HrXc (37)
Vf = dXco/dt = SHrXc (38)
Here, in order to remove vibrations in the responses of the position x2 of the structure to be driven and the base displacement x1 and to reduce the impact of introduction, the acceleration and deceleration process function Hr(s) is defined with the following Equation 39.
Hr(s) = {(Mb+Mc)S^2+DS+Ra}/{(Mb+Mc)S^2+γS+Ra} (39)
Here, γ and D are arbitrary parameters of positive real numbers. When γ is set to approach 0 (γ→0), the introduction impact of Hr(s) is reduced, but the responses of the position of the structure to be driven and the base displacement become more oscillatory. With regard to the parameter D, when there is a damping component in the structure A, an approximated value is set.
FIG. 8 is a block diagram of a position controlling device 10 of the present embodiment. Portions which differ from the position controlling devices which have been described will now be described. The position instruction value Xc which is the output of the acceleration and deceleration processor 50 is input to the acceleration and deceleration process function Hr(s) shown in Equation 39 and having a notch filter structure with a transfer zero point from the driving force Fx to the position x2 of the structure to be driven of the target plant 59 as a notch angle frequency. The output of the acceleration and deceleration process function Hr(s) is the position instruction value Xco for control. The velocity feed forward amount Vf shown in Equation 38 is determined by differentiating the position instruction value Xco with a differentiator 54. Moreover, the thrust feed forward amount Ff shown in Equation 34 can be determined by multiplying Xc by P2^-1 Hr, because P2^-1 Hr is made a stable bounded function.
When the polynomial expression (Mb+Mc)S^2+γS+Ra in the denominator of Hr(s) is correlated to the standard second-order expression, S^2+2ζωnS+ωn^2, γ = 25010 corresponds to an attenuation rate ζ of 1. FIG. 9 shows a result of a simulation of a second-order functional acceleration response when a parameter γ = 25010 is set in the position controlling device of the present embodiment shown in FIG. 8, and target plant parameters, control parameters, and second-order functional acceleration conditions identical to FIG. 6 are given. According to the position controlling device of the present embodiment, the control of Xco = x2 is achieved even during acceleration and deceleration (top right drawing of FIG. 9). Because a large attenuation rate ζ is set, vibrations in the thrust feed forward amount Ff and the velocity feed forward amount Vf can be removed, and, thus, the vibrations in the driving force Fx (bottom left drawing in FIG. 9) and the base displacement x1 (bottom right drawing in FIG. 9) can be inhibited similarly as in the third preferred embodiment.
FIG. 10 shows a result of a simulation of a second-order functional acceleration response when only the rigidity Ra of the structure A on the side of the target plant is reduced (-10%) compared to the conditions of FIG. 9, similar to the conditions of FIG. 7 compared to FIG. 6. Because the rigidity Ra used in the calculation at the control side is identical to FIG. 9, this result simulates a response when the device parameter is changed. Because of the reduction of the rigidity Ra, the base displacement x1 is increased (bottom right drawing of FIG. 10). However, the control error defined by (Xco-x2) (top right drawing in FIG. 10) is not directly affected, and a high vibration inhibition performance is maintained similarly as in the third preferred embodiment.
The impact of introduction of the acceleration and deceleration process function H(s) shown in Equation 32 will now be described. Because H(s) has a construction common with the acceleration and deceleration process function Hr(s) shown in Equation 39, in the following description, the normalized function F(s) of the following Equation 40 will be considered.
F(s) = (S^2+ωn^2)/(S^2+2ζωnS+ωn^2) (40)
The introduction impact will be considered in comparison to a linear acceleration and deceleration process L(s) = (1-e^(-TS))/TS (wherein T is a time constant of the linear acceleration and deceleration process), which is a typical position acceleration and deceleration process.
A direct impact of the acceleration and deceleration process on the position instruction is that a delay is caused between the position instruction X before the acceleration and deceleration process and the position instruction Xo after the acceleration and deceleration process. Thus, a delay εp = X-Xo of the position instruction at a steady state with respect to a step velocity instruction dX/dt = V is considered. In the case of the linear acceleration and deceleration process,
εp = (T/2)V (41)
On the other hand, the delay εp in the acceleration and deceleration process function F(s) of the embodiments of the present invention is:
εp(S) = {bS/(S^2+bS+c)}(V/S^2) (42)
wherein b = 2ζωn and c = ωn^2. Using the final value theorem and the relationship of Equation 40, εp can be represented by the following Equation 43.
εp = (b/c)V = (2ζ/ωn)V (43)
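Equations 41 and 43 can be evaluated directly under the conditions quoted later in this section (T = 200 ms, ωn = 200 rad/s, V = 0.4 m/s, ζ = 1); a minimal sketch:

```python
# Steady-state position-instruction delay εp for both acceleration and
# deceleration processes, using the conditions quoted in the text:
# T = 200 ms, ωn = 200 rad/s, V = 0.4 m/s, ζ = 1.
T, WN, V, ZETA = 0.2, 200.0, 0.4, 1.0

eps_linear = (T / 2.0) * V        # Equation 41: εp = (T/2)V
eps_f = (2.0 * ZETA / WN) * V     # Equation 43: εp = (2ζ/ωn)V

assert abs(eps_linear - 0.040) < 1e-9   # 40 mm for the linear process
assert abs(eps_f - 0.004) < 1e-9        # 4 mm for F(s)
```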
It is known that, when a plurality of shafts are synchronously operated, a trajectory error is caused by the acceleration and deceleration process. Thus, a response radius Ro after the acceleration and deceleration process in the steady state is considered with respect to an arc position instruction (radius R and angular velocity ω) by synchronous operation of two orthogonal shafts, and the trajectory error due to the acceleration and deceleration process is evaluated with an arc radius reduction amount ΔR = R-Ro. Because the response radius Ro is equal to the steady-state amplitude of Xo(t) with respect to X(t) = Rcosωt, in the linear acceleration and deceleration process,
Xo(s) = {(1-e^(-TS))/TS}{RS/(S^2+ω^2)} (44)
can be inverse Laplace transformed, and, because ωT is much less than 1 (ωT<<1) in general operation, the response radius Ro can be represented by the following Equation 45.
Ro = (R/ωT)(2-2cosωT)^(1/2) ≈ (R/ωT){ωT-(ωT)^3/24} = R-R(ωT)^2/24 (45)
The arc radius reduction amount ΔR can be approximated with the following Equation 46.
ΔR = R-Ro ≈ {(ωT)^2/24}R (46)
In the case of the acceleration and deceleration process function F(s) of the embodiments of the present invention,
Xo(s) = {(S^2+ωn^2)/(S^2+2ζωnS+ωn^2)}{RS/(S^2+ω^2)} (47)
is inverse Laplace transformed, and the response radius Ro is:
Ro = R cos θ (48)
Therefore, the arc radius reduction amount ΔR is shown by the following Equation 49.
ΔR = R-Ro = (1-cos θ)R (49)
wherein θ = tan^-1{2ζωnω/(ωn^2-ω^2)}.
When T=200 ms, ωn=200 rad/sec, V=0.4 m/sec, and ζ=1 are selected as conditions similar to the conditions employed in the above-described simulations, the delay εp in the position instruction is 40 mm
in the linear acceleration and deceleration process (εp=40 mm) and is 4 mm in the acceleration and deceleration process function F(s) of the embodiments of the present invention (εp=4 mm). When, on
the other hand, R=0.1 m and ω=2 rad/sec are chosen as arc operation conditions, the arc radius reduction amount ΔR is approximately 670 μm in the linear acceleration and deceleration process (ΔR≈670
μm) and is approximately 20 μm in the acceleration and deceleration process function F(s) of the embodiments of the present invention (ΔR≈20 μm). In other words, the delay in the position instruction
and the trajectory error which are caused by introduction of the acceleration and deceleration process function H(s) or Hr(s) of the embodiments of the present invention are sufficiently small
compared to the delay in the position instruction and the trajectory error caused in the acceleration and deceleration processor which is already present, and it can thus be understood that the
impact due to the introduction is small.
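The two arc-radius reduction figures can be reproduced from Equations 46 and 49 with the stated conditions; a minimal check:

```python
import math

# Arc-radius reduction ΔR for both processes under the arc conditions in
# the text: R = 0.1 m, ω = 2 rad/s, with T = 200 ms, ωn = 200 rad/s, ζ = 1.
R, W, T, WN, ZETA = 0.1, 2.0, 0.2, 200.0, 1.0

dr_linear = ((W * T) ** 2 / 24.0) * R                      # Equation 46
theta = math.atan2(2.0 * ZETA * WN * W, WN ** 2 - W ** 2)  # θ used in Equation 49
dr_f = (1.0 - math.cos(theta)) * R                         # Equation 49

assert abs(dr_linear - 670e-6) < 5e-6   # approximately 670 μm
assert abs(dr_f - 20e-6) < 1e-6         # approximately 20 μm
```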
As described, a position controlling device of the embodiments of the present invention has a feed forward structure for thrust and velocity for controlling the structure to be driven according to the position instruction value for control and, at the same time, calculates the position instruction value for control by applying the acceleration and deceleration process of a notch filter structure, which has a small introduction impact, to the position instruction value. With this structure, vibration in various feed forward amounts can be cancelled, and the responses of the position of the structure to be driven and the base displacement can be controlled precisely and without vibration. Because the control amount is suitably varied according to the sizes of the acceleration instruction value Ac (=d^2Xc/dt^2) and the acceleration derivative instruction value Bc (=d^3Xc/dt^3), a high control advantage can be obtained regardless of the sizes of the parameters Ac and Bc. In addition, because vibration in the responses of the position of the structure to be driven and the base displacement is cancelled, no vibration is induced in various parts of the device, and an advantageously high degree of vibration inhibition can be maintained even when the device parameters are changed.
Bridgeport, CT Geometry Tutor
Find a Bridgeport, CT Geometry Tutor
...Whether you are looking for remedial help, or looking to advance a gifted child, I can help. I always have an eye towards building students’ confidence in the math at their level, but also
preparing them for the math to come. I teach students WHY they are doing what they need to do, not just procedures and rules, to deepen and broaden their understanding of concepts.
24 Subjects: including geometry, chemistry, GRE, ASVAB
...First in my college prep school, and later at Boston University, I cataloged a long list of various courses, most heavily concentrated in maths and sciences. I have since been working as a vet
tech/ vet assistant first for large and now small animals. My current academic/ career goals include e...
23 Subjects: including geometry, chemistry, biology, algebra 2
...I can show the student how to use the tools of Geometry to make difficult tasks easier. Photography and carpentry are two of my hobbies in which knowledge of Geometry is critical, and I can
share this with the student. I’m able to help the student understand topics ranging from points, lines and planes to loci and coordinate transformations.
21 Subjects: including geometry, calculus, physics, ASVAB
I'm currently the chair of the mathematics dept at the Jewish High School of CT in Bridgeport. I've been tutoring professionally for almost 10 years and have a masters degree in theoretical
mathematics. While I specialize in mathematics and SAT prep, I have experience teaching many academic subjects at all levels, from elementary to graduate school.
30 Subjects: including geometry, reading, chemistry, English
...For the next 15 years, I programmed in C on a regular basis, as a student, teacher, and consultant. I have Undergraduate, Master's and PhD degrees in Computer Science and I was an Assistant Professor of Computer Science.
36 Subjects: including geometry, reading, ESL/ESOL, algebra 1
Related Bridgeport, CT Tutors
Bridgeport, CT Accounting Tutors
Bridgeport, CT ACT Tutors
Bridgeport, CT Algebra Tutors
Bridgeport, CT Algebra 2 Tutors
Bridgeport, CT Calculus Tutors
Bridgeport, CT Geometry Tutors
Bridgeport, CT Math Tutors
Bridgeport, CT Prealgebra Tutors
Bridgeport, CT Precalculus Tutors
Bridgeport, CT SAT Tutors
Bridgeport, CT SAT Math Tutors
Bridgeport, CT Science Tutors
Bridgeport, CT Statistics Tutors
Bridgeport, CT Trigonometry Tutors
Nearby Cities With geometry Tutor
Danbury, CT geometry Tutors
East Haven, CT geometry Tutors
Easton, CT geometry Tutors
Fairfield, CT geometry Tutors
Hamden, CT geometry Tutors
Milford, CT geometry Tutors
New Haven, CT geometry Tutors
Norwalk, CT geometry Tutors
Queens, NY geometry Tutors
Shelton, CT geometry Tutors
Stamford, CT geometry Tutors
Stratford, CT geometry Tutors
Trumbull, CT geometry Tutors
Waterbury, CT geometry Tutors
Westport, CT geometry Tutors | {"url":"http://www.purplemath.com/Bridgeport_CT_geometry_tutors.php","timestamp":"2014-04-20T21:43:45Z","content_type":null,"content_length":"24365","record_id":"<urn:uuid:3bdb82ea-71df-4f96-a4c2-c4c9762ee1fb>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
natural convection from hot pipe in water
Thank you for your response..
Yes, I too verified that a copper coil of 5 mm OD and 75 cm length at 60 deg cels., when placed in water at 40 deg cels., can provide heat rates as high as 1500 watts.
In my case, my mistake was in assuming the coil to be at 60 deg cels. This coil is actually the condenser in the refrigerator. The ref. fluid that flows through it is at 60 deg cels. and I wrongly
assumed that to be the temperature of the coil.
On accounting for the two convection resistances (1/hA) at the refrigerant and water surfaces, I found that the coil (whose resistance is negligible) is at a temperature of 42 deg, only 2 deg greater than the temperature of water.
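The two-resistance arithmetic can be sketched as follows. The geometry and temperatures are the ones quoted in this thread; the two film coefficients are round numbers I assumed so that the result lands near the quoted figures, not measured values:

```python
import math

d = 0.005            # coil outer diameter, m (5 mm OD)
L = 0.75             # coil length, m
A = math.pi * d * L  # surface area, m^2 (thin wall, so inner ~ outer)

h_ref = 120.0        # assumed refrigerant-side film coefficient, W/m^2.K
h_water = 1000.0     # assumed water-side film coefficient, W/m^2.K
T_ref, T_water = 60.0, 40.0   # deg C

# Two convection resistances (1/hA) in series; coil wall neglected.
R_total = 1.0 / (h_ref * A) + 1.0 / (h_water * A)
Q = (T_ref - T_water) / R_total      # heat rate, W

# Coil surface temperature from the water-side resistance alone.
T_coil = T_water + Q / (h_water * A)
```

With these assumed coefficients, Q comes out in the 20-30 W range and T_coil a couple of degrees above the water, consistent with the figures in the post.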
This gives me heat rates of 20-30 watts which is realistic for the case of the condenser coil in the defrost water tray. | {"url":"http://www.physicsforums.com/showthread.php?t=508245","timestamp":"2014-04-18T03:10:47Z","content_type":null,"content_length":"29064","record_id":"<urn:uuid:0f5a8d7a-a813-4890-ae1e-17f6041687e0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Project Management, Policy Analysis, Marketing
A firm which has been in existence for some time will have accumulated considerable data on sales pertaining to different time periods. Such data, when arranged chronologically, yield a ‘time series’. The time series relating to sales represents the past pattern of effective demand for a particular product. Such data can be presented either in tabular form or graphically for further analysis. The most well-known method of analysis of a time series is to project its trend. The trend line can be fitted through a series either visually or by means of statistical techniques such as the least squares method. The analyst chooses a plausible algebraic relation between sales and the independent variable, time.
The trend line is then projected into the future by extrapolation. The basic assumption of the trend method is that the past rate of change of the variable under study will continue in the future.
This technique yields acceptable results so long as the time series shows a persistent tendency to move in the same direction. Whenever a turning point occurs, the trend projection breaks down.
Nevertheless, a forecaster could normally expect to be right in most forecasts especially if the turning points are few and spaced at long intervals from each other.
The real challenge of forecasting is in the prediction of turning points rather than in the projection of trends. It is when turning points occur that management will have to alter and revise its
sales and production strategies most drastically.
There are primarily four sets of factors responsible for the fluctuations and turning points in a time series: trend, seasonal variations, cyclical fluctuations, and irregular or random forces. The problem in forecasting is to separate and measure each of these four factors.
The fundamental approach is to treat the original time series data (O, or observed data) as composed of four parts: a secular trend (T), a seasonal factor (S), a cyclical element (C) and an irregular movement (I). It is generally assumed that these elements are bound together in a multiplicative relationship represented by the equation O = TSCI.
The usual practice is to first compute the trend from the original data. The trend values are then eliminated from the observed data (TSCI/T = SCI). The next step is to calculate the seasonal index, which is used to remove the seasonal effect (SCI/S = CI). A cycle is then fitted to the remainder, which also contains the irregular effect.
The foregoing approach to the decomposition of time series data is a useful analytical device for understanding the nature of business fluctuations. The trend and seasonal factor can be forecast, but the prediction of cycles is hazardous for the simple reason that there is no regularity in cyclical behavior.
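The first two steps of this procedure (fit a least-squares trend, divide it out, then average the detrended ratios season by season) can be sketched as follows, with synthetic quarterly data and function names of my own:

```python
def fit_trend(y):
    """Least-squares line y = a + b*t over t = 0..n-1; returns (a, b)."""
    n = len(y)
    t_mean = (n - 1) / 2.0
    y_mean = sum(y) / n
    b = sum((t - t_mean) * (yt - y_mean) for t, yt in enumerate(y)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    return y_mean - b * t_mean, b

def seasonal_index(y, period):
    """Average the detrended ratios O/T = SCI by season to isolate S."""
    a, b = fit_trend(y)
    ratios = [yt / (a + b * t) for t, yt in enumerate(y)]
    return [sum(ratios[s::period]) / len(ratios[s::period])
            for s in range(period)]

# Synthetic quarterly sales: upward trend with a strong Q4 season.
sales = [102, 98, 95, 130, 112, 108, 105, 144, 122, 118, 115, 158]
idx = seasonal_index(sales, 4)   # the Q4 index should come out above 1
```

An index above 1 marks a quarter that runs above trend (here Q4); dividing each observation by its seasonal index deseasonalizes the series before the cyclical fit.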
There are, though, two assumptions underlying this approach:
• The analysis of movements would be in the order of trend, seasonal variation and cyclical changes, and
• The effects of each component are independent of each other.
For the use of economic indicators, the following steps have to be taken:
• See if a relationship exists between the demand for a product and certain economic indicators.
• Establish the relationship through the method of least squares and derive the regression equation. Assuming the relationship to be linear, the equation will be of the form y = a + bx. There can be curvilinear relationships as well.
• Once regression equation is derived, the value of Y i.e. demands, can be estimated for any given value of X.
• Past relationships may not recur. Hence the need for value judgment as well. New factors may also have to be taken into consideration.
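The regression steps above can be sketched with illustrative numbers (the indicator series and all names here are hypothetical):

```python
# Fit demand = a + b * indicator by least squares, then estimate demand
# for a new indicator value, as in the steps listed above.

def least_squares(x, y):
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    b = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) \
        / sum((xi - xm) ** 2 for xi in x)
    return ym - b * xm, b   # (a, b)

income_index = [4.0, 5.0, 6.0, 7.0, 8.0]       # hypothetical indicator X
demand       = [14.0, 17.0, 20.0, 23.0, 26.0]  # observed demand Y

a, b = least_squares(income_index, demand)
forecast = a + b * 10.0   # demand estimate when the indicator reaches 10
```

Since the toy data lie exactly on y = 2 + 3x, the fit recovers those coefficients and the forecast at x = 10 is 32; with real data the residuals, and the value judgment the post mentions, both matter.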
Merits and Limitations of Time Series Analysis
Merits
1. The trend method, based on the least squares principle, is quite popular due to its simplicity.
2. It provides good results and is particularly suitable for the long run.
3. It is very simple in the sense that it doesn’t require knowledge of economic theory or market structure.
Limitations
1. This method is based on the assumption that future events will follow the same path, which may not always be true.
2. It is not suitable for short-term demand forecasting. This method cannot usually explain the turning points of the business cycle.
No comments: | {"url":"http://analysisproject.blogspot.com/2013/06/process-of-demand-forecasting-by-time.html","timestamp":"2014-04-16T04:40:42Z","content_type":null,"content_length":"111134","record_id":"<urn:uuid:42e9fa77-c26c-4ba8-93c3-1beb37f70bc3>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00454-ip-10-147-4-33.ec2.internal.warc.gz"} |
We need a point and a slope.
Since f(1) = 1, the point we want is (1,1). Since we found earlier that f'(1) = 2, 2 is the slope we want.
We want a line of the form
y = mx + b whose slope is 2, so m = 2: y = 2x + b.
We also want (1,1) to be on the line, so
1 = 2(1) + b.
Solving, we find b = -1. This means the equation for the tangent line to f at 1 is
y = 2x-1.
To check this answer, we graph the function f(x) = x^2 and the line y = 2x - 1 on the same graph:
Since the line bounces off the curve at x = 1, this looks like a reasonable answer.
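The same check can be done numerically (helper names are mine): estimate the slope with a central difference, then force the line through the point of tangency.

```python
def tangent_line(f, a, h=1e-6):
    """Return (m, b) of the tangent y = m*x + b at x = a,
    using a central-difference estimate of f'(a)."""
    m = (f(a + h) - f(a - h)) / (2 * h)
    b = f(a) - m * a   # make the line pass through (a, f(a))
    return m, b

m, b = tangent_line(lambda x: x ** 2, 1.0)  # expect roughly m = 2, b = -1
```

For f(x) = x^2 at x = 1 this recovers y = 2x - 1, matching the answer found above.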
When finding equations for tangent lines, check the answers. Stick both the original function and the tangent line in the calculator, and make sure the picture looks right. If we find something like
this, we know we've made a mistake somewhere:
If we find something that looks like a tangent line and quacks like a tangent line, there's a good chance we've correctly found the tangent line.
Since we haven't discussed the shortcuts for finding derivatives yet, these exercises will require derivatives we've already found.
We can find the derivatives again for practice or we can go look them up. | {"url":"http://www.shmoop.com/derivatives/finding-tangent-line-examples.html","timestamp":"2014-04-16T21:01:48Z","content_type":null,"content_length":"28793","record_id":"<urn:uuid:02ab9a25-7652-4198-8429-3f88bf156d93>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00098-ip-10-147-4-33.ec2.internal.warc.gz"} |
Generalized complex groupoid
Is there any nontrivial example of Generalized complex groupoid?
By trivial, I mean all the classes of symplectic groupoids/ Abelian varieties as well as their products.
What I mean is that, is there any example of a groupoid in the category of generalized complex (GC) manifolds, in the sense of Hitchin? GC geometry is a way to unify complex geometry and symplectic geometry into one realm. The definition can be found on the wiki.
The space of arrows is a generalized complex manifold, and the source and target maps are generalized complex morphisms.
I can only come up with trivial examples, which are symplectic/complex, so are not real examples reflecting the nature of the theory.
-Any complex manifold is a GC manifold, so any abelian variety is a GC groupoid.
-Any symplectic manifold is a GC manifold, so any symplectic groupoid is also a GC groupoid.
-any product of these examples is also a GC groupoid.
But I can not find out any other example.
dg.differential-geometry ct.category-theory
Could you clarify what you mean? – David Roberts Jan 7 '11 at 0:29
Please read the "how to ask" page, which has a link on the top of this page. – S. Carnahan♦ Jan 7 '11 at 4:37
3 Answers
There are three examples which I'm aware of.
1. GC Lie groups: these are Lie groups equipped with GC structures compatible with the multiplication map. Holomorphic Poisson Lie groups are an example of this, but there are others. For example, the known examples of generalized Kahler structures on compact even-dimensional semisimple Lie groups (we just need a bi-invariant metric, not all hypotheses are necessary) consist of two commuting GC structures, one of which is multiplicative in the above sense, and the other of which is a GC homogeneous space over the GC Lie group defined by the first. This situation will be familiar to those in Poisson Lie group theory, and this is joint work in progress with Jiang-Hua Lu. David is of course correct in his statement that GC actions of GC Lie groups would then define GC action groupoids.

2. B-symplectic groupoids as described in http://arxiv.org/abs/math/0412097. These are, first and foremost, symplectic groupoids, but they have an extra 2-form making them GC Lie groupoids.

3. Any holomorphic Poisson groupoid is an example of a generalized complex groupoid. For example, if Z is a Poisson manifold, then $Z\times Z$ is a Poisson groupoid and hence a generalized complex groupoid.

Thanks for confirming my (somewhat educated) guess, Marco! – David Roberts Jan 8 '11 at 23:33
OK, I see what you mean now. How about this: take a complex Lie group $G$ which acts by symplectomorphisms on a symplectic manifold $(M,\omega)$. Then there is a Lie groupoid $M \times G \rightrightarrows M$ - the action groupoid - which actually lifts to be internal to generalised complex manifolds, because the product of a complex manifold and a symplectic manifold is a generalised complex manifold.

(Thanks to Marco G for confirming this last fact)
I'm not very comfortable with GC geometry, but if I'm not mistaken Poisson-Lie groups might be group objects in the generalized complex world, so if you're given a Poisson space with a Poisson-Lie action (as these come up very often in integrable systems) that would qualify as a nontrivial example of a GC groupoid.
| {"url":"http://mathoverflow.net/questions/51361/generalized-complex-groupoid?sort=oldest","timestamp":"2014-04-16T13:59:55Z","content_type":null,"content_length":"61972","record_id":"<urn:uuid:f14a79dd-09b2-4974-93c4-f00bf38ab720>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- February 2009, week 4 (#282) -- LISTSERV at the University of Georgia
Date: Wed, 25 Feb 2009 11:10:18 -0500
Reply-To: Peter Flom <peterflomconsulting@mindspring.com>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Peter Flom <peterflomconsulting@MINDSPRING.COM>
Subject: Re: oversampling too much?
Comments: To: Gary <fuguoyi@GMAIL.COM>
Content-Type: text/plain; charset=UTF-8
Gary <fuguoyi@GMAIL.COM> wrote
>I am new to this group, and just started a job with a bank. When
>modeling rare events in marketing, it has been suggested by many to
>take a sample stratified by the dependent variable(s) in order to
>allow the modeling technique a better chance of detecting a
>difference. Many literature suggests the proportion of the event in
>the sample seems to range between 15-50% for a binary outcome, and we
>can use an offset to adjust it.
>The response rate of my current case is 0.3%, and when I build the
>model, I oversampled the response to 25%. However, the tradition here
>is to oversample to 1%, and they told me that if I oversample too much,
>the model will be sensitive.
>Is there any problem oversampling from 0.3% (8000 out of 2.2M targets)
>to 25% (8000 resps and 24000 non-resps). We have about 500 variables
>to build the model.
I am not an expert on this area, but
1) I don't see how oversampling from an existing data set helps. I could see
oversampling when *building* a data set. You want to oversample rare populations so that
you have enough people from those populations. But in your situation, I think the
only advantage of oversampling would be the speed with which the logistic regression runs.
(That's just my intuition ....)
2) I am concerned with any model that has 500 variables, *regardless* of the number of cases.
The rule of thumb of 10-1 is not bad, but it's not ironclad. What are these 500 variables? How are they
3) Since you are in marketing, I imagine you are mainly or entirely interested in prediction, rather than explanation. You might consider multimodel averaging (see a book by Burnham and Anderson)
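On the offset adjustment Gary mentions in the question: one common way to express it (the King-Zeng style prior correction; the function and variable names here are mine) is to shift the fitted intercept from the oversampled event rate back to the population rate, using the figures from the post:

```python
import math

def corrected_intercept(b0_sample, pop_rate, sample_rate):
    """Adjust the intercept of a logistic model fit on oversampled data
    so predicted probabilities reflect the true population event rate."""
    offset = math.log(pop_rate / (1 - pop_rate)) \
           - math.log(sample_rate / (1 - sample_rate))
    return b0_sample + offset

# 0.3% population event rate oversampled to 25%, as in the post:
b0_adj = corrected_intercept(0.0, pop_rate=0.003, sample_rate=0.25)
# The shift is logit(0.003) - logit(0.25), a large negative number.
```

The slope coefficients are left alone; only the intercept (equivalently, an offset term in the linear predictor) absorbs the sampling-rate change.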
Peter L. Flom, PhD
Statistical Consultant
www DOT peterflomconsulting DOT com | {"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0902d&L=sas-l&F=&S=&P=32421","timestamp":"2014-04-20T05:45:03Z","content_type":null,"content_length":"11011","record_id":"<urn:uuid:ae539d53-79ab-44a4-bbca-20f2e10fd922>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole.
Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material,
please consider the following text as a useful but insufficient proxy for the authoritative book pages.
Do not use for reproduction, copying, pasting, or reading; exclusively for search engines.
OCR for page 62
CHAPTER 3
Interpretation, Appraisal, and Applications

3.1 Overview

This chapter provides interpretation, evaluation, and applications of the findings of Chapter 2 in developing research deliverables for the precast bent cap systems investigated. In particular, this chapter presents design specifications, design examples, and design flow charts developed using specimen test results and related references. Design methodologies for emulative connections generally follow existing CIP methodologies, but changes are incorporated into new or revised design specifications. Presented construction specifications were developed using specifications previously developed together with results from test specimen fabrication and assembly (8, 21, 22, 23, 26). All research deliverables are also presented as attachments to this report, grouped in the following categories: proposed design specifications (new and revised), design flow charts, design examples, construction specifications, and example connection details. In addition, an implementation plan is provided. The attachments provide a detailed list of these deliverables (attachments available at www.trb.org/Main/Blurbs/164089.aspx).

Design specifications for the SDCs--SDCs C and D, SDC B, and SDC A--are given in appropriate format for incorporation into a future edition of the AASHTO Guide Specifications for LRFD Seismic Bridge Design (LRFD SGS) (1). A major proposed change is to revise Article 8.13, "Joint Design for SDCs C and D" of the 2009 LRFD SGS to include precast bent cap connections (grouted duct and cap pocket). However, to address all seismic design categories, two new articles are also required. Therefore, current Article 8.13, "Joint Design for SDCs C and D" is renumbered as Article 8.15, "Joint Design for SDCs C and D." This allows two new articles to be added: Article 8.13, "Joint Design for SDC A" and Article 8.14, "Joint Design for SDC B."

Design flow charts and design examples are presented to illustrate the proper use of design specifications for both grouted duct and cap pocket connections at all SDC levels (SDCs A, B, C, and D). Construction specifications are provided as a new proposed article--Article 8.13.8, "Special Requirements for Precast Bent Cap Connections"--to be added to the AASHTO LRFD Bridge Construction Specifications (LRFD BCS) (35).

3.2 Development of Design Specifications

This section presents the basis for the provisions proposed for incorporation into the LRFD 2009 SGS (1). For simplicity, the following sections generally use the same outline as that found in the 2009 LRFD SGS. Proposed specifications are given below. References to articles within this section refer to the LRFD SGS (2009 edition or proposed specifications). Proposed design specifications have been prepared in the format and language of the 2009 LRFD SGS with detailed commentary (1). In addition, detailed drawings are incorporated into the design specifications, including labeling of precast bent cap features and joint shear reinforcement. Many sections of this chapter are directly incorporated into the proposed design specifications, but not all sections of the specifications are shown herein. It is recommended that the accompanying attachments be reviewed together with this section.

Chapter 3 refers extensively to 2009 LRFD SGS (1) provisions, which adopt the AASHTO convention of using units of ksi for f'c. This practice results in different coefficients than those presented formerly using units of psi. For example, terms such as 3.5√f'c psi appear as 0.11√f'c ksi for the same provision (likely joint cracking). This chapter adds clarifying units as needed.

3.2.1 Overview

Based on specimen test results and analysis, the design specifications presented in the following sections differ in some
respects from the 2009 LRFD SGS provisions for nonintegral CIP bent caps (1). Precast bent cap connections conservatively require that the joint principal tensile stress be calculated to determine the additional joint shear reinforcement requirement not only for SDCs C and D (as required for CIP design), but also for SDC B. Where the joint principal tensile stress, pt, indicates likely joint cracking (0.11√f'c ksi or larger), grouted duct design specifications for joint shear reinforcement closely match CIP specifications. Cap pocket specifications, however, account for use of a single corrugated pipe that replaces transverse joint reinforcement, require a supplementary hoop at each end of the pipe and a smaller area of vertical joint stirrups, and do not specify horizontal J-bars. Where the joint principal tensile stress indicates joint cracking is not expected (less than 0.11√f'c), precast bent cap connections in SDCs B, C, and D still require minimum transverse reinforcement and vertical stirrups within the joint. All precast bent cap connections require bedding layer reinforcement, and specifications ensure proper design and placement of the column top hoop. SDC A joints also prescribe minimum transverse reinforcement and vertical joint stirrups within the joint.

For hybrid bent cap connections, the experimental results indicated that many of the existing joint detailing provisions are reasonable for implementation. Therefore, the underlying joint transfer mechanism for CIP and emulative bent caps is employed for hybrid bent caps. New provisions were added to the LRFD SGS for the design of hybrid systems to ensure that the response characteristics of a hybrid system are achieved.

In the 2009 LRFD SGS, there are some disparities between the joint design provisions for nonintegral systems and for transverse design of integral systems (1). The general mechanism for transverse response of a multicolumn nonintegral or integral structure is essentially the same. Therefore, recommended modifications are presented for integral systems to develop a consistent design specification. Furthermore, a variety of design and detailing provisions is recommended for integral precast systems in order to ensure that reliable and safe seismic response is achieved.

3.2.2 Displacement Magnification for Short Period Structures

It is essential to consider the impacts on expected seismic demands when utilizing a system that has a significantly different mode of seismic response. The nonintegral emulative and integral details have been shown to perform in a similar manner to CIP structures. Therefore, these systems can be implemented using the same lateral seismic demand procedures that are currently employed in the LRFD SGS. However, the hybrid details investigated are aimed at providing a different mode of seismic response that has inherently less energy dissipation capacity when considering the hysteretic response. The series of nonlinear time history analyses conducted on hybrid systems described in Chapter 2 indicate that the experienced seismic demands for hybrid systems designed in accordance with the provisions described herein are of similar magnitude to a CIP system. Therefore, the current provisions as specified in Article 4.3.3 can be implemented for hybrid systems.

3.2.3 Vertical Ground Motion Design Requirements

The jointed nature of discontinuous integral precast superstructures with vertical joints at the bent cap face requires special attention when considering potential flexural and shear demands. The basic design philosophy for lateral loading is to use capacity design procedures to ensure the elastic response of the superstructure. However, vertical seismic loading cannot be handled with the same capacity design procedures because there is not a well-defined mechanism for inelastic response. The effects of vertical excitation can impose significant flexural and shear demands on the superstructure at the interface between the bent cap and girder whether it is a precast or CIP system. Therefore, seismic demands generated from vertical motions must be considered in seismic design. Additionally, seismically induced foundation movements such as relative settlement, lateral spreading, and liquefaction can induce substantial demands on the superstructure. This topic is covered in more detail in the discussion of superstructure design provisions. For precast systems, the potential implications of vertical excitation are greater than comparable CIP systems due to possible concentrated joint rotations and the reliance on shear friction mechanisms to resist vertical shear demands across the joint. It is recommended that more refined vertical seismic demand provisions be developed for all bridge systems; however, at a minimum, the following provision is recommended for inclusion in Article 4.7.2 of the LRFD SGS for precast systems:

For integral precast bridge superstructures with primary members that are discontinuous at the face of the bent cap (i.e., precast segmental, integral spliced girder systems, etc.), vertical seismic demand shall be explicitly considered in superstructure design for both moment and shear using equivalent static, response spectrum, or time history analysis. Demands from vertical ground motion shall be combined with horizontal seismic demands based on plastic hinging forces developed in accordance with Article 4.11.2. Seismic demands shall be combined considering 100% of the demand in the vertical direction added with 30% of the seismic demand resulting from flexural hinging in one of the horizontal perpendicular directions (longitudinal) and 30% of the seismic demand resulting from
flexural hinging in the second perpendicular horizontal direction (transverse).

A major obstacle that must be overcome is the development of improved provisions for the development of vertical seismic loadings. Current provisions in the 2009 LRFD SGS admit that there are shortcomings in the design requirements for vertical excitation that must be resolved for all bridge systems (1).

3.2.4 Analytical Plastic Hinge Length

For integral bridge systems, it is desirable to have an understanding of the expected rotation capacity of the superstructure when considering demands associated with vertical loading or potential seismically induced relative settlement. Similar to column systems, the use of moment-curvature analysis and an analytical plastic hinge length can provide an easy-to-implement method for the estimation of the inelastic response of a superstructure joint and its ultimate rotation capacity. Moment-curvature analysis for capacity protected superstructure elements is already required for SDC C and D structures per the 2009 LRFD SGS Article 8.10 (1). The only obstacle in the determination of the inelastic flexural response is a reasonable estimate of the analytical plastic hinge length. For elements that are flexurally dominated, the analytical plastic hinge length can be reasonably approximated as one-half of the element depth in the direction of loading. Therefore, the following is a recommended addition to the 2009 LRFD SGS as Article 4.11.6.2:

Lps = 0.5 Ds    (Proposed LRFD SGS Eq. 4.11.6.2-1)

where:
Lps = analytical plastic hinge length for integral concrete superstructures (in)
Ds = total depth of superstructure (in)

3.2.5 Reinforcing Steel Modeling

Localized joint rotations associated with hybrid systems can cause increased straining in reinforcing bars due to geometric compatibility. As the joint opens, the rotation must be accommodated in the reinforcing bar with the bar being fixed at both ends. These additional strain demands caused by geometric loading can be accounted for by a conservative reduction in the ultimate tensile strain considered. The following recommended addition to Article 8.4.2 of the 2009 LRFD SGS accounts for this geometric loading in combination with the traditional reduced ultimate tensile strain (1):

For hybrid connections, the reduced ultimate tensile strain, ε_su^R, shall equal one-half the ultimate tensile strain, ε_su.

3.2.6 Plastic Moment Capacity for SDC B, C, and D

The current provisions for the determination of plastic moment capacities of ductile concrete members are sufficient for CIP and precast emulative systems. However, the intentional debonding of post-tensioning and reinforcement within a hybrid system creates complications in the application of the existing provisions. With discrete joint rotations and distributed straining of steel elements, moment-curvature analysis cannot be directly implemented for hybrid concrete members. Therefore, additional provisions are required.

Debonded elements and associated distributed straining can be accounted for using moment-rotation analyses. The premise of moment-rotation analysis is similar to that of moment-curvature analysis where strain compatibility is used to perform sectional analysis of the member. In a moment-curvature analysis, the strain distribution is considered linear and identical for both steel and concrete elements at the same location. Moment-rotation analysis makes a similar plain section assumption, but allows for varying strain at a given section by considering a fixed length over which an element accumulates strain. To account for the analysis procedure required for hybrid members, a new Article 8.5.2 is recommended for addition to the 2009 LRFD SGS (1):

For hybrid concrete members, the plastic moment capacity shall be calculated using a moment-rotation (M-θ) analysis based on the expected material properties. The moment-rotation analysis shall include the axial forces due to dead load together with the axial forces due to overturning as given in Article 4.11.4. The M-θ curve can be idealized with an elastic perfectly plastic response to estimate the plastic moment capacity of a member's cross section. The elastic portion of the idealized curve passes through the point marking the first reinforcing bar yield. The idealized plastic moment capacity is obtained by equating the areas between the actual and the idealized M-θ curves beyond the first reinforcing bar yield point similar to as shown in Figure 1.

In the execution of a moment-rotation analysis, the following are the recommended strain lengths for specific elements in the section. The concrete compressive strain length can be approximated as the neutral axis depth. In performing any strain-compatibility sectional analysis, the neutral axis depth is calculated to determine the cross-sectional deformation distribution. This depth can be used to define the region over which the concrete strain is approximately constant. The reinforcement strain length can be approximated as the length over which the reinforcement is intentionally debonded. The post-tensioning strain length can be approximated as the dis-
OCR for page 62
65 tance between anchorages as the tendons are debonded for where: their full length. c = distance from extreme compression fiber to the neutral axis at the reference yield point (in) Dc = column
diameter or smallest dimension in the direc- 3.2.7 Hybrid Performance Requirements tion of loading (in) To ensure that hybrid systems exhibit the desired lateral The next recommended modification is
to Article 8.8.2 of response characteristics of self-centering behavior and lim- the LRFD SGS. This provision specifies a minimum flexural ited damage, a variety of provisions are recommended for
contribution of mild reinforcement for hybrid systems to inclusion in the LRFD SGS. These provisions are intended to ensure that stable and predictable lateral response is achieved. limit various
design parameters within specific target ranges The traditional minimum reinforcement requirements are to produce the intended mode of lateral response. not applicable to hybrid systems and therefore
a new provi- The aim of the first set of provisions is to ensure that the sion is added. For hybrid systems, the minimum amount of contribution of reinforcement is such that the system will be
reinforcement ensures that the response predictions for ref- capable of a reduction in the residual deformations as com- erence yield are reasonable and the overall seismic demands pared to
traditional bridge systems. A series of limits is rec- as modified by Article 4.3.3 are valid. The recommended ommended for inclusion in Article 8.8.1 of the LRFD SGS as addition to Article 8.8.2 is
the following: outlined below. The first of the three equations ensures that the effective axial load acting on the column following a seis- The minimum area of longitudinal reinforcement for mic
event is large enough to force the column reinforcement hybrid compression members shall satisfy: back to a zero strain state, thereby aiding in the self-centering response. The second equation
limits the contribution of the Ms 0.25 Proposed LRFD SGS Eq. 8.8.2-5 reinforcement on the overall flexural capacity in order to limit My the potential residual deformations associated with
traditional where: bridge construction. Increases over this limit will produce lateral response that is similar to traditional CIP bridges with Ms = flexural moment capacity provided by longitudinal
reinforcement at reference yield moment (kip-ft) more noticeable damage and residual deformation. The third My = reference yield moment (kip-ft) equation is a limit on the neutral axis depth that is
intended to limit the magnitude of strain in the concrete compression toe To prevent the premature fracture of column reinforcement due to joint opening. in hybrid systems, the reinforcement must be
intentionally debonded to accommodate the localized joint opening at the The maximum longitudinal reinforcement for hybrid compression members shall be proportioned to satisfy ultimate displacement
capacity. A provision is recommended Equations 2 through 4: for inclusion in the LRFD SGS as Article 8.8.14 to explicitly enforce this requirement: 0.9PD + Ppse > 1.0 Proposed LRFD SGS Eq. 8.8.1-2 Ts
Longitudinal reinforcement in hybrid columns shall be intentionally debonded from the surrounding con- where: crete at hybrid column end connections. The minimum debonded length shall be such to
ensure that the strain PD = dead load axial load action on column (kip) in the longitudinal reinforcement does not exceed the Ppse = effective force in post-tensioning tendon at end reduced ultimate
tensile strain specified in Article 8.4.2 of service life (kip) at the column ultimate rotation capacity. Ts = resultant column reinforcement tension force associated with ultimate moment capacity
(kip) As was previously discussed, the current provisions for Ms short period displacement amplification are acceptable for use 0.33 Proposed LRFD SGS Eq. 8.8.1-3 My with hybrid systems within the
bounds of the provisions pre- sented in the LRFD SGS and herein. However, the influence where: of joint opening at the reference yield point must also be Ms = flexural moment capacity provided by
longitudinal accounted for in the mathematical modeling of hybrid con- reinforcement at reference yield moment (kip-ft) My = reference yield moment (kip-ft) crete members to ensure that the added
flexibility is consid- ered. The moment-rotation analysis performed in accordance c with the recommended Article 8.5 modifications provides a 0.25 Proposed LRFD SGS Eq. 8.8.1-4 Dc means to
approximate the added flexibility for equivalent
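The proportioning limits above (Proposed Eqs. 8.8.2-5 and 8.8.1-2 through 8.8.1-4) reduce to four arithmetic checks. The sketch below collects them in one function; the input values in the usage lines are hypothetical trial numbers, not values from this report.

```python
def check_hybrid_column_limits(Ms, My, PD, Ppse, Ts, c, Dc):
    """Evaluate the proposed hybrid column proportioning limits.

    Ms, My in kip-ft; PD, Ppse, Ts in kip; c, Dc in inches.
    Returns a dict mapping each limit to True (satisfied) or False.
    """
    return {
        "Eq. 8.8.2-5: Ms/My >= 0.25 (minimum reinforcement)": Ms / My >= 0.25,
        "Eq. 8.8.1-2: (0.9*PD + Ppse)/Ts > 1.0 (re-centering)": (0.9 * PD + Ppse) / Ts > 1.0,
        "Eq. 8.8.1-3: Ms/My <= 0.33 (maximum reinforcement)": Ms / My <= 0.33,
        "Eq. 8.8.1-4: c/Dc <= 0.25 (neutral axis depth)": c / Dc <= 0.25,
    }

# Hypothetical trial section: Ms = 900 kip-ft, My = 3000 kip-ft,
# PD = 800 kip, Ppse = 600 kip, Ts = 1200 kip, c = 14 in, Dc = 60 in
checks = check_hybrid_column_limits(900.0, 3000.0, 800.0, 600.0, 1200.0, 14.0, 60.0)
```

All four checks pass for these trial numbers (Ms/My = 0.30, restoring ratio = 1.10, c/Dc = 0.23).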
To account for the added flexibility, the effective moment of inertia can be modified based on the effective section properties calculated using moment-rotation analysis. The recommended Article 5.6.6 of the LRFD SGS provides this requirement:

    The effective moment of inertia for calculation of elastic flexural deformations for hybrid bridge columns can be taken equal to the gross moment of inertia. For mathematical modeling, the increase in flexibility at reference yield due to joint opening shall be considered. The influence of joint rotation shall be determined in accordance with the provisions of Article 8.5 using moment-rotation analysis. For equivalent elastic analysis, Ieff shall be decreased to account for the additional flexibility due to joint rotation.

3.2.8 Superstructure Capacity for Longitudinal Direction, SDCs C and D

Superstructure Demand

As discussed in relation to the recommended modifications to the 2009 LRFD SGS, vertical seismic demands can play a significant role in the performance of integral precast bridge systems (1). Therefore, additional recommendations were specified for the development and consideration of vertical ground motions in the design of integral precast bridges. For longitudinal response, seismic actions are distributed into the superstructure based on an effective width calculated in accordance with Article 8.10. However, for vertical demands, the seismic loading can be distributed across the entire width of the bridge. To account for this, the recommended addition to Article 8.10 is the following:

    Vertical seismic demands determined in accordance with Article 4.7.2 shall be distributed to the entire width of the superstructure. The demands associated with the column overstrength moment, Mpo, shall be considered concurrently with vertical seismic demands as specified in Article 4.7.2.

Minimum Superstructure Rotation Capacity

The use of capacity design procedures cannot ensure that a superstructure system does not experience loads in excess of the superstructure capacity when considering actual vertical seismic motions and seismically induced foundation movements. The potential for seismically induced relative settlements may induce substantial geometrically driven demands on a bridge superstructure system. These mechanisms can result in loadings in the superstructure that may cause inelastic superstructure action. To ensure that the superstructure can accommodate a limited amount of inelastic rotation demand, the superstructure to bent cap connection should be capable of experiencing a defined level of rotation demand. The intent of the following recommended Article 8.10.3 is to ensure that the superstructure can resist a limited amount of inelastic action:

    The superstructure to bent-cap connection shall have plastic rotation capacity equal to or greater than 0.01 radians. The plastic rotation capacity shall be calculated using the moment-curvature analysis required per Article 8.9 and the analytical plastic hinge length for superstructures as defined in Article 4.11.6.

Torsional Design for Open Soffit Superstructures

CIP integral bridge systems traditionally have a top deck slab and bottom soffit slab that provide reliable distribution of column overstrength demands into the superstructure. However, precast systems without a soffit slab cannot transfer the seismic demands through the same mechanism. The column flexural overstrength demands must be transferred into the superstructure by way of torsional response of the bent cap. Commonly used torsional mechanisms cannot develop over the short distance between the face of the column and girder. Instead, a modified torsional response must be considered. The following new Article 8.10.4 requires the explicit consideration of a torsional transfer mechanism for open soffit systems:

    The transfer of column overstrength moment, Mpo, and associated shear and axial load via torsional mechanisms must be explicitly considered in the superstructure design for open soffit structures.

Shear Design for Integral Precast Superstructures

The potential for inelastic superstructure response due to vertical motion and seismically induced settlement was mentioned in the discussion of the minimum superstructure rotation capacity. The bottom flanges of precast girders in the superstructure should be detailed to accommodate the potential inelastic actions without degradation of the compression zone. Therefore, the use of closed hoops is recommended as a means to enhance the integrity of the girder flanges in the event of inelastic loading. The recommended addition to the 2009 LRFD SGS (1) is the following:

    The bottom flange of integral precast girders shall be reinforced with closed hoops within a region extending from the face of the bent cap a distance equal to the superstructure depth. These hoops shall be spaced with the girder shear reinforcement, with spacing not to exceed 8 in. The hoops shall be the same size as the girder shear reinforcement, with a minimum bar size of No. 4.
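The 0.01-radian requirement of the recommended Article 8.10.3 can be checked directly from a moment-curvature analysis by applying the superstructure plastic hinge length of Proposed Eq. 4.11.6.2-1 (Lps = 0.5 Ds). A minimal sketch, with the superstructure depth and curvature values chosen for illustration only:

```python
def superstructure_plastic_rotation(Ds, phi_u, phi_y):
    """Plastic rotation capacity: theta_p = Lps * (phi_u - phi_y),
    with Lps = 0.5 * Ds per Proposed LRFD SGS Eq. 4.11.6.2-1.

    Ds in inches; curvatures phi_u, phi_y in 1/in; returns radians.
    """
    Lps = 0.5 * Ds
    return Lps * (phi_u - phi_y)

# Hypothetical 72-in-deep superstructure with ultimate and yield
# curvatures taken from a moment-curvature analysis (illustrative)
theta_p = superstructure_plastic_rotation(Ds=72.0, phi_u=4.0e-4, phi_y=1.0e-4)
meets_article_8_10_3 = theta_p >= 0.01  # 0.01-radian minimum
```

Here theta_p works out to about 0.0108 rad, so the 0.01-radian minimum is met for these assumed curvatures.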
Experimental results highlighted in Chapter 2, and described in detail in the attachments, indicate the importance of a well-developed shear transfer mechanism at the girder to bent cap connection. The potential for concentrated joint opening during seismic loading will result in a significant decrease in the effective shear depth across the joint that must be considered in design. The shear reinforcement can be distributed in the superstructure based on an assumed strut mechanism with a 30-deg maximum compression strut angle. Most importantly, the shear reinforcement must be extended as close to the top of the deck as possible while still satisfying concrete cover requirements. The recommended shear detailing uses headed reinforcement to ensure the sufficient anchorage of the reinforcement within the short distance allocated. The following is a recommended addition to the 2009 LRFD SGS (1) as Article 8.10.5:

    For integral precast superstructures with girders discontinuous at the face of the bent cap, headed shear reinforcement shall be placed within a distance from the face of the bent cap equal to 1.75 times the neutral axis depth at nominal capacity as determined in accordance with Article 8.9. The headed shear reinforcement within this distance shall be capable of resisting the factored shear demand including effects of vertical seismic loading in accordance with Article 4.7.2. The shear demand shall be calculated considering the direction of loading and shears generated during positive flexural loading of the superstructure. This reinforcement shall extend as close to the top of the deck as possible while maintaining required concrete cover dimensions.

3.2.9 Joint Definition

Specimen test results and related research provide a sufficient basis for safe, constructible, durable, and economical design of nonintegral emulative precast bent cap systems using grouted duct or cap pocket connections and hybrid connections in all SDC levels. However, as shown in Chapter 2, testing was limited to interior joints of multicolumn bent caps. Therefore, proposed provisions for all SDCs follow the precedent for CIP joints found in Article 8.13.4.1 of the 2009 LRFD SGS (1) in limiting specifications to interior joints of multicolumn bent caps:

    Interior joints of multicolumn bents shall be considered "T" joints for joint shear analysis. Exterior joints shall be considered knee joints and require special analysis and detailing that are not addressed herein, unless special analysis determines that "T" joint analysis is appropriate for an exterior joint based on the actual bent configuration.

Specifications for knee joints in CIP and precast bent cap systems should be developed.

3.2.10 Joint Performance

SDCs C and D

The joint performance for SDCs C and D is stated as follows:

    Moment-resisting connections shall be designed to transmit the maximum forces produced when the column has reached its overstrength capacity, Mpo.

This matches the existing provision for SDCs C and D in the 2009 LRFD SGS (1).

SDC B

The joint performance for SDC B is stated as follows:

    Moment-resisting connections shall be designed to transmit the unreduced elastic seismic forces in columns where the column moment does not reach the plastic moment, Mp, and shall be designed to transmit the column forces associated with the column overstrength capacity, Mpo, where the plastic moment, Mp, is reached.

Based on Article 8.3.2 of the 2009 LRFD SGS, this provision requires that connections be designed to transmit the lesser of the forces produced by Mpo or the unreduced elastic seismic forces (1). However, when the elastic seismic moment reaches the plastic moment, Mp, significant plastic hinging may develop. Therefore, it is conservatively required in such cases that connections be designed to transmit the forces produced by Mpo. For SDC B, the column section may be designed and governed by load cases other than seismic. This proposed article is an application of Articles 8.3.2 and C8.3.2 of the 2009 LRFD SGS, which recognize that SDC B bridges may be subjected to seismic forces that can cause yielding of the columns and limited plastic hinging, as they are designed and detailed to achieve a displacement ductility, µD, of at least 2.0 (1). According to Article C4.8.1, SDC B columns are targeted for a drift capacity corresponding to concrete spalling. Article 4.8.1 provides an approximate equation for local displacement capacity, providing an approach that limits the required seismic analysis (i.e., expands the extent of a "No Analysis" zone). Thus, based on Article 4.11.1, joint shear checks and full capacity design using plastic overstrength forces are not required. This more liberal practice, as stated in Article C4.11.1, may be adopted for CIP joints. However, owners may also choose to implement the more conservative capacity-protection requirements given in Article 8.9.

Full and limited ductility specimen tests indicated initial concrete spalling of the column at drift ratios ranging from 0.9% to 1.8% (µ1.5 to µ3). Specimens used a moderate amount of column longitudinal reinforcement--1.58%. For this case, joint shear cracks developed at loads less than effective yield (µ1) for all specimens, at principal tensile stresses that ranged from 2.95√f′c psi to 4.3√f′c psi, close to the 3.5√f′c psi assumed by the 2009 LRFD SGS (1).
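The SDC B force-transfer rule above amounts to taking the lesser of the elastic and overstrength demands, with Mpo governing once the elastic moment reaches Mp. A small sketch of that selection logic (the moment values in the usage lines are hypothetical):

```python
def sdc_b_connection_design_moment(M_elastic, Mp, Mpo):
    """Design moment for an SDC B moment-resisting connection.

    Transmit the unreduced elastic seismic moment while it stays below
    the plastic moment Mp; once Mp is reached, plastic hinging may
    develop, so design for the overstrength moment Mpo instead.
    """
    return M_elastic if M_elastic < Mp else Mpo

# Hypothetical column: Mp = 3000 kip-ft, Mpo = 3600 kip-ft
low_seismic = sdc_b_connection_design_moment(2500.0, 3000.0, 3600.0)   # elastic governs
high_seismic = sdc_b_connection_design_moment(3200.0, 3000.0, 3600.0)  # Mpo governs
```

In the first case the connection is designed for the 2,500 kip-ft elastic moment; in the second, hinging is expected and the 3,600 kip-ft overstrength moment governs.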
At µ2, all specimens had significantly exceeded 3.5√f′c and reached forces that were 88% to 100% of the maximum overall force induced in the joint during testing. Furthermore, the CPLD specimen, designed according to SDC B detailing requirements (i.e., no joint reinforcement other than the steel pipe), exhibited extensive joint shear cracking. As reported in Matsumoto 2009 (26), the absence of joint stirrups--in accordance with SDC B design--was the main cause of the development and growth of joint shear cracks. This indicates that SDC B joint design for precast connections should be based on a check of principal tensile stresses and that all SDC B joints should include at least minimum joint shear reinforcement, defined as transverse reinforcement and joint stirrups.

The proposed LRFD SGS adopts the more conservative provisions that principal tensile stresses be checked for SDC B and that joint design depend on this check. These provisions help ensure that the precast bent cap connections accommodate forces in an essentially elastic manner and do not become a weak link in the earthquake resisting system (Articles 4.11.1 and C4.11.1, 2009 LRFD SGS) (1).

SDC A

The joint performance for SDC A is stated as the following:

    Moment-resisting connections shall be designed to transmit the unreduced elastic seismic forces in columns.

According to the 2009 LRFD SGS, bridges designed for SDC A are expected to be subjected to only minor seismic displacements and forces; therefore, a force-based approach is specified to determine unreduced elastic seismic forces, in lieu of a more rigorous displacement-based analysis (Articles 4.1 and 4.2, 2009 LRFD SGS) (1).

However, some SDC A bridges may be exposed to seismic forces that may induce limited inelasticity, particularly in the columns. For this reason, Article 8.2 states that when SD1 is greater than or equal to 0.10 but less than 0.15, minimum column shear reinforcement shall be provided in accordance with Article 8.6.5 for SDC B, subject to Article 8.8.9 for the length over which this reinforcement is to extend. Although Article 8.8.9 does not specify placement of transverse column reinforcement into the joint, Articles 5.10.11.4.1e and 5.10.11.4.3 of the AASHTO LRFD Bridge Design Specifications (4th edition) with 2008 and 2009 Interims, referenced by the alternative provisions in Articles 8.2 and 8.8.9, specify placement of transverse reinforcement into the joint for a distance not less than one-half the maximum column dimension or 15.0 in from the face of the column connection into the adjoining member (29).

According to these alternative provisions, when SD1 is greater than or equal to 0.10 but less than 0.15, minimum transverse reinforcement is required for CIP joints. When SD1 is less than 0.10, transverse shear reinforcement is not required. For all values of SD1 in SDC A, the designer may choose to conservatively provide joint reinforcement as specified for SDC B, although SDC A is typically considered a "No Analysis" region for which seismic analysis is not required (2009 LRFD SGS, Article C4.6) (1).

Precast bent cap connections for SDCs B, C, and D are designed and detailed to provide sufficient reinforcement for force transfer through the joint and bent cap. The precast bent cap design provisions for SDC A, including minimum provisions, are more liberal than those for precast bent caps for SDC B and may be considered "No Analysis" requirements for SDC A precast bent cap systems. They are deemed appropriate for all values of SD1 in SDC A.

When SD1 is less than 0.10, alternative precast bent cap connections developed for nonseismic regions may be used. Figure 3.1 and Figure 3.2 show several nonintegral precast bent cap details developed by Matsumoto et al. (8) for grouted duct, grout pocket, and bolted connections (7). Other references such as Brenes et al. (36) provide additional recommendations for detailing nonintegral precast bent cap connections using grouted ducts. It is recommended that minimum vertical stirrups within the joint be used, as required for SDC A details. In addition, column longitudinal reinforcement should be extended into the connection as close as practically possible to the opposite face of the bent cap.

3.2.11 Joint Proportioning

Two provisions should be satisfied in proportioning bent cap joints: (1) provide cross-sectional dimensions to satisfy limits on principal tensile and compression stresses and (2) provide sufficient anchorage length to develop column longitudinal reinforcement in the bent cap joint under seismic demand.

Principal Stress Requirements

SDCs C and D. Principal tensile and compression stresses should be checked for SDCs C and D as required by Article 8.13.2 of the 2009 LRFD SGS for CIP connections (1).

SDC B. As mentioned previously, to ensure that SDC B structures using precast bent caps are designed and detailed to achieve a displacement ductility, µD, of at least 2.0 (Article C8.3.2, 2009 LRFD SGS) (1), the proposed provisions conservatively require that SDC B joints be proportioned based on a check of principal stress levels. The provisions of Article 8.13.2 of the 2009 LRFD SGS are thus used for joint proportioning, except that the design moment used in determination of principal stresses should be the lesser of Mpo or the unreduced elastic seismic column moment.
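The SD1 thresholds described above for minimum detailing of CIP joints in low-seismicity regions can be summarized as a simple decision rule. A sketch under the assumption that SDC A corresponds to SD1 < 0.15; the function name and return strings are illustrative, not specification text:

```python
def sdc_a_minimum_joint_detailing(SD1):
    """Summarize the alternative minimum-detailing thresholds for CIP
    joints described in the surrounding discussion (Articles 8.2/8.8.9
    and the referenced LRFD BDS provisions)."""
    if SD1 < 0.10:
        # alternative nonseismic-region connection details permitted
        return "transverse shear reinforcement not required"
    if SD1 < 0.15:
        # minimum transverse reinforcement per the SDC B provisions
        return "minimum transverse reinforcement required"
    return "not SDC A: apply SDC B, C, or D joint provisions"

low = sdc_a_minimum_joint_detailing(0.05)
upper = sdc_a_minimum_joint_detailing(0.12)
```

As the discussion notes, the designer may always conservatively provide SDC B joint reinforcement for any SD1 in SDC A, so this rule is a floor, not a ceiling.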
[Figure 3.1. Alternative precast bent cap connections for SDC A (SD1 < 0.10): (a) Grouted Duct; (b) Bolted (7).]

SDC A. Check of principal stresses is not required for SDC A.

Minimum Anchorage Length

Column longitudinal bars should be extended into joints a sufficient depth to ensure that the bars can achieve approximately 1.4 times the expected yield strength of the reinforcement, i.e., a level associated with extensive plastic hinging and strain hardening up to the expected tensile strength. For SDCs C and D, Article 8.8.4 of the 2009 LRFD SGS requires that column longitudinal reinforcement be extended into cap beams as close as practically possible to the opposite face of the cap beam and that, for seismic loads, the anchorage length into the cap beam satisfy the following (1):

    lac ≥ 0.79 dbl fye / √f′c    (2009 LRFD SGS Eq. 8.8.4-1)

where:
lac = anchored length of longitudinal column reinforcing bars into cap beam (in)
dbl = diameter of longitudinal column reinforcement (in)
fye = expected yield stress of longitudinal column reinforcement (ksi)
f′c = nominal compressive strength of bent cap concrete (ksi)
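For a sense of scale, Eq. 8.8.4-1 can be evaluated for a #11 bar (dbl = 1.41 in) with fye = 68 ksi and f′c = 6 ksi. These inputs are illustrative choices, not values taken from the report:

```python
import math

def lac_min_sgs(dbl, fye, fc):
    """2009 LRFD SGS Eq. 8.8.4-1: minimum anchored length (in).

    dbl = bar diameter (in); fye = expected yield stress (ksi);
    fc = nominal bent cap concrete compressive strength (ksi).
    """
    return 0.79 * dbl * fye / math.sqrt(fc)

# #11 bar, expected yield 68 ksi, 6-ksi bent cap concrete
lac = lac_min_sgs(dbl=1.41, fye=68.0, fc=6.0)  # about 31 in
```

For these inputs the required anchorage works out to roughly 31 in, i.e., about 22 bar diameters.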
[Figure 3.2. Alternative precast bent cap connections for SDC A (SD1 < 0.10): (a) Grouted Pocket (Double Line); (b) Grouted Pocket (Single Line) (7).]

Prior research by Matsumoto et al. (7, 8) and Mislinski (9) on anchorage of reinforcing bars in grouted ducts--confirmed by the NCHRP Project 12-74 grouted duct specimen (22)--indicates that the following equation can be conservatively used for seismic applications:

    lac ≥ 2 dbl fye / f′cg    (Proposed LRFD SGS Eq. 8.15.2.2.2-1)

where:
lac = anchored length of longitudinal column reinforcing bar into grouted duct (in)
dbl = diameter of longitudinal column reinforcement (in)
fye = expected yield stress of longitudinal column reinforcement (ksi)
f′cg = nominal compressive strength of grout (cube strength) (ksi)

The maximum grout compressive strength used in Eq. 8.15.2.2.2-1 should be limited to 7,000 psi, even where the specified grout compressive strength (based on 2-in cubes) exceeds 7,000 psi. In addition, this equation applies to #11 column reinforcing bars or smaller.

Anchorage of reinforcing bars in cap pocket connections can be based on prior precast bent cap research on grout pocket connections using trapezoidal prism-shaped pockets without a stay-in-place form (7, 8). Anchorage equations were modified by removing a 0.75 factor that accounted for extensive splitting cracks at reentrant corners of grout pockets. Such cracking did not develop for the cylindrical-shaped cap pocket connections for CPFD and CPLD that used steel pipes as stay-in-place forms.
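Proposed Eq. 8.15.2.2.2-1, together with the 7,000-psi cap on usable grout strength and the #11 applicability limit noted above, can be sketched as follows (the usage inputs are illustrative):

```python
def lac_min_grouted_duct(dbl, fye, fcg):
    """Proposed LRFD SGS Eq. 8.15.2.2.2-1: minimum anchored length (in)
    of a column bar in a grouted duct.

    dbl = bar diameter (in), limited to #11 (1.41 in) or smaller;
    fye = expected yield stress (ksi);
    fcg = nominal grout compressive strength, 2-in cube basis (ksi),
          with the usable value capped at 7 ksi as noted above.
    """
    if dbl > 1.41:
        raise ValueError("equation applies to #11 bars or smaller")
    fcg_usable = min(fcg, 7.0)
    return 2.0 * dbl * fye / fcg_usable

lac_6ksi = lac_min_grouted_duct(1.41, 68.0, 6.0)  # about 32 in
lac_8ksi = lac_min_grouted_duct(1.41, 68.0, 8.0)  # grout capped at 7 ksi
```

Note that specifying a stronger grout than 7,000 psi buys no reduction in anchorage length under this equation.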
The following equation--confirmed by CPFD and CPLD results--can be used for cap pocket connections (7, 8, 23, 26):

    lac ≥ 2.3 dbl fye / f′c    (Proposed LRFD SGS Eq. 8.15.2.2.2-2)

where:
lac = anchored length of longitudinal column reinforcing bars into cap pocket (in)
dbl = diameter of longitudinal column reinforcement (in)
fye = expected yield stress of longitudinal column reinforcement (ksi)
f′c = nominal compressive strength of cap pocket concrete fill (ksi)

The anchored length includes the length of bar within the steel pipe and within the portion of the bent cap between the bottom of the steel pipe and the bent cap soffit. Maximum compressive strength for the concrete fill used in Eq. 8.15.2.2.2-2 should be limited to 7,000 psi, even where the specified concrete fill compressive strength exceeds 7,000 psi. In addition, this equation applies to #11 column reinforcing bars or smaller.

As for CIP connections, the proposed specifications for grouted duct and cap pocket connections require that column longitudinal reinforcement be extended into precast bent caps as close as practically possible to the opposite face of the bent cap.

Only minor slip of column longitudinal bars was observed in the full ductility test specimens (CIP, GD, and CPFD). For example, for the CPFD specimen, bar slip contributed less than 7% to fixed end rotation, and bar slip was comparable to that of the CIP specimen (21, 23). However, significant bar slip was observed in the CPLD specimen, as summarized in Chapter 2 and detailed in Matsumoto 2009 (26). The level of bar slip observed is attributed to significant shear cracking in the joint that developed due to the lack of joint reinforcement, especially vertical stirrups. The proposed LRFD SGS requires at least minimum joint reinforcement (both transverse confinement and vertical stirrups) for all SDC levels.

In addition, the embedment depth of the CPLD column bars into the cap pocket was 26% less than that required by Eq. 8.15.2.2.2-2, due to the relatively low compressive strength of the concrete fill. Article 8.13.8.3 of the proposed LRFD Bridge Construction Specifications (4) requires a minimum 500-psi margin between the compressive strength of the bent cap and the precast connection concrete fill (or grout). This margin accounts for the likelihood that the actual bent cap compressive strength will exceed its specified strength and the possibility of a low compressive strength of the grout or concrete fill. This provision is intended to ensure that the connection does not become a weak link in the system and helps limit bar slip.

Comparison of Anchorage Length Equations. Figure 3.3 compares seismic anchorage (or development) length requirements for anchoring column longitudinal reinforcement into bent cap joints, based on the equations given in Table 3.1. For simplicity, anchorage length, lac, is used herein for both anchorage and development lengths (lac and ld) applied to anchorage of column bars in a joint. Provisions in the 2009 LRFD SGS (Article 8.8.4) and LRFD BDS (Article 5.10.11.4.3) apply to CIP connections (29, 1).

[Figure 3.3. Anchorage length versus compressive strength--comparison of equations. The figure plots anchorage length/bar diameter (lac/db) against compressive strength of concrete or grout (4,000 to 8,000 psi) for: 2009 LRFD SGS; AASHTO LRFD ×1.25; UT Matsumoto, CP (GP×0.75); UT/CSUS Matsumoto, GD; UT Brenes, GD (0.75, 1.0); UT Brenes, GD (0.75, 1.3); and UW Steuck ×1.5, GD.]
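Proposed Eq. 8.15.2.2.2-2 and the 500-psi strength margin of Article 8.13.8.3 can be combined into one check. The inputs below are hypothetical, and the direction of the margin (connection fill specified stronger than the bent cap) is taken from the weak-link rationale above; treat that reading as an assumption:

```python
def lac_min_cap_pocket(dbl, fye, fc_fill):
    """Proposed LRFD SGS Eq. 8.15.2.2.2-2: minimum anchored length (in)
    of a column bar in a cap pocket; applies to #11 bars or smaller.

    fc_fill = nominal compressive strength of the pocket concrete
    fill (ksi), with the usable value capped at 7 ksi as noted above.
    """
    return 2.3 * dbl * fye / min(fc_fill, 7.0)

def strength_margin_ok(fc_fill_psi, fc_bent_cap_psi):
    """Article 8.13.8.3 margin check, as interpreted here: the
    specified strength of the connection fill or grout should exceed
    the bent cap concrete strength by at least 500 psi so that the
    connection does not become the weak link."""
    return fc_fill_psi - fc_bent_cap_psi >= 500.0

lac = lac_min_cap_pocket(1.41, 68.0, 6.0)    # about 36.8 in for a #11 bar
margin = strength_margin_ok(6500.0, 6000.0)  # 500-psi margin satisfied
```

For the same #11 bar and 6-ksi fill, the cap pocket equation demands a somewhat longer embedment than the grouted duct equation, reflecting the 2.3 versus 2.0 coefficient.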
Table 3.1. Comparison of anchorage length equations.

Reference | lac/db or ld/db (#11 bar; f′c or f′cg = 6,000 psi) | Ratio to 2009 LRFD SGS
2009 AASHTO LRFD SGS [15] | 30.9 | 1.00
AASHTO LRFD BDS(1) [27] | 59.8 | 1.94
UT Matsumoto, Cap Pocket(2) [3, 4] | 36.0 | 1.17
UT/CSUS Matsumoto, Grouted Duct [3, 4] | 32.0 | 1.04
UT Brenes, Grouted Duct(3) [34] | 36.7 (0.75, 1.0) | 1.19
UT Brenes, Grouted Duct(3) [34] | 47.7 (0.75, 1.3) | 1.54
UW Steuck, Grouted Duct(4) [35] | 14.9 | 0.48

(1) Includes 1.25 seismic factor.
(2) Embedment includes 0.75 factor.
(3) fs,cr taken as fye; values in parentheses are the assumed (group-effect, duct-material) factors: 1.0 for galvanized steel duct, 1.3 for plastic duct.
(4) Includes 1.5 seismic factor; taken as 1.5 in.

The grouted duct and cap pocket column bar anchorage equations (Eq. 8.15.2.2.2-1 and Eq. 8.15.2.2.2-2) are recommended for use in precast bent cap connections. Additional equations for grouted duct connections based on recent research are also provided (28, 37).

Table 3.1 also compares the ratio of anchorage length to bar diameter (lac/db) for #11 rebar and a compressive strength of 6,000 psi (grout or concrete) as an example. In addition, the lac/db ratio for each equation is compared to that of the 2009 LRFD SGS (Eq. 8.8.4-1), which is taken as a reference (1). Figure 3.3 and Table 3.1 indicate that the LRFD BDS equation is extremely conservative, requiring nearly twice the anchorage length required by the 2009 LRFD SGS. The proposed grouted duct and cap pocket (CP) equations are slightly more conservative (4% and 17%, respectively) than the 2009 LRFD SGS equation for the example, although Figure 3.3 shows the change in anchorage length with compressive strength. The proposed grouted duct equation is based on both tension cyclic and monotonic tension tests and includes a factor of safety of at least 2.0. In addition, this equation can be conservatively used for epoxy-coated bars. The use of f′cg rather than √f′cg in the denominator is explained in Matsumoto et al. (7).

Brenes et al. (36) extended the grouted duct research of Matsumoto et al. (8), examining group effects and duct material, among other variables, through two modification factors. Values of the group-effect factor range from 0.45 to 0.9 for typical configurations of bars in a grouted duct connection. The case of a group-effect factor of 0.75 and a duct-material factor of 1.0 shown in Figure 3.3 represents bar anchorage that accounts for group effects based on a moderate number of grouted column bars simultaneously subjected to tension under the design load combinations (0.75) as well as galvanized steel duct material (1.0). The equation in Brenes et al. for grouted ducts is slightly more conservative than that of Matsumoto et al. for the assumed values of the two factors. Significantly, Brenes et al. found that the required anchorage length increased by 30% when polyethylene or polypropylene (plastic) ducts are used instead of steel. Tension cyclic tests were not conducted. The University of Washington equation, which is multiplied by the recommended 1.5 seismic factor (37), results in an exceptionally short development length and is not recommended for use in precast bent cap design.

SDCs B, C, and D. Based on the foregoing development, Eq. 8.15.2.2.2-1 and Eq. 8.15.2.2.2-2 are proposed for anchorage of column bars in grouted duct and cap pocket connections, respectively.
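Because Eq. 8.8.4-1 uses √f′c in the denominator while the proposed equations use f′cg or f′c directly, the relative conservatism shifts with compressive strength. At 6 ksi the ratios can be computed directly; fye cancels because it enters all three equations the same way:

```python
import math

# Anchorage length per bar diameter at f' = 6 ksi; fye is common to
# all three equations, so it drops out of the ratios
fye, f = 68.0, 6.0
sgs = 0.79 * fye / math.sqrt(f)  # 2009 LRFD SGS Eq. 8.8.4-1
gd = 2.0 * fye / f               # Proposed Eq. 8.15.2.2.2-1 (grouted duct)
cp = 2.3 * fye / f               # Proposed Eq. 8.15.2.2.2-2 (cap pocket)

ratio_gd = gd / sgs  # about 1.03
ratio_cp = cp / sgs  # about 1.19
```

The grouted duct ratio lands on the roughly 4% figure quoted above; the cap pocket ratio computes to about 19% against the 17% tabulated, with the small difference traceable to the factors noted in the Table 3.1 footnotes.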
SDC A. SDC A incorporates the same requirements as those for SDCs B, C, and D except that the nominal yield stress of the column longitudinal reinforcement may be used in lieu of the expected yield stress. This allows for a slightly reduced safety margin due to the significantly lower seismic demand and limited inelasticity in the columns.

3.2.12 Minimum Joint Shear Reinforcement

SDCs C and D. Minimum joint shear reinforcement refers to transverse reinforcement within the joint region in the form of column reinforcement, spirals, hoops, intersecting spirals or hoops, or column transverse or exterior transverse reinforcement continued into the bent cap. For precast connections, minimum transverse joint reinforcement is required to help ensure that the connection does not become a weak link in a precast bent cap system. Transverse reinforcement for a grouted duct connection is the same as that for a CIP connection. However, for a cap pocket connection, the steel pipe serves as the transverse reinforcement.

For SDCs C and D, the minimum joint shear reinforcement for precast and hybrid connections is determined using essentially the same basis as that used for CIP connections. If the nominal principal tensile stress in the joint, pt, is less than 0.11 √f′c, then the transverse reinforcement in the joint, ρs, must satisfy the following equation, and no additional reinforcement within the joint is required:

    ρs ≥ 0.11 √f′c / fyh        2009 LRFD SGS Eq. 8.13.3-1

where:
    fyh = nominal yield stress of transverse reinforcing (ksi)
    f′c = nominal compressive strength of concrete (ksi)
    ρs = volumetric reinforcement ratio of transverse reinforcing provided within the cap

Where the principal tensile stress in the joint, pt, is greater than or equal to 0.11 √f′c, then the transverse reinforcement in the joint, ρs, must satisfy both Eq. 8.13.3-1 and the following equation:

    ρs ≥ 0.40 Ast / lac²        2009 LRFD SGS Eq. 8.13.3-2

where:
    Ast = total area of column longitudinal reinforcement anchored in the joint (in²)
    lac = length of column longitudinal reinforcement embedded into the bent cap (in)

For this case, additional joint reinforcement is also required. The 2009 LRFD SGS requires only Eq. 8.13.3-1 to be satisfied (1). However, the proposed specifications require that the larger of Eq. 8.13.3-1 and Eq. 8.13.3-2 be used because the transverse reinforcement requirement of Eq. 8.13.3-2 can become less than that of Eq. 8.13.3-1 in some cases, as shown in this research.

SDC B. The proposed provisions for precast connections in SDC B require the same check of principal tensile stress to determine transverse reinforcement in the joint as is required for SDCs C and D. However, the 2009 LRFD SGS does not include provisions for minimum transverse reinforcement for CIP structures in SDC B (1). The SDC B design requirement for CIP would then default to Article 5.10.11.3 of the 2009 LRFD BDS for Seismic Zone 2, which refers the designer to Article 5.10.11.4.3 (Seismic Zones 3 and 4). Article 5.10.11.4.3 requires the following:

    Column transverse reinforcement, as specified in Article 5.10.11.4.1d, shall be continued for a distance not less than one-half the maximum column dimension or 15.0 in from the face of the column connection into the adjoining member.

It is judged that this reinforcement is not adequate for limited ductility connections. It is therefore recommended that minimum joint transverse reinforcement requirements also be established for CIP bridges in SDC B. For limited or simplified seismic analysis (i.e., a "No Analysis"-type approach), minimum reinforcement satisfying Eq. 8.13.3-1 of the 2009 LRFD SGS is recommended (1).

SDC A. For SDC A, principal stresses are not checked, but minimum joint shear reinforcement is proposed for precast connections. This is simple yet conservative and should be considered good detailing practice. Such reinforcement is not required for CIP joints per the 2009 LRFD SGS, but is recommended (1).

Grouted Duct Connections

SDCs B, C, and D. Grouted duct connections use the same basis as CIP connections in SDCs C and D, with the additional requirement that the spacing of transverse reinforcement not exceed 0.3Ds nor 12 in. This is intended to provide a reasonable number of hoops within the joint when the minimum requirement governs.

SDC A. Joint transverse reinforcement provisions conservatively match the minimum requirements for SDC B.

Cap Pocket Connections

SDCs B, C, and D--Basic Equation for Pipe Thickness. For cap pocket connections, the thickness of the corrugated
steel pipe, tpipe, is based on providing shear resistance to the joint that is approximately the same as that provided by the hoops required for CIP joints:

    Cap pocket connections shall use a helical, lock-seam, corrugated steel pipe conforming to ASTM A760 to form the bent cap pocket. A minimum thickness of corrugated steel pipe shall be used to satisfy the transverse reinforcement ratio requirements specified in Article 8.15.3.1. The thickness of the steel pipe, tpipe, shall not be taken less than that determined by Eq. 1:

        tpipe ≥ max[ FH / (Hp fyp cos θ), 0.060 in. ]        Proposed LRFD SGS Eq. 8.15.3.2.2-1

    in which:

        FH = nh Asp fyh        Proposed LRFD SGS Eq. 8.15.3.2.2-2

    where:
        FH = nominal confining hoop force in the joint (kips)
        Hp = height of steel pipe (in)
        fyp = nominal yield stress of steel pipe (ksi)
        θ = angle between horizontal axis of bent cap and helical corrugation or lock seam (deg)
        nh = number of transverse hoops in equivalent CIP joint
        Asp = area of one hoop reinforcing bar (in²)
        fyh = nominal yield stress of transverse reinforcement (ksi)

The derivation of this equation is provided in the CPT Attachment. As shown in the design examples provided in the attachments, the spacing of transverse joint hoops can be directly related to the number of hoops, nh, by the volumetric reinforcement ratio for transverse joint hoops, ρs, using Eq. 8.6.2-7 of the 2009 LRFD SGS (1). The maximum spacing requirements of 0.3Ds and 12 in do not apply to the determination of nh.

The minimum thickness of the steel pipe, tpipe, of 0.060 in corresponds to 16-gage steel pipe, which was used for the 18-in nominal diameter pipe in the cap pocket specimens (with a 20-in diameter column). As shown in Table 3.2, this is the thinnest gage typically available off the shelf for corrugated steel pipe. Other pipe thicknesses (nominal and tolerance range) are shown in Table 3.2, with specified and minimum values for coated steel sheet per ASTM A929 (25). Thicker pipes (gages 8, 7, and 5) are usually available through special order. Material costs increase roughly according to the weight shown in the last column of Table 3.2.

SDCs B, C, and D--Alternative Equation for Pipe Thickness. The following simplified equations, Eq. C8.15.3.2.2-1 and Eq. C8.15.3.2.2-2, may be used to conservatively determine pipe thickness, tpipe, in lieu of calculating the number of hoops in an equivalent CIP joint, nh, as the basis for determining pipe thickness. This avoids iteration in design calculations, but may result in thicker gage pipe used in design. Where the principal tensile stress in the joint, pt, specified in Article 8.15.2.1, is less than 0.11 √f′c, the thickness of the steel pipe, tpipe, may be determined from the following:

    tpipe ≥ 0.04 √f′c Dcp / (fyp cos θ)        Proposed LRFD SGS Eq. C8.15.3.2.2-1

where:
    f′c = nominal compressive strength of the bent cap concrete (ksi)
    Dcp = average diameter of confined cap pocket fill between corrugated pipe walls (in)
    fyp = nominal yield stress of steel pipe (ksi)
    θ = angle between horizontal axis of bent cap and pipe helical corrugation or lock seam (deg)

Table 3.2. Steel corrugated pipe thicknesses.

    Gage No. | Nominal (in) | Tolerance Range (in) | Specified (in) | Minimum (in) | Pounds per Square Foot
    16       | 0.0598       | 0.0648 to 0.0548     | 0.064          | 0.057        | 2.439
    14       | 0.0747       | 0.0797 to 0.0697     | 0.079          | 0.072        | 3.047
    12       | 0.1046       | 0.1106 to 0.0986     | 0.109          | 0.101        | 4.267
    10       | 0.1345       | 0.1405 to 0.1285     | 0.138          | 0.129        | 5.486
    8*       | 0.1644       | 0.1742 to 0.1564     | 0.168          | 0.159        | 6.875
    7*       | 0.1838       | 0.1883 to 0.1703     | No value       | No value     | 7.500
    5*       | 0.2092       | 0.2162 to 0.2022     | No value       | No value     | 8.750

    *Nonstandard size available by special order.
    Values refer to coated steel sheet thicknesses per ASTM A929 (25).
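The basic pipe-thickness check (Eq. 8.15.3.2.2-1 with the confining hoop force of Eq. 8.15.3.2.2-2) is a direct calculation once nh is known. The following is a minimal sketch, not design software; the function name and all numeric inputs (hoop count, bar area, pipe height, lock-seam angle) are illustrative assumptions, not values from the report:

```python
import math

def cap_pocket_pipe_thickness(n_h, Asp_in2, fyh_ksi, Hp_in, fyp_ksi, theta_deg):
    """Minimum corrugated pipe thickness (in) per the basic equation:
    FH = nh * Asp * fyh (Eq. 8.15.3.2.2-2), then
    tpipe >= max(FH / (Hp * fyp * cos(theta)), 0.060 in) (Eq. 8.15.3.2.2-1)."""
    FH = n_h * Asp_in2 * fyh_ksi                         # confining hoop force, kips
    t = FH / (Hp_in * fyp_ksi * math.cos(math.radians(theta_deg)))
    return max(t, 0.060)                                 # 0.060 in floor = 16-gage pipe

# Hypothetical joint: 5 equivalent #5 hoops (0.31 in^2 each, 60 ksi),
# 30-in pipe height, 33 ksi pipe, 10-degree lock-seam angle.
t_pipe = cap_pocket_pipe_thickness(5, 0.31, 60.0, 30.0, 33.0, 10.0)
```

A light joint demand simply returns the 0.060-in (16-gage) floor, which is why the minimum governs for small columns such as the 20-in specimen column.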
Where the principal tensile stress in the joint, pt, is greater than or equal to 0.11 √f′c, the thickness of the steel pipe, tpipe, may be determined from the larger of Eq. C8.15.3.2.2-1 and the following equation:

    tpipe ≥ 0.14 fyh Ast Dcp / (lac² fyp cos θ)        Proposed LRFD SGS Eq. C8.15.3.2.2-2

where:
    Ast = total area of column longitudinal reinforcement anchored in the joint (in²)
    Dcp = average diameter of confined cap pocket fill between corrugated pipe walls (in)
    fyh = nominal yield stress of transverse reinforcing (ksi)
    lac = anchored length of longitudinal column reinforcing bars into precast bent cap (in)
    fyp = nominal yield stress of steel pipe (ksi)
    θ = angle between horizontal axis of bent cap and pipe helical corrugation or lock seam (deg)

The derivations of these equations are provided in an attachment, together with a comparison of the influence of the different variables in these equations. For example, Figure 3.4 compares the pipe thicknesses required by Eq. 8.15.3.2.2-1, Eq. C8.15.3.2.2-1, and Eq. C8.15.3.2.2-2. For comparison, column diameters range from 24 in to 60 in; equivalent hoop sizes vary according to the column diameter; the column is assumed to have a longitudinal steel ratio, Ast/Acol, of 0.015; and the bent cap compressive strength is assumed to be 6,000 psi. Figure 3.4 reveals that (1) using the general (refined) equation results in the thinnest required pipe; (2) using the approximate equations (larger of the two equations, where principal tensile stress is greater than or equal to 0.11 √f′c) usually results in a pipe thickness one gage size larger than that required by the general equation, using the gage sizes given in Table 3.2; (3) a reasonable pipe thickness results in all cases; and (4) Eq. C8.15.3.2.2-1 governs over Eq. C8.15.3.2.2-2 for all but the largest column diameter (60 in).

Figure 3.5 compares the pipe thicknesses for column longitudinal steel ratios, Ast/Acol, of 0.010, 0.015, and 0.020. This figure shows the expected significant impact of Ast/Acol on required pipe thickness. It also shows that Eq. C8.15.3.2.2-2 results in thick gage pipes for larger columns, indicating that the designer may prefer to use the general equation in such conditions to minimize the required pipe thickness.

The CPT Attachment provides additional plots that show the effect of f′c on pipe thickness for 4,000 psi, 6,000 psi, and 8,000 psi bent cap concrete. The required pipe thickness increases approximately 10% to 30% with f′c based on Eqs. C8.15.3.2.2-1 and C8.15.3.2.2-2. For example, for a 36-in diameter column with #6 hoops (Ast/Acol = 0.015), the pipe thickness increases 18% as f′c increases from 4,000 psi to 8,000 psi. Eq. C8.15.3.2.2-1 results in a larger increase of 41% (proportional to √f′c). Eq. C8.15.3.2.2-2 is not dependent on f′c.

SDC A. Cap pocket pipe thickness for SDC A is based on the minimum provision for transverse reinforcement, as given in Eq. 8.15.3.2.2-1. Eq. C8.15.3.2.2-1 may alternatively be used.

[Figure 3.4. Pipe thickness versus column diameter (Dc) and equivalent hoop size (Ast/Acol = 0.015, f′c = 6,000 psi for bent cap). The plot compares required pipe thickness (in) from Eq. 8.15.3.2.2-1, Eq. C8.15.3.2.2-1, and Eq. C8.15.3.2.2-2 for Dc = 24, 36, 48, and 60 in with equivalent hoop sizes #3 through #8, against available gages 16 through 5.]
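The simplified (non-iterative) route, including selection of the lightest standard gage from Table 3.2 whose nominal thickness satisfies the demand, can be sketched as follows. This is an illustration under assumed inputs, not a design tool; the gage list transcribes the nominal thicknesses of Table 3.2, and the example numbers are hypothetical:

```python
import math

# Nominal thicknesses (in) from Table 3.2, thinnest gage first.
GAGE_NOMINAL = [(16, 0.0598), (14, 0.0747), (12, 0.1046),
                (10, 0.1345), (8, 0.1644), (7, 0.1838), (5, 0.2092)]

def t_simplified(fc, Dcp, fyp, theta_deg, fyh=None, Ast=None, lac=None, high_pt=False):
    """Eq. C8.15.3.2.2-1; where pt >= 0.11*sqrt(f'c) (high_pt=True), the
    larger of Eq. C8.15.3.2.2-1 and Eq. C8.15.3.2.2-2 governs. Units: ksi, in."""
    cos_t = math.cos(math.radians(theta_deg))
    t1 = 0.04 * math.sqrt(fc) * Dcp / (fyp * cos_t)           # Eq. C8.15.3.2.2-1
    if not high_pt:
        return t1
    t2 = 0.14 * fyh * Ast * Dcp / (lac ** 2 * fyp * cos_t)    # Eq. C8.15.3.2.2-2
    return max(t1, t2)

def pick_gage(t_req):
    """Thinnest standard gage whose nominal thickness meets the required value."""
    for gage, t_nom in GAGE_NOMINAL:
        if t_nom >= t_req:
            return gage
    return None   # demand exceeds gage 5; special order or refined equation needed
```

For a hypothetical 6-ksi cap with Dcp = 34 in, fyp = 33 ksi, and a 10-degree seam angle, Eq. C8.15.3.2.2-1 gives roughly a tenth of an inch, which `pick_gage` rounds up to gage 12, illustrating how the simplified route can land one gage heavier than the refined equation.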
[Figure 3.5. Pipe thickness versus column diameter (Dc) and column flexural reinforcement ratio (#6 hoop, f′c = 6,000 psi for bent cap). Panels for Dc = 36, 48, and 60 in plot required pipe thickness (in) from Eq. 8.15.3.2.2-1, Eq. C8.15.3.2.2-1, and Eq. C8.15.3.2.2-2 for Ast/Acol = 0.010, 0.015, and 0.020, against available gages 16 through 5.]

3.2.13 Integral Bent Cap Joint Shear Design

Per the 2009 LRFD SGS, joint shear design is required for SDCs C and D, but not SDC B (1). Where the principal tensile stress, pt, is greater than or equal to 0.11 √f′c, additional joint shear reinforcement is required. The 2009 LRFD SGS requires placement of joint shear reinforcement based on assumed force transfer mechanisms in the longitudinal and transverse directions. Based on a review of the testing presented in Chapter 2 and other previous research on precast integral bridge systems, the assumed force transfer mechanism for longitudinal loading is adequate for the system considered.

However, the requirements presented in the 2009 LRFD SGS for transverse response vary from the requirements for nonintegral transverse response (1). The mechanism for transverse response is the same whether an integral or nonintegral connection is used. Therefore, there are recommended modifications to the integral design provisions to account for these differences.

Vertical Stirrups

Requirements specified in the 2009 LRFD SGS for vertical stirrups in integral bridge systems are based on longitudinal and transverse loadings. Per Figure 8.13.4.2.1-1 of the 2009 LRFD SGS, for single column bent caps only vertical stirrups are required along both faces of the bent cap, extending one-half the column dimension on both sides of the column (1). This requirement is based on an assumed longitudinal force transfer mechanism and is appropriate for use in both CIP and precast integral bridge connections. In accordance with Figure 8.13.4.2.1-1, for multicolumn bent caps additional reinforcement is required on both sides of the column based on an assumed transverse force transfer mechanism. These requirements vary from those presented for nonintegral bent caps and must be updated for consistency.

The recommended modifications to this provision eliminate the second portion of Figure 8.13.4.2.1-1 for multicolumn bent caps. Instead, this article should reference the recommended provisions of LRFD SGS Article 8.15.5 for vertical stirrups inside and outside of the joint. The transverse provisions for nonintegral bent caps are described in more detail in subsequent sections of this report.

Horizontal Stirrups

The 2009 LRFD SGS requires the placement of horizontal stirrups around the vertical stirrups within the bent cap (1). These provisions are adequate for integral systems in the longitudinal direction but must also be updated to be in agreement with the nonintegral transverse requirements. The provisions of Article 8.13.4.2.2 of the 2009 LRFD SGS provide a minimum quantity of horizontal stirrups required in addition to spacing requirements. The provisions for nonintegral bent caps specify only spacing and size requirements. The recommended modification for integral bent caps is the addition of a provision to
ensure that the horizontal stirrups for multicolumn bent caps also satisfy the nonintegral provisions.

Additional Longitudinal Cap Beam Reinforcement

Provisions for nonintegral bent caps per the 2009 LRFD SGS require the placement of additional longitudinal reinforcement within the cap beam (1). There is currently no requirement for the placement of this reinforcement for multicolumn integral bent caps. Therefore, it is recommended that a provision be added for integral bent caps that requires the placement of additional longitudinal cap beam reinforcement for multicolumn bent caps. The adequacy of this requirement for transverse response of nonintegral bent caps is discussed in more detail in a subsequent section.

3.2.14 Nonintegral Bent Cap Joint Shear Design

Per the 2009 LRFD SGS, joint shear design is required for SDCs C and D, but not SDC B (1). For SDCs C and D, where the principal tensile stress, pt, is less than 0.11 √f′c, minimum joint shear (transverse) reinforcement is required. Where pt is greater than or equal to 0.11 √f′c, the additional joint shear reinforcement (Asjvi, Asjvo, Asjl, and horizontal J-bars) is required.

The proposed joint shear design approach for nonintegral precast bent caps and integral bent caps in the transverse direction follows the same approach, but conservatively requires the principal tensile stress check for SDC B as well as SDCs C and D. In addition, the additional joint shear reinforcement differs in certain regards from CIP requirements. Although vertical joint stirrups, horizontal J-bars, and additional longitudinal bent cap reinforcement are addressed, other provisions, such as bedding layer reinforcement and supplementary hoops, are included. In addition, minimum joint shear reinforcement--both transverse joint reinforcement and vertical joint stirrups inside the joint--is conservatively required for all SDC levels.

The proposed joint shear reinforcement provisions are based on precast bent cap specimen response reported in the work of Matsumoto (21, 22, 23, 26) as well as additional analysis presented herein. Table 3.3 compares the joint reinforcement used in the full ductility specimen design to that required by the 2006 LRFD RSGS, which was the original design basis of the specimens, and the 2009 LRFD SGS, the current AASHTO seismic guide specifications (2, 1). The proposed LRFD SGS for precast bent caps for SDCs C and D is also listed and compared to the test specimens (Proposed Specification/Test Specimen) and to the 2009 LRFD SGS (Proposed Specification/2009 LRFD SGS).
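The minimum transverse (hoop) reinforcement invoked here is the pair of equations from Section 3.2.12 (2009 LRFD SGS Eq. 8.13.3-1 and Eq. 8.13.3-2), with the proposed specifications taking the larger of the two. A minimal sketch of that check follows; the function name and the example column values are hypothetical:

```python
import math

def min_joint_hoop_ratio(fc_ksi, fyh_ksi, Ast_in2, lac_in):
    """Minimum volumetric joint hoop ratio: the larger of
    2009 LRFD SGS Eq. 8.13.3-1 and Eq. 8.13.3-2, per the proposed specifications."""
    rho_1 = 0.11 * math.sqrt(fc_ksi) / fyh_ksi   # Eq. 8.13.3-1
    rho_2 = 0.40 * Ast_in2 / lac_in ** 2         # Eq. 8.13.3-2
    return max(rho_1, rho_2)

# Hypothetical column: 12.64 in^2 of longitudinal steel embedded 24 in
# into the cap, 6 ksi concrete, 60 ksi hoops.
rho_min = min_joint_hoop_ratio(6.0, 60.0, 12.64, 24.0)
```

For short embedment lengths Eq. 8.13.3-2 governs, while for long embedments it can drop below Eq. 8.13.3-1, which is exactly why the proposed specifications take the maximum.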
Table 3.3. Comparison of joint reinforcement for various seismic guide specifications--SDCs C and D.

    Reinforcement Type | Specimen Quantity | 2006 LRFD RSGS | 2009 LRFD SGS | Proposed Guide Specification | Proposed Spec./Test Specimen | Proposed Spec./2009 LRFD SGS
    Transverse hoop | --A | --A | Eq. 8.13.3-1 | max of Eq. 8.13.3-1 and Eq. 8.13.3-2 | --A,B | 1.00B
    Vertical joint stirrup outside joint | 0.27 | 0.20C | 0.175 | 0.175 | 0.65D | 1.00
    Vertical joint stirrup inside joint | 0.089E | -- | 0.135 | GD: 0.135; CPFD: 0.12 | GD: 1.52; CPFD: 1.35 | GD: 1.00; CPFD: 0.89
    Additional bent cap longitudinal | 0.0 | 0.0 | 0.245 | 0.245 | --F | 1.00
    Horizontal J-bar | 0.13G | 0.10G | Every other intersection in joint | GD: every other intersection in joint; CPFD: --H | --G | GD: 1.00; CPFD: --H
    Bedding layer hoop | -- | -- | -- | Reinforcement per specification | --I | --I

Notes:
    A. GD test specimen used hoops close to minimum per 2006 LRFD RSGS, and CPFD used a steel corrugated pipe thickness based on 2006 LRFD RSGS.
    B. Typically this will be 1.0, except that the proposed specification requires the larger of 2009 LRFD SGS Eq. 8.13.3-1 and Eq. 8.13.3-2 be used.
    C. Placed transversely within Dc from either side of column center line per 2006 LRFD RSGS. Placement was adjacent to joint.
    D. Difference was due to change in design requirements and rounding of bar sizes in specimen.
    E. Specimen used construction stirrups; 2006 LRFD RSGS did not require Asjvi.
    F. Not used because 2006 LRFD RSGS did not require Asjl.
    G. Proposed Guide Specification and 2009 LRFD SGS require Asjh in joint for GD; GD specimen used Asjh adjacent to joint per 2006 LRFD RSGS.
    H. Horizontal J-bars are not used for cap pocket connections.
    I. Specimen did not use hoop in scaled 1-in bedding layer. Bedding layer hoop applies only to precast connections.
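The Asjvi/Ast ratios compared in the table (0.135 for grouted duct, 0.12 for cap pocket) translate directly into counts of 2-leg stirrups, which is the computation behind Figure 3.6. A minimal sketch, with an assumed example column (the bar areas and column size are illustrative, not from the report):

```python
import math

def n_two_leg_stirrups(Ast_in2, area_ratio, Ab_in2, n_min=2):
    """Minimum count of 2-leg vertical stirrups inside the joint such that the
    provided area meets area_ratio * Ast (e.g., 0.135 for grouted duct or
    0.12 for cap pocket), subject to the proposed 2-stirrup minimum."""
    n_calc = area_ratio * Ast_in2 / (2.0 * Ab_in2)   # two legs per stirrup
    return max(n_min, math.ceil(n_calc))

# Hypothetical 36-in column with Ast/Ag = 0.015 (Ast ~ 15.27 in^2), #5 stirrup legs.
n_gd = n_two_leg_stirrups(15.27, 0.135, 0.31)   # grouted duct requirement
n_cp = n_two_leg_stirrups(15.27, 0.12, 0.31)    # cap pocket requirement
```

For light columns the calculated count falls below two and the 2-stirrup minimum governs, mirroring the behavior reported for the SDC A and SDC B provisions later in this section.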
From Table 3.3, it is evident that the joint design requirements became considerably more conservative from the 2006 LRFD RSGS to the 2009 LRFD SGS and that the differences between the test specimens and the 2009 LRFD SGS reflect this (2, 1). As a whole, the proposed joint reinforcement is conservative compared to that used in the test specimens. In addition, there is considerable consistency (i.e., a ratio of 1.00 for Proposed Specification/2009 LRFD SGS) between the proposed specifications and the 2009 LRFD SGS for SDCs C and D where the principal tensile stress, pt, is greater than or equal to 0.11 √f′c. However, there are still considerable differences in design specifications between nonintegral CIP and precast bent caps, as shown in the following sections.

Limits on Bent Cap Depth

Nonintegral precast bent caps are subject to the same bent cap depth limitations and the alternative design basis required by Article 8.13.5 of the 2009 LRFD SGS (for CIP bent caps) (1). The proposed provision is the following:

    Cast-in-place, emulative precast and hybrid bent cap beams satisfying Eq. 1 shall be reinforced in accordance with the requirements of Articles 8.15.5.1 and 8.15.5.2. Bent cap beams not satisfying Eq. 1 shall be designed on the basis of the strut and tie provisions of the AASHTO LRFD Bridge Design Specifications and as approved by the Owner.

        Dc ≤ d ≤ 1.25 Dc        Proposed LRFD SGS Eq. 8.15.5-1

    where:
        Dc = column diameter (in)
        d = total depth of the bent cap beam (in)

Vertical Stirrups Inside and Outside the Joint

SDCs C and D--Principal Tensile Stress, pt, 0.11 √f′c or Larger. The 2006 LRFD RSGS used for the design of the prototype bridge and full ductility test specimens did not distinguish between integral and nonintegral bent cap systems, nor between the design of vertical stirrups inside the joint region and outside (i.e., adjacent to) the joint region (2). Based on the work of Sritharan (38), the 2009 LRFD SGS (1) for nonintegral bent caps increased the required total area of vertical joint stirrups (inside and outside the joint) approximately 21% over the 2006 LRFD RSGS requirement. In addition, Articles 8.13.5.1.1 and 8.13.5.1.2 of the 2009 LRFD SGS require placement of 0.175Ast outside the joint (adjacent to each side of the column) as well as 0.135Ast inside the joint. These are major changes in joint stirrup requirements over the 2006 LRFD RSGS provisions. These provisions should also be required for the design of integral bent cap systems in the transverse direction, as the response mechanism is the same.

The full ductility specimens used the more liberal (and constructible) placement of joint stirrups outside the joint as the more severe condition for investigating joint response, permissible by the 2006 LRFD RSGS (2). Rounding stirrup bar diameters to practical sizes for the test specimens resulted in a larger area of vertical stirrups outside the joint region than required. However, two 2-leg construction stirrups with a total area of 0.089Ast were included within the joint region, as mentioned in Matsumoto (21). As shown in Table 3.3, this resulted in an area 66% of that required by the 2009 LRFD SGS (0.135Ast) (1).

As shown in the strain profiles of Figure 2.48 (CIP), Figure 2.54 (GD), and Figure 2.59 (CPFD), vertical joint stirrups were highly effective for the CIP and GD specimens, for which maximum joint crack widths were 0.025 in and 0.040 in, respectively. This confirms the importance of such stirrups in achieving emulative response. Smaller joint stirrup strains were evident for the CPFD specimen, which exhibited much smaller crack widths and a crack pattern that differed from the CIP and GD specimens. In contrast, the CPLD specimen exhibited severe joint cracking, which is attributed to the absence of joint reinforcement, especially joint stirrups. Figure 2.46 and Figure 2.64 portray the significant effect of the CPLD joint shear cracking on joint shear stiffness and system displacement, even though the specimen achieved an exceptionally large drift of 5.0% in the presence of cracks up to 0.080 in wide. Outside the joint, the GD and CPLD stirrup strains were the largest, 68% of yield and 61% of yield, respectively.

Based on a comparison of GD and CIP results and the overall GD emulative response, the proposed specification requires full ductility grouted duct connections to use the same joint stirrups inside the joint as required by the 2009 LRFD SGS (1). Based on a comparison of the CPFD and CIP response, it is deemed reasonably conservative to require 0.12Ast--12% less than the CIP requirement (0.135Ast) but 35% more than that used in the CPFD specimen (0.089Ast), which exhibited minimal joint distress and exceptional joint performance:

    Vertical stirrups inside the joint with a total area, Asjvi, spaced evenly over a length, Dc, through the joint shall satisfy:

        Asjvi ≥ 0.12 Ast        Proposed LRFD SGS Eq. 8.15.5.2.3a-1

    Vertical stirrups inside the joint shall consist of double leg stirrups or ties of a bar size no smaller than that of the bent cap transverse reinforcement. A minimum of two stirrups or equivalent ties shall be used.

Due to the presence of the pipe, overlapping double-leg vertical stirrups are not practical. Figure 3.6 shows the minimum number of two-leg vertical joint stirrups required for values of Asjvi/Ast ranging from 0.08
[Figure 3.6. Minimum number of 2-leg stirrups inside the joint--36-in and 48-in diameter columns. Six panels (Ast/Ag = 0.01, 0.015, and 0.02 for Dc = 36 in and Dc = 48 in) plot the minimum number of 2-leg stirrups against rebar size (#4 through #8) for Asjvi/Ast = 0.08, 0.10, 0.12, and 0.135; numbers in parentheses are the calculated stirrup requirements before rounding.]

to 0.135, based on several stirrup sizes. Results are shown for 36-in and 48-in diameter columns and for column longitudinal reinforcement ratios, Ast/Ag, of 0.01, 0.015, and 0.02. The height of the bar indicates the number of required stirrups, subject to the proposed specification minimum of two stirrups. Additionally, the number in parentheses at the top of the bar indicates the calculated stirrup requirement (i.e., without rounding or the 2-stirrup minimum). Figure 3.6 shows that both the 0.135Ast requirement for grouted ducts and the 0.12Ast requirement for cap pocket connections produce a number of stirrups that can be reasonably satisfied in design and construction.

Although the design requirement can become significant for larger percentages of column longitudinal reinforcement (Ast), potential congestion can be alleviated by the use of larger stirrup bar sizes. Grouted duct connections require a larger area of vertical stirrups than cap pocket connections, and as shown in Figure 3.6, this sometimes results in a larger number of stirrups; however, the area requirement can be satisfied by using overlapping two-leg stirrups. Cap pocket connections accommodate only two-leg stirrups; therefore, it is important that the designer carefully consider using larger stirrup sizes and possibly bundling stirrups, especially when a larger column reinforcement ratio is used. In addition, care should be taken in design to ensure that the number and placement of stirrups over the top opening of the pocket does not unduly interfere with concrete placement in the pocket during the assembly operation. For both connection types, stirrups should be placed symmetrically. Three sets of stirrups can be placed symmetrically when column bars are rotated a half turn (to avoid conflict); however, conflict between bent cap longitudinal bars and column bars and/or corrugated ducts should also be avoided.

Based on a comparison of CIP results with GD and CPFD response and the overall GD and CPFD emulative response, the proposed specification requires grouted duct and cap pocket full ductility connections to use the same joint-related vertical stirrup area outside the joint (Asjvo) as required by the 2009 LRFD SGS (1).

SDCs C and D--Principal Tensile Stress, pt, Less than 0.11 √f′c. Per the 2009 LRFD SGS, where the principal tensile stress, pt, for a CIP connection is less than 0.11 √f′c, only minimum joint transverse (hoop) reinforcement is required; vertical joint stirrups are not required (1). As mentioned previously, additional joint shear reinforcement for precast bent caps should be based on principal tensile stress exceeding 0.11 √f′c (i.e., likely joint shear cracking). However, given the inherent variability in actual bridge fabrication, assembly, and seismic response and the potentially severe impact of joint shear cracking on intended emulative response, the proposed specifications conservatively require a minimal area of vertical joint stirrups in grouted duct and cap pocket connections in SDCs C and D, even where the principal tensile stress, pt, is less than 0.11 √f′c. This provision is expected to be more commonly associated with SDC B and
80 is therefore addressed in the next section (SDC B). Vertical cap reinforcement, which is prescribed in the 2009 LRFD SGS stirrups outside the joint are not required. (1) for nonintegral bent caps
as follows: SDC B. Where the principal tensile stress, pt, for an SDC B Longitudinal reinforcement, As jl, in both the top and bottom faces of the cap beam shall be provided in addi- bridge using a
precast bent cap connection is 0.11 fc or tion to that required to resist other loads. The addi- larger, the joint shear design provisions for SDCs C and D are tional area of the longitudinal steel
shall satisfy: conservatively required. Where the principal tensile stress is Asjl 0.245 Ast Proposed LRFD SGS Eq. 8.13.5.1.3-1 less than 0.11 fc , the following minimum provision for ver- tical
stirrups in the joint must be satisfied for grouted duct Maximum bent cap longitudinal bar strains for the speci- connections: mens were limited to 46% of yield for CIP and 53% of yield for GD, but
exceeded yield for CPFD and CPLD. The area of the Vertical stirrups with a total area, A jvi s , spaced evenly CPLD bent cap longitudinal reinforcement was reduced 30% over a length, Dc, through the
joint shall satisfy: from the CPFD, which contributed to the extent of yielding, as discussed in Matsumoto (26). Asjvi 0.10 Ast Proposed LRFD SGS Eq. 8.14.5.2.2a-1 1 Based on specimen response, the
proposed specifications Vertical stirrups inside the joint shall consist of dou- do not modify the 2009 LRFD SGS (1). Therefore, this addi- ble leg stirrups or ties of a bar size no smaller than that
tional reinforcement is required where the principal tensile of the bent cap transverse reinforcement. A minimum stress, pt, for a precast bent cap connection is 0.11 fc or of two stirrups or
equivalent ties shall be used. larger. As shown in the example connection details for cap pocket This limited provision, which still results in a highly con- connections for SDCs B, C, and D provided
in the attachments, structible joint region, provides reinforcement to restrict joint inverted U-bars or hairpins may be placed within the pocket to shear effects in the event of joint shear
cracking. As shown in help restrain potential splitting cracks and buckling of top bent Figure 3.6, joints will typically require only two to three 2-leg cap flexural bars within the joint. This
additional conservative stirrups. Vertical stirrups outside the joint are not required. The measure is optional but recommended where the principal tensile stress, pt, is 0.11√f′c or larger. It is not required for grouted duct connections where overlapping vertical stirrups within the joint can serve the same purpose.

The Asjvi requirement for cap pocket connections is identical to that for the grouted duct connections.

SDC A. For SDC A, the principal tensile stress, pt, is not calculated. However, the following minimum provision for vertical stirrups in the joint conservatively applies for grouted duct connections:

    Vertical stirrups with a total area, Asjvi, spaced evenly over a length, Dc, through the joint shall satisfy:

        Asjvi ≥ 0.08 Ast    (Proposed LRFD SGS Eq. 8.13.4.2.2a-1)

    Vertical stirrups inside the joint shall consist of double leg stirrups or ties of a bar size no smaller than that of the bent cap transverse reinforcement. A minimum of two stirrups or equivalent ties shall be used.

As shown in Figure 3.6, the minimum 2-stirrup requirement is expected to govern. The provision for cap pocket connections is identical to that for the grouted duct connections.

Additional Longitudinal Cap Beam Reinforcement

SDCs C and D. The 2006 LRFD RSGS (2) used in the design of the prototype bridge and emulative test specimens did not include the significant additional longitudinal bent cap reinforcement […]

SDC B. As for other additional joint shear reinforcement, the additional longitudinal bent cap reinforcement stipulated in the previous section for SDCs C and D is conservatively required for SDC B where the principal tensile stress, pt, for a precast bent cap connection is 0.11√f′c or larger.

SDC A. For SDC A, additional longitudinal cap beam reinforcement is not required.

Horizontal J-Bars

SDCs C and D. In accordance with the 2006 LRFD RSGS (2), horizontal J-bars with an area, Asjk, of at least 0.10 Ast was used in the CIP and GD specimens together with Asjv reinforcement adjacent to the joint (within Dc/2 of column face). The 2009 LRFD SGS for nonintegral bent caps modified this requirement as follows (1):

    Horizontal J-bars hooked around the longitudinal reinforcement on each face of the cap beam shall be provided as shown in Figure 8.15.5.1.1-1. At a minimum, horizontal J-bars shall be located at every other vertical-to-longitudinal bar intersection within the joint.
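As a quick numerical illustration of the minimum vertical stirrup provision, Asjvi ≥ 0.08 Ast, here is a short check. The column steel area and candidate bar sizes below are assumed for illustration only; they are not values from the report:

```python
# Illustrative check of the minimum vertical stirrup area, Asjvi >= 0.08*Ast.
# Ast and the candidate bar sizes are assumed values, not from the report.
A_st = 20.0                  # total column longitudinal steel area, in^2
A_required = 0.08 * A_st     # minimum Asjvi over length Dc through the joint
bar_area = {"#4": 0.20, "#5": 0.31, "#6": 0.44}  # standard US bar areas, in^2
legs = 4                     # two double-leg stirrups (the minimum) = 4 legs

for size, area in bar_area.items():
    provided = legs * area
    print(size, provided, provided >= A_required)
```

For this assumed Ast, two double-leg #6 stirrups satisfy the minimum while #4 and #5 do not, which is the kind of check the 2-stirrup-minimum discussion implies.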
    The J-dowel reinforcement bar shall be at least a #4 size bar.

This provision is included in the proposed LRFD SGS for grouted duct connections, when the principal tensile stress, pt, for a precast bent cap connection is 0.11√f′c or larger. However, based on specimen response, cap pocket connections were shown not to require horizontal J-bars.

SDC B. Where the principal tensile stress, pt, for a precast bent cap connection is 0.11√f′c or larger, J-bars stipulated for SDCs C and D are similarly required for grouted duct connections in SDC B.

SDC A. For SDC A, horizontal J-bars are not required.

Supplementary Hoops for Cap Pocket Connections

SDCs C and D. The CPFD specimen response demonstrated the effectiveness of a supplementary hoop placed at each end of the steel pipe to limit dilation and potential unraveling. This reinforcement, which matched the column hoop bar size, reached up to 52% of yield during the test, indicating its contribution to joint performance. Therefore, where the principal tensile stress, pt, is 0.11√f′c or larger, cap pocket connections in SDCs C and D require the use of supplementary hoops:

    A supplementary hoop shall be placed one inch from each end of the corrugated pipe. The bar size of the hoop shall match the size of the bedding layer reinforcement required by Article 8.15.5.2.1.

The hoop area meeting the requirement of the bedding layer reinforcement (and column hoop) is considered sufficient. Where the principal tensile stress, pt, is less than 0.11√f′c, hoops are not required but may be conservatively included.

SDC B. As for the additional joint shear reinforcement, the supplementary hoops stipulated in the previous section for SDCs C and D are conservatively required for SDC B where the principal tensile stress, pt, for a precast bent cap connection is 0.11√f′c or larger. Where the principal tensile stress, pt, is less than 0.11√f′c, supplementary hoops may be optionally included as a simple, inexpensive, and conservative measure.

SDC A. For SDC A, supplementary hoops are not required.

Reinforcement at the Bedding Layer at Top of Column

Bedding Layer Reinforcement. A bedding layer between the bent cap soffit and the top of column is used to accommodate fabrication and placement tolerances. Transverse reinforcement around the column bars within the bedding layer provides confinement and reduces the unsupported length of column bars, thereby reducing the potential for buckling during plastic hinging of the column. Accurate placement of reinforcement is, therefore, essential to achieving the expected system ductility capacity. Reinforcement should normally be placed evenly through the depth of the bedding layer. However, in some cases, an uneven bedding layer (e.g., a sloping bent cap on top of a large diameter column) or a bedding layer of an unusual shape may be encountered, requiring placement of bedding layer reinforcement that is not uniformly distributed, to minimize the unsupported length of column bars. In all cases, plan sheets should show the intended placement of the bedding layer reinforcement. The associated requirement for shop drawings is addressed in proposed Article 8.13.8.4.4 of the AASHTO LRFD Bridge Construction Specifications (LRFD BCS) (35). Adequate flowability of the concrete fill or grout should not be prevented by the size and placement of the bedding layer reinforcement. Matsumoto et al. (8) present an alternative approach to accommodating tolerances and enhancing durability by embedding the column or pile into the bent cap.

The proposed design specification is as follows:

    Bedding layers between columns and precast bent caps shall be reinforced with transverse reinforcement, as shown in Figure 8.15.5.2.2-1 and Figure 8.15.5.2.3-1. Bedding layer reinforcement shall match the size and type of transverse reinforcement required for the column plastic hinging region and shall be placed evenly through the depth of the bedding layer. Grout bedding layer heights shall not exceed 3 in.

For seismic loading scenarios, the bedding layer thickness is limited to 3 in when constructed of a cementitious grout material. Grout properties do not show the same improvement with lateral confinement that concrete materials show. Increasingly large grout bedding layer thicknesses can result in the development of poor lateral response due to the degradation of the bedding layer. Therefore, the use of grout materials is limited to joints 3 in or less in dimension.

Lateral Reinforcement Requirement for Columns Connecting to a Precast Bent Cap. Uniform spacing between hoops at the top of the column and the bedding layer is critical to ensuring that system ductility is not compromised. Hoop spacing is addressed in Article 8.8.14 of the proposed LRFD SGS and Article 8.13.8.4.4 of the proposed LRFD BCS. A smaller cover than that used for typical column applications is permitted for the top hoop because the placement of the bedding layer concrete or grout will provide additional cover after the precast bent cap is set. Plan sheets and shop drawings are required to show the intended placement of the first hoop at the top of the column.
John Nash’s Letter to the NSA
February 17, 2012 by Noam Nisan
The National Security Agency (NSA) has recently declassified an amazing letter that John Nash sent to it in 1955. It seems that around the year 1950 Nash tried to interest some US security organs
(the NSA itself was only formally formed in 1952) in an encryption machine of his design, but they did not seem to be interested. It is not clear whether some of his material was lost, whether
they ignored him as a theoretical professor, or — who knows — used some of his stuff but did not tell him. In this hand-written letter sent by John Nash to the NSA in 1955, he tries to give a
higher-level point of view supporting his design:
In this letter I make some remarks on a general principle relevant to enciphering in general and to my machine in particular.
He tries to make sure that he will be taken seriously:
I hope my handwriting, etc. do not give the impression I am just a crank or circle-squarer. My position here is Assist. Prof. of Math. My best known work is in game theory (reprint sent
He then goes on to put forward an amazingly prescient analysis anticipating computational complexity theory as well as modern cryptography. In the letter, Nash takes a step beyond Shannon’s
information-theoretic formalization of cryptography (without mentioning it) and proposes that security of encryption be based on computational hardness — this is exactly the transformation to modern
cryptography made two decades later by the rest of the world (at least publicly…). He then goes on to explicitly focus on the distinction between polynomial time and exponential time computation, a
crucial distinction which is the basis of computational complexity theory, but made only about a decade later by the rest of the world:
So a logical way to classify enciphering processes is by the way in which the computation length for the computation of the key increases with increasing length of the key. This is at best exponential and at worst probably at most a relatively small power of r, $ar^2$ or $ar^3$, as in substitution ciphers.
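To make Nash's distinction concrete, here is a toy comparison of attacker work as a function of key length r. The constants and the cubic exponent are arbitrary illustrations, not anything from the letter:

```python
# Toy illustration of Nash's classification of ciphers by how
# key-recovery work grows with key length r (constants are arbitrary).
def poly_work(r):
    return r ** 3        # "at worst ... a relatively small power of r"

def expo_work(r):
    return 2 ** r        # "at best exponential" (e.g., exhaustive search)

for r in (8, 16, 32, 64):
    print(r, poly_work(r), expo_work(r))
```

By r = 64 the exponential column is already more than 10^13 times the cubic one, which is Nash's whole point: past a modest key length, exponential-cost ciphers are out of reach while polynomial-cost ones are not.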
He conjectures the security of a family of encryption schemes. While not totally specific here, in today’s words he is probably conjecturing that almost all cipher functions (from some — not totally
clear — class) are one-way:
Now my general conjecture is as follows: for almost all sufficiently complex types of enciphering, especially where the instructions given by different portions of the key interact complexly with
each other in the determination of their ultimate effects on the enciphering, the mean key computation length increases exponentially with the length of the key, or in other words, the
information content of the key.
He is very well aware of the importance of this “conjecture” and that it implies an end to the game played between code-designers and code-breakers throughout history. Indeed, this is exactly the
point of view of modern cryptography.
The significance of this general conjecture, assuming its truth, is easy to see. It means that it is quite feasible to design ciphers that are effectively unbreakable. As ciphers become more
sophisticated the game of cipher breaking by skilled teams, etc., should become a thing of the past.
He is very well aware that this is a conjecture and that he cannot prove it. Surprisingly, for a mathematician, he does not even expect it to be solved. Even more surprisingly he seems quite
comfortable designing his encryption system based on this unproven conjecture. This is quite eerily what modern cryptography does to this day: conjecture that some problem is computationally hard;
not expect anyone to prove it; and yet base their cryptography on this unproven assumption.
The nature of this conjecture is such that I cannot prove it, even for a special type of ciphers. Nor do I expect it to be proven.
All in all, the letter anticipates computational complexity theory by a decade and modern cryptography by two decades. Not bad for someone whose “best known work is in game theory”. It is hard not
to compare this letter to Goedel’s famous 1956 letter to von Neumann also anticipating complexity theory (but not cryptography). That both Nash and Goedel passed through Princeton may imply that
these ideas were somehow “in the air” there.
ht: this declassified letter seems to have been picked up by Ron Rivest who posted it on his course’s web-site, and was then blogged about (and G+ed) by Aaron Roth.
Edit: Ron Rivest has implemented Nash’s cryptosystem in Python. I wonder whether modern cryptanalysis would be able to break it.
on February 18, 2012 at 2:35 am | Reply Amit C
That is awesome.
on February 18, 2012 at 6:11 am | Reply G'Day
Reblogged this on My Blog.
on February 18, 2012 at 7:20 am | Reply Anonymous
unbelievable. comparable to von neumann
on February 18, 2012 at 7:22 am | Reply me
just amazing. a mixture of godel + von neumann
“A beautiful mind” indeed. Peace.
Reblogged this on Kalpesh Padia’s Blog and commented:
Clearly John Nash was way ahead of his time… Schizophrenic, but super smart.. Respect!
on February 18, 2012 at 12:39 pm | Reply David Morris
Does that mean that NSA had this before the english guy Clifford Cocks invented it at GCHQ in 1972:
At GCHQ, Cocks was told about James H. Ellis’ “non-secret encryption” and further that since it had been suggested in the late 1960s, no one had been able to find a way to actually implement the
concept. Cocks was intrigued, and invented, in 1973, what has become known as the RSA encryption algorithm, realising Ellis’ idea. GCHQ appears not to have been able to find a way to use the idea,
and in any case, treated it as classified information, so that when it was reinvented and published by Rivest, Shamir, and Adleman in 1977, Cocks’ prior achievement remained unknown until 1997.
(From the Clifford Cocks article on Wikipedia)
or were NSA still stuck in the “no one had been able to find a way to actually implement the concept”.
“That both Nash and Goedel passed through Princeton may imply that these ideas were somehow “in the air” there.” I love the thought that an idea can live “in the air”, be dropped, almost forgotten,
hinted at, rediscovered and finally resolved in an institution like Princeton.
on February 18, 2012 at 6:51 pm | Reply Anonymous
Re: David Morrris
You’re referring to the invention of asymmetric (public key) cryptography right? Does Nash’s letter have anything at all to do with public key cryptography?
on February 18, 2012 at 7:44 pm | Reply Dr. Kenneth Noisewater
So does this invalidate any patents due to prior art?
• on February 18, 2012 at 8:53 pm | Reply me
This. All hell will brake loose in 3…2..1..
• on February 19, 2012 at 4:36 am | Reply Anonymous
Not unless it was made available to the public
Some caution is needed here. As always when interpreting historic writings, one is naturally tempted to use a modern perspective, based on knowing the current state of affairs. In addition, it is
tempting to attribute such phenomenal foresightedness to a well-established genius.
After reading the letter, it seems clear to me that Nash *did* foresee important ideas of modern cryptography. This is great and deserves recognition.
However, it seems also very clear that he did not foresee (in fact: could not even imagine) modern complexity theory. Why else would he say that he does not think that the exponential hardness of the
problem could ever be proven? It is true that the computational hardness of key tasks in modern cryptography is an unsolved problem, but we have powerful tools to prove hardness results in many other
cases. So if Nash had anticipated complexity theory as such, then his remark would mean that he would also have foreseen these difficulties. To foresee this, however, he would not only have to
understand the development of complexity theory in great detail, but also to anticipate the principles of today’s encryption mechanisms. It seems reasonable to assume that he would have shared the
latter in his letter if he had really had this insight.
Overall, it seems clear that a prediction of complexity theory or its current incapacity with respect to cryptographic problems cannot be found in this text, which does by no means diminish the
originality of the remarks on cryptography. One could also grant him a certain mathematical intuition that some computational problems could be inherently hard to solve, although I don’t see any hint
that he believes that such hardness could ever become a precise mathematical property. What he suggests is really close to the pragmatic approach of modern cryptography, but not to modern complexity
• on February 20, 2012 at 1:46 am | Reply Greg
“It is true that the computational hardness of key tasks in modern cryptography is an unsolved problem, but we have powerful tools to prove hardness results in many other cases.”
Not really— we’ve only proven things to be hard if and only if P!=NP. This is useful, but we’ve still not actually proven anything to be hard at all— only proven that there are a set of things
which if any of them turn out to be easy a whole bunch of other things must be ‘easy’ too.
□ “we’ve only proven things to be hard if and only if P!=NP.”
Regarding the class NP you are of course right. But our modern tools do not end at NP. For example, we know that ExpTime is strictly harder than P, and we have shown many problems to be hard
for ExpTime. At Nash’s time, ExpTime and NP would largely be synonyms (I have not traced back the exact history of these notions, but at least the letter seems to mix both concepts).
□ on February 20, 2012 at 8:58 pm Alex Ogier
Just to clarify, Markus is separating Complexity Theory, where we have plenty of solid hardness results, from Cryptography, where basically every assumption of hardness boils down to an
unproven conjecture.
The point being that this letter provides no evidence that Nash foresaw any of the structure that would allow proofs of hardness for any problem, and since this structure is foundational in
complexity theory, it is reasonable to conclude that he didn’t foresee complexity theory in any meaningful way.
In other words, while Nash saw the negative side of complexity theory — that mathematics would find it difficult to reason about the reversibility of one-way functions — he didn’t have any particular insight into the positive side of complexity theory, that there would exist structure to classify the computational difficulty of many other problems.
It’s interesting that, to conceal a message, you make it look like noise, and to get a message through a noisy channel, you do the same thing.
• Like when you hear the sounds (or see the motions of) words and decode them. (I know too little to understand whether that is accurate or dumb.)
on February 19, 2012 at 5:47 am | Reply Philonus Atio
Regarding ” I wonder whether modern cryptanalysis would be able to break it.”
I broke it in about 1 hour and I’m no expert in cryptanalysis. It is weak.
• on February 22, 2012 at 10:40 pm | Reply John Smith
I would be very interested in knowing how you did it.
Can you please Elaborate?
□ on February 24, 2012 at 2:24 am Phionus Atio
Sure. Here is a brief outline of strategy for how I broke it. Nash’s machine is essentially a linear feedback shift register (LFSR).
The LFSR operates as a weak pseudo-random number generator which is exclusive-ORed (XOR) with the plaintext to produce ciphertext. All you need to do is predict the output of the generator.
It cycles in less than 256 bits (i.e. the period). I used ‘known plaintext attacks’ to analyze the behavior of the LFSR. I created an equivalent LFSR using a variation on the Berlekamp-Massey
algorithm. Nash’s machine is weak and easy to predict because it is linear.
The rest is left as an exercise for the reader (there are a few minor wrinkles). Happy cracking.
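For the curious, the key step Atio mentions can be sketched in a few lines. This is my own toy GF(2) Berlekamp-Massey routine, not Atio's code and not Nash's machine; it finds the length of the shortest LFSR that reproduces a given keystream:

```python
def berlekamp_massey(s):
    """Length L of the shortest binary LFSR that generates bit sequence s."""
    n = len(s)
    c, b = [1] + [0] * n, [1] + [0] * n  # current / previous connection polys
    L, m = 0, 1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i correctly?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
        else:
            t = c[:]
            for j in range(n - m + 1):
                c[j + m] ^= b[j]
            if 2 * L <= i:           # register too short: grow it
                L, b, m = i + 1 - L, t, 1
            else:
                m += 1
    return L

# Keystream from a 3-stage LFSR with recurrence s[n] = s[n-2] XOR s[n-3]:
stream = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]
print(berlekamp_massey(stream))  # 3
```

If the machine really reduces to an LFSR, as Atio says, the linearity is exactly the weakness: every output bit is a linear function of the initial state, so roughly 2L keystream bits exposed by known plaintext pin down a register of length L.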
Reblogged this on Vcjha's Blog.
“It seems reasonable to assume that he would have shared the latter in his letter if he had really had this insight.”
No, it’s fundamentally irrational to assume that. You’re correct that the letter needs to be evaluated within the context on his time; it also needs to be evaluated with author’s purpose in mind.
There is nothing in that letter that even hints that his purpose was to “foresee” anything or to give a complete theoretical overview of the subject he was discussing. The author’s purpose is to send
a practical note to a government agency in order to generate interest in his ideas because he thinks he can help the country with them. His concern is that no one at the NSA will take him seriously,
a concern that in hindsight seems well-founded. It’s grossly unfair in that context that you then come along and criticize him decades later for not being complete enough. It’s a handwritten letter
for crying out loud, not a phd thesis.
• “it’s fundamentally irrational” … “It’s grossly unfair in that context that you then come along and criticize him”
I think this discussion should not be that emotional :-) I am far from criticizing Nash for writing a letter decades ago. All I am saying is that the letter does not seem to provide evidence for
him anticipating computational complexity theory as suggested in the original post. You could be right that his letter does not provide the best basis for judging this (since there can be many
reasons for him to not write all that he knew in all detail). Then all we can do is to wait for more conclusive historic material to appear.
on February 20, 2012 at 3:41 am | Reply Geoffrey Watson
This is very interesting as a historical document, but not sure that it supports the interpretations being put on it.
It is not noteworthy that people working in the field anticipate ideas that only later become standard theories (look at the history of any advance in maths or science). From Goedel’s 1956 letter and
this Nash letter (with its reference to Prof. Huffman working towards similar objectives) a reasonable historical conclusion is that these ideas were just going around in the usual way.
The “conjecture” is a pretty muddled bit of thinking. It presumably means that Nash thinks that there are such exponentially hard cyphers, but to convert “almost all sufficiently complex types of
enciphering, especially where the instructions given by different portions of the key interact complexly with each other in the determination of their ultimate effects on the enciphering” into a
prescient anticipation of complexity theory is a big ask.
Markus Kroetzsch and Geoffrey Watson make important points in their earlier posts, and I’d encourage people to read them. It’s hard to know exactly what Nash was thinking from just these letters, but
it impressed me as similar to some of my own early thoughts on cryptography — but before I had really invested significant time and come up with good results.
In addition to Kroetzsch and Watson’s caveats, I’ll note what appears to be another: Breaking a simple substitution cipher takes a constant amount of time — not dependent on the key size (if one even
can think of it as having a variable key size).
If anyone thinks I’ve missed something, please chime in. I read the letters fairly quickly and might have missed a hidden gem.
Martin Hellman
on February 20, 2012 at 2:55 pm | Reply ali
i want to share my chapter in artificial organs
i have another chapter in liver cells
any one interested to invite me for his/or her book
on February 20, 2012 at 9:21 pm | Reply unruh
P, NP are really completely irrelevant for crypto. All crypto systems have a small finite key, and the breaking of them is constant (as Hellman points out for substitution cyphers). A problem could go as constant for key lengths less than 10^(10^10) and exponential thereafter. It would be "exponential" as far as P, NP, … were concerned, but for crypto it would be useless and would go as a constant since we are never going to use keys that long. Or the key could go as r^(10^10) and the problem would be considered polynomial, but it would be far far stronger than almost any exponential problem as far as crypto is concerned. That the NP hard problems we have looked at easily happen to also be something like exponential for small key lengths is more the "drunk and the lightpost" than anything having to do with the inherent features of the problem. Thus, even if P=NP, it would make no difference to crypto, unless that proof also showed how all P problems could be reduced to, say, linear P problems with small coefficients.
Noam, thank you very much for sharing this information in an interesting, annotated form. Ronald Rivest’s implementation of Nash’s Cryptosystem is indeed quite intricate and very well coded and
commented, clear to understand.
I’ve made a full HTML transcription of the PDF: http://www.gwern.net/docs/1955-nash
It’s comforting to know that in today’s day and age, people of this caliber of genius can just start a blog / Facebook page / YouTube channel about their amazing mathematical ideas and how Ed Harris
won’t stop following them around.
Reblogged this on "Random" thoughts and commented:
Nash and cryptography…
Amazing, Reblogged on my blog
on March 13, 2012 at 1:23 am | Reply Vinay
This is amazing.. John Nash’s work has always been an inspiration.
Reblogged this on Luay Baltaji's blog and commented:
A fascinating letter from John Nash to the NSA in 1955: can he prove that “conjecture” security is computationally unbreakable?
Reblogged this on Code through the Looking Glass.
on April 29, 2012 at 3:09 am | Reply Anonymous
A nice text, except for this sentence: “Not bad for someone whose ‘best known work is in game theory’”.
Awesome! Did he send it when he was having delusional problems?
on February 1, 2013 at 8:39 pm | Reply Thomas Rivera
Great article, great blog!
What! The NSA used Nash’s stuff without telling him? There must have been some serious security reason. Anyway, now that they have given the work credit to him, there is nothing to complain about.
on October 24, 2013 at 5:38 am | Reply wrecktafire
With 20-20 hindsight, one may imagine in the letter the roots of the Feistel cipher: one way functions based on key-driven combinations of simple functions, cascading to arbitrary depth.
Or maybe it’s just my imagination.
Reblogged this on Subhayan Roy Moulick.
47 Responses
Transformations on Functions and their Geometric Representation.
Geometric transformations (operations) on functions are a difficult concept for students, and getting students to grasp these abstract ideas proves even more challenging for the tutor. To do so, one must convert these concepts into a concrete visualization and, at the same time, connect the operations performed on the function with the result. Such topics are best taught with visualization tools such as Geometer's Sketchpad.
This past semester I had the opportunity to work with a high school student in this area. While there are many types of transformations, such as shears and rotations, we covered only the basic transformations: dilation/shrinking, reflections, and shifts. Drawing on my skills, I was able to teach the concepts very quickly. Using Sketchpad we covered this topic in two lessons and effectively solidified his understanding of transformations. Sketchpad enabled us to focus on manipulating the functions to see the transformation, eliminating tedious sketching by hand.
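The same ideas can also be checked numerically alongside Sketchpad. Here is a minimal sketch of my own (the names and numbers are illustrative, not from the lesson): the general form a·f(b(x − h)) + k packages dilation/shrinking (a and b), reflection (a or b negative), and shifts (h and k):

```python
def transform(f, a=1, b=1, h=0, k=0):
    """Return x -> a*f(b*(x - h)) + k: dilate by a and b, shift by h and k,
    and reflect across an axis when a or b is negative."""
    return lambda x: a * f(b * (x - h)) + k

f = lambda x: x ** 2                # parent function y = x^2
g = transform(f, a=-2, h=3, k=1)    # reflect, stretch by 2, shift right 3 up 1

print(g(3), g(4))  # vertex maps to (3, 1); g(4) = -2*(1)**2 + 1 = -1
```

A student can drag the corresponding parameters in Sketchpad and compare what appears on screen with values like these.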
All shapes and images around us, from mountains and trees to snowflakes, no matter how complex, can be approximated geometrically by functions.

For anyone learning geometry, particularly Euclidean geometry, or any advanced subject involving analysis of functions up to calculus, Geometer's Sketchpad is an invaluable tool. The link below showcases a sample of the work of one student in the area of function transformations. The image created is a simplified graphical view of a clown face using some basic functions and transformations. No drawing required, just math. This is merely a small taste of the power of geometry!
To view this file, copy and paste the link in your browser. http://dl.dropbox.com/u/17545844/Transformations%20for%20a%20Face_1%20%281%29.pdf
Fantasy Baseball Cafe
Coors effect.
Help me out here, fellas. I was reading something on the Coors field effect on hitting and although it makes perfect sense to me on paper, it makes no sense to me in reality. To sum up the
calculations, there is a 20% bump a Rockie hitter gets playing at Coors, but on the road there's a -19% "hangover effect" for those same hitters. So, in short, there's only a 1% bump a player will
get by playing for the Rockies. Am I correct in my calculations when I say a .280, 20 homer, and 90 RBI player in a normal hitting park will become a a whopping .283, 20, and 91 after being traded to
the Rockies? I don't know about you fellas, but this seems like a bunch of matchematical mumbo-jumbo to me. What say you baseball gurus out there?
Erboes wrote: [quoted in full above]
I'm not a mathematical or statistical whiz by any means but I would think the names Castilla, Bichette, Galaragga, Cirillo, Payton, Pr. Wilson, etc., etc. would fly in the face of the argument that
Coors doesn't effect hitter's #'s dramatically.
Here's the full piece I was reading:
Here's the full-blown calculation:
In order to determine how much Rockies hitters' numbers are boosted by Coors, we need to know three things:
A) The conventional Coors park factor.
B) The extent to which the hangover effect hurts our hitters' overall numbers.
C) The extent to which the hangover effect affects the conventional Coors park factor.
The overall hangover-adjusted Coors park factor, then, would be A-B-C.
We know that A is approximately +20%, as Lou mentioned.
What about B? Well, according to the link that Lou gave, the hangover effect reduces hitters' road OBP and SLG from .339/.411 to .302/.363. Using OBP*SLG, we then find that that is equal to a 27%
reduction in runs created in road games, which translates to a 13% reduction overall (I rounded down because most of our hitters' PA's come at home).
Now, onto the tricky one. How does the hangover effect affect the conventional park factor? Let's put it in equation form:
RSH: Runs scored at home
RAH: Runs allowed at home
RSR: Runs scored on the road
RAR: Runs allowed on the road
The conventional park factor is calculated as ((RSH+RAH)/(RSR+RAR)+1)/2. Set all four variables equal to one another to calculate the park factor of a neutral park; obviously, this is equal to 1.
Now, to account for the hangover effect. Let's first set all four variables equal to 4.61, the average number of runs scored per game in the 2003 NL. Now, the hangover effect reduces RSR by 27%, so
we replace RSR with 4.61/1.27, or 3.63. Now we calculate the park factor, and come up with 1.06. That means that the hangover effect, by itself, inflates the conventional park factor by 6%. This is
our value for C (in my first equation).
So we have our answer. The park factor equals A-B-C, where A = 20%, B = 13%, and C = 6%, giving 20 - 13 - 6 = 1. Therefore, Coors inflates Rockies hitters' numbers by 1 percent.
Obviously, that's not an exact calculation; we don't know enough about the hangover effect (or about the Coors inflation effect) to come up with a precise answer. But it's pretty close, and it
certainly goes to show that any system that does not account for the hangover effect is completely worthless when it comes to evaluating Rockies hitters.
End quote
I think it has to be a bunch of garbage too, but maybe some of the statistical guys here can shed some light on this.
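For what it's worth, the park-factor step in the quoted calculation can be reproduced directly; every number below comes from the post itself (the 4.61 R/G 2003 NL average and the 27% road reduction):

```python
# Conventional park factor: ((RSH + RAH) / (RSR + RAR) + 1) / 2,
# with all four rates set to the 2003 NL average of 4.61 R/G.
rsh = rah = rar = 4.61
rsr = 4.61 / 1.27                                      # road scoring cut 27% by the hangover

pf_neutral = ((4.61 + 4.61) / (4.61 + 4.61) + 1) / 2   # a neutral park comes out to 1.0
pf_hangover = ((rsh + rah) / (rsr + rar) + 1) / 2      # about 1.06, the post's value for C
```

So the post's individual steps check out arithmetically; whether A-B-C is the right way to combine them is the part worth arguing about.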
Erboes wrote: Here's the full piece I was reading:
Here's the full-blown calculation:
In order to determine how much Rockies hitters' numbers are boosted by Coors, we need to know three things:
A) The conventional Coors park factor.
B) The extent to which the hangover effect hurts our hitters' overall numbers.
C) The extent to which the hangover effect affects the conventional Coors park factor.
The overall hangover-adjusted Coors park factor, then, would be A-B-C.
We know that A is approximately +20%, as Lou mentioned.
What about B? Well, according to the link that Lou gave, the hangover effect reduces hitters' road OBP and SLG from .339/.411 to .302/.363. Using OBP*SLG, we then find that that is equal to a 27%
reduction in runs created in road games, which translates to a 13% reduction overall (I rounded down because most of our hitters' PA's come at home).
Now, onto the tricky one. How does the hangover effect affect the conventional park factor? Let's put it in equation form:
RSH: Runs scored at home
RAH: Runs allowed at home
RSR: Runs scored on the road
RAR: Runs allowed on the road
The conventional park factor is calculated as ((RSH+RAH)/(RSR+RAR)+1)/2. Set all four variables equal to one another to calculate the park factor of a neutral park; obviously, this is equal to 1.
Now, to account for the hangover effect. Let's first set all four variables equal to 4.61, the average number of runs scored per game in the 2003 NL. Now, the hangover effect reduces RSR by 27%,
so we replace RSR with 4.61/1.27, or 3.63. Now we calculate the park factor, and come up with 1.06. That means that the hangover effect, by itself, inflates the conventional park factor by 6%.
This is our value for C (in my first equation).
So we have our answer. The park factor equals A-B-C, where A=+20%, B=-13%, and C=-6%. Therefore, Coors inflates Rockies hitters' numbers by 1 percent.
Obviously, that's not an exact calculation; we don't know enough about the hangover effect (or about the Coors inflation effect) to come up with a precise answer. But it's pretty close, and it
certainly goes to show that any system that does not account for the hangover effect is completely worthless when it comes to evaluating Rockies hitters.
End quote
I think it has to be a bunch of garbage too, but maybe some of the statistical guys here can shed some light on this.
Ugh. I don't have the patience this morning to try and read through all that.
I'm not sure what they mean by hangover effect. (Is there a link to the article missing in the above post?) I do remember reading last year that when Rockies hitters start a road trip it takes them a
few games to adjust to seeing curves break again. But after a few games they start hitting curves again.
Is the claim that Rockies hitters actually hit worse in other ballparks than every other hitter? So much worse that over a season we shouldn't expect any bump in stats from hitting at Coors? That
seems ... odd.
"The game has a cleanness. If you do a good job, the numbers say so. You don't have to ask anyone or play politics. You don't have to wait for the reviews." - Sandy Koufax
Ramble2 wrote: Is the claim that Rockies hitters actually hit worse in other ballparks than every other hitter? So much worse that over a season we shouldn't expect any bump in stats from hitting
at Coors?
Now that right there is a very interesting question.
Yes doctor, I am sick.
Sick of those who are spineless.
Sick of those who feel self-entitled.
Sick of those who are hypocrites.
Yes doctor, an army is forming.
Yes doctor, there will be a war.
Yes doctor, there will be blood.....
Agreed with Erboes and ramble; at first glance, this does seem odd. Time to take a closer look...
Yep, that's what they're saying. At first glance, it seems ludicrous, since the Rockies have led the NL in runs and average pretty much every season since their inception except for the last two
(humidor?), and I don't think they've had that good of talent. I will look more closely at it when I find the time. What's the old saying? There are three types of lies: lies, damn lies, and
statistics. I think this may apply here.
Ramble2 wrote: Is the claim that Rockies hitters actually hit worse in other ballparks than every other hitter? So much worse that over a season we shouldn't expect any bump in stats from hitting
at Coors?
I would think the home/road splits tend to even things out, but if you have daily transactions then you can get the good without the bad. Example: I used Jay Payton as my 4th outfielder and played him
for home games and select road games (since he wasn't completely terrible on the road). I got really good stats from him at home and missed all of the garbage on the road.
back from the dead
Whhhhhew! Some interesting stuff. I'm browsing through some players' home and away splits and came upon this doozy.
Helton at home (career):
Avg -- .378
HR -- 134
RBIs -- 453
Helton on the road:
Avg -- .294
HR -- 85
RBIs -- 287
To give you a better idea how good Helton is at home: if he played all his games at Coors he'd average .378, with 45 homers and 117 RBIs. If he played all his games on the road he'd be at .294
with 28 homers and 96 RBIs. My guess would be, if he were traded to a team with an average park, he'd probably be a .310 hitter with about 30-35 homers and 100-110 RBIs, which leads me to believe
that there is a "hangover effect", but not anywhere near close enough to negate Coors' advantages.
I think looking at a player's numbers before and after Coors is probably more accurate. Wilson, for example, averaged about .265, 30 homers, and 95 RBIs if his numbers are averaged over 600 at
bats. In the 600 at bats he had last season he hit .282, 36, and 141 in total. His road splits were more aligned with his career averages if you pro-rate them over an entire 600 at bats: .260, 15, and
Looking at Wilson, it seems like there wasn't anything below normal except for his average on the road. This may not be enough of a sample size, but I'm beginning to believe this "hangover effect" is
highly inflated.
This also gets me thinking. Wilson's road numbers were pretty much his average over his career. Maybe he didn't have a great season last season, but just an average one that was highly inflated by
Coors. Just something to think about, I guess.
I still don't know what a hangover effect is.
"The game has a cleanness. If you do a good job, the numbers say so. You don't have to ask anyone or play politics. You don't have to wait for the reviews." - Sandy Koufax
GRE Practice Tests Math revision quiz
1. A person walking at the rate of 3 km/hr crosses a bridge in 20 minutes. What is the length of the bridge?
a. 1000 meter
b. 1500 meter
c. 1200 meter
d. 100 meter
e. 500 meter
2. A motorist is traveling at a speed of 90 kmph. What distance will he cover in 20 seconds?
a. 300 metre
b. 250 metre
c. 200 metre
d. 300 metre
e. 500 metre
3. A man reaches a place 40 minutes late if he walks at 3 miles/hour, and 30 minutes early
if he walks at 4 miles/hour. Find out how far away the place is.
a. 13 miles
b. 20 miles
c. 5 miles
d. 14 miles
e. None of the above
4. Two cars start at the same time from point A and point B and proceed towards each other at rates
of 16 miles/hour and 21 miles/hour respectively. When they meet, it is found that one has traveled
60 miles more than the other. Find the distance between the two points.
a. 220 miles
b. 148 miles
c. 444 miles
d. 384 miles
e. None of the above
5. A train crosses a man in 6 seconds at a speed of 60 km per hour.
Find out the length of the train.
a. 200 meters
b. 100 meters
c. 125 meters
d. 220 meters
e. 120 meters
6. Find the combined length of two trains if one traveling at 40 km per hour crosses
another traveling in the same direction at 30 km per hour in 20 seconds.
a. 150 meters
b. 200 meters
c. 200/3 meters
d. 500/9 meters
e. 100 meters
7. A train is 20 minutes late when running at 7/8 of its usual speed. What is the usual
time taken by the train to cover the journey?
a. 200 minutes
b. 100 minutes
c. 140 minutes
d. 220 minutes
e. 50 minutes
8. A and B are two stations 400 km apart. A train starts from A at 9 p.m. and travels towards
B at 70 kmph. Another train starts from B at 10 p.m. and travels towards A at 40 kmph. At what
time do they meet?
a. 8 p.m.
b. 1 a.m.
c. 4 a.m
d. 9 p.m.
e. 10 a.m.
9. A train travels at an average of 80 miles per hour for 3 1/2 hours and then travels
at a speed of 60 miles per hour for 2 1/2 hours. How far did the train travel in the entire 6 hours?
a. 200 miles
b. 195 miles
c. 500 miles
d. 430 miles
e. 400 miles
10. A train covers a distance of 20 km in 15 minutes. If its speed is decreased by 20km/hr,
what is the time taken by it to cover the same distance?
a. 10 minutes
b. 15 minutes
c. 20 minutes
d. 40 minutes
e. 80 minutes
11. Four persons are walking from a place A to another place B. Their speeds are in the ratio of
3:4:6:8. What is the ratio of the times taken by these persons to reach B?
a. 3:4:6:8
b. 8:4:6:3
c. 8:6:4:3
d. 3:4:8:6
e. 4:3:6:8
12. A and B starting from the same place walk at the rate of 3 km/hr and 3.8 km/hr respectively.
What time will they take to be 6.4 km apart, if they walk in the same direction?
a. 32 hours
b. 8 hours
c. 16 hours
d. 40 hours
e. 15 hours
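Several of the keyed answers can be confirmed with straightforward unit conversions; a quick check of questions 1 through 4:

```python
# Q1: 3 km/h for 20 minutes, converted to metres
bridge_m = 3 * 1000 * 20 / 60            # 1000 m, option (a)

# Q2: 90 km/h for 20 seconds, converted to metres
motorist_m = 90 * 1000 / 3600 * 20       # 500 m, option (e)

# Q3: d/3 - d/4 = 70/60 hours (40 min late versus 30 min early)
place_miles = 12 * 70 / 60               # 14 miles, option (d)

# Q4: the 21 - 16 = 5 mph difference opens a 60-mile gap
hours = 60 / (21 - 16)                   # they meet after 12 hours
ab_miles = (16 + 21) * hours             # 444 miles, option (c)
```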
What others think about GRE Practice Test - Math revision questionnaire - Time and Distance Problems
By: quiz girl on 4/16/2014
hey folks! i like the amazing quizzes on quizmoz. it increases your general knowledge
By: Teresa on 4/15/2014
I took the quiz. It let me know that I failed. But I wasn't able to see the correct answers. It would be great to see what the answers are so I can learn.
By: Teena on 4/14/2014
I love this quiz Website. This is the best free quiz site.
By: Hot girl on 4/13/2014
I love answering Quiz Questions
By: Kayla on 4/12/2014
I think this is a great quiz full of knowledge and information.
By: Quiz Game Player on 4/11/2014
One day I will crack all the Impossible quizzes in the world
By: Nancy on 4/10/2014
I would like to see a complete page of horror movie quizzes for the horror genre fans!
By: Tracy on 4/9/2014
Great test. A nice way to gauge one's knowledge
By: Hannah on 4/8/2014
Enjoyed it, and learned a lot about general knowledge
By: Aumkar on 4/7/2014
By: Shannon on 4/6/2014
I have never seen such an excellent quiz website before this.
By: Tallitha on 4/5/2014
By: Personality Quiz Man on 4/4/2014
I love quizzes. Personality Quizzes are my favorite.
By: Roger on 4/3/2014
I love Quiz Games. QuizMoz is an excellent Quiz site
By: Tommy on 4/2/2014
Great site. Good learning and fun.
By: Reema on 4/1/2014
Great Quiz! The quizmaster laid out this quiz so that even beginners could learn more
By: Samantha on 3/31/2014
This is so cool. Even though I really did not know some of the questions, it was still fun!
By: Laura on 3/30/2014
I appreciate the time and effort that the quiz maker put into the quiz
By: Haris on 3/29/2014
These quizzes will increase my vocab skills. Frequently I have used this website to check my vocab strength.
By: Penny on 3/28/2014
NICE WEBSITE, great quiz!
quadratic formula/completing the square?
June 10th 2009, 12:05 PM
quadratic formula/completing the square?
First off, this is for a review for my class due tomorrow. I lost my notes and I am completely lost. I don't know if this is algebra or not; there used to be a help board that I see is gone,
which is a shame (or maybe I'm blind?).
A rock is thrown into the air from a bridge and falls to the water below. The height of the rock, h metres, relative to the water t seconds after being thrown is given by:
h(t) = -5t^2 + 10t + 35
a) Determine the maximum height of the rock above the water.
b) How long does it take the rock to reach the maximum height?
c) After how many seconds does the rock hit the water?
Thanks in advance, I really hate exam time, so stressful. :P
June 10th 2009, 01:08 PM
Are you sure that's all the information that the question includes? Usually they will give the height of the bridge or something similar, because both c) and a) are impossible to solve without
knowing the height of the initial throw above the water.
Also, what level of math is this for? If you have taken intro calculus, I believe differentiation would really help here. We can find b) right now by taking the derivative of the function and
setting it equal to 0 to find the maximum.
h(t) = -5t^2 + 10t + 35
h'(t) = -10t + 10 = 0
-10t / -10 = -10 / -10
t = 1
i.e. It takes the rock 1 second to reach maximum height.
June 10th 2009, 01:31 PM
First off, this is for a review for my class due tomorrow. I lost my notes and I am completely lost. I don't know if this is algebra or not; there used to be a help board that I see is gone,
which is a shame (or maybe I'm blind?).
A rock is thrown into the air from a bridge and falls to the water below. The height of the rock, h metres, relative to the water t seconds after being thrown is given by:
h(t) = -5t^2 + 10t + 35
a) Determine the maximum height of the rock above the water.
b) How long does it take the rock to reach the maximum height?
c) After how many seconds does the rock hit the water?
Thanks in advance, I really hate exam time, so stressful. :P
This is definitely Calc 1.
Because no instructor would give you this question in algebra.
To find the maximum height of the thing, find the max of the function. You know how to find the maxes of functions, right? Just set f'(x)=0.
set it to zero
the thing is at a max at t=1
Plug 1 into the original position function to know how high it is gonna go, then add that height to the height of the bridge and let that sum be equal to h(t), solve for t, and you'll have how
much time it took to reach the ground.
One more thing. This problem could have been solved with plain old algebra for the most part. If the guy wasn't on a bridge, you could just solve h(t) for 0 and then the time that the height
would have been attained would simply be the average. You could have done the whole thing without calc, but I don't want to get long winded.
June 10th 2009, 01:38 PM
No that is all my teacher gave me. I am in a Grade 11. University/College Functions and Applications class.
Thank you for your help, though :D
June 10th 2009, 02:02 PM
Not true. This can be done in an algebra class. (Wink) The maximum/minimum value of a parabola is at the vertex. Since the leading coefficient is negative, it will be a maximum. To find the
maximum height put it in the vertex form $y = a(x - h)^2 + k$ by completing the square:
\begin{aligned}
h(t) &= -5t^2 + 10t + 35 \\
h(t) &= -5(t^2 - 2t) + 35 \\
h(t) &= -5(t^2 - 2t + 1) + 35 + 5 \\
h(t) &= -5(t - 1)^2 + 40
\end{aligned}
The vertex in $y = a(x - h)^2 + k$ would be (h, k), so the maximum height is 40m at t = 1 sec.
June 10th 2009, 02:10 PM
Find the zeros of the function:
\begin{aligned}
-5t^2 + 10t + 35 &= 0 \\
-5(t - 1)^2 + 40 &= 0 \\
-5(t - 1)^2 &= -40 \\
(t - 1)^2 &= 8 \\
t - 1 &= \pm \sqrt{8} \\
t &= 1 \pm 2\sqrt{2} \\
t &= 1 + 2\sqrt{2} \approx 3.83 \text{ sec} \\
t &= 1 - 2\sqrt{2} \approx -1.83 \text{ sec}
\end{aligned}
Reject the second solution because it's negative. It'll take about 3.83 seconds before the rock hits the water.
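Both derivations above, the calculus one and the vertex-form one, can be checked numerically:

```python
a, b, c = -5.0, 10.0, 35.0               # h(t) = -5t^2 + 10t + 35

t_max = -b / (2 * a)                     # vertex time: 1 s
h_max = a * t_max**2 + b * t_max + c     # maximum height: 40 m

# positive root of h(t) = 0, i.e. when the rock hits the water
disc = b * b - 4 * a * c                 # discriminant: 800
t_hit = (-b - disc**0.5) / (2 * a)       # 1 + 2*sqrt(2), about 3.83 s
```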
June 10th 2009, 02:27 PM
Thank you for all of your help. As I read through your posts my memory of the subject came back, thanks! :D
Camera Vectors in R5 - how to use, READ THIS! [Archive] - SA-MP Forums
24/03/2010, 06:20 PM
First of all, discard the "UP" vector, the only vector of use currently is the FRONT vector.
The FRONT vector is simply a unit length (1 unit) vector direction in which the player is aiming from the camera position.
How to make sense of this..:
Basically, if you add camera front vector to the camera position vector, you will get another point in game space which is exactly 1 unit apart from the player and is in the direction where the
player is looking at.
That camera target point can be then moved forward and backwards to have a point of reference against a specific target and see if the player is aiming towards it.
Example usage:
1. calculate distance from camera position to target check position (say a specific object)
2. rescale the front vector so its length will be the distance to the object to check. THIS IS VERY EASY: since the vector is always of 1.0 length, you can simply multiply the front vector XYZ values by
the distance from #1. We will call this vector a tempcamtarget vector.
3. Now since you have the tempcamtarget point that is of known distance away from the target point and towards what the player is looking at, you can do a simple distance check from the tempcamtarget
to the object position, the distance returned will be how far off from target object the player is looking, so you can know if he's looking at a certain object, or looking away from it.
Float:DistanceCameraTargetToLocation(Float:CamX, Float:CamY, Float:CamZ, Float:ObjX, Float:ObjY, Float:ObjZ, Float:FrX, Float:FrY, Float:FrZ) {
new Float:TGTDistance;
// get distance from camera to target
TGTDistance = floatsqroot((CamX - ObjX) * (CamX - ObjX) + (CamY - ObjY) * (CamY - ObjY) + (CamZ - ObjZ) * (CamZ - ObjZ));
new Float:tmpX, Float:tmpY, Float:tmpZ;
tmpX = FrX * TGTDistance + CamX;
tmpY = FrY * TGTDistance + CamY;
tmpZ = FrZ * TGTDistance + CamZ;
return floatsqroot((tmpX - ObjX) * (tmpX - ObjX) + (tmpY - ObjY) * (tmpY - ObjY) + (tmpZ - ObjZ) * (tmpZ - ObjZ));
}
The result of the function is how close to the object centre the player is looking. Use it like this: if the value is < 1.0, then the player is looking at the object within a target diameter of 2.0
units (for a 1.0 radius, the diameter is 2.0).
Code is untested, but compiles and should work.
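The same geometry ports easily to other languages; here is a quick Python sketch of it (the function and variable names are mine, not part of the SA-MP API):

```python
import math

def aim_miss_distance(cam, target, front):
    """How far the player's line of sight passes from `target`:
    scale the unit front vector out to the camera-to-target range,
    then measure the gap between that aimed point and the target."""
    rng = math.dist(cam, target)                     # camera-to-target distance
    aimed = tuple(c + f * rng for c, f in zip(cam, front))
    return math.dist(aimed, target)

# looking straight down +X at a target on the X axis: dead on (0.0)
on_axis = aim_miss_distance((0, 0, 0), (10, 0, 0), (1, 0, 0))
```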
Student Solutions: Birthday Trip
Five Dimensions: Mathematics, Technology, Teaching, Learning, and Assessing
Below are incorrect student solutions to Birthday Trip. These solutions were submitted when this problem was used as the Forum's Math Fundamentals Problem of the Week.
(If you'd like to see some correct student work on the same problem, accompanied by pretty good explanations, you can read the solution posted in our archives. You will need to have your Teacher
Membership information handy to view the solution.)
Gia, age 10
Jade's family will travel 362.5 miles to get to Aunt Mazie's house.
To solve Birthday Trip, I followed several steps.
First, I noticed that for the first two days,
fractions were used to describe the distances.
I added the two fractions together.
( 1/2 + 2/3 )
My answer came out to three fifths. ( 3/5 )
Next I realized that the remaning
number of miles is given to me.
This number is 145. (miles)
I then figured out that
145 is 2/5 of the final answer.
To get the remaining
three fifths ( 3/5 ), I divided
145 by 2 and got 72.5.
I did this because now
I have 1/5 of the final total.
I multiplied 72.5 by 5
and got 362.5.
Rico, age 10
The answer i came to is 48 and 1/3 miles
I times 145 which is the miles they have left times 2/3 and times 1/2
to get the original miles
Aleda, age 9
The trip to Aunt Mazie's is 146 and two-thirds mile.
The answer I got for this problem was 146 and two-thirds mile. The
reason I got this answer was because on the first day of the
family's trip they drove halfway which was 0.5. The next day the
family drove two-thirds which is 0.6 repeating and rounds to 0.7.
The last day they drove the rest of the way which was 145 miles. By
adding all of those numbers the result I got was 146 and two-thirds
mile. The reason I added those numbers together was because you
wanted to find how many miles all together the family's trip was, as
it said in the problem.
Jose, age 12
Aunt mazie's house is 652 and one half miles away.
I got 652 and one half miles because 1 third is 145 miles and three of 145
miles is 435 miles and half of 435 is 217 and one half. And if you add it all
up you get 652 and one half. Thats how I got my answer.
Troy, age 10
The trip was 304.5 miles long.
I took 145 and mutiplied it by .5 and adde 72.5 to 145. and then
multiplied 145 by .6 then added 87 to 217.5 then that equaled 304.5
Ulrika, age 11
211 miles away.
I put 2/3 on my paper then i put that into 100. i got 66%. Then i
wrote down 145 and added 66 and got 211 miles away.
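For facilitators comparing these against the intended answer: assuming the problem says the family drove half the distance on day 1, two-thirds of the remaining distance on day 2, and the final 145 miles on day 3 (my reading, since the problem statement itself is not reproduced on this page), the total falls out with exact fractions:

```python
from fractions import Fraction

left_after_day1 = Fraction(1, 2)                     # half the trip remains
left_after_day2 = left_after_day1 * Fraction(1, 3)   # day 2 covers 2/3 of that
total_miles = 145 / left_after_day2                  # 145 mi is 1/6 of the trip
```

None of the student answers above reach this; the common slip, as in Gia's work, is combining 1/2 and 2/3 as if both fractions were taken of the same whole.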
Send comments to the facilitators: Annie, Cynthia, Steve, Riz, and Suzanne
Miami Beach Math Tutor
...In my experience, I have also taught beginning Spanish. Additionally, in my music education classes, I have taught music from around the world, in Spanish and French, as well as English. We
learn basic Spanish and French vocabulary in class.
30 Subjects: including prealgebra, algebra 1, reading, SAT math
...I began playing piano at the age of 4 and began competing when I was 7. The competitions began at the state level in Florida with the FFMC (Florida Federation of Music Clubs) and grew by the
time I was 10 into national and international competitions ranging from the Cleveland International Piano...
39 Subjects: including algebra 2, piano, calculus, Italian
...So come on aboard, I can show you the way. I have turned math into a fun and successful exercise for middle school, high school and college aged students from all around the country and even
some from other countries. I can teach online and offline with great results.
30 Subjects: including SAT math, probability, linear algebra, discrete math
...I currently go to the Honors College at Miami Dade College where I maintain an A average. I have been known to be an outstanding student who is always seeking to make a difference. I am very
good at helping others understand what they struggle with.
16 Subjects: including calculus, elementary (k-6th), vocabulary, grammar
...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping
others and always do my best to make sure the information is enjoyable and being presented effectively...
30 Subjects: including algebra 2, biology, calculus, prealgebra
Math Library News
I just discovered this series and found it delightful. I sometimes say "automagically" but these videos coin the word "automathically." Enjoy.
Doodling in Math Class: Spirals, Fibonacci, and Being a Plant [1 of 3]
Doodling in Math Class: Spirals, Fibonacci, and Being a Plant [2 of 3]
Doodling in Math Class: Spirals, Fibonacci, and Being a Plant [Part 3 of 3]
"This pattern is not just useful, not just beautiful, but inevitable. This is why science and mathematics are so much fun." Hence the "automathically."
Pescadero ACT Tutor
Find a Pescadero ACT Tutor
...I always want to ensure good service, and so I offer a free consultation session as a starting point. This helps us get to know each other and establish a good relationship. I also greatly
appreciate feedback so I can continually improve as a tutor.
27 Subjects: including ACT Math, chemistry, calculus, physics
...I'm a patient tutor with a positive, collaborative approach to building mathematical skills for algebra, pre-calculus, calculus (single variable) and advanced calculus (multi-variable).
Pre-calculus skills are very valuable for success on the mathematics section of the SAT exam and the SAT Math...
22 Subjects: including ACT Math, calculus, statistics, geometry
...I've taught at Universities as well as at Public Elementary, Middle and High Schools and have won several awards and grants for my innovative, creative and hands-on teaching techniques. I
believe that every child can learn and excel at something, and that it is important to keep learning interes...
29 Subjects: including ACT Math, chemistry, calculus, reading
...I have tutored over 100 hours with students for test prep (including ACT, SAT, and PSAT). I apply methods and strategies that have been proven to increase student's scores. I am also able to
help with students who are in home school. I have assisted with a variety of subjects, including teaching 4th grade math in Spanish.
15 Subjects: including ACT Math, English, Spanish, geometry
...I have worked with a number of kids with learning challenges, including Aspergers, ADHD, and test anxiety. Please contact me to discuss your needs.I have a master's degree in chemical
engineering from MIT. For this I took several classes in differential equations.
26 Subjects: including ACT Math, chemistry, calculus, physics
Math Forum Discussions
Topic: FindRoot problem
Replies: 0
FindRoot problem
Posted: Sep 20, 1996 12:43 AM
Dear All,
I came to the conclusion that FindRoot sometimes behaves a bit strangely. Say I want to calculate:
x=x/.First[FindRoot[(x-Pi),{x,-4,4},AccuracyGoal->20, WorkingPrecision->30,
The result of this is a MachineNumber!!! It seems the problem is related
to the evaluation of Pi. And the hint N[Pi,30] does not help!!
Then I tried:
Now it works correctly. Has anybody had a similar problem?
I tried two methods: secant and Brent's. Both give the same
results. Can anybody explain this behaviour? This is important for my
Arturas Acus
Arturas Acus
Institute of Theoretical
Physics and Astronomy
Gostauto 12, 2600,Vilnius
E-mail: acus@itpa.lt
Fax: 370-2-225361
Tel: 370-2-612906
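As an aside, the secant iteration the post mentions is easy to sketch in ordinary double precision; this only illustrates the method itself and says nothing about Mathematica's WorkingPrecision behaviour, which is what the question is actually about:

```python
import math

def secant(f, x0, x1, tol=1e-14, max_iter=100):
    """Plain secant iteration for a root of f, starting from two guesses."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # flat secant; cannot take another step
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x - math.pi, -4.0, 4.0)   # converges to pi
```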
How get the probality of (S|R) ?
I don't understand why we assume a 1/1 probability of S|R instead of R|R. Only one S state occurs, and in this state you don't know what the prior state is. Why assume that the prior state was S
and not R?
asked 15 Nov '11, 23:35
P(S|R) is the probability that the day that follows a rainy day will be sunny.
It probably would have been clearer if he had written something like P(day_(t+1)=S | day_(t)=R)
answered 15 Nov '11, 23:39
L_McLean ♦
Ok thanks I was understanding wrong.
answered 16 Nov '11, 00:01
Glad I helped :)
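To make the notation concrete: P(S|R) is one entry of a two-state transition matrix, and each row (today's state) must sum to 1. The numbers below are illustrative assumptions, not values from any course:

```python
# P[(today, tomorrow)]: transition probabilities for a rain/sun chain
P = {
    ("R", "R"): 0.6, ("R", "S"): 0.4,   # after a rainy day
    ("S", "R"): 0.2, ("S", "S"): 0.8,   # after a sunny day
}

p_sun_after_rain = P[("R", "S")]        # this is P(S|R)
```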
Mobile Phone Locations
20 August 1998
Date: Tue, 13 Aug 1996 15:23:52 +0200
To: jya@pipeline.com
From: interception <interception@ii-mel.com>
Subject: MOBILE PHONE LOCATIONS
WINLAB, Rutgers University
New Brunswick, NJ 08855
AT&T Bell Laboratories
101 Crawfords Corner Road
Holmdel, NJ 07733
Authors: David Goodman; P. Krishnan; Binay Sugla
The mobility of phones in a cellular or Personal Communication
Service (PCS) environment introduces the problem of efficiently
locating the called phone. In this paper, we present an analysis
of the delay and number of messages transmitted in different
sequential and parallel search strategies, considering for the first
time the issue of queuing on radio paging channels. Our analysis
shows that parallel search may not reduce the time to find a mobile
phone if the parameters of the system are unfavorable. We also develop
an efficient algorithm for searching with minimum expected number
of messages when the location of the phone is given by a probability vector.
1 Introduction
In a traditional phone network, each phone is associated with a known
geographical location. The numbering scheme for these fixed phones
takes advantage of their known location. A relatively simple
mapping exists from the telephone number to its geographical
location. The introduction of the 800 number service utilized
an indirection in this mapping while still exploiting the property
that the final number is placed at a known geographical location.
The 700 number service, first introduced by AT&T, exploits the fact
that the called number may be mapped into one of several fixed
numbers using a logic and order specified by the customer.
The 500 number service is more advanced in that the customer can have
his/her calls forwarded to (virtually) any phone; however the
mapping between the customer's 500 number and a standard number must
be provided by the customer, e.g., by using a touch-tone phone.
The techniques used in the services mentioned above are inadequate
for the mobile environment since the location of the final destination
(viz. the called phone) is unknown and must be determined before
the call is completed.
In response to this need, several schemes have been employed and
suggested. A centralized paging scheme in which the called phone
number is broadcast and the called phone responds is inefficient
in the use of radio bandwidth. In the more recent techniques, the basic unit
of paging is the cell level. The problem now is to design an efficient
search algorithm such that the relevant costs are minimized.
A number of research papers address facets of the mobile location problem.
The problem is complex because there are several performance criteria
including costs of reporting, recording, and retrieving the locations of
mobile phones, costs of searching for mobile phones when calls arrive, the
probability of a successful search, and delays in finding phones.
Tracking and search costs include radio channel occupancy, transmissions in
fixed networks, and database transactions.
The overall complexity is also important because a scheme that
requires a highly distributed real-time system may introduce problems
of its own, particularly with respect to reliability. Previous papers
[1,2,3,4,5,7,8 on request] address network architecture issues,
database structures, and tradeoffs between registration and paging costs.
This paper focuses on the search process. We assume that the system
has accurate knowledge that a phone is in a "location area" (LA), which consists
of a collection of cells. We then examine sequential search strategies for
determining the cell in which the phone is located.
The search procedure consists of sending paging messages to a group of cells
in the location area and waiting for a response from the phone.
If no response arrives within a fixed waiting period, the network sends
paging messages in another group of cells, and again waits for a response.
The procedure continues until the phone responds to a paging message.
The quality criteria that we examine are the search delay and the total
number of messages transmitted.
An important issue is the queuing delay of paging messages on radio
channels. Our probabilistic analysis reveals that parallel search may not
reduce the time to find a mobile phone if the parameters of the system are
unfavorable. We also develop an efficient algorithm for paging to minimize
the total numbers of messages when we know the probability of finding a
phone in any specific cell.
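As a concrete illustration of this cost model, the expected number of messages for a given ordered grouping can be computed directly. This is only a sketch of the model described above; the function and variable names are mine, not the paper's:

```python
def expected_messages(probs, groups):
    """Expected number of paging messages when the cells of a location
    area are paged group by group, in order, until the phone responds.

    probs[i] -- probability that the phone is in cell i (sums to 1)
    groups   -- ordered partition of the cell indices into paging groups
    """
    paged = 0                       # cumulative number of cells paged
    expected = 0.0
    for group in groups:
        paged += len(group)
        expected += sum(probs[i] for i in group) * paged
    return expected

p = [0.125] * 8                     # uniform distribution over 8 cells
print(expected_messages(p, [list(range(8))]))          # flooding: 8.0
print(expected_messages(p, [[i] for i in range(8)]))   # one cell at a time: 4.5
```

With a uniform distribution over n cells, flooding costs n messages while one-cell-at-a-time paging costs an expected (n+1)/2, matching the roughly 0.5X minimum discussed later in the paper.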
Our goal in this paper is to study the design implications of our analysis
taking into account both the delay incurred and number of messages
transmitted. The delay analysis focuses on uniformly distributed traffic and
includes the effect of a timeout at each cell.
In [9], the authors analyze delay as a function of the mean of the ordered
distribution that describes the uncertainty about the location
of the mobile phone. For the problem of developing an efficient algorithm
for paging to minimize the number of messages when we know the probability
of finding a phone in any paging cell, our emphasis in this paper is on
the computational complexity of the solution, while in [10] the authors deal
with methods for different types of distributions.
In [6], Madhavapeddy et al. propose methods to empirically compute the
probability of a mobile phone being in any paging cell using the
registration history. They also develop algorithms to minimize the expected
number of messages while searching for a mobile phone; we compare and
contrast our algorithms with the ones in [6] in Section 4.
The rest of the paper is organized as follows. In Section 2, we present our
model. In Section 3, we introduce and present an analysis of sequential and
parallel schemes used for locating a mobile phone.
The simple analysis in Section 3.1 raises important questions about the
issue of queuing delays. In Section 3.2, we analyze the effect of
queuing on radio paging channels, assuming a Poisson call-arrival process.
In Section 4, we develop an efficient paging strategy to minimize the total
number of messages sent in locating the mobile phone when the possible
locations of the phone are specified by a probability vector.
We conclude in Section 5 with directions for future research.
We have looked at the question of queuing delays and derived the result that
parallel search may not reduce the delay in searching for
a phone if the system is heavily loaded. In particular, our analysis
sheds light on important design issues in paging in
cellular telephone networks. We find that at high loads, the delay
decreases significantly in going from 1 to 2 paging groups. When considered
along with the number of messages sent, perhaps 2-4 paging groups is a good
value; e.g., having 4 paging groups reduces the expected total number of
messages relative to flooding. The minimum is 0.5X (paging one cell at a time
under a uniform distribution takes an expected (n+1)/2 messages), but purely
sequential paging leads to a high delay.
It would be interesting to study the impact of queuing delay on other
results in the literature.
We have also presented an efficient method to group stations into paging
groups when a probability vector defining the likelihood of finding a phone
in any paging area is given, and the goal is to minimize the expected number of
messages.
Our algorithm Divide2 optimally and efficiently partitions the states into
Ng = 2 paging groups, an important case as suggested by our delay analysis.
For general Ng, our algorithm Group takes worst-case time proportional to
the bound given in Section 4.2 to determine the optimal partitioning.
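The paper's Divide2 and Group algorithms are not reproduced here, but the optimal-partition problem they solve can be sketched as a small dynamic program. This is my own generic illustration, relying on the standard observation that an optimal partition takes contiguous groups of the probability vector sorted in decreasing order:

```python
from functools import lru_cache

def min_expected_messages(probs, n_groups):
    """Minimum expected number of paging messages over all partitions of
    the cells into n_groups paging groups, restricting the search to
    contiguous partitions of the probability vector sorted in
    decreasing order (which is where the optimum lies)."""
    p = sorted(probs, reverse=True)
    n = len(p)

    @lru_cache(maxsize=None)
    def best(start, groups_left):
        if start == n:
            return 0.0
        if groups_left == 1:
            return sum(p[start:]) * n          # everything left goes in one group
        cost = float("inf")
        for end in range(start + 1, n - groups_left + 2):
            # this group covers p[start:end]; 'end' cells have been paged by then
            cost = min(cost, sum(p[start:end]) * end + best(end, groups_left - 1))
        return cost

    return best(0, n_groups)

print(min_expected_messages([0.5, 0.3, 0.2], 2))   # 2.0: page the 0.5 cell first, then the rest
```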
[1] B. Awerbuch, D. Peleg. "Concurrent Online Tracking of Mobile Users",
Proceedings of the 1991 ACM SIGCOMM, pp. 221-233.
[2] A. Bar-Noy, I. Kessler, M. Sidi. "Mobile Users: To Update or not
to Update," Wireless Networks, 1(1), 1995. Preprint.
[3] A. Bar-Noy, I. Kessler. "Tracking Mobile Users in Wireless Networks,"
INFOCOMM 93, pp. 1232-1239.
[4] S.T.S. Chia. "Location Registration and Paging in a Third Generation
Mobile System," BT Technology Journal, vol. 9, no 4, October 1991,
pp. 71-68
[5] R. H. Katz. "Adaptation and Mobility in Wireless Systems,"
IEEE Personal Communications, First Quarter 1994, pp. 6-17.
[6] S. Madhavapeddy, K. Basu, A. Roberts. "Adaptive Paging Algorithms for
Cellular Systems," Proceedings of the Fifth WINLAB Workshop on Third
Generation Wireless Information Networks, April 1995, pp. 347-361.
[7] K.S. Meir-Hellstern, E. Alonso, and D. O'Neil. "The Use of SS7 and GSM
to Support High Density Personal Communications," Proceedings of the
International Conference on Communications (ICC), 1992.
[8] S. Mohan, R. Jain. "Two User Location Strategies for Personal
Communications Services," IEEE Personal Communications, First Quarter
1994, pp. 42-50
[9] C. Rose and R. Yates. "Ensemble Polling Strategies for Increased Paging
Capacity in Mobile Communication Networks," manuscript.
[10]C. Rose and R. Yates. "Minimizing the Average Cost of Paging Under Delay
Constraints," ACM Wireless Networks, 1(2), pp. 211-219, 1995
source: AT&T Bell Labs (Goodman, Krishnan & Sugla)
3 circles inscribed in one
December 22nd 2008, 07:49 PM #1
A circle of radius R has three circles of radius T inscribed in it.
Find T in terms of R
I have found that the centres of the three circles with radius T form an equilateral triangle
The answer is supposed to be T=0.464R
If we connect the vertices of that equilateral triangle to the center of the larger circle, we create 3 isosceles triangles.
In each of these triangles we can use the law of cosines:
$c^2=a^2+b^2-2ab\cos 120^\circ$
But a=b and c=2T, so:
$(2T)^2=3a^2$
The radius of the large circle is R, so a=R-T and we have:
$4T^2=3(R-T)^2$
Solve for T and we have:
$T=(2\sqrt{3}-3)R\approx .464R$
As the triangle (formed by the centres of the three inscribed circles) in the attachment is equilateral, its circumradius is:
$Oc=\frac{2T}{\sqrt{3}}$
and so:
$R=Oa=T+Oc=T\left(1+\frac{2}{\sqrt{3}}\right)$
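A quick numerical check of the result (a Python sketch; the variable names are mine):

```python
import math

R = 1.0
T = (2 * math.sqrt(3) - 3) * R               # the claimed answer
# The relation R = Oa = T + Oc = T*(1 + 2/sqrt(3)) from the second solution:
assert abs(T * (1 + 2 / math.sqrt(3)) - R) < 1e-12
# Law-of-cosines check from the first solution: the small-circle centres lie
# at distance R - T from O, 120 degrees apart, and touching circles are 2T apart
d = math.sqrt(2 * (R - T) ** 2 * (1 - math.cos(math.radians(120))))
assert abs(d - 2 * T) < 1e-12
print(round(T, 3))                            # 0.464
```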
Klein bottle
In mathematics, the Klein bottle is an example of a non-orientable surface; informally, it is a surface (a two-dimensional manifold) in which notions of left and right cannot be consistently
defined. Other related non-orientable objects include the Möbius strip and the real projective plane. Whereas a Möbius strip is a surface with boundary, a Klein bottle has no boundary (for
comparison, a sphere is an orientable surface with no boundary).
The Klein bottle was first described in 1882 by the German mathematician Felix Klein. It may have been originally named the Kleinsche Fläche ("Klein surface"), which was then incorrectly
interpreted as Kleinsche Flasche ("Klein bottle"); this ultimately led to the adoption of this term in the German language as well.^[1]
Start with a square, and then glue together corresponding coloured edges, in the following diagram, so that the arrows match. More formally, the Klein bottle is the quotient space described as the
square [0,1] × [0,1] with sides identified by the relations (0, y) ~ (1, y) for 0 ≤ y ≤ 1 and (x, 0) ~ (1 − x, 1) for 0 ≤ x ≤ 1:
This square is a fundamental polygon of the Klein bottle.
Note that this is an "abstract" gluing in the sense that trying to realize this in three dimensions results in a self-intersecting Klein bottle. The Klein bottle, proper, does not self-intersect.
Nonetheless, there is a way to visualize the Klein bottle as being contained in four dimensions.
Glue the red arrows of the square together (left and right sides), resulting in a cylinder. To glue the ends together so that the arrows on the circles match, pass one end through the side of the
cylinder. Note that this creates a circle of self-intersection. This is an immersion of the Klein bottle in three dimensions.
By adding a fourth dimension to the three dimensional space, the self-intersection can be eliminated. Gently push a piece of the tube containing the intersection along the fourth dimension, out of
the original three dimensional space. A useful analogy is to consider a self-intersecting curve on the plane; self-intersections can be eliminated by lifting one strand off the plane.
This immersion is useful for visualizing many properties of the Klein bottle. For example, the Klein bottle has no boundary, where the surface stops abruptly, and it is non-orientable, as reflected
in the one-sidedness of the immersion.
The common physical model of a Klein bottle is a similar construction. The Science Museum in London has on display a collection of hand-blown glass Klein bottles, exhibiting many variations on this
topological theme. The bottles date from 1995 and were made for the museum by Alan Bennett.^[2]
Like the Möbius strip, the Klein bottle is a two-dimensional differentiable manifold which is not orientable. Unlike the Möbius strip, the Klein bottle is a closed manifold, meaning it is a compact
manifold without boundary. While the Möbius strip can be embedded in three-dimensional Euclidean space R^3, the Klein bottle cannot. It can be embedded in R^4, however.
The Klein bottle can be seen as a fiber bundle over the circle S^1, with fibre S^1, as follows: one takes the square (modulo the edge identifying equivalence relation) from above to be E, the total
space, while the base space B is given by the unit interval in y, modulo 1~0. The projection π:E→B is then given by π([x, y]) = [y].
The Klein bottle can be constructed (in a mathematical sense, because it cannot be done without allowing the surface to intersect itself) by joining the edges of two Möbius strips together, as
described in the following limerick by Leo Moser:^[3]
A mathematician named Klein
Thought the Möbius band was divine.
Said he: "If you glue
The edges of two,
You'll get a weird bottle like mine."
The initial construction of the Klein bottle by identifying opposite edges of a square shows that the Klein bottle is a CW complex with one 0-cell P, two 1-cells C[1], C[2] and one 2-cell D. Its
Euler characteristic is therefore 1 − 2 + 1 = 0. The boundary homomorphism is given by ∂D = 2C[1] and ∂C[1] = ∂C[2] = 0, yielding the homology groups of the Klein bottle K to be H[0](K,Z)=Z, H[1](K,Z)=Z×(Z/2
Z) and H[n](K,Z) = 0 for n>1.
There is a 2-1 covering map from the torus to the Klein bottle, because two copies of the fundamental region of the Klein bottle, one being placed next to the mirror image of the other, yield a
fundamental region of the torus. The universal cover of both the torus and the Klein bottle is the plane R^2.
The fundamental group of the Klein bottle can be determined as the group of deck transformations of the universal cover and has the presentation <a,b | ab = b^−1a>.
Six colors suffice to color any map on the surface of a Klein bottle; this is the only exception to the Heawood conjecture, a generalization of the four color theorem, which would require seven.
A Klein bottle is homeomorphic to the connected sum of two projective planes. It is also homeomorphic to a sphere plus two cross caps.
When embedded in Euclidean space the Klein bottle is one-sided. However there are other topological 3-spaces, and in some of the non-orientable examples a Klein bottle can be embedded such that it is
two-sided, though due to the nature of the space it remains non-orientable.^[4]
Dissecting a Klein bottle into halves along its plane of symmetry results in two mirror image Möbius strips, i.e. one with a left-handed half-twist and the other with a right-handed half-twist (one
of these is pictured on the right). Remember that the intersection pictured isn't really there.
Simple-closed curves
One description of the types of simple-closed curves that may appear on the surface of the Klein bottle is given by the use of the first homology group of the Klein bottle calculated with integer
coefficients. This group is isomorphic to Z×(Z/2Z). Up to reversal of orientation, the only homology classes which contain simple-closed curves are as follows: (0,0), (1,0), (1,1), (2,0), (0,1). Up to
reversal of the orientation of a simple closed curve, if it lies within one of the two crosscaps that make up the Klein bottle, then it is in homology class (1,0) or (1,1); if it cuts the Klein
bottle into two Möbius bands, then it is in homology class (2,0); if it cuts the Klein bottle into an annulus, then it is in homology class (0,1); and if it bounds a disk, then it is in homology class (0,0).
The figure 8 immersion
The "figure 8" immersion (Klein bagel) of the Klein bottle has a particularly simple parameterization. It is that of a "figure-8" torus with a 180 degree "Möbius" twist inserted:
\begin{align} x & = \left(r + \cos\frac{\theta}{2}\sin v - \sin\frac{\theta}{2}\sin 2v\right) \cos \theta\\ y & = \left(r + \cos\frac{\theta}{2}\sin v - \sin\frac{\theta}{2}\sin 2v\right) \sin \
theta\\ z & = \sin\frac{\theta}{2}\sin v + \cos\frac{\theta}{2}\sin 2v \end{align}
for 0 ≤ θ < 2π, 0 ≤ v < 2π and r > 2.
In this immersion, the self-intersection circle (when v = 0, π) is a geometric circle in the xy-plane. The positive constant r is the radius of this circle. The parameter θ gives the angle in the xy
-plane, and v specifies the position around the 8-shaped cross section. With the above parameterization the cross section is a 2:1 Lissajous curve.
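The claim about the self-intersection circle is easy to check numerically. The following Python sketch (function name mine) evaluates the parameterization and verifies that v = 0 traces the geometric circle of radius r in the xy-plane:

```python
import math

def figure8(theta, v, r=3.0):
    """A point on the figure-8 ("Klein bagel") immersion."""
    a = r + math.cos(theta / 2) * math.sin(v) - math.sin(theta / 2) * math.sin(2 * v)
    x = a * math.cos(theta)
    y = a * math.sin(theta)
    z = math.sin(theta / 2) * math.sin(v) + math.cos(theta / 2) * math.sin(2 * v)
    return x, y, z

# At v = 0 (and v = pi) both sin terms vanish, so every theta lands on
# the circle of radius r in the xy-plane:
for t in (0.0, 1.0, 2.5, 5.0):
    x, y, z = figure8(t, 0.0)
    assert abs(math.hypot(x, y) - 3.0) < 1e-12 and abs(z) < 1e-12
```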
In four dimensions this surface can be made non-intersecting by adding a little v dependent "bump" to the fourth w axis at the intersection point. E.g.
\begin{align} w & = \cos v \end{align}
4-D non-intersecting
Another non-intersecting 4-D parameterization is modeled after that of the flat torus:
\begin{align} x & = R\left(\cos\frac{\theta}{2}\cos v-\sin\frac{\theta}{2}\sin 2v\right) \\ y & = R\left(\sin\frac{\theta}{2}\cos v+\cos\frac{\theta}{2}\sin 2v\right) \\ z & = P\cos\theta\left
(1+e\sin v\right) \\ w &= P\sin\theta\left(1+e\sin v\right) \end{align}
where R and P are constants that determine the aspect ratio, and θ and v are similar to those defined above. v determines the position around the figure-8 as well as the position in the x-y plane. θ determines
the rotational angle of the figure-8 as well as the position around the z-w plane. e is any small constant and e·sin v is a small v-dependent bump in z-w space to avoid self-intersection. The v bump
causes the self-intersecting 2-D/planar figure-8 to spread out into a 3-D stylized "potato chip" or saddle shape in the x-y-w and x-y-z space viewed edge-on. When e = 0 the self-intersection is a
circle in the z-w plane <0, 0, cosθ, sinθ>.
Bottle shape
The parameterization of the 3-dimensional immersion of the bottle itself is much more complicated. Here is a version found by Robert Israel:
\begin{align} x(u,v) &= -\frac{2}{15} \cos u (3 \cos{v}-30 \sin{u}+90 \cos^4{u} \sin{u} \\ &\quad -60 \cos^6{u} \sin{u}+5 \cos{u} \cos{v} \sin{u}) \\ y(u,v) &= -\frac{1}{15} \sin u (3 \cos{v}-3 \
cos^2{u} \cos{v}-48 \cos^4{u} \cos{v}+ 48 \cos^6{u} \\ &\quad \cos{v}-60 \sin{u}+5 \cos{u} \cos{v} \sin{u}-5 \cos^3{u} \cos{v} \sin{u}-80 \\ &\quad \cos^5{u} \cos{v} \sin{u}+80 \cos^7{u} \cos{v}
\sin{u}) \\ z(u,v) &= \frac{2}{15} (3+5 \cos{u} \sin{u}) \sin{v} \end{align}
for 0 ≤ u < π and 0 ≤ v < 2π.
The generalization of the Klein bottle to higher genus is given in the article on the fundamental polygon.
In another order of ideas, when constructing 3-manifolds, it is known that a solid Klein bottle is topologically equivalent to the Cartesian product $\scriptstyle M\ddot{o}\times I$, the Möbius band
times an interval. The solid Klein bottle is the non-orientable version of the solid torus, equivalent to $\scriptstyle D^2\times S^1$.
Klein surface
A Klein surface is, as for Riemann surfaces, a surface with an atlas allowing the transition functions to be composed using complex conjugation. One can obtain the so-called dianalytic structure of
the space.
See also
• A classical reference on the theory of Klein surfaces is [1] by Alling and Greenleaf.
External links
This article incorporates material from Klein bottle on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. | {"url":"http://blekko.com/wiki/Klein_bottle?source=672620ff","timestamp":"2014-04-19T13:25:28Z","content_type":null,"content_length":"44490","record_id":"<urn:uuid:3fd5ad61-6eca-4659-9e45-c160beead784>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert a matrix into an upper triangular matrix
The jit.la.uppertri object converts a given input matrix to an upper triangular matrix via Gaussian elimination. The input matrix must have type float32 or float64, and may have a planecount of 1 or 2.
If the input matrix has a planecount of 2, it is assumed that the data is from the set of complex numbers.
Matrix Operator
matrix inputs:1, matrix outputs:1
Name  IOProc  Planelink  Typelink  Dimlink  Plane  Dim  Type
out   n/a     1          1         1        1      1    float32 float64
Information for Jitter Matrix Operator (MOP) messages and attributes to this object
Name       Type   g/s    Description
swapcount  int    (get)  The number of row swaps required to perform Gaussian elimination
thresh     float         The threshold value beneath which the absolute value of the result of internal calculations is considered to be equal to zero (default = 0.000000001)
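The object's internals are not public; the following plain-Python sketch of Gaussian elimination with row swaps and a zero threshold only illustrates what swapcount and thresh refer to, and is not Jitter's actual implementation:

```python
def upper_tri(m, thresh=1e-9):
    """Reduce a square matrix (list of row-lists) to upper triangular form
    by Gaussian elimination with row swaps; returns (U, swapcount)."""
    a = [row[:] for row in m]
    n = len(a)
    swaps = 0
    for col in range(n):
        # find a row at or below the diagonal whose entry isn't "zero"
        pivot = next((r for r in range(col, n) if abs(a[r][col]) > thresh), None)
        if pivot is None:
            continue                        # nothing to eliminate in this column
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            swaps += 1
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return a, swaps

u, s = upper_tri([[0.0, 2.0], [1.0, 3.0]])  # needs one row swap
print(u, s)                                 # [[1.0, 3.0], [0.0, 2.0]] 1
```

The swap count matters because each row swap flips the sign of the determinant, which is why objects like jit.la.determinant and jit.la.diagproduct are listed alongside this one.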
See Also
Name Description
jit.la.determinant Calculate the determinant of a matrix
jit.la.diagproduct Calculate the product across the main diagonal
jit.la.inverse Calculate the inverse of a matrix
jit.la.mult True matrix multiplication
jit.la.trace Calculate the sum across the main diagonal | {"url":"http://cycling74.com/docs/max5/refpages/jit-ref/jit.la.uppertri.html","timestamp":"2014-04-21T15:12:00Z","content_type":null,"content_length":"3675","record_id":"<urn:uuid:4545df31-5034-4e31-baee-267f857b4b73>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00195-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number of results: 798
Each side of a square painting is 5 ft long. What is the painting's area? What is the painting's perimeter?
Monday, June 20, 2011 at 10:10am by peter
Math Geometry
Marisol is painting on a piece of canvas that has an area of 180 square inches. The length of the Painting is 1 1/4 times the width. What are the dimensions of the painting
Friday, March 7, 2014 at 4:54pm by Donald
Math Geometry
Marisol is painting on a piece of canvas that has an area of 180 square inches. The length of the Painting is 1 1/4 times the width. What are the dimensions of the painting this is for elementary
school so make it simple
Friday, March 7, 2014 at 5:06pm by Donald
28. Van Gogh’s painting Starry Night measures 92 cm long by 73 cm high. You buy a poster that shows an enlargement of the painting. The poster measures 120 cm long by 100 cm high. Is the poster an
accurate representation of the painting? (Hint: Is the poster similar to the ...
Wednesday, December 29, 2010 at 5:15pm by miranda
A square painting is surrounded by a 2.5 cm frame. If the total area of the painting plus frame is 3600 sq cm, find the dimensions of the painting.
Wednesday, July 11, 2012 at 8:36am by Maui
for my homework assignment i have a picture of Michaelangelo , creation of Adam painting i have to describe what i think is happening in the painting and also write a story about the painting . Ive
described the painting but i am struggling with the story part do i write it as...
Friday, January 2, 2009 at 7:20am by Ashleigh
A square painting is surrounded by a 3-cm-wide frame. If the total area of the painting plus the frame is 81 cm2, find the dimensions of the painting.
Friday, July 27, 2012 at 5:59am by Lizzie
Maths A Year 12
Four years ago, an art collector bought a painting for $32 500. He knows that this painting appreciates at 15% a year. What is this painting worth now, to the nearest $100?
Sunday, June 9, 2013 at 5:26am by Anne
Since the painting is square, the length of each side of the painting, a, is such that (a + 5)^2 = 3600 cm^2 a + 5 = 60 cm a = 55 cm The "5" in that equation is the width of two frame borders that
must be added to the painting dimension
Wednesday, July 11, 2012 at 8:42am by drwls
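The same computation as a quick Python sketch (illustrative only):

```python
# Painting of side a with a 2.5 cm frame all around: the outer square has side a + 5.
# Solve (a + 5)**2 = 3600 for the positive root.
a = 3600 ** 0.5 - 5
assert abs((a + 5) ** 2 - 3600) < 1e-9
print(a)   # 55.0 -> the painting is 55 cm by 55 cm
```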
Art history
Painting : Sixtus in appointing platina as prefect of the Vatican Can you tell me more about this painting, for example, 1) the character of Sixtus IV and his favoritism 2) How Renaissance is
illustrated in the painting 3) How Humanist is illutrated. thanks! -Yvette
Monday, August 9, 2010 at 12:45am by Yvette
year 8 art
You can get some ideas about the background and interpretations of each painting by Googling the following subjects together: painting interpretation "(painting title)" Doing this with "Mona Lisa" as
the title, I got this and many other articles: (Broken Link Removed) Be ...
Monday, December 29, 2008 at 6:11am by drwls
hello me again! I am now looking at the wonderful painting "Little Blue Horses" by Franz Marc. Does anyone know any good websites where I can find the background history of this painting? I would
be very grateful, and also if anyone knows of any quotes that I could use e.g...
Wednesday, February 18, 2009 at 2:21pm by sophie
An artist makes a painting that is 12 feet by 9 feet. He paints a frame around the painting. The frame is 4 inches wide. What is the perimeter of the outside edge of the frame?
Thursday, November 29, 2012 at 8:29pm by zion
Van Gogh’s painting Starry Night measures 92 cm long by 73 cm high. You buy a poster that shows an enlargement of the painting. The poster measures 120 cm long by 100 cm high. Is the poster an
accurate representation of the painting? (Hint: Is the poster similar to the ...
Tuesday, April 13, 2010 at 6:45pm by Mimi<3
9th grade Algebra 1
Van Gogh’s painting Starry Night measures 92 cm long by 73 cm high. You buy a poster that shows an enlargement of the painting. The poster measures 120 cm long by 100 cm high. Is the poster an
accurate representation of the painting? (Hint: Is the poster similar to the ...
Tuesday, April 13, 2010 at 9:24pm by Mimi<3
I need to write a short story about the Mona Lisa painting. I'm not sure what to write. Do I describe the painting, how the painting looks, what Mona looks like? thanks
This art movement presents a celebration of the American wilderness and the West. A.American seascape painting B.American landscape painting C.American portraits D.pioneer painting my choice - D...?
Wednesday, March 13, 2013 at 11:30am by Cassie
English Literature
I think that would work, yes. It's a very famous pairing -- the painting by Breughel (1558) and the poem about the painting (published in 1940). The first time I read that poem, I felt as if I were
walking around in an art gallery or museum -- and then I found the painting in ...
Wednesday, October 20, 2010 at 8:20pm by Writeacher
Fine Arts
Ok i basically finished it, but i need to know what the painting means, and what the mood of the painting is. Now i think the painting means how shes thinking about her existence, and the skull
represents death, but i dont know what the flame represents. As for the mood, im ...
Tuesday, May 8, 2012 at 2:49pm by Mel
A square painting is surrounded by a 2.5 cm frame. If the total area of the painting plus frame is 3600 sq cm, find the dimensions of the painting. What are the dimensions? I forgot how to find the
dimensions. Please help me.
Wednesday, July 11, 2012 at 8:42am by Maui
First, you need to choose a painting. Then you can do some research on the painters style. Often there is an analysis of a painting, that you can get information from. Also, look up use of color, use
of space, use of form ( in art); Google can help with all of these. When you ...
Tuesday, September 4, 2007 at 3:26pm by GuruBlue
8 math
A painting by a famous impressionist sold through an auction house where the sale price is not publicly disclosed. The auction house works on a commission rate of 10%. They earned a commission of
$400,000 on this painting. What was the price of the painting.
Sunday, April 25, 2010 at 10:37am by Anonymous
My son is doing this in 6th grade math class. Mr. Weiss started painting the kitchen at 8:45 am and finished painting at 12:00 pm. About how many hours elapsed between the time he started painting
and the time he finished painting the kitchen? Answers are 2h, 3h, 4h, and 5h. ...
Thursday, October 6, 2011 at 4:25pm by m
In the painting "Reading to Children" by Mary Cassatt, what is going on in the painting?
Thursday, October 2, 2008 at 5:22pm by elaine
What type of clause or phrase is "painting"? Frances has plenty of time to devote to her painting.
Wednesday, July 11, 2012 at 2:14pm by Anonymous
Looking on the national gallery of art website (or it can be seen by typing the name of the painting in any search box) review The Emperor Napoleon in His Study at the Tuileries by Jacques Louis
David. Discuss the painting techniques and formal elements. I am confused on this. ...
Sunday, October 17, 2010 at 9:07am by m
My impression of this painting is curiosity of how this happened and the painting gives me a calm feeling when looking at it, apart from the crash.
Thursday, November 10, 2011 at 8:16pm by Anonymous
How can I describe my impression of a painting? The painting shows a freeway and the main focus is the crash site, where there is smoke and fire.
Thursday, November 10, 2011 at 8:16pm by Anonymous
A painting is made of 3 concentric squares. The side length of the largest square is 24 cm. What is the area of the painting?
Thursday, October 4, 2012 at 4:22am by Anonymous
College Algebra/Geometry
if the frame is width w, then it covers 1/2 inch of painting plus w outside the painting. pi(3-.5)^2 = pi((3+w)^2 - (3^2-2.5^2)) 25/4 = w^2 + 6w + 9 - (9-25/4) w^2 + 6w = 0 solve that for w, and you
get w=0 so, the border is just 1/2 inch wide, and does not extend outside the ...
Wednesday, May 2, 2012 at 9:00pm by Steve
maths, variations
I'm not sure how this question works. The number of hours (h) taken to paint a picture is inversely proportional to the number of pupils (p) painting it. It takes 4 pupils 3 hours to finish painting it,
assuming all pupils work at the same rate. a) find an equation connecting h ...
Tuesday, December 18, 2007 at 6:01am by hayley
3rd grade art
In the painting "Sunday Afternoon on the Island of La Grande Jatte", by George Seurat, what is the story the painting is telling? What is happening?
Tuesday, March 31, 2009 at 4:52pm by elaine
algebra1 help solutions
The area of a rectangular painting is given by the trinomial x^2 + 4x - 21. What are the possible dimensions of the painting? Use factoring. Options: x + 7 and x + 3; x - 7 and x + 3; x - 7 and x - 3; x
+ 7 and x - 3
Wednesday, May 29, 2013 at 12:26am by Jennifer W.
fine art
Ok, if someone saw a fine art painting and they said great picture VS someone looking at a fine art painting and said great painting, what qualifies one a painting and one a picture
Monday, April 26, 2010 at 5:14pm by tom again
A painting is purchased for $250. If the value of the painting doubles every 5 years, then its value is given by the function V(t) = 250 · 2^(t/5), where t is the number of years since it was purchased
and V(t) is its value (in dollars) at that time. What is the value of the ...
Saturday, August 6, 2011 at 3:07pm by soka
3rd grade-art
In the painting "Self Portrait" by Judith Leyster. How does the artist feel about herself? Use details in painting to support answer.
Thursday, October 16, 2008 at 4:51pm by elaine
The cost of painting a wall with dimensions 15 m by 20 m is 6250. Find the altitude of a rhombus-shaped window with side 7.5 m, if the rate of painting is 25 per m^2.
Sunday, January 19, 2014 at 3:35am by Anonymous
1. Two sides are 3 * 4 and four sides are 4 * 7 = ? 2. See your later post for the equation. 3. Which are you painting, or are you painting both?
Saturday, November 27, 2010 at 10:46am by PsyDAG
The area of a square painting is 600 square inches. To the nearest hundredth inch, what is the perimeter of the painting?
Friday, October 26, 2012 at 2:53pm by Cam'Ron
What is an example of 'word-painting' or 'text-painting' in non-Western or world music?
Saturday, April 12, 2008 at 3:01pm by David Hollings
The perimeter of a rectangular painting is 266 centimeters. If the length of the painting is 77 centimeters, what is its width?
Wednesday, May 9, 2012 at 9:06pm by calvin
Van Gogh's painting Starry Night measures 92 cm long by 73 cm high. You buy a poster that shows an enlargement of the painting. The poster measures 120 cm long by 100 cm high. Is the poster an
accurate representation of the painting? I think the answer is no. Is this right?
Saturday, November 24, 2007 at 11:44am by keisha
A small painting has an area of 400 cm^2. The length is 4 more than 2 times the width. Find the dimensions of the painting. Solve by completing the square. Round answers to the nearest tenth of a
Tuesday, April 9, 2013 at 4:22pm by Anonymous
I do believe that x = 12, because the area of the painting and the frame is 208 ft^2 and the thickness of the frame is 1 foot. The dimensions of the painting are x by x+3, so its area is x^2+3x. The area of the painting and the frame therefore must be x^2+5x+4, from (x+4)(x+1...
Friday, October 16, 2009 at 10:01pm by Aerin
Need help solving this problem please. Andrei buys a painting and sells it to Boris. Andrei makes a 60% profit on the sale. Boris sells the painting to Clarissa. Clarissa pays 1.5 times the price that Andrei paid for the painting. Boris made a loss. What is his loss as a % of the ...
Monday, June 23, 2008 at 4:16pm by Maggie
Early Childhood Education
An appropriate art activity for children 18 to 36 months of age is A. making a spatter painting. B. finger painting with shaving cream. C. doing embroidery. D. looking at a mobile. I am thinking it's
Saturday, April 2, 2011 at 8:06pm by Heather
3rd grade-art
In the art piece "Diego on My Mind" by Frida Kahlo, what can be inferred or concluded about the artist based on this painting? Use`details in the painting to support the ideas.
Thursday, January 8, 2009 at 4:32pm by Elaine
The painting is The Family of Carlos IV. 1. What is Goya's style of painting? Is he part of a larger art movement? 2. What art principles did he use? How? Where? Please Help Me.
Wednesday, December 12, 2012 at 2:12pm by Jman
Your assignment says to include "the important elements is my interpretation and supporting evidence." But I see very little interpretation or supporting evidence. Example: "The painting seems very
interesting to me because I like the details in it." Yet you do not name the ...
Sunday, February 1, 2009 at 1:05pm by Writeacher
a painting is 3 feet wide and 2 feet long. what is its area?
Tuesday, August 23, 2011 at 8:39pm by stephanie
Natasha is painting her bedroom, which measures 15 feet by 12 feet and has an 8-foot ceiling. Not accounting for windows and doors, how many square feet will she be painting?
Sunday, January 8, 2012 at 9:57pm by DEON
Fine Art
OK, here are some ideas then. Take two sheets of paper, one for each of the paintings. Focus just on one painting, and on one of the sheets of paper brainstorm for 1, 2, and 3 in the list. Then on
the other sheet of paper, do the same for the other painting. When you have "...
Tuesday, February 11, 2014 at 3:05pm by Writeacher
Child Development
Age 3? Yes, finger painting, but expect a mess! A few will be able to draw something recognizable, but not many. If they are to make puppets, it's really the teacher and aide who will be doing the
work. Finger painting is probably best.
Monday, November 4, 2013 at 1:15pm by Writeacher
Want to hang a painting on a wall that is 17 and 5/8 inches high, and I want 2 inches left on top and bottom of the painting showing. Where would I hang it?
Sunday, June 2, 2013 at 5:08pm by jamie
Area of a painting. A rectangular painting with a width of x centimeters has an area of x^2 + 50x square centimeters. Find a binomial that represents the length.
Friday, September 10, 2010 at 12:58pm by Cheryl
College Math ll
Area of a painting: A rectangular painting with a width of x centimeters has an area of x^2 + 50x square centimeters. Find the binomial that represents the length.
Wednesday, February 23, 2011 at 9:41pm by alice
Area of a painting. A rectangular painting with a width of x centimeters has an area of x^2+50x square centimeters. Find a binomial that represents the length. See the accompanying figure.
Thursday, August 5, 2010 at 10:35am by Samantha
painting acrylic
Ok, I assume with the acrylic you are trying to avoid streaks.. http://painting.about.com/od/acrylicpaintingfaq/f/FAQ_streaks.htm
Wednesday, March 7, 2012 at 9:41am by bobpursley
Yes, I am looking for the steps and an answer because I can not get it. Question: Area of Painting: A rectangular painting with a width of x centimeters has an area X^2 +50x square centimeters. Find a
binomial that represents the length.
Tuesday, March 3, 2009 at 11:58am by JASMIN
Area of a painting. A rectangular painting with a width of x centimeters has an area of x^2 + 50x square centimeters. Find a binomial that represents the length. See the accompanying figure.
Thursday, January 28, 2010 at 6:50pm by kool-v
choose the phrase in which commas are used correctly to complete the sentence. I don’t know very much about _________. A. painting sculpture or drawing B. painting, sculpture, or drawing. I think it is
Thursday, April 11, 2013 at 8:11pm by Cassie
Then you could write: My impression of this painting is that it gives me a lot of curiosity of what has happened, if there were many people injured, and what had caused the incident. Although this is
a painting of a crash site, it gives me a nice calm feeling. This is because...
Thursday, November 10, 2011 at 8:16pm by Howler
3 grade math ms sue
1- IS THE AREA OF A PAPERBACK BOOK COVER CLOSER TO 28 SQUARE INCHES OR 28 SQUARE CENTIMETERS? TELL HOW YOU DECIDED. 4*7=28 SQUARE CENTIMETERS 2- MARIA WANTS TO DRAW A PAINTING WITH AN AREA OF 40 SQUARE INCHES. IF SHE DREW HER PAINTING ON 1-INCH GRID PAPER HOW MANY SQUARES ...
Tuesday, March 13, 2012 at 6:33pm by dw
art history
I'm writing a paper on Van Gogh's cypress tree painting. I have a lot of information but it is still not long enough. I was wondering if anybody knows any information that I can include. I have read that
cypress trees are associated with fire, mist and sea-how can I relate this to...
Wednesday, November 3, 2010 at 9:59pm by Anonymous
French - SraJMcGin/Frenchy
Painting (activity, art form) = peinture. EX: She enjoys sculpture and painting (Elle adore la sculpture et la peinture). A painting (work of art) = un tableau. EX: The Louvre museum has thousands of
paintings on its walls (Le musée du Louvre a des milliers de tableaux sur ses...
Friday, April 1, 2011 at 8:54pm by Frenchy
1. He painted the emperor a gold tree. 2. He painted a gold tree for the emperor. 3. He painted a gold tree to the emperor. (Are they all grammatical? Which preposition do we have to use?) 4. Ma
Liang kept painting. (Ma Liang is a person's name. What is the part of speech of '...
Wednesday, March 24, 2010 at 1:53am by rfvv
I didn't find a poem by this name, but I did find this painting. http://www.victorianweb.org/painting/fildes/paintings/3.html You've compiled an impressive list. The only idea I'd add is that the
industrial revolution encouraged imperialism as the industrial countries sought ...
Monday, June 30, 2008 at 5:15pm by Ms. Sue
A square mirror has sides measuring 2 ft. less than the sides of a square painting. If the difference between their areas is 32 ft^2, find the lengths of the sides of the mirror and the painting?
Square mirror= x-2 Painting=x I don't know where to go from there... The ...
Sunday, July 13, 2008 at 5:39pm by Aniya
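Picking up where the poster left off: in the difference of the two areas the x^2 terms cancel, leaving a linear equation. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')                       # side of the square painting, in feet
equation = sp.Eq(x**2 - (x - 2)**2, 32)   # painting area minus mirror area
painting = sp.solve(equation, x)[0]
print(painting, painting - 2)             # 9 7 -> painting 9 ft, mirror 7 ft
```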
• For this assignment, you will view five examples of 20th Century paintings, one each of the following five schools of thought (the slides are labeled): o Impressionism o Expressionism o Abstract
expressionism o Minimalism o Primitivism • Then, listen to five examples of 20th...
Wednesday, April 21, 2010 at 1:27am by asd
A painting measuring 100cm by 55cm is to be framed with a matte border of uniform width surrounding the painting. the combined area of the picture and the matte is 9796cm^2. How wide is the matte
Sunday, July 12, 2009 at 10:57am by steven
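Letting w be the matte width, the outer rectangle is (100 + 2w) by (55 + 2w) cm; a sympy sketch that keeps only the positive root:

```python
import sympy as sp

w = sp.symbols('w', positive=True)
eq = sp.Eq((100 + 2*w) * (55 + 2*w), 9796)  # combined area of picture and matte
widths = sp.solve(eq, w)
print(widths)   # [12] -> the matte is 12 cm wide
```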
I posted this the other day but no one responded. Yes, I am looking for the steps and an answer because I can not get it. Question: Area of Painting: A rectangular painting with a width of x
centimeters has an area X^2 +50x square centimeters. Find a binomial that represents ...
Wednesday, March 4, 2009 at 11:26am by JASMIN
compare and contrast the representation of weight and space in the painting of The Good Shepherd in the Catacomb of Saints Pietro and Marcellino (Image 15-2) and the mosaic of Justinian and
Attendants in Ravenna's Church of San Vitale (Image 15-5)
Thursday, May 31, 2012 at 12:37pm by Sha
a painting is 3 inches longer than it is wide. it is set in a rectangular border that is 2 inches wider than the painting on each side. if the area of the border is 100in^2 what are the dimensions of
the painting? Here is a problem I worked for Jason that is almost the same. ...
Wednesday, November 29, 2006 at 2:27pm by tyrik
Another painting has length : width ratio approximately equal to the golden ratio (1 + sqrt 5) : 2. Find the length of the painting if the width is 34 inches.
Sunday, May 10, 2009 at 9:34am by Angie
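Numerically, with the width of 34 inches:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2        # golden ratio, about 1.618
width = 34
length = width * phi
print(round(length, 2))        # 55.01 inches
```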
An oil painting is 16 years older than a watercolor by the same artist. The oil painting is also three times older than the watercolor. How old is each? Identify a variable, set up an equation and
solve. thanks
Monday, January 27, 2014 at 2:15pm by ava
An oil painting is 16 years older than a watercolor by the same artist. The oil painting is also three times older than the watercolor. How old is each? Identify a variable, set up an equation, and
Monday, January 27, 2014 at 3:58pm by ava
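With w as the watercolor's age, the two conditions read oil = w + 16 and oil = 3w; a sympy sketch:

```python
import sympy as sp

w = sp.symbols('w')                        # age of the watercolor
w_age = sp.solve(sp.Eq(w + 16, 3*w), w)[0]
print(w_age, w_age + 16)                   # 8 24 -> watercolor 8, oil painting 24
```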
Introduction to system analysis in transportation
TruckCo manufactures two types of trucks: 1 and 2. Each truck must go through the painting shop and assembly shop. If the painting shop were completely devoted to painting type 1 trucks, 800 per day
could be painted, whereas if the painting shop were completely devoted to ...
Saturday, March 3, 2007 at 11:59pm by Lamar
Amanda Fall is starting up a new house painting business, Fall Colors. She has been advertising in the local newspaper for several months. Based on inquiries and informal surveys of the local housing
market, she anticipates that she will get painting jobs at the rate of four ...
Sunday, May 20, 2012 at 8:05am by alia
OTHER than a painting or sculpture? What in the world are you looking for? Even if you found an image of him on a building, it would be a form of one or the other. Sculpture, bas-relief (a type of
sculpture), and painting (of sculptures and on ceramics) are just about the ...
Wednesday, October 21, 2009 at 4:18am by Writeacher
Bill can paint either two walls or one window frame in one hour. In the same time, Frank can paint either three walls or two window frames. To minimize the time spent painting, who should specialize
in painting walls, and who should specialize in painting window frames?
Monday, May 13, 2013 at 6:06pm by winnie
A NEW ANSWER!
Try to tell them to draw or make animals that belong to air, water and land on a different piece of paper or box (Use colored paper or tell them to buy them.) In a box or on a box (painting if the
student/s an expert at painting.)
Saturday, October 31, 2009 at 4:25pm by WS
year 8 art
I've got the flu and I have loads of homework to do, and I am struggling with this one for my art homework. I have got 4 pictures of paintings: 1/ Michelangelo, Creation of Adam, 2/ Botticelli, Birth of Venus, 3/ Mona Lisa, 4/ George and the Dragon. I need to tell a short story ...
Monday, December 29, 2008 at 6:11am by jenna
Do you mean you need to choose a painting? A painting that includes design elements and visual elements? You'll need to explain more about what you think the difference between those elements is.
Please also type with correct punctuation and capitalization to help us ...
Saturday, November 7, 2009 at 3:29pm by Writeacher
A large painting in the style of Rubens is 3 feet longer than it is wide. If the wooden frame is 18 in wide and the area of the picture frame is 286 ft^2, find the dimensions of the painting. I have
the basic idea down of what I'm supposed to do to solve it.. But I just can't...
Sunday, October 16, 2011 at 9:03pm by Wooah
the width of the painting would be x-2 feet (12 inches = 1 foot) and the length would be x+1 (I have no idea what your equation 12x=208 is supposed to say)
area of whole picture = x(x+3)
area of painting = (x-2)(x+1)
the difference in those areas is 208:
x(x+3) - (x-2)(x+1) = ...
Friday, October 16, 2009 at 10:01pm by Reiny
I think rather than painting there will be more systems of photographs and printouts from the printer. There won't be much demand for painting. In fact, new youngsters are very lazy, so they can't even think of making a painting. There will be very few people left who love to paint.
Sunday, September 30, 2012 at 10:17pm by Snehal
Una is painting a chair. She finished painting the first coat at 10:42 AM. The paint needs to dry for at least 1 hour 45 min before another coat of paint can be put on. At what time will she be able to paint a second coat? 12:45?
Monday, January 14, 2013 at 7:01pm by Jerald
Sally's rate of painting = H/4
John's rate of painting = H/6
combined rate = H/4 + H/6 = 5H/12
Time with both working = H/(5H/12) = 12/5 hours
Tuesday, May 27, 2008 at 12:56am by Reiny
The City of Allentown recently received a donation of two items: a letter written in 1820 valued at $24,000 from James Allen, and a 1920 painting of the town's city hall; a painting by the same artist sold recently for $4,000. How do I do a journal entry capitalizing collectibles only when ...
Saturday, August 25, 2012 at 12:05pm by Lynda
A large painting in the style of Rubens is 3 feet longer than it is wide. If the wooden frame is 12 in wide and the area of the picture frame is 208 ft^2, find the dimensions of the painting. So I
know the width would be x and the length would be x+3 if I do 12x=208, I get 17....
Friday, October 16, 2009 at 10:01pm by Muffy
A last doubt. Which of the following sentences is best formulated? Which are grammatically wrong? Could you check the last 2 points, too? 1) The flowers are presented not as statically as in a
painting, but as alive with motion. 2) The flowers are presented not statically as ...
Sunday, June 26, 2011 at 6:32am by Henry2
A gallon of paint will cover about 350 square feet of space. You are painting 3 halls of a school. Ignoring the doors, the halls are 10 feet tall and 125 feet long. You buy 11 gallons of paint. Did you buy enough paint?
Wednesday, November 28, 2012 at 10:46am by Anonymous
Assignment: Type impressions, feelings, and thoughts in a one-page diary entry. Recall your earliest experiences with art. What did you do? Did you enjoy art? Why? How did your parents or teacher
respond to your art? How do you feel at present about your artistic development...
Monday, December 12, 2011 at 1:53pm by Rebekah
Math 8th Grade (Ms. Sue) ?
An artist makes a painting that is 12 feet by 9 feet. He paints a frame around the painting. The frame is 3 inches wide. What is the area of the frame? Please provide the formula for figuring it out. Answers: (a) 10.75 ft sq (b) 10.5 ft sq (c) 12.96 ft sq (d) 5.3125 ft sq. Thanks.
Saturday, December 1, 2012 at 12:20am by Chris
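Converting the 3-inch frame to feet and subtracting the painting's area from the outer rectangle reproduces option (a):

```python
length, width = 12, 9                 # painting, in feet
frame = 3 / 12                        # 3 inches = 0.25 ft
outer_area = (length + 2 * frame) * (width + 2 * frame)
frame_area = outer_area - length * width
print(frame_area)                     # 10.75 square feet
```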
I got a different answer than all of you guys :P HINT: To actually visualize the problem, draw it out! :) Obviously the dimensions of the painting are: w and 3+w We know that the frame is 12 in wide,
aka 1 ft wide. And we also are given that the total area of the painting and ...
Friday, October 16, 2009 at 10:01pm by Elle
I am describing one of Pissarro's (an Impressionist artist) paintings. Is the vocabulary used correctly to depict my meaning?? Comme le montre le blues lumineuse du ciel bleu en contraste avec les
Wednesday, December 2, 2009 at 12:49am by sh
Sandra is not really good at painting and drawing, but she loves to spend her spare time "dabbling" as she calls it. If her products are not successful according to aesthetic standards, why does she
continue to draw and paint? A. She is attempting to reduce a primary drive. B...
Wednesday, June 19, 2013 at 7:06pm by Angela
Okay. I just realized something. X will equal 11 and the dimensions would be 11 by 14 because the width of the frame would have to be considered twice. So the dimensions of the painting and the frame
altogether would be (x+2)(x+5). The dimensions of the painting would still be...
Friday, October 16, 2009 at 10:01pm by Aerin
I like The Fifer by Manet most. I can see a boy playing the fife. Why do you like it? Because I like playing the fife and the boy's traditional clothes. He is wearing a curious hat, too. My favorite
painting The painting I like most is Manet's The Fifer. In the picture, I see ...
Friday, September 10, 2010 at 11:49pm by rfvv
This puzzle consists of four hinged pieces which can be folded one way to a square and the other way to an equilateral triangle. Master puzzler Henry Dudeney demonstrated a wooden model before
the London Royal Society in 1905.
The digits of π, translated into music.
The video also shows some fun facts about π.
Combinatorics. Combinatorics began as a formalized treatment of efficient ways of counting certain collections of objects which arise relatively often. Nowadays the word ‘combinatorics’ can be
used to refer to pretty much all of finite mathematics, and the original field is more specifically called “enumerative combinatorics”.
The things in parentheses are not fractions (they don’t have a fraction bar). These are called binomial coefficients and (n [over] k) represents the number of ways to choose k balls from a set of
n balls. Pascal’s triangle is an arrangement of these numbers where (n [over] k) is the k-th number in the n-th row; it is famous as a useful way of visualizing many of the properties of
the binomial coefficients.
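The recurrence behind that arrangement (each entry is the sum of the two entries above it) is short to sketch in Python; math.comb in the standard library computes the binomial coefficient directly, so the rows can be checked against it:

```python
from math import comb

def pascal_rows(num_rows):
    """Rows 0..num_rows-1 of Pascal's triangle."""
    rows = [[1]]
    for _ in range(num_rows - 1):
        prev = rows[-1]
        middle = [prev[k] + prev[k + 1] for k in range(len(prev) - 1)]
        rows.append([1] + middle + [1])
    return rows

rows = pascal_rows(6)
print(rows[5])   # [1, 5, 10, 10, 5, 1]
assert all(rows[n][k] == comb(n, k) for n in range(6) for k in range(n + 1))
```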
I’m experimenting a little with how to display the theorem statements. Not really sure what I like yet.
The proof of Sum of Square Numbers!
This is NOT a proof, but it’s still very beautiful and good for intuition.
"You missed a minus sign"
Great article by Evelyn Lamb on one of the other uses of pi: as the prime counting function.
See the use of phi as the totative-counting (totient) function. Lord, number theorists love to jeopardize notation. Also, is it just me or does the graph of the prime counting function look oddly
like the stable isotope curve? Gosh, that’s cool. [CJH]
[Numpy-discussion] what goes wrong with cos(), sin()
David Goldsmith David.L.Goldsmith@noaa....
Wed Feb 21 13:52:39 CST 2007
Anne Archibald wrote:
> On 21/02/07, Zachary Pincus <zpincus@stanford.edu> wrote:
>> A corollary: in general do not compare two floating-point values for equality
>> -- use something like numpy.allclose. (Exception -- equality is
>> expected if the exact sequence of operations to generate two numbers
>> were identical.)
> I really can't agree that blindly using allclose() is a good idea. For
> example, allclose(plancks_constant,0) and the difference leads to
> quantum mechanics... you really have to know how much difference you
> expect and how big your numbers are going to be.
> Anne
"Precisely!" ;-)
Last time we had a posting like this, didn't one of the respondents
include a link to something within the numpy Web docs that talks about
floating point precision?
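For the archive, a small illustration of both points above: exact equality failing on rounded values, and Anne's caveat that allclose's default absolute tolerance (atol=1e-8) silently treats tiny quantities as zero:

```python
import numpy as np

a, b = 0.1 + 0.2, 0.3
print(a == b)                # False: exact float equality is fragile
print(np.allclose(a, b))     # True under the default tolerances

plancks_constant = 6.626e-34
print(np.allclose(plancks_constant, 0.0))          # True: the default atol swamps it
print(np.isclose(plancks_constant, 0.0, atol=0.0)) # False once atol is chosen deliberately
```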
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://projects.scipy.org/mailman/listinfo/numpy-discussion
More information about the Numpy-discussion mailing list
Nonlocality, Entanglement Witnesses and Supra-Correlations
While entanglement is believed to underlie the power of
quantum computation
and communication, it is not generally well understood
for multipartite
systems. Recently, it has been appreciated that there
exist proper
no-signaling probability distributions derivable from
operators that do not
represent valid quantum states. Such systems exhibit supra-correlations
that are stronger than allowed by quantum mechanics, but
less than the
algebraically allowed maximum in Bell-inequalities (in
the bipartite case).
Some of these probability distributions are derivable
from an entanglement
witness W, which is a non-positive Hermitian operator
constructed such that
its expectation value with a separable quantum state
(positive density
matrix) rho_sep is non-negative (so that Tr[W rho] < 0 indicates entanglement
in quantum state rho). In the bipartite case, it is known
that by a
modification of the local no-signaling measurements by
spacelike separated
parties A and B, the supra-correlations exhibited by any
W can be modeled as
derivable from a physically realizable quantum state ρ.
However, this result
does not generalize to the n-partite case for n>2.
Supra-correlations can
also be exhibited in 2- and 3-qubit systems by explicitly constructing
"states" O (not necessarily positive quantum
states) that exhibit PR
correlations for a fixed, but arbitrary number, of
measurements available to
each party. In this paper we examine the structure of
"states" that exhibit
supra-correlations. In addition, we examine the effect
upon the distribution
of the correlations amongst the parties involved when
constraints of
positivity and purity are imposed. We investigate
circumstances in which
such "states" do and do not represent valid
quantum states.
Segment of a circle calculation
Basically, you can't find a finite expression for the inverse here.
However, various approximation techniques might be used.
IF, for example, y is "sufficiently close to 0", a power series expansion about x=0 might zoom onto the solution fairly quickly.
To show how this might be done:
We have:
when expanding the sine function in its power series (a finite segment of which is highly accurate when x is close to zero).
We now invert the power series, by assuming:
where the solution x(y) of the original equation boils down to determining the a's.
Inserting the latter in the former, we get:
[tex]y=2(a_{1}y+a_{2}y^{2}+a_{3}y^{3}+a_{4}y^{4}+a_{5}y^{5}+\cdots)-\frac{1}{6}(a_{1}^{3}y^{3}+3a_{1}^{2}a_{2}y^{4}+3a_{1}^{2}a_{3}y^{5}+3a_{2}^{2}a_{1}y^{5})+\frac{1}{30}a_{1}^{5}y^{5}+\cdots[/tex]
where terms of order higher than y^5 are dropped.
Now, we simply compare coefficients of each power of y, to determine the a's.
We get:
[tex]a_{1}=\frac{1}{2}, a_{2}=0, a_{3}=-\frac{1}{48}, a_{4}=0, a_{5}=\frac{1}{640}[/tex]
Thus, to fifth order accuracy, you have:

[tex]x \approx \frac{1}{2}y - \frac{1}{48}y^{3} + \frac{1}{640}y^{5}[/tex]
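The coefficient-matching step can be automated with sympy. The thread's own starting equation did not survive extraction, so the sketch below uses y = 2 sin(x) purely as a stand-in (an assumption); the method, substituting the ansatz and solving order by order, is the same:

```python
import sympy as sp

y, a1, a3, a5 = sp.symbols('y a1 a3 a5')

# Ansatz for the inverse: x(y) = a1*y + a3*y**3 + a5*y**5 + O(y**7).
x_ansatz = a1*y + a3*y**3 + a5*y**5

# Stand-in forward equation: y = 2*sin(x) (assumption, for illustration only).
lhs = sp.expand(sp.series(2*sp.sin(x_ansatz), y, 0, 6).removeO())

# The system is triangular, so each order fixes one new coefficient.
sol = {}
for power, target, unknown in [(1, 1, a1), (3, 0, a3), (5, 0, a5)]:
    eq = sp.Eq(lhs.coeff(y, power).subs(sol), target)
    sol[unknown] = sp.solve(eq, unknown)[0]

print(sol)   # {a1: 1/2, a3: 1/48, a5: 3/1280}

# Cross-check against the known closed-form inverse x = asin(y/2).
check = sp.series(sp.asin(y/2), y, 0, 6).removeO()
assert sp.expand(x_ansatz.subs(sol) - check) == 0
```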
Axiomatizations and conservation results for fragments of bounded arithmetic
, 1998
"... ) Jan Johannsen Chris Pollett Department of Mathematics Department of Computer Science University of California, San Diego Boston University La Jolla, CA 91093-0112 Boston, MA 02215 Abstract We
define theories of Bounded Arithmetic characterizing classes of functions computable by constant-depth t ..."
Cited by 9 (2 self)
) Jan Johannsen Chris Pollett Department of Mathematics Department of Computer Science University of California, San Diego Boston University La Jolla, CA 91093-0112 Boston, MA 02215 Abstract We define
theories of Bounded Arithmetic characterizing classes of functions computable by constant-depth threshold circuits of polynomial and quasipolynomial size. Then we define certain second-order theories
and show that they characterize the functions in the Counting Hierarchy. Finally we show that the former theories are isomorphic to the latter via the so-called RSUV-isomorphism. 1 Introduction A
phenomenon that is commonly observed in Complexity Theory is that proofs of results about counting complexity classes (#P, Mod_p P, etc.) can often be scaled down to yield results about small depth
circuit classes with the corresponding counting gates. For example, Toda's result [17] that every problem in the Polynomial Hierarchy can be solved in polynomial time with an oracle for #P
, 2004
"... Let H be a proof system for the quantified propositional calculus (QPC). We define the Σ^q_j-witnessing problem for H to be: given a prenex Σ^q_j-formula A, an H-proof of A, and a truth assignment to the free
variables in A, find a witness for the outermost existential quantifiers in A. We point out that the Σ ..."
Cited by 9 (2 self)
Let H be a proof system for the quantified propositional calculus (QPC). We define the Σ^q_j-witnessing problem for H to be: given a prenex Σ^q_j-formula A, an H-proof of A, and a truth assignment to the free
variables in A, find a witness for the outermost existential quantifiers in A. We point out that the Σ^q_1-witnessing problems for the systems G*_1 and G_1 are complete for polynomial time and PLS
(polynomial local search), respectively. We introduce
, 2002
"... We extend the definition of dynamic ordinals to generalised dynamic ordinals. We compute generalised dynamic ordinals of all fragments of relativised bounded arithmetic by utilising methods from
Boolean complexity theory, similar to Krajíček in [14]. We indicate the role of generalised dynamic ordin ..."
Cited by 1 (0 self)
We extend the definition of dynamic ordinals to generalised dynamic ordinals. We compute generalised dynamic ordinals of all fragments of relativised bounded arithmetic by utilising methods from
Boolean complexity theory, similar to Krajíček in [14]. We indicate the role of generalised dynamic ordinals as universal measures for implicit computational complexity. I.e., we describe the
connections between generalised dynamic ordinals and witness oracle Turing machines for bounded arithmetic theories. In particular, through the determination of generalised dynamic ordinals we
re-obtain well-known independence results for relativised bounded arithmetic theories.
- University of Wales Swansea , 2006
"... Abstract. We describe a natural generalization of ordinary computation to a third-order setting and give a function calculus with nice properties and recursion-theoretic characterizations of
several large complexity classes. We then present a number of third-order theories of bounded arithmetic whos ..."
Cited by 1 (1 self)
Abstract. We describe a natural generalization of ordinary computation to a third-order setting and give a function calculus with nice properties and recursion-theoretic characterizations of several
large complexity classes. We then present a number of third-order theories of bounded arithmetic whose definable functions are the classes of the EXP-time hierarchy in the third-order setting.
, 2004
"... We show that a semantical interpretation of Herbrand's disjunctions can be used to obtain Π_2 independent sentences whose nature is more combinatorial than the nature of the usual consistency
statements. Then we apply this method to Bounded Arithmetic and present ∀Σ^b_1 combinatorial sentences tha ..."
Cited by 1 (1 self)
We show that a semantical interpretation of Herbrand's disjunctions can be used to obtain Π_2 independent sentences whose nature is more combinatorial than the nature of the usual consistency
statements. Then we apply this method to Bounded Arithmetic and present ∀Σ^b_1 combinatorial sentences that characterize all ∀Σ^b_1 sentences provable in S_2. We use the concept of a two player game to
describe these sentences.
, 2012
"... We study the long-standing open problem of giving ∀Σ^b_1 separations for fragments of bounded arithmetic in the relativized setting. Rather than considering the usual fragments defined by the
amount of induction they allow, we study Jeřábek’s theories for approximate counting and their subtheories. ..."
Cited by 1 (0 self)
We study the long-standing open problem of giving ∀Σ^b_1 separations for fragments of bounded arithmetic in the relativized setting. Rather than considering the usual fragments defined by the amount
of induction they allow, we study Jeřábek’s theories for approximate counting and their subtheories. We show that the ∀Σ^b_1 Herbrandized ordering principle is unprovable in a fragment of bounded
arithmetic that includes the injective weak pigeonhole principle for polynomial time functions, and also in a fragment that includes the surjective weak pigeonhole principle for FP^NP functions. We
further give new propositional translations, in terms of random resolution refutations, for the consequences of T^1_2 augmented with the surjective weak pigeonhole principle for polynomial time
, 2010
"... a formalized no-gap theorem ..."
"... In this article we study relations between various fragments of bounded arithmetic. The fragments of interest here are the theories S^i_2 and T^i_2 introduced by Buss in [1]. The reader may recall
that the principal axioms of S^i_2 resp. of T^i_2 are Σ^b_i-PIND resp. Σ^b_i-IND axioms, where the former are instances ..."
In this article we study relations between various fragments of bounded arithmetic. The fragments of interest here are the theories S^i_2 and T^i_2 introduced by Buss in [1]. The reader may recall that
the principal axioms of S^i_2 resp. of T^i_2 are Σ^b_i-PIND resp. Σ^b_i-IND axioms, where the former are instances of
"... This paper concerns the second order systems U^1_2 and V^1_2 of bounded arithmetic, which have proof theoretic strengths corresponding to polynomial space and exponential time computation. We
formulate improved witnessing theorems for these two theories by using S1 2 as a base theory for proving the c ..."
Add to MetaCart
This paper concerns the second order systems U^1_2 and V^1_2 of bounded arithmetic, which have proof theoretic strengths corresponding to polynomial space and exponential time computation. We formulate
improved witnessing theorems for these two theories by using S^1_2 as a base theory for proving the correctness of the polynomial space or exponential time witnessing functions. We develop the theory
of nondeterministic polynomial-space computation, including Savitch's theorem, in U^1_2. Kołodziejczyk, Nguyen, and Thapen have introduced local improvement properties to characterize the provably total
NP functions of these second order theories. We show that the strengths of their local improvement principles over U^1_2 and V^1_2 depend primarily on the topology of the underlying graph, not the
number of rounds in the local improvement games. The theory U^1_2 proves the local improvement principle for linear graphs even without restricting to logarithmically many rounds. The local
improvement principle for grid graphs with only logarithmically many rounds is complete for the provably total NP search problems of V^1_2. Related results are obtained for local improvement
principles with one improvement round, and for local improvement over rectangular grids.
Complex prime problem
April 8th 2011, 04:57 AM #1
Apr 2011
Complex prime problem
I am having great difficulty figuring out the following problem. Your help is appreciated!
Find all primes p such that 2011p = 2+3+4+...+n for some positive integer n
I've gotten as far as 2011p = (n(n+1)/2) - 1
p= ((n(n+1)/2)-1)/2011 and plugging in various numbers for n.
Am i on the right track?
I am having great difficulty figuring out the following problem. Your help is appreciated!
Find all primes p such that 2011p = 2+3+4+...+n for some positive integer n
I've gotten as far as 2011p = (n(n+1)/2) - 1
p= ((n(n+1)/2)-1)/2011 and plugging in various numbers for n.
Am i on the right track?
I think you are. Observe that $\displaystyle{2,011p=\frac{n(n+1)}{2}-1=\frac{(n+2)(n-1)}{2}}$ ...
The above already tells you that if $p>2$ then either $n=0\!\!\pmod 4\mbox{ or else } n=3\!\!\pmod 4$ .
Thanks much Tonio. What does mod 4 mean? Is n=3 the only answer?
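[Editor's note: a short brute-force search — not part of the thread, and with an arbitrarily chosen search bound of 10,000 — confirms what tonio's factorization 2 · 2011 · p = (n+2)(n−1) suggests, namely that there is exactly one such prime in that range:]

```python
def is_prime(k):
    """Trial division -- fine for numbers this small."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

# Look for primes p with 2011*p = 2 + 3 + ... + n = n*(n+1)//2 - 1.
solutions = []
for n in range(2, 10_000):          # search bound is an arbitrary assumption
    s = n * (n + 1) // 2 - 1
    if s % 2011 == 0 and is_prime(s // 2011):
        solutions.append((n, s // 2011))

print(solutions)                    # [(4020, 4019)]
```

So p = 4019 (with n = 4020) is the only solution in this range; since (n+2) and (n−1) differ by 3 and their product must equal 2 · 2011 · p with both 2011 and p prime, a short case check on the divisors shows it is in fact the only solution overall.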
Quincy Center, MA Algebra Tutor
Find a Quincy Center, MA Algebra Tutor
...I have currently been tutoring this subject for many years and am well versed in the changes to the subject requirements due to the Common Core. I have currently been teaching this subject for
the past 8 years and am well versed in the changes to the subject requirements due to the Common Core. ...
5 Subjects: including algebra 1, algebra 2, precalculus, study skills
...You list the information you know and use variables for unknown information. Then you find the connection between them to form one or more equations. Then you solve those equations.
5 Subjects: including algebra 1, algebra 2, physics, precalculus
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math.I am a chemistry major at Boston College.
13 Subjects: including algebra 2, algebra 1, chemistry, calculus
...I tutor both during the summer and the school year. I am Mass. certified; I tutor for the Lexington Public School System and have passed all the required CORI and background checks to do so. I
tutor all levels up through AP courses.
34 Subjects: including algebra 1, algebra 2, reading, English
I received my PhD in Chemistry from the University of Massachusetts, Amherst and am currently a research fellow at Mass General Hospital/ Harvard Med. I have taught Bio 101 at a local community
college recently and have experience teaching Chemistry from my graduate studies as well. I began tutori...
7 Subjects: including algebra 2, algebra 1, chemistry, biology
Related Quincy Center, MA Tutors
Quincy Center, MA Accounting Tutors
Quincy Center, MA ACT Tutors
Quincy Center, MA Algebra Tutors
Quincy Center, MA Algebra 2 Tutors
Quincy Center, MA Calculus Tutors
Quincy Center, MA Geometry Tutors
Quincy Center, MA Math Tutors
Quincy Center, MA Prealgebra Tutors
Quincy Center, MA Precalculus Tutors
Quincy Center, MA SAT Tutors
Quincy Center, MA SAT Math Tutors
Quincy Center, MA Science Tutors
Quincy Center, MA Statistics Tutors
Quincy Center, MA Trigonometry Tutors
Nearby Cities With algebra Tutor
Braintree Highlands, MA algebra Tutors
Braintree Hld, MA algebra Tutors
East Braintree, MA algebra Tutors
East Milton, MA algebra Tutors
Grove Hall, MA algebra Tutors
Houghs Neck, MA algebra Tutors
Marina Bay, MA algebra Tutors
Norfolk Downs, MA algebra Tutors
North Quincy, MA algebra Tutors
Quincy, MA algebra Tutors
South Quincy, MA algebra Tutors
Squantum, MA algebra Tutors
West Quincy, MA algebra Tutors
Weymouth Lndg, MA algebra Tutors
Wollaston, MA algebra Tutors
The Voronoi Cells of the E[6]^* and E[7]^* Lattices
Recently, R. T. Worley succeeded in determining the Voronoi cells of the E[6]^* and E[7]^* lattices. These turned out to be a 6-dimensional polytope with 720 vertices, and a 7-dimensional polytope
with 576 vertices, respectively. These two polytopes had been described in 1930 by H. S. M. Coxeter, who called them the 0[221] and the 1[23]. They belong to a large class of convex polytopes known
as uniform polytopes. In this paper we will use Wythoff's construction and a new result concerning the lengths of edges of orthogonal trees to identify these polytopes. We will also discuss the
symmetry groups of these two polytopes, which are of order 103,680 and 2,903,040 respectively.
In the last five years, R. T. Worley [7,8] succeeded in determining the Voronoi cells of the E[6]^* and E[7]^* lattices. These turned out to be a 6-dimensional polytope with 720 vertices, and a
7-dimensional polytope with 576 vertices, respectively. Worley was primarily interested in determining exact values for the normalized second moments of these two polytopes. However, he failed to
mention that these polytopes had been described as early as 1930 by H. S. M. Coxeter [2, pp. 414-417], who called them the 0[221] and the 1[23]. They belong to a large class of convex polytopes known
as uniform polytopes, and they can both be derived by a procedure called Wythoff's construction.
In this paper we will independently identify these polytopes using Wythoff's construction, aided by a new formula for the lengths of the edges of a kind of simplex known as an orthogonal tree. The
chief virtue of this new derivation is that it does not require the use of any coordinates whatsoever. We will also briefly discuss the symmetry groups of these two polytopes, which are of order
103,680 and 2,903,040 respectively.
Wythoff's Construction and Coxeter Graphs
A polytope is uniform if all its facets are uniform and it is ``vertex-regular,'' that is, if it can be rotated so that any vertex can be mapped onto any other vertex. To start the recursion off, a
polygon is defined to be uniform if it is regular.
In 1918, W. A. Wythoff [9] found almost all the uniform polytopes in four dimensions by truncating the regular polytopes. His method was generalized by Coxeter for higher dimensions [3]. At the same
time, Coxeter found a way to represent uniform polytopes as graphs with certain nodes circled. The uniform polytope represented by a diagram G is a cell of the uniform polytope or honeycomb
represented by the diagram H if and only if G is a subdiagram of H. The rest of this section is essentially a summary of relevant parts of [3] and [4, chap. 11]; the reader is referred there for further details.
Coxeter found every example of a simplex that tiles either Euclidean or spherical space by reflections. These simplices (called fundamental simplices) have the property that any two bounding
hyperplanes meet each other at an angle which is a divisor of 180 degrees. An important example of a simplex that tiles 6-dimensional Euclidean space is shown in Figure 1.
Each node of this graph represents a vertex of the simplex. If two nodes share an edge then the two hyperplanes which are opposite the corresponding vertices meet at an angle of 60 degrees; otherwise
the hyperplanes meet at an angle of 90 degrees. (Since the graph is a tree, this simplex is also called an orthogonal tree [5].) An easy geometric argument shows that each node can be labeled with an
integer so that each node's label is half the sum of its neighbors' labels. A vertex represented by a node labeled 1 is called a special vertex. (The subscripts are purely for ease of reference.)
Each node can also be interpreted as a reflection in the corresponding bounding hyperplane. These seven reflections generate an infinite group, sort of a 6-dimensional kaleidoscope. The image of any
special vertex, say 1[a], under the action of this reflection group is the lattice shown in Figure 2, which is defined to be the lattice E[6]. (Hence Coxeter's name 2[22]. In general, if a graph has
three branches of length p, q, and r, then p[qr] is obtained by circling the last node of the branch of length p, and 0[pqr] is obtained by circling the center node.) The images of any one of the
three special vertices 1[a], 1[b], or 1[c] forms an E[6] lattice. The lattice E[6]^* consists of the images of all three special vertices, and is therefore the union of the three copies of E[6].
The E[6]^* lattice is not uniform since it requires the union of more than one diagram to represent it, although we will soon see that its Voronoi cell is uniform.
Finding the Voronoi Cells
To find the Voronoi cell of E[6]^* we must find the point on the simplex which is furthest from the three special vertices 1[a], 1[b], 1[c]. Formally, we want to find the point P belonging to the
simplex that maximizes
min[N=a,b,c] dist(P, 1[N]).   (1)
We will now show that the maximizing point P is in fact the vertex labeled 3, thereby justifying:
Theorem 1 The Voronoi cell of E[6]^* is 0[221].
Proof: To find the lengths of the edges of the simplex, label each edge of its graph with the inverse of the product of the labels of that edge's two endpoints:
This new diagram is to be read as follows: to find the distance between two vertices of the simplex, take the square root of the sum of the numbers on the edges of the unique path between the two
nodes representing the two vertices. (This is the new formula mentioned in the introduction. It was discovered by the author in 1990, and then generalized to arbitrary orthogonal trees with angles
other than 60 and 90 degrees by Coxeter in [5].) For instance, the distance from 1[a] to 1[c] is √(1/2 + 1/6 + 1/6 + 1/2) = √(4/3). The triangles {1[a], 2[a], 3}, {1[b], 2[b], 3}, and {1[c], 2[c], 3} are all congruent right triangles lying on three
orthogonal planes.
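This distance rule is easy to check mechanically. The sketch below is not from the paper: the node names and the star-shaped tree are an assumed transcription of Figure 1 (a center node labeled 3 with three branches labeled 2 and 1), with each edge weighted by the inverse of the product of its endpoint labels, as the text prescribes.

```python
from fractions import Fraction

# Assumed transcription of Figure 1: center node "3", three branches 3-2-1.
# Each edge carries 1/(product of its endpoint labels).
edges = {
    ("1a", "2a"): Fraction(1, 1 * 2), ("2a", "3"): Fraction(1, 2 * 3),
    ("1b", "2b"): Fraction(1, 1 * 2), ("2b", "3"): Fraction(1, 2 * 3),
    ("1c", "2c"): Fraction(1, 1 * 2), ("2c", "3"): Fraction(1, 2 * 3),
}

# Path from each node to the center of the star-shaped tree.
path_to_center = {
    "3": [], "2a": [("2a", "3")], "2b": [("2b", "3")], "2c": [("2c", "3")],
    "1a": [("1a", "2a"), ("2a", "3")],
    "1b": [("1b", "2b"), ("2b", "3")],
    "1c": [("1c", "2c"), ("2c", "3")],
}

def dist2(u, v):
    """Squared distance = sum of edge weights on the unique path u..v."""
    pu, pv = path_to_center[u], path_to_center[v]
    shared = set(pu) & set(pv)           # edges both paths use cancel out
    path = [e for e in pu + pv if e not in shared]
    return sum(edges[e] for e in path)

# dist(1a, 1c)^2 = 1/2 + 1/6 + 1/6 + 1/2 = 4/3
assert dist2("1a", "1c") == Fraction(4, 3)
# {1a, 2a, 3} is a right triangle: Pythagoras holds with hypotenuse 1a..3
assert dist2("1a", "2a") + dist2("2a", "3") == dist2("1a", "3")
# The center vertex 3 is equidistant from all three special vertices
assert dist2("1a", "3") == dist2("1b", "3") == dist2("1c", "3") == Fraction(2, 3)
```

Exact rational arithmetic via `Fraction` leaves no floating-point doubt about the Pythagorean identity.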
So we see immediately that P = 3 is at least a local maximum for equation (1), since any infinitesimal adjustment would move P closer to some 1[N]. But since the images of vertex 3 form the honeycomb
0[222], which contains only one kind of cell, namely the 0[221] = 0[212] = 0[122], the images of vertex 3 must be the only vertices of the Voronoi cells of E[6]^*, and so P = 3 must be an absolute
maximum for equation (1). This completes the proof of theorem 1.
The proof of the next theorem is very similar.
Theorem 2 The Voronoi cell of E[7]^* is 1[23].
Proof: The graph for the fundamental simplex for the E[7] and E[7]^* lattices is:
Again, we have labeled each node with a number in such a way that each node's label is half the sum of its neighbor's labels, and again we have labeled each edge with the inverse of the product of
the labels of its endpoints. The rule for determining edge length is the same as in theorem 1. The E[7]^* lattice consists of the images of the two special vertices 1[x] and 1[y]. We want to find the
point P which maximizes min[N=x,y] dist(P, 1[N]). This time, the desired point P is easily seen to be the vertex 2[z], the images of which form the honeycomb 1[33], which has only one kind of cell, the
1[23] = 1[32]. This completes the proof of theorem 2.
Rotation and Symmetry Groups
The rotation and symmetry groups of the 0[221] and 1[23] and the relationships between them are summarized in the following chart:
The groups in the diagram surrounded by thick boxes have a central subgroup of order 2 containing the central inversion (which is a rotation in even dimensions); those surrounded by thin boxes do
not. Indeed, each group surrounded by a thick box is the direct product of the group to its lower left and the group of order two containing the central inversion. Groups are connected by single
lines to normal subgroups, and by double lines to other subgroups.
The symmetry group of the 2[21] was originally studied in the 19th century as the group of automorphisms of the 27 lines on the cubic surface, and the rotation group of the 1[23] was originally known
as the automorphism group of the 28 bitangents of a non-singular quartic curve. Two good modern references for the study of these polytopes, their groups, and their histories are [6, pp. 21-33] and [
1, pp. 26, 46].
[1] J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, R. A. Wilson. Atlas of Finite Groups. Oxford University Press, 1985.
[2] H. S. M. Coxeter. The Polytopes with Regular-Prismatic Vertex Figures, I. Phil. Trans. Royal Soc. London (A), 229 (1930), 329-425.
[3] H. S. M. Coxeter. Wythoff's Construction for Uniform Polytopes. Proc. London Math. Soc. (2), 38 (1935), 327-339. Reprinted as Chapter 3 of Twelve Geometric Essays, Southern Illinois University
Press, 1968.
[4] H. S. M. Coxeter. Regular Polytopes, 3rd ed., Dover, 1973.
[5] H. S. M. Coxeter. Regular and Semi-Regular Polytopes, III. Mathematische Zeitschrift, 200:1-45, 1988.
[6] H. S. M. Coxeter. Orthogonal Trees. Proc. of the 7th Annual Symposium on Computational Geometry, ACM Press, 1991, pp.89-97.
[7] R. T. Worley. The Voronoi Region of E[6]^*. J. Austral. Math. Soc. (A), 43 (1987), 268-278.
[8] R. T. Worley. The Voronoi Region of E[7]^*. SIAM J. Disc. Math., 1.1 (1988), 134-141.
[9] W. A. Wythoff. A Relation between the Polytopes of the C[600] Family. Proc. Royal Acad. of Sci., Amsterdam, 20 (1918), 966-970.
Hollywood, FL Algebra 1 Tutor
Find a Hollywood, FL Algebra 1 Tutor
...Lauderdale, FL. It is my goal to help students master those foundational skills of the elementary curriculum so that that they can apply those key, basic principles to every new challenge they
meet as they progress as young scholars. I actively engage my students and encourage parents to be an ...
17 Subjects: including algebra 1, reading, writing, English
...All math's from calculus and lower would be ideal areas for me to assist students with. I have worked with all ages and grade, as well as GED students so I am comfortable with kids of all ages
and adults. I did volunteer at Whispering Pines School in Miramar working with students who have behavior and mental disorders.
13 Subjects: including algebra 1, geometry, algebra 2, GED
...I tutor university student from a variety of school including Nova Southeastern University, Broward College, Miami-Dade Community College, Florida Atlantic University, Florida International
University and University of Miami. Being so involved in research at the university, I have put many theor...
10 Subjects: including algebra 1, chemistry, geometry, biology
I am an expert in teaching math, physics, and Excel. I would like to help students who are struggling with math, physics, or Excel. I have around 12 years of teaching experience at various
prestigious institutions, with classes ranging from KG to PG.
29 Subjects: including algebra 1, physics, statistics, geometry
...It is important to me that the student learns the material. I like to give a pretest and then go over each question to see what the student doesn't understand, and then go over that material.
19 Subjects: including algebra 1, reading, GED, SAT math
Related Hollywood, FL Tutors
Hollywood, FL Accounting Tutors
Hollywood, FL ACT Tutors
Hollywood, FL Algebra Tutors
Hollywood, FL Algebra 2 Tutors
Hollywood, FL Calculus Tutors
Hollywood, FL Geometry Tutors
Hollywood, FL Math Tutors
Hollywood, FL Prealgebra Tutors
Hollywood, FL Precalculus Tutors
Hollywood, FL SAT Tutors
Hollywood, FL SAT Math Tutors
Hollywood, FL Science Tutors
Hollywood, FL Statistics Tutors
Hollywood, FL Trigonometry Tutors
Nearby Cities With algebra 1 Tutor
Aventura, FL algebra 1 Tutors
Cooper City, FL algebra 1 Tutors
Dania algebra 1 Tutors
Dania Beach, FL algebra 1 Tutors
Davie, FL algebra 1 Tutors
Fort Lauderdale algebra 1 Tutors
Hallandale algebra 1 Tutors
Miami Gardens, FL algebra 1 Tutors
Miramar, FL algebra 1 Tutors
N Miami Beach, FL algebra 1 Tutors
North Miami Beach algebra 1 Tutors
Pembroke Park, FL algebra 1 Tutors
Pembroke Pines algebra 1 Tutors
Plantation, FL algebra 1 Tutors
West Park, FL algebra 1 Tutors
Here's a dilemma. Suppose you and a friend have been arrested for a crime and you're being interviewed separately. The police offer each of you the same deal. You can either confess, incriminating
your partner, or remain silent. If you confess and your partner doesn't, then you get 2 years in jail (as a reward for talking), while your partner gets 10 years. If you both confess, then you both
get 8 years (reduced from 10 years because at least you talked). If you both remain silent, you both get 5 years, as the evidence is only sufficient to convict you of a lesser crime.
What should your strategy be? As a selfish and rational individual, you should talk. If your partner also talks, then your confession gets you 8 years instead of 10. If your partner doesn't talk,
then it gets you 2 years instead of 5. Talking is your dominant strategy, it leaves you better off than silence, no matter what your partner does.
The trouble is that your partner, just as selfish and rational as you, will come to the same conclusion. You'll both decide to talk and get 8 years each. Paradoxically, your dominant strategy will
leave both of you worse off than silence would have done.
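The dominance argument can be spelled out in a few lines of Python (a sketch using the article's numbers; the move names are ours):

```python
# Sentences (years) for (you, partner) under each pair of moves,
# using the numbers from the article above.
payoff = {
    ("talk", "talk"):   (8, 8),
    ("talk", "quiet"):  (2, 10),
    ("quiet", "talk"):  (10, 2),
    ("quiet", "quiet"): (5, 5),
}

def best_reply(partner_move):
    # The selfish, rational choice: minimise your own sentence.
    return min(("talk", "quiet"), key=lambda m: payoff[(m, partner_move)][0])

# Talking dominates: it is the best reply whatever the partner does...
assert best_reply("talk") == "talk" and best_reply("quiet") == "talk"
# ...yet mutual confession (8, 8) leaves both worse off than mutual silence (5, 5).
assert payoff[("talk", "talk")][0] > payoff[("quiet", "quiet")][0]
```

The two assertions together are the paradox: the individually best move produces a jointly worse outcome.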
The prisoner's dilemma is one of game theory's most famous games because it illustrates why people might refuse to cooperate when they would be better off doing so. One real-life situation that is
similar to the dilemma is an arms race between two countries, in which both countries increase their military might when it would be better for both to disarm.
The dilemma has been used extensively in mathematical research into altruism. Mathematical research into altruism? Yes, that's right! Using the dilemma as the basis for computer simulations in which
simulated individuals can either cooperate or defect has shown how altruism can evolve as a survival strategy, even in large societies.
To find out more, read Does it pay to be nice? And there's more about the prisoner's dilemma and economics in Adam Smith and the invisible hand.
The philosophy of mathematics is the branch of philosophy that studies the philosophical assumptions, foundations, and implications of mathematics. The aim of the philosophy of mathematics is to
provide an account of the nature and methodology of mathematics and to understand the place of mathematics in people's lives. The logical and structural nature of mathematics itself makes this study
both broad and unique among its philosophical counterparts.
Recurrent themes include:
• What are the sources of mathematical subject matter?
• What is the ontological status of mathematical entities?
• What does it mean to refer to a mathematical object?
• What is the character of a mathematical proposition?
• What is the relation between logic and mathematics?
• What is the role of hermeneutics in mathematics?
• What kinds of inquiry play a role in mathematics?
• What are the objectives of mathematical inquiry?
• What gives mathematics its hold on experience?
• What are the human traits behind mathematics?
• What is mathematical beauty?
• What is the source and nature of mathematical truth?
• What is the relationship between the abstract world of mathematics and the material universe?
The terms philosophy of mathematics and mathematical philosophy are frequently used as synonyms.^[1] The latter, however, may be used to refer to several other areas of study. One refers to a project
of formalising a philosophical subject matter, say, aesthetics, ethics, logic, metaphysics, or theology, in a purportedly more exact and rigorous form, as for example the labours of Scholastic
theologians, or the systematic aims of Leibniz and Spinoza. Another refers to the working philosophy of an individual practitioner or a like-minded community of practicing mathematicians.
Additionally, some understand the term "mathematical philosophy" to be an allusion to the approach taken by Bertrand Russell in his books The Principles of Mathematics and Introduction to Mathematical Philosophy.
The origin of mathematics is subject to argument. Whether the birth of mathematics was a random happening or induced by necessity duly contingent of other subjects, say for physics, is still a matter
of prolific debates.
Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand,
while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern
philosophy. Western philosophies of mathematics go as far back as Plato, who studied the ontological status of mathematical objects, and Aristotle, who studied logic and issues related to infinity
(actual versus potential).
Greek philosophy on mathematics was strongly influenced by their study of geometry. For example, at one time, the Greeks held the opinion that 1 (one) was not a number, but rather a unit of arbitrary
length. A number was defined as a multitude. Therefore 3, for example, represented a certain multitude of units, and was thus not "truly" a number. At another point, a similar argument was made that
2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are
measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one."
These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus, a disciple of Pythagoras, showed that the diagonal of a unit square
was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its
edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatised by this discovery that they murdered Hippasus to stop
him from spreading his heretical idea. Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz, the focus shifted strongly to the relationship
between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Frege and of Russell, but was brought into question by developments in the late 19th and
early 20th century.
20th century
A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th century philosophers continued to ask the questions
mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterised by a predominant interest in formal logic, set theory, and foundational issues.
It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into
this issue are known as the foundations of mathematics program.
At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of
mathematical epistemology and ontology. Three schools, formalism, intuitionism, and logicism, emerged at this time, partly in response to the increasingly widespread worry that mathematics as it
stood, and analysis in particular, did not live up to the standards of certainty and rigour that had been taken for granted. Each school addressed the issues that came to the fore at that time,
either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge.
Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics. As
the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of
Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom, proposition and proof, as well as the notion of a proposition being true of a mathematical object (see Assignment
(mathematical logic)), were formalised, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated which provided a conceptual framework in which much
mathematical discourse would be interpreted. In mathematics as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering, propositions could be
interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. This reflective critique in which the theory under review "becomes itself
the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory.^[2]
At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane, known as category theory, and it became a new contender for the natural language of
mathematical thinking (Mac Lane 1998). As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at
its opening. Hilary Putnam summed up one common view of the situation in the last third of the century by saying:
When philosophy discovers something wrong with science, sometimes science has to be changed — Russell's paradox comes to mind, as does Berkeley's attack on the actual infinitesimal — but more
often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the
philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need. (Putnam, 169-170).
Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject.
The schools are addressed separately in the next section, and their assumptions explained.
Contemporary schools of thought
Mathematical realism
Mathematical realism, like realism in general, holds that mathematical entities exist independently of the human mind. Thus humans do not invent mathematics, but rather discover it, and any other
intelligent beings in the universe would presumably do the same. In this point of view, there is really one sort of mathematics that can be discovered: Triangles, for example, are real entities, not
the creations of the human mind.
Many working mathematicians have been mathematical realists; they see themselves as discoverers of naturally occurring objects. Examples include Paul Erdős and Kurt Gödel. Gödel believed in an
objective mathematical reality that could be perceived in a manner analogous to sense perception. Certain principles (e.g., for any two objects, there is a collection of objects consisting of
precisely those two objects) could be directly seen to be true, but the continuum hypothesis conjecture might prove undecidable just on the basis of such principles. Gödel suggested that
quasi-empirical methodology could be used to provide sufficient evidence to be able to reasonably assume such a conjecture.
Within realism, there are distinctions depending on what sort of existence one takes mathematical entities to have, and how we know about them.
Mathematical Platonism is the form of realism that suggests that mathematical entities are abstract, have no spatiotemporal or causal properties, and are eternal and unchanging. This is often claimed
to be the view most people have of numbers. The term Platonism is used because such a view is seen to parallel Plato's Theory of Forms and a "World of Ideas" (Greek: Eidos (εἶδος)) described in
Plato's Allegory of the cave: the everyday world can only imperfectly approximate an unchanging, ultimate reality. Both Plato's cave and Platonism have meaningful, not just superficial connections,
because Plato's ideas were preceded and probably influenced by the hugely popular Pythagoreans of ancient Greece, who believed that the world was, quite literally, generated by numbers.
The major problem of mathematical platonism is this: precisely where and how do the mathematical entities exist, and how do we know about them? Is there a world, completely separate from our physical
one, that is occupied by the mathematical entities? How can we gain access to this separate world and discover truths about the entities? One proposed answer is the Ultimate ensemble, a theory that
postulates all structures that exist mathematically also exist physically in their own universe.
Plato spoke of mathematics in the following exchange:
How do you mean?
I mean, as I was saying, that arithmetic has a very great and elevating effect, compelling the soul to reason about abstract number, and rebelling against the introduction of visible or tangible
objects into the argument. You know how steadily the masters of the art repel and ridicule any one who attempts to divide absolute unity when he is calculating, and if you divide, they multiply,
taking care that one shall continue one and not become lost in fractions.
That is very true.
Now, suppose a person were to say to them: O my friends, what are these wonderful numbers about which you are reasoning, in which, as you say, there is a unity such as you demand, and each unit
is equal, invariable, indivisible, --what would they answer?
—Plato, The Republic, Book VII (Jowett translation).
In context, the H.D.P. Lee translation of Book VII reports that the education of a philosopher comprises five mathematical disciplines:
1. arithmetic, written in unit-fraction 'parts' using theoretical unities and abstract numbers;
2. plane geometry;
3. solid geometry, both geometries taking the line to be segmented into rational and irrational unit 'parts';
4. astronomy;
5. harmonics.
Translators of Plato's works resisted the practical versions of his culture's mathematics. Plato himself and the Greeks, however, had inherited a roughly 1,500-year-older Egyptian tradition of
abstract fractional unities, one being a hekat unity scaled to (64/64) in the Akhmim Wooden Tablet, thereby not becoming lost in fractions.
Gödel's platonism postulates a special kind of mathematical intuition that lets us perceive mathematical objects directly. (This view bears resemblances to many things Husserl said about mathematics,
and supports Kant's idea that mathematics is synthetic a priori.) Davis and Hersh have suggested in their book The Mathematical Experience that most mathematicians act as though they are Platonists,
even though, if pressed to defend the position carefully, they may retreat to formalism (see below).
Some mathematicians hold opinions that amount to more nuanced versions of Platonism.
Full-blooded Platonism is a modern variation of Platonism, which is in reaction to the fact that different sets of mathematical entities can be proven to exist depending on the axioms and inference
rules employed (for instance, the law of the excluded middle, and the axiom of choice). It holds that all mathematical entities exist, however they may be provable, even if they cannot all be derived
from a single consistent set of axioms.
Empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research, just like facts in any of the other
sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like
this was John Stuart Mill. Mill's view was widely criticized, because it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of
two pairs coming together and forming a quartet.
Contemporary mathematical empiricism, formulated by Quine and Putnam, is primarily supported by the indispensability argument: mathematics is indispensable to all empirical sciences, and if we want
to believe in the reality of the phenomena described by the sciences, we ought also believe in the reality of those entities required for this description. That is, since physics needs to talk about
electrons to say why light bulbs behave as they do, then electrons must exist. Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with
Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of
being distinct from the other sciences.
Putnam strongly rejected the term "Platonist" as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that
rejected mystical notions of truth and accepted much quasi-empiricism in mathematics. Putnam was involved in coining the term "pure realism" (see below).
The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests
that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of
our scientific theory as a whole, i.e., consilience, after E. O. Wilson. Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is incredibly central, and
that it would be extremely difficult for us to revise it, though not impossible.
For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy's Realism in Mathematics. Another
example of a realist theory is the embodied mind theory (below). For a modern revision of mathematical empiricism see New Empiricism (below).
For experimental evidence suggesting that one-day-old babies can do elementary arithmetic, see Brian Butterworth.
Mathematical monism
Max Tegmark's Mathematical universe hypothesis goes further than full-blooded Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate
is: All structures that exist mathematically also exist physically. That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive
themselves as existing in a physically 'real' world".^[3]^[4]
Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic (Carnap 1931/1983, 41). Logicists hold that mathematics can be known a priori, but suggest that
our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic, not requiring any special faculty of mathematical intuition. In this view, logic is the proper
foundation of mathematics, and all mathematical statements are necessary logical truths.
Rudolf Carnap (1931) presents the logicist thesis in two parts:
1. The concepts of mathematics can be derived from logical concepts through explicit definitions.
2. The theorems of mathematics can be derived from logical axioms through purely logical deduction.
Gottlob Frege was the founder of logicism. In his seminal Die Grundgesetze der Arithmetik (Basic Laws of Arithmetic) he built up arithmetic from a system of logic with a general principle of
comprehension, which he called "Basic Law V" (for concepts F and G, the extension of F equals the extension of G if and only if for all objects a, Fa if and only if Ga), a principle that he took to
be acceptable as part of logic.
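In a standard modern rendering (Frege's own two-dimensional notation differs), writing εF for the extension of the concept F, Basic Law V reads:

```latex
\varepsilon F = \varepsilon G \;\longleftrightarrow\; \forall a\,(Fa \leftrightarrow Ga)
```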
Frege's construction was flawed. Russell discovered that Basic Law V is inconsistent (this is Russell's paradox). Frege abandoned his logicist program soon after this, but it was continued by Russell
and Whitehead. They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of
modern mathematics but in an altered, and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make
several compromises in order to develop so much of mathematics, such as an "axiom of reducibility". Even Russell said that this axiom did not really belong to logic.
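The inconsistency Russell found can be stated in a few lines. Basic Law V licenses forming the extension of any concept, including the concept "is not a member of itself":

```latex
\begin{aligned}
R &= \{\, x \mid x \notin x \,\}\\
R \in R &\iff R \notin R
\end{aligned}
```

Either assumption about R's membership in itself yields its negation, so the system proves a contradiction.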
Modern logicists (like Bob Hale, Crispin Wright, and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favour of abstraction principles such as Hume's
principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into
one-to-one correspondence). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not
have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had
to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical.
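Writing #F for "the number of objects falling under F" and F ≈ G for the existence of a one-to-one correspondence between the Fs and the Gs, Hume's principle is:

```latex
\#F = \#G \;\longleftrightarrow\; F \approx G
```

Unlike Basic Law V, this principle is consistent with second-order logic; the derivation of arithmetic from it is known as Frege's theorem.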
If mathematics is a part of logic, then questions about mathematical objects reduce to questions about logical objects. But what, one might ask, are the objects of logical concepts? In this sense,
logicism can be seen as shifting questions about the philosophy of mathematics to questions about logic without fully answering them.
Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is
seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, you can generate
the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like; in fact, they aren't "about" anything at all.
Another version of formalism is often known as deductivism. In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one: if you assign meaning to the strings in such a way
that the rules of the game become true (i.e., true statements are assigned to the axioms and the rules of inference are truth-preserving), then you have to accept the theorem, or, rather, the
interpretation you have given it must be a true statement. The same is held to be true for all other mathematical statements. Thus, formalism need not mean that mathematics is nothing more than a
meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism.) But it does allow the working
mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by
the demands of science or other areas of mathematics.
A major early proponent of formalism was David Hilbert, whose program was intended to be a complete and consistent axiomatization of all of mathematics. Hilbert aimed to show the consistency of
mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers, chosen to be philosophically uncontroversial) was consistent.
Hilbert's goals of creating a system of mathematics that is both complete and consistent were dealt a fatal blow by the second of Gödel's incompleteness theorems, which states that sufficiently
expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be
impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system
of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent.
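Schematically, writing Con(S) for the arithmetized statement that the system S proves no contradiction, the second incompleteness theorem says that for any consistent, effectively axiomatized S containing enough arithmetic:

```latex
S \nvdash \mathrm{Con}(S)
```

so a proof of Con(S) must be carried out in some system stronger than S itself, which is exactly the obstacle to Hilbert's program described above.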
Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the
finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.
Other formalists, such as Rudolf Carnap, Alfred Tarski and Haskell Curry, considered mathematics to be the investigation of formal axiom systems. Mathematical logicians study formal systems but are
just as often realists as they are formalists.
Formalists are relatively tolerant and inviting to new approaches to logic, non-standard number systems, new set theories etc. The more games we study, the better. However, in all three of these
examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary.
The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the
question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view.
Recently, some formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated proof
checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science,
this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition (see below). See QED project for a general overview.
The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that
Euclidean geometry should not be regarded as a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions
about the physical world.
Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws).
John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many nineteenth-century German logicians such as Sigwart and Erdmann as well as a number of psychologists,
past and present: for example, Gustave Le Bon. Psychologism was famously criticized by Frege in his The Foundations of Arithmetic, and many of his works and essays, including his review of Husserl's
Philosophy of Arithmetic. Edmund Husserl, in the first volume of his Logical Investigations, called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself
from it. The "Prolegomena" is considered a more concise, fair, and thorough refutation of psychologism than Frege's criticisms, and many today regard it as a decisive and memorable refutation.
Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty.
In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" (L.E.J. Brouwer). From this springboard, intuitionists seek
to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement,
held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects. (CDP, 542)
A major force behind Intuitionism was L.E.J. Brouwer, who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic,
different from the classical Aristotelian logic; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction. The axiom of choice is also rejected in
most intuitionistic set theories, though in some versions it is accepted. Important work was later done by Errett Bishop, who managed to prove versions of the most important theorems in real analysis
within this framework.
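The difference from classical logic can be made concrete. In intuitionistic logic the law of the excluded middle, P ∨ ¬P, is not provable in general, yet its double negation is. The following is a sketch of such a constructive proof in Lean 4 (used here purely as an illustration; it invokes no classical axioms):

```lean
-- ¬¬(P ∨ ¬P) holds intuitionistically even though P ∨ ¬P does not:
-- given a refutation h of (P ∨ ¬P), we build ¬P from it, inject that
-- into the disjunction, and feed it back to h for a contradiction.
theorem doubleNegEM (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun p => h (Or.inl p)))
```

Proof by contradiction in the classical sense, concluding P from ¬¬P, would require exactly the eliminated principle.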
In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill
this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable
numbers, first introduced by Alan Turing. Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science.
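To illustrate the notion of a computable number (a sketch, not drawn from the article itself): a real number is computable when a single finite algorithm can approximate it to any requested precision. The hypothetical helper below does this for √2 using exact integer arithmetic:

```python
from math import isqrt

def sqrt2_approx(n: int) -> int:
    """Return the integer k with k/2**n <= sqrt(2) < (k+1)/2**n.

    Each call is a finite, terminating computation; that one algorithm
    serves every precision 2**-n is what makes sqrt(2) *computable*.
    """
    # floor(sqrt(2) * 2**n) equals isqrt(2 * 4**n), computed exactly.
    return isqrt(2 * 4**n)

# Successive calls refine the approximation without bound.
k = sqrt2_approx(40)  # k / 2**40 lies within 2**-40 of sqrt(2)
```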
Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical
discourse. In this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. Instead, it is about entities that we can create directly through mental
activity. In addition, some adherents of these schools reject non-constructive proofs, such as a proof by contradiction.
Finitism is an extreme form of constructivism, according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book
Philosophy of Set Theory, Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists.
The most famous proponent of finitism was Leopold Kronecker,^[5] who said:
God created the natural numbers, all else is the work of man.
Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources.
Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no
intrinsic properties. For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined
by their places in a structure, the number line. Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra.
Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a
mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology). The kind of existence mathematical objects have would clearly
be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard.^[6]
The Ante Rem, or fully realist, variation of structuralism has a similar ontology to Platonism in that structures are held to have a real but abstract and immaterial existence. As such, it faces the
usual problems of explaining the interaction between such abstract structures and flesh-and-blood mathematicians.
In Re, or moderately realistic, structuralism is the equivalent of Aristotelian realism. Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues
that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures.
The Post Res or eliminative variant of structuralism is anti-realist about structures in a way that parallels nominalism. According to this view mathematical systems exist, and have structural
features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely convenient to talk of structures being "held in common"
between systems: they in fact have no independent existence.
Embodied mind theories
Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus which finds itself in our physical universe. For example, the abstract concept of number
springs from the experience of counting discrete objects. It is held that mathematics is not universal and does not exist in any real sense, other than in human brains. Humans construct, but do not
discover, mathematics.
With this view, the physical universe can thus be seen as the ultimate foundation of mathematics: it guided the evolution of the brain and later determined which questions this brain would find
worthy of investigation. However, the human mind has no special claim on reality or approaches to it built out of math. If such constructs as Euler's identity are true then they are true as a map of
the human mind and cognition.
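Euler's identity, the example invoked here, is the statement:

```latex
e^{i\pi} + 1 = 0
```

On the embodied-mind view, even this equation is true as a fact about human mathematical cognition rather than about a mind-independent realm.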
Embodied mind theorists thus explain the effectiveness of mathematics — mathematics was constructed by the brain in order to be effective in this universe.
The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From, by George Lakoff and Rafael E. Núñez. In addition, mathematician Keith Devlin has investigated
similar concepts with his book The Math Instinct. For more on the philosophical ideas that inspired this perspective, see cognitive science of mathematics.
New empiricism
A more recent empiricism returns to the principle of the English empiricists of the 18th and 19th Centuries, in particular John Stuart Mill, who asserted that all knowledge comes to us from
observation through the senses. This applies not only to matters of fact, but also to "relations of ideas," as Hume called them: the structures of logic which interpret, organize and abstract observations.
To this principle it adds a materialist connection: All the processes of logic which interpret, organize and abstract observations, are physical phenomena which take place in real time and physical
space: namely, in the brains of human beings. Abstract objects, such as mathematical objects, are ideas, which in turn exist as electrical and chemical states of the billions of neurons in the human brain.
This second concept is reminiscent of the social constructivist approach, which holds that mathematics is produced by humans rather than being "discovered" from abstract, a priori truths. However, it
differs sharply from the constructivist implication that humans arbitrarily construct mathematical principles that have no inherent truth but which instead are created on a conveniency basis. On the
contrary, new empiricism shows how mathematics, although constructed by humans, follows rules and principles that will be agreed on by all who participate in the process, with the result that
everyone practicing mathematics comes up with the same answer, except in those areas where there is philosophical disagreement on the meaning of fundamental concepts. This is because the new
empiricism perceives this agreement as a physical phenomenon, one observed by other humans in the same way that other physical phenomena, such as the motions of inanimate bodies or the chemical
interaction of various elements, are observed.
Combining the materialist principle with Millian epistemology evades the principal difficulty with classical empiricism, namely that all knowledge comes from the senses. That difficulty lies in the
observation that mathematical truths based on logical deduction appear to be more certainly true than knowledge of the physical world itself. (The physical world in this case is taken to mean the
portion of it lying outside the human brain.)
Kant argued that the structures of logic which organize, interpret and abstract observations were built into the human mind and were true and valid a priori. Mill, on the contrary, said that we
believe them to be true because we have enough individual instances of their truth to generalize: in his words, "From instances we have observed, we feel warranted in concluding that what we found
true in those instances holds in all similar ones, past, present and future, however numerous they may be."^[7] Although the psychological or epistemological specifics given by Mill through which we
build our logical apparatus may not be completely warranted, his explanation nonetheless manages to demonstrate that there is no way around Kant's a priori logic. To recast Mill's original idea with
an empiricist twist: "Indeed, the very principles of logical deduction are true because we observe that using them leads to true conclusions", which is itself an a priori presupposition.
For most mathematicians the empiricist principle that all knowledge comes from the senses contradicts a more basic principle: that mathematical propositions are true independent of the physical
world. Everything about a mathematical proposition is independent of what appears to be the physical world. It all takes place in the mind. And the mind operates on infallible principles of deductive
logic. It is not influenced by exterior inputs from the physical world, distorted by having to pass through the tentative, contingent universe of the senses. It all happens internally, so to say.
This in turn may be the answer to what brings about Gödel's special kind of mathematical intuition, which was mentioned earlier in the article.
If all this is true, then where do the senses come in? The early empiricists all stumbled over this point. Hume asserted that all knowledge comes from the senses, and then gave away the
ballgame by excepting abstract propositions, which he called "relations of ideas." These, he said, were absolutely true (although the mathematicians who thought them up, being human, might get them
wrong). Mill, on the other hand, tried to deny that abstract ideas exist outside the physical world: all numbers, he said, "must be numbers of something: there are no such things as numbers in the
abstract." When we count to eight or add five and three we are really counting spoons or bumblebees. "All things possess quantity," he said, so that propositions concerning numbers are propositions
concerning "all things whatever." But then in almost a contradiction of himself he went on to acknowledge that numerical and algebraic expressions are not necessarily attached to real world objects:
they "do not excite in our minds ideas of any things in particular." Mill's low reputation as a philosopher of logic, and the low estate of empiricism in the century and a half following him, derives
from this failed attempt to link abstract thoughts to the physical world, when it is obvious that abstraction consists precisely of separating the thought from its physical foundations.
The conundrum created by our certainty that abstract deductive propositions, if valid (i.e., if we can "prove" them), are true, exclusive of observation and testing in the physical world, gives rise
to a further reflection...What if thoughts themselves, and the minds that create them, are physical objects, existing only in the physical world?
This would reconcile the contradiction between our belief in the certainty of abstract deductions and the empiricist principle that knowledge comes from observation of individual instances. We know
that Euler's equation is true because every time a human mind derives the equation, it gets the same result, unless it has made a mistake, which can be acknowledged and corrected. We observe this
phenomenon, and we extrapolate to the general proposition that it is always true.
This applies not only to physical principles, like the law of gravity, but to abstract phenomena that we observe only in human brains: in ours and in those of others.
Aristotelian realism
Similar to empiricism in emphasizing the relation of mathematics to the real world, Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be
literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract"
world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots.^[8]
Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy (1990) that when an egg carton is opened, a set of
three eggs is perceived (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in
a set of three eggs (that is, a mathematical entity realized in the physical world). A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in
the physical world.
Fictionalism in mathematics was brought to fame in 1980 when Hartry Field published Science Without Numbers, which rejected and in fact reversed Quine's indispensability argument. Where Quine
suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that
mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics that
didn't reference numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to
do the work formerly done by vector fields. Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space,
so no special mathematical objects at all are needed.
Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction. He showed that mathematical physics is a conservative extension of his
non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications
are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 +
2 = 4" is just as fictitious as "Sherlock Holmes lived at 221B Baker Street" — but both are true according to the relevant fictions.
By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics, and about fiction in
general. Field's approach has been very influential, but is widely rejected. This is in part because of the requirement of strong fragments of second-order logic to carry out his reduction, and
because the statement of conservativity seems to require quantification over abstract models or deductions.
Social constructivism or social realism theories see mathematics primarily as a social construct, as a product of culture, subject to correction and change. Like the other sciences, mathematics is
viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social
constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such
external forces may change the direction of some mathematical research, there are strong internal constraints — the mathematical traditions, methods, problems, meanings and values into which
mathematicians are enculturated — that work to conserve the historically defined discipline.
This runs counter to the traditional beliefs of working mathematicians, that mathematics is somehow pure or objective. But social constructivists argue that mathematics is in fact grounded by much
uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt, and is corrected to the degree it is required or desired by the current mathematical community.
This can be seen in the development of analysis from reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk
mathematics not enough, due to an over-emphasis on axiomatic proof and peer review as practices. However, this might be seen as merely saying that rigorously proven results are overemphasized, and
then "look how chaotic and uncertain the rest of it all is!"
The social nature of mathematics is highlighted in its subcultures. Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for
lack of social contact between mathematicians. Social constructivists argue each speciality forms its own epistemic community and often has great difficulty communicating, or motivating the
investigation of unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social
realists see a deficiency either of human capacity to abstractify, or of human cognitive bias, or of mathematicians' collective intelligence as preventing the comprehension of a real universe of
mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless. Some social scientists also argue that
mathematics is not real or objective at all, but is affected by racism and ethnocentrism. Some of these ideas are close to postmodernism.
Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko, although it is not clear that either would endorse the title. More recently Paul Ernest has explicitly formulated a
social constructivist philosophy of mathematics.^[9] Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad
collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number. Reuben Hersh has also promoted the social view of mathematics, calling it a
"humanistic" approach,^[10] similar to but not quite the same as that associated with Alvin White;^[11] one of Hersh's co-authors, Philip J. Davis, has expressed sympathy for the social view as well.
A criticism of this approach is that it is trivial, based on the trivial observation that mathematics is a human activity. That rigorous proof comes only after unrigorous conjecture,
experimentation and speculation is true, but trivial: no one would deny it. So it is a stretch to found a philosophy of mathematics on something trivially true.
The calculus of Leibniz and Newton was reexamined by mathematicians such as Weierstrass in order to rigorously prove the theorems thereof. There is nothing special or interesting about this, as it
fits in with the more general trend of unrigorous ideas which are later made rigorous. There needs to be a clear distinction between the objects of study of mathematics and the study of the objects
of study of mathematics. The former doesn't seem to change a great deal^[citation needed]; the latter is forever in flux. The latter is what the Social theory is about, and the former is what
Platonism et al. are about.
However, this criticism is rejected by supporters of the social constructivist perspective because it misses the point that the very objects of mathematics are social constructs. These objects, it
asserts, are primarily semiotic objects existing in the sphere of human culture, sustained by social practices (after Wittgenstein) that utilize physically embodied signs and give rise to
intrapersonal (mental) constructs. Social constructivists view the reification of the sphere of human culture into a Platonic realm, or some other heaven-like domain of existence beyond the physical
world, as a long-standing category error.
Beyond the traditional schools
Rather than focus on narrow debates about the true nature of mathematical truth, or even on practices unique to mathematicians such as the proof, a growing movement from the 1960s to the 1990s began
to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner's famous 1960 paper The Unreasonable Effectiveness
of Mathematics in the Natural Sciences, in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain.
The embodied-mind or cognitive school and the social school were responses to this challenge, but the debates raised were difficult to confine to those.
One parallel concern that does not actually challenge the schools directly but instead questions their focus is the notion of quasi-empiricism in mathematics. This grew from the increasingly popular
assertion in the late 20th century that no one foundation of mathematics could ever be proven to exist. It is also sometimes called "postmodernism in mathematics" although that term is considered
overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit
falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Quasi-empiricism was developed by Imre Lakatos, inspired by the philosophy of
science of Karl Popper.
Lakatos' philosophy of mathematics is sometimes regarded as a kind of social constructivism, but this was not his intention.
Such methods have always been part of folk mathematics by which great feats of calculation and measurement are sometimes achieved. Indeed, such methods may be the only notion of proof a culture has.
Hilary Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely on quasi-empirical methods
primarily, being willing often to forgo rigorous and axiomatic proofs, and still be doing mathematics — at perhaps a somewhat greater risk of failure of their calculations. He gave a detailed
argument for this in New Directions (ed. Tymoczko, 1998).
Popper's "two senses" theory
Realist and constructivist theories are normally taken to be contraries. However, Karl Popper^[12] argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses.
In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two
propositions: one of which can be explained on constructivist lines; the other on realist lines.^[13]
Few philosophers are able to penetrate mathematical notations and culture to relate conventional notions of metaphysics to the more specialized metaphysical notions of the schools above. This may
lead to a disconnection in which some mathematicians continue to profess discredited philosophy as a justification for their continued belief in a world-view promoting their work.^[citation needed]
Although the social theories and quasi-empiricism, and especially the embodied mind theory, have focused more attention on the epistemology implied by current mathematical practices, they fall far
short of actually relating this to ordinary human perception and everyday understandings of knowledge.^[citation needed]
Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, the language of science. Although most mathematicians and physicists^[
citation needed] (and many philosophers) would accept the statement "mathematics is a language", linguists believe that the implications of such a statement must be considered. For example, the tools
of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way than other languages. If mathematics is a language, it is a
different type of language than natural languages. Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by
linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in
formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems.
Indispensability argument for realism
This argument, associated with Willard Quine and Hilary Putnam, is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract
mathematical entities, such as numbers and sets.^[14] The form of the argument is as follows.
1. One must have ontological commitments to all entities that are indispensable to the best scientific theories, and to those entities only (commonly referred to as "all and only").
2. Mathematical entities are indispensable to the best scientific theories. Therefore,
3. One must have ontological commitments to mathematical entities.^[15]
The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part
of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism. Since theories are not
confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude
the existence of sets and non-Euclidean geometry, but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position.^[15]
Epistemic argument against realism
The anti-realist "epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field. Platonism posits that mathematical objects are abstract entities. By general agreement,
abstract entities cannot interact causally with concrete, physical entities. ("the truth-values of our mathematical assertions depend on facts involving platonic entities that reside in a realm
outside of space-time"^[16]) Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of
how mathematicians come to have knowledge of abstract objects.^[17]^[18]^[19] ("An account of mathematical truth ... must be consistent with the possibility of mathematical knowledge"^[20]). Another
way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs, etc., which is already fully accountable in
terms of physical processes in their brains.
Field developed his views into fictionalism. Benacerraf also developed the philosophy of mathematical structuralism, according to which there are no mathematical objects. Nonetheless, some versions
of structuralism are compatible with some versions of realism.
The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line
of defence is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir
Roger Penrose.^[21]
Another line of defence is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal and not analogous to perception. This argument is developed by Jerrold
Katz in his book Realistic Rationalism.
A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis. In that case, a mathematician's knowledge of mathematics is one mathematical object making contact
with another.
Many practising mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy
to the philosophers and get back to mathematics — where, presumably, the beauty lies.
In his work on the divine proportion, H. E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art — the
reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the
original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature.
Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality
of the √2. The first is the traditional proof by contradiction, ascribed to Euclid; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the
heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem.
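The second proof Davis and Hersh describe can be reconstructed along the following lines (this is the standard argument from unique factorization, not necessarily their exact presentation):

```latex
Suppose $\sqrt{2} = a/b$ for positive integers $a$ and $b$. Squaring gives
$a^2 = 2b^2$. By the fundamental theorem of arithmetic, each side has a
unique prime factorization, so we may compare the exponent of the prime $2$
on each side: in $a^2$ it is even (twice its exponent in $a$), while in
$2b^2$ it is odd (one more than twice its exponent in $b$). No integer is
both even and odd, so no such $a$ and $b$ exist, and $\sqrt{2}$ is
irrational.
```

Unlike the contradiction proof, which must assume the fraction is in lowest terms, the factorization argument needs no minimality assumption, which is plausibly why it is seen as more direct.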
Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant"
proof; Gregory Chaitin has argued against this idea.
Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to
characterize what makes one proof more desirable than another when both are logically sound.
Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this
view occurs in G.H. Hardy's book A Mathematician's Apology, in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and
similar ends. Some later mathematicians have characterized Hardy's views as mildly dated^[citation needed], given the applicability of number theory to modern-day cryptography.
1. ^ Maziars, Edward A. (1969). "Problems in the Philosophy of Mathematics (Book Review)". Philosophy of Science 36 (3): 325. For example, when Edward Maziars proposes in a 1969 book review "to
distinguish philosophical mathematics (which is primarily a specialised task for a mathematician) from mathematical philosophy (which ordinarily may be the philosopher's metier)", he uses the
term mathematical philosophy as being synonymous with philosophy of mathematics.
2. ^ Kleene, Stephen (1971). Introduction to Metamathematics. Amsterdam, Netherlands: North-Holland Publishing Company. p. 5.
3. ^ Tegmark, Max (February 2008). "The Mathematical Universe". Foundations of Physics 38 (2): 101–150. arXiv:0704.0646. Bibcode 2008FoPh...38..101T. doi:10.1007/s10701-007-9186-9.
4. ^ Tegmark (1998), p. 1.
5. ^ From an 1886 lecture at the 'Berliner Naturforscher-Versammlung', according to H. M. Weber's memorial article, as quoted and translated in Gonzalez Cabillon, Julio (2000-02-03). "FOM: What were
Kronecker's f.o.m.?". http://www.cs.nyu.edu/pipermail/fom/2000-February/003820.html. Retrieved 2008-07-19. Gonzalez gives as the sources for the memorial article, the following: 'Weber, H:
"Leopold Kronecker", _Jahresberichte der Deutschen Mathematiker Vereinigung_, vol ii (1893) pp 5-31. Cf page 19. See also _Mathematische Annalen_ vol xliii (1893) pp 1-25'.
6. ^ Brown, James (2008). Philosophy of Mathematics. New York: Routledge. ISBN 978-0-415-96047-2.
7. ^ A System of Logic Ratiocinative and Inductive, The Collected Works of John Stuart Mill published by the University of Toronto Press in 1973 . Book II, Chapter vi, Section 2 (Toronto edition
1975, Vol.7, p. 254)
8. ^ Franklin, James. "Aristotelian realism, in The Philosophy of Mathematics, ed. A. Irvine (Handbook of the Philosophy of Science)". North-Holland Elsevier. http://www.maths.unsw.edu.au/~jim/
irv.pdf. Retrieved 2009-12-25.
9. ^ Ernest, Paul. "Is Mathematics Discovered or Invented?". University of Exeter. http://www.people.ex.ac.uk/PErnest/pome12/article2.htm. Retrieved 2008-12-26.
10. ^ Hersh, Reuben (February 10, 1997). What Kind of a Thing is a Number?. Interview with John Brockman. Edge Foundation. http://edge.org/documents/archive/edge5.html. Retrieved 2008-12-26.
11. ^ "Humanism and Mathematics Education". Math Forum. Humanistic Mathematics Network Journal. http://mathforum.org/mathed/humanistic.math.html. Retrieved 2008-12-26.
12. ^ Popper, Karl Raimund (1946) Aristotelian Society Supplementary Volume XX.
13. ^ Gregory, Frank Hutson (1996) Arithmetic and Reality: A Development of Popper's Ideas. City University of Hong Kong. Republished in Philosophy of Mathematics Education Journal No. 26 (December
14. ^ Yablo, S. (November 8, 1998). "A Paradox of Existence". http://www.mit.edu/%7Eyablo/apex.html#fn1.
15. ^ ^a ^b Putnam, H. Mathematics, Matter and Method. Philosophical Papers, vol. 1. Cambridge: Cambridge University Press, 1975. 2nd. ed., 1985.
16. ^ Field, Hartry, 1989, Realism, Mathematics, and Modality, Oxford: Blackwell, p. 68
17. ^ "Since abstract objects are outside the nexus of causes and effects, and thus perceptually inaccessible, they cannot be known through their effects on us" Katz, J. Realistic Rationalism, p. 15
18. ^ Benacerraf, 1973, p. 409
Further reading
• Aristotle, "Prior Analytics", Hugh Tredennick (trans.), pp. 181–531 in Aristotle, Volume 1, Loeb Classical Library, William Heinemann, London, UK, 1938.
• Audi, Robert (ed., 1999), The Cambridge Dictionary of Philosophy, Cambridge University Press, Cambridge, UK, 1995. 2nd edition, 1999. Cited as CDP.
• Benacerraf, Paul, and Putnam, Hilary (eds., 1983), Philosophy of Mathematics, Selected Readings, 1st edition, Prentice-Hall, Englewood Cliffs, NJ, 1964. 2nd edition, Cambridge University Press,
Cambridge, UK, 1983.
• Berkeley, George (1734), The Analyst; or, a Discourse Addressed to an Infidel Mathematician. Wherein It is examined whether the Object, Principles, and Inferences of the modern Analysis are more
distinctly conceived, or more evidently deduced, than Religious Mysteries and Points of Faith, London & Dublin. Online text, David R. Wilkins (ed.), Eprint.
• Bourbaki, N. (1994), Elements of the History of Mathematics, John Meldrum (trans.), Springer-Verlag, Berlin, Germany.
• Carnap, Rudolf (1931), "Die logizistische Grundlegung der Mathematik", Erkenntnis 2, 91-121. Republished, "The Logicist Foundations of Mathematics", E. Putnam and G.J. Massey (trans.), in
Benacerraf and Putnam (1964). Reprinted, pp. 41–52 in Benacerraf and Putnam (1983).
• Chandrasekhar, Subrahmanyan (1987), Truth and Beauty. Aesthetics and Motivations in Science, University of Chicago Press, Chicago, IL.
• Colyvan, Mark (2004), "Indispensability Arguments in the Philosophy of Mathematics", Stanford Encyclopedia of Philosophy, Edward N. Zalta (ed.), Eprint.
• Davis, Philip J. and Hersh, Reuben (1981), The Mathematical Experience, Mariner Books, New York, NY.
• Devlin, Keith (2005), The Math Instinct: Why You're a Mathematical Genius (Along with Lobsters, Birds, Cats, and Dogs), Thunder's Mouth Press, New York, NY.
• Dummett, Michael (1991 a), Frege, Philosophy of Mathematics, Harvard University Press, Cambridge, MA.
• Dummett, Michael (1991 b), Frege and Other Philosophers, Oxford University Press, Oxford, UK.
• Dummett, Michael (1993), Origins of Analytical Philosophy, Harvard University Press, Cambridge, MA.
• Ernest, Paul (1998), Social Constructivism as a Philosophy of Mathematics, State University of New York Press, Albany, NY.
• George, Alexandre (ed., 1994), Mathematics and Mind, Oxford University Press, Oxford, UK.
• Hadamard, Jacques (1949), The Psychology of Invention in the Mathematical Field, 1st edition, Princeton University Press, Princeton, NJ. 2nd edition, 1949. Reprinted, Dover Publications, New
York, NY, 1954.
• Hardy, G.H. (1940), A Mathematician's Apology, 1st published, 1940. Reprinted, C.P. Snow (foreword), 1967. Reprinted, Cambridge University Press, Cambridge, UK, 1992.
• Hart, W.D. (ed., 1996), The Philosophy of Mathematics, Oxford University Press, Oxford, UK.
• Hendricks, Vincent F. and Hannes Leitgeb (eds.). Philosophy of Mathematics: 5 Questions, New York: Automatic Press / VIP, 2006. [1]
• Huntley, H.E. (1970), The Divine Proportion: A Study in Mathematical Beauty, Dover Publications, New York, NY.
• Irvine, A., ed (2009), The Philosophy of Mathematics, in Handbook of the Philosophy of Science series, North-Holland Elsevier, Amsterdam.
• Klein, Jacob (1968), Greek Mathematical Thought and the Origin of Algebra, Eva Brann (trans.), MIT Press, Cambridge, MA, 1968. Reprinted, Dover Publications, Mineola, NY, 1992.
• Kline, Morris (1959), Mathematics and the Physical World, Thomas Y. Crowell Company, New York, NY, 1959. Reprinted, Dover Publications, Mineola, NY, 1981.
• Kline, Morris (1972), Mathematical Thought from Ancient to Modern Times, Oxford University Press, New York, NY.
• König, Julius (Gyula) (1905), "Über die Grundlagen der Mengenlehre und das Kontinuumproblem", Mathematische Annalen 61, 156-160. Reprinted, "On the Foundations of Set Theory and the Continuum
Problem", Stefan Bauer-Mengelberg (trans.), pp. 145–149 in Jean van Heijenoort (ed., 1967).
• Körner, Stephan, The Philosophy of Mathematics, An Introduction. Harper Books, 1960.
• Lakoff, George, and Núñez, Rafael E. (2000), Where Mathematics Comes From: How the Embodied Mind Brings Mathematics into Being, Basic Books, New York, NY.
• Lakatos, Imre 1976 Proofs and Refutations:The Logic of Mathematical Discovery (Eds) J. Worrall & E. Zahar Cambridge University Press
• Lakatos, Imre 1978 Mathematics, Science and Epistemology: Philosophical Papers Volume 2 (Eds) J.Worrall & G.Currie Cambridge University Press
• Lakatos, Imre 1968 Problems in the Philosophy of Mathematics North Holland
• Leibniz, G.W., Logical Papers (1666–1690), G.H.R. Parkinson (ed., trans.), Oxford University Press, London, UK, 1966.
• Mac Lane, Saunders (1998), Categories for the Working Mathematician, 1st edition, Springer-Verlag, New York, NY, 1971, 2nd edition, Springer-Verlag, New York, NY.
• Maddy, Penelope (1990), Realism in Mathematics, Oxford University Press, Oxford, UK.
• Maddy, Penelope (1997), Naturalism in Mathematics, Oxford University Press, Oxford, UK.
• Maziarz, Edward A., and Greenwood, Thomas (1995), Greek Mathematical Philosophy, Barnes and Noble Books.
• Mount, Matthew, Classical Greek Mathematical Philosophy,^[citation needed].
• Peirce, Benjamin (1870), "Linear Associative Algebra", § 1. See American Journal of Mathematics 4 (1881).
• Peirce, C.S., Collected Papers of Charles Sanders Peirce, vols. 1-6, Charles Hartshorne and Paul Weiss (eds.), vols. 7-8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931 –
1935, 1958. Cited as CP (volume).(paragraph).
• Peirce, C.S., various pieces on mathematics and logic, many readable online through links at the Charles Sanders Peirce bibliography, especially under Books authored or edited by Peirce,
published in his lifetime and the two sections following it.
• Plato, "The Republic, Volume 1", Paul Shorey (trans.), pp. 1–535 in Plato, Volume 5, Loeb Classical Library, William Heinemann, London, UK, 1930.
• Plato, "The Republic, Volume 2", Paul Shorey (trans.), pp. 1–521 in Plato, Volume 6, Loeb Classical Library, William Heinemann, London, UK, 1935.
• Putnam, Hilary (1967), "Mathematics Without Foundations", Journal of Philosophy 64/1, 5-22. Reprinted, pp. 168–184 in W.D. Hart (ed., 1996).
• Resnik, Michael D. Frege and the Philosophy of Mathematics, Cornell University, 1980.
• Resnik, Michael (1997), Mathematics as a Science of Patterns, Clarendon Press, Oxford, UK, ISBN 978-0-19-825014-2
• Robinson, Gilbert de B. (1959), The Foundations of Geometry, University of Toronto Press, Toronto, Canada, 1940, 1946, 1952, 4th edition 1959.
• Raymond, Eric S. (1993), "The Utility of Mathematics", Eprint.
• Smullyan, Raymond M. (1993), Recursion Theory for Metamathematics, Oxford University Press, Oxford, UK.
• Russell, Bertrand (1919), Introduction to Mathematical Philosophy, George Allen and Unwin, London, UK. Reprinted, John G. Slater (intro.), Routledge, London, UK, 1993.
• Shapiro, Stewart (2000), Thinking About Mathematics: The Philosophy of Mathematics, Oxford University Press, Oxford, UK
• Strohmeier, John, and Westbrook, Peter (1999), Divine Harmony, The Life and Teachings of Pythagoras, Berkeley Hills Books, Berkeley, CA.
• Styazhkin, N.I. (1969), History of Mathematical Logic from Leibniz to Peano, MIT Press, Cambridge, MA.
• Tait, William W. (1986), "Truth and Proof: The Platonism of Mathematics", Synthese 69 (1986), 341-370. Reprinted, pp. 142–167 in W.D. Hart (ed., 1996).
• Tarski, A. (1983), Logic, Semantics, Metamathematics: Papers from 1923 to 1938, J.H. Woodger (trans.), Oxford University Press, Oxford, UK, 1956. 2nd edition, John Corcoran (ed.), Hackett
Publishing, Indianapolis, IN, 1983.
• Tymoczko, Thomas (1998), New Directions in the Philosophy of Mathematics, Catalog entry?
• Ulam, S.M. (1990), Analogies Between Analogies: The Mathematical Reports of S.M. Ulam and His Los Alamos Collaborators, A.R. Bednarek and Françoise Ulam (eds.), University of California Press,
Berkeley, CA.
• van Heijenoort, Jean (ed. 1967), From Frege To Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, Cambridge, MA.
• Wigner, Eugene (1960), "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", Communications on Pure and Applied Mathematics 13(1): 1-14. Eprint
• Wilder, Raymond L. Mathematics as a Cultural System, Pergamon, 1980.
External links | {"url":"http://dictionary.sensagent.com/Philosophy_of_mathematics/en-en/","timestamp":"2014-04-20T23:28:26Z","content_type":null,"content_length":"270509","record_id":"<urn:uuid:e3f01373-eb06-4223-9b16-689720a70077>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching Discrete Mathematics via Primary Historical Sources, http://www.math.nmsu.edu/hist_projects
A course in discrete mathematics is a relatively recent addition, within the last 30 or 40 years, to the modern American undergraduate curriculum, born out of a need to instruct computer science
majors in algorithmic thought. The roots of discrete mathematics, however, are as old as mathematics itself, with the notion of counting a discrete operation, usually cited as the first mathematical
Many are now teaching mathematics directly with primary historical sources, in a variety of courses and levels. How far should this be taken? Should we adapt or redesign standard courses to a
completely historical approach, chiefly from primary sources? If so, what are the obstacles to achieving this? Materials? Instructor attitudes? What should and can we do about such things?
Let's first study a few excerpts from Turing's original paper [13, p. 231-234], and then design a few machines to perform certain tasks. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=4948938","timestamp":"2014-04-18T21:48:24Z","content_type":null,"content_length":"17831","record_id":"<urn:uuid:f3a71f62-8e5f-4cca-a8c6-4764b7c365b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
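The machine-design exercise the abstract alludes to can be prototyped in a few lines. The simulator and the sample machine below are our own minimal sketch, not taken from Turing's paper or the project materials:

```python
# Minimal Turing machine: the table maps (state, symbol) to
# (symbol to write, head move 'L'/'R', next state); the machine
# halts when no rule applies.
def run(table, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    pos = 0
    for _ in range(max_steps):
        symbol = tape[pos] if 0 <= pos < len(tape) else blank
        if (state, symbol) not in table:
            break  # no applicable rule: halt
        write, move, state = table[(state, symbol)]
        if 0 <= pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)  # this sketch only extends the tape rightward
        pos += 1 if move == "R" else -1
    return "".join(tape)

# Example machine: flip every bit, moving right until the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run(flip, "0110"))  # -> 1001
```

A machine needing leftward tape growth would require a two-sided tape representation; the one-sided list above keeps the sketch short.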
Can black holes in space ever become "normal" again?
A Staff Report from the Straight Dope Science Advisory Board
February 11, 2003
Dear Straight Dope:
After seeing a Discovery Channel show on astronomy which included an explanation of "Hawking radiation" from black holes, I have a question. If the black hole is radiating away particles, however
randomly, will the mass of the black hole eventually cross back over the threshold of infinite density and become "normal" matter again? Will a black hole eventually become a brown dwarf that is
visible? Or, do black holes, having at one point crossed the critical threshold, retain their infinite density? So to get down to brass tacks, are black holes forever?
There are two important things to know about black holes. First, they're weird. But second, they're not as weird as most people think.
For instance, the gravitational field of a black hole is just like the gravitational field of any other mass. If the sun were instantly replaced by a solar-mass black hole (that is, a black hole
having the same mass as the sun), the planets would continue to orbit in exactly the same manner: They would not get "sucked in," despite what many people think. I know you didn't ask about that,
Keith, but it's a common enough misconception that I thought I would mention it anyway.
And black holes also don't have infinite density. When we talk about a black hole, we generally mean the entire region inside the event horizon, the surface of no return--to put it another way, the
region from which no light escapes. This horizon has a radius called the Schwarzschild radius, which is directly proportional to the mass of the black hole (specifically, Rs = 2GM/c^2, where G is the
gravitational constant, M is the mass of the object, and c is the speed of light). You can calculate the Schwarzschild radius for any mass, whether it's a black hole or not. For instance, the sun has
a Schwarzschild radius of about 3 kilometers.
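As a quick check on the parenthetical formula, the figure of about 3 kilometers can be reproduced directly (the constants are rounded standard values; the code is an illustration, not part of the original report):

```python
# Schwarzschild radius R_s = 2GM/c^2 for a given mass.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the sun, kg

def schwarzschild_radius(mass_kg):
    """Radius of the event horizon, in meters."""
    return 2 * G * mass_kg / c**2

print(f"{schwarzschild_radius(M_sun) / 1000:.2f} km")  # ~2.95 km
```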
If a black hole has a nonzero volume, then it also has a finite density. Since the radius of a hole is proportional to its mass, its density is inversely proportional to the mass squared. Bigger
holes, in other words, have lower densities. For a hole of a few solar masses, this density is greater than that of the nucleus of an atom, but a supermassive black hole such as one finds in the core
of a galaxy can have a density more like water, or even air.
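To put numbers on both formulas, here is a short sketch (plain Python, SI units, constants rounded; the 10^8-solar-mass figure is just an illustrative choice) computing Rs = 2GM/c^2 and the mean density M / ((4/3) pi Rs^3):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Rs = 2GM/c^2 -- horizon radius, directly proportional to mass."""
    return 2 * G * mass_kg / C**2

def mean_density(mass_kg):
    """Mass divided by the volume inside the horizon; scales as 1/M^2."""
    rs = schwarzschild_radius(mass_kg)
    return mass_kg / ((4.0 / 3.0) * math.pi * rs**3)

rs_sun = schwarzschild_radius(M_SUN)       # about 3 kilometers
rho_sun = mean_density(M_SUN)              # far above nuclear density
rho_smbh = mean_density(1e8 * M_SUN)       # roughly the density of water
```

Running this reproduces the figures in the text: a solar-mass hole has a horizon radius near 3 km and a mean density well above nuclear density, while a 10^8-solar-mass hole comes out near the density of water.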
So why do people talk about black holes as having infinite density? They're really just referring to the center of the hole. According to the simplest models, all of the mass is concentrated in a
single point at the center called a singularity. We don't know this for certain, though, since all we can observe about a black hole is what goes on outside the event horizon. For all we know, the
internal mass distribution could be anything, so long as it's spherically symmetrical. In fact, it's widely suspected that once we have a working theory of quantum gravity, it will turn out that the
mass of the hole is concentrated in some small, but finite, region in the center.
Getting back to your question, you spoke of Hawking radiation, a process first described by Stephen Hawking in 1974. According to Hawking, black holes actually have a temperature, and can therefore
emit radiation (and lose mass and energy in the process). Eventually, due to this effect, a black hole would evaporate away. A full explanation of the process would be too long for this column, but
Hawking gives an excellent description in his bestseller A Brief History of Time. The interested reader is encouraged to head to the local library to read more.
So, what happens when a black hole evaporates? It gets smaller, but it always stays a black hole. In the present universe, it's impossible to form a black hole smaller than a mid-sized star, but
there's no rule against smaller holes existing. As the hole gets smaller, its density increases, as does its power output: Oddly, the more energy a black hole radiates away, the hotter it gets, so
that near the end, you get an extremely energetic and explosive burst of radiation.
Except that all of this is only good up to a limit. Nobody's quite sure yet what happens right at the end. The simplest thing to assume is that it radiates away to nothing, but there are a lot of
problems with that. For one thing, it gets to the point that it would be emitting particles more massive than it is, which is just as absurd as it sounds, even to physicists. Another possibility is
that once it gets small enough, about the mass of a bacterium or so, it stops evaporating and just hangs out like that forever. It's even been proposed that the first time any black hole anywhere
does evaporate, it might leave behind what's called a naked singularity, and thereby cause the end of the world as we know it. Again, this is a job for quantum gravity. But don't worry: At the rate
that black holes evaporate, we've got a few hundred thousand billion billion billion trillion trillion trillion years to figure it out.
Mystery of Missing Rupee
Dear Friends,
I am presenting to you a very interesting riddle. Once you get the answer to this you can pose this riddle to your friends, etc. I am sure most of them will not be able to solve the mystery of the
missing rupee.
Riddle: Mystery of Missing Rupee
Three men walk into a hotel and rent a room for Rs. 30. They contribute equally towards the room rent, so each of them pays Rs. 10.
After some time the hotel manager realizes that the room rent should have been only Rs. 25. So he sends the dishonest bellboy to return Rs. 5 to the men.
The bellboy cheats and gives each of them only Re. 1 back.
Now you know that Rs. 27 (10 - 1 = Rs. 9 each) has been paid by the 3 men and Rs. 2 is with the bellboy. That makes Rs. 29 (27 + 2), so where is the remaining Re. 1?
**This question is actually meant to be asked when you are face to face with the other person.
I think its effectiveness and punch are somewhat lost in written words. Nevertheless, give it a try and enjoy asking it to others.
1. Loss of 3 men=gain of waiter +gain of hotel manager
Hence proved
2. The three friends actually had Rs 30… they got Rs 3 back…
the bellboy kept Rs 2…
the manager kept all he needed, i.e. Rs 25.
Now count:
Friends have Rs (3) + Manager had (25) + Bellboy had (2) = 3 + 25 + 2 = 30
4. each paid Rs 9/- so the total amount is Rs 27/- of which Rs 25/- goes to manager
and Rs 2/- to the bell boy.
5. Rs 5 cannot be distributed equally among 3 men (if you consider whole numbers), so two men should get Re 1 each and the third person should get Rs 3. The bellboy gives Rs 2 to two of the men, pays the third man Re 1, and keeps Rs 2 with himself.
6. Actually they paid Rs 27, and the remaining Rs 3 was with them; the manager got 25 from the 27 and the remaining Rs 2 was kept by the bellboy,
so total 25 + 2 + 3 = 30
7. total rent=25
bell boy took 2
three men took each Rs1
so together 25+2+1+1+1=30
8. Rent of the hotel = Rs25
Amount with boy = Rs 2
Total amount given by friends to hotel
manager(including boy amount) = Rs27 …(Rs25+Rs2)
Therefore there is no missing rupee.
9. Rs 30 given by them,
3 returned,
so the bill according to them is 27;
out of 27, 2 is with the waiter and 25 with the manager :)
10. This is nothing but a question of mathematical jeopardy.
If we look the problem according to the money spent by each of them i.e (10-1)3 = 27, 1 rupee will seem to be lost.
But look it as follows:
the boy returns Rs 3 out of 5. Hence he has Rs 2 left with him.
So money with the manager plus money returned = Rs 25 + Rs 3 = Rs 28, and the boy has Rs 2. Thus 28 + 2 = 30.
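The accounting can also be checked mechanically. A few lines of throwaway Python (variable names my own) show that the Rs 27 already contains the bellboy's Rs 2, so adding the two is double counting:

```python
paid_each, refund_each, men = 10, 1, 3
rent, bellboy = 25, 2

out_of_pocket = men * (paid_each - refund_each)   # 27: what the men actually spent

# The 27 splits into the rent and the bellboy's take; nothing is missing.
assert out_of_pocket == rent + bellboy            # 27 = 25 + 2

# The original 30 is fully accounted for by rent + bellboy + refunds.
assert men * paid_each == rent + bellboy + men * refund_each   # 30 = 25 + 2 + 3
```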
12. Hello everybody.
No doubt the money contributed by the three friends toward rent = 27.
Now subtract the Rs 2/- with the dishonest boy: 2.
Rent charged by the manager = 27 - 2 = 25.
We have to start with the 30 - 3 = 27 contribution and go backward to calculate the rent.
13. The remaining Re. 1 is with the hotel manager. To prove it, let's have two equations:
1:> The total amount paid as rent is Rs. 27/-.
2:> Rs. 25/- is with the hotel manager and Rs. 2/- is with the bell boy.
14. Hello sir, if you are doing (27 + 2 = 29) it means the owner has given Rs 7 back.
This is how we have to calculate:
Total rent: 30
Amount contributed = 10 + 10 + 10 = 30
Money returned is 5
We have to subtract from the total: 30 - (3 + 2) = 25
So now if you check:
money with the manager plus money returned to the 3 friends = 25 + 3 = 28,
and the remaining Rs 2 is with the bellboy.
First and second partial derivatives
March 31st 2009, 01:27 PM #1
First and second partial derivatives
Am kinda stuck with this function they gave me: f(x,t) = ln(x^2 t - t/x).
I am asked for all first and second order partial derivatives. To make the function simpler I did this
to break the ln up into two ln's:
I get now f(x,t) = ln(x^2 t) + ln(1 - 1/x^3)
For my second order I get f_xt = 0 and f_tx = 0. I need someone to confirm these for me. Is my algebra correct? Is what I am doing correct, splitting it up into logarithms? Thank you for your help!
Hello, zangestu888!
They gave me: . $f(x,t)\:=\:\ln\left(x^2t-\tfrac{t}{x}\right)$
and asked for all first and second order partial derivatives.
To make the function simpler I did this:
. . $f(x,t)\:=\:\ln\bigg[x^2t\left(1-\frac{1}{x^3}\right)\bigg]$ to break up the two ln's
I get now: . $\ln(x^2t)+\ln\left(1-\tfrac{1}{x^3}\right)$
For my second order: . $f_{xt}= f_{tx}=0$ . . . . all this is correct.
I would break it up like this . . .
We have: . $f(x,t) \;=\;\ln\left(\frac{x^3t - t}{x}\right) \;=\;\ln\left(\frac{t(x^3-1)}{x}\right)$
. . Then: . $f(x,t) \;=\;\ln(t) + \ln(x^3-1) - \ln(x)$
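As a machine check of the algebra above (SymPy assumed available; it is not part of the original thread), both mixed second-order partials do indeed vanish:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = sp.log(x**2 * t - t / x)

f_x = sp.diff(f, x)
f_t = sp.diff(f, t)                    # equals 1/t, independent of x
f_xt = sp.simplify(sp.diff(f, x, t))   # mixed partial, x then t
f_tx = sp.simplify(sp.diff(f, t, x))   # mixed partial, t then x
```

Since f = ln(t) + ln(x^3 - 1) - ln(x), the t-dependence separates off as ln(t), which is why every mixed partial is zero.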
March 31st 2009, 02:28 PM #2
Homework Help
Posted by Ashley on Thursday, July 28, 2011 at 12:44am.
Solve by factoring: 2a^2 + 3 = 7a
• math - Dr Russ, Thursday, July 28, 2011 at 3:45am
Start by rearranging into 2a^2 - 7a + 3 = 0.
Then either use the quadratic formula, or it is possible to see the roots by inspection.
• math - Ashley, Thursday, July 28, 2011 at 12:44pm
(2a - 1)(a - 3) = 0
2a - 1 = 0 or a - 3 = 0
a = 1/2 or a = 3
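A quick way to check the factoring (plain Python, nothing beyond the standard library) is to plug both roots back into the original equation:

```python
# 2a^2 + 3 = 7a  rearranges to  2a^2 - 7a + 3 = 0,
# which factors as (2a - 1)(a - 3) = 0.
for a in (0.5, 3.0):
    assert 2 * a**2 + 3 == 7 * a          # both roots satisfy the original equation
    assert (2 * a - 1) * (a - 3) == 0     # and each makes one factor vanish
```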
Mplus Discussion >> Dummy Independent variable and Multiple Group Analysis
Sanjoy posted on Sunday, May 15, 2005 - 6:23 pm
Professor Muthen/s
Respected Sir/Madam ... I have three things to get clarified and two questions regarding program code.
C1. Multiple group analysis (MGA) of the structural part of an SEM (assuming either no measurement sections, or that all dependent variables have only a single indicator outcome variable) looks similar to dummy independent variable regression, which we usually do as a standard econometrics technique … am I right?
C2. Now if that is so, then my second question is: how much does MPlus differ from standard econometric theory for dummy independent variable regression when we run Multiple Group Analysis in MPlus (for simplicity assuming no measurement section)?
Following is the standard procedure that we follow in econometric theory for dummy independent variable regression (when the dependent variable is continuous).
We estimate the constrained model (i.e. the coefficients for DUMMY and the K interaction terms are all constrained to be zero): regress Y on all of the X’s (K of them in total).
For the unconstrained model, regress Y on all of the X’s, DUMMY, and the DUMMY interaction terms. Therefore we get 2K + 2 coefficients to be estimated: 1 intercept, a coefficient for DUMMY, a
coefficient for each of the K independent variables, and a coefficient for each of the K interaction terms.
Then we do the incremental F test, with df1 = K + 1 (the number of restrictions) and df2 = N - 2K - 2, where N is the total number of observations:

F(df1, df2) = [(SSE_constrained - SSE_unconstrained) / df1] / [SSE_unconstrained / df2]
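As a synthetic illustration of this procedure (not from the thread; NumPy only, with made-up data in which the groups genuinely differ), one can fit the constrained and unconstrained regressions by least squares and form the incremental F statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
x = rng.normal(size=N)
d = rng.integers(0, 2, size=N).astype(float)        # group dummy
y = 1.0 + 2.0 * x + 0.5 * d + rng.normal(size=N)    # true group shift of 0.5

def sse(X, y):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

K = 1                                               # one ordinary regressor
Xc = np.column_stack([np.ones(N), x])               # constrained (pooled) model
Xu = np.column_stack([np.ones(N), x, d, d * x])     # + dummy and interaction
sse_c, sse_u = sse(Xc, y), sse(Xu, y)
df1 = K + 1                                         # number of restrictions
df2 = N - 2 * K - 2
F = ((sse_c - sse_u) / df1) / (sse_u / df2)
```

The constrained model can never fit better than the unconstrained one, so sse_c >= sse_u and F is non-negative; a large F rejects the hypothesis that the groups share one regression line.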
I’m going through the MPlus User’s Guide (296-306) but fail to understand the statistics behind the way MPlus handles MGA … however, there is a caveat, Professor: I have a categorical dependent variable (Y1 = 1/0). Can I use the same notion of dummy independent variable regression? I mean, the partial coefficient with respect to the dummy variable at its mean value doesn’t make sense in probit/logit analysis … does it?
C3. I checked the MPlus Technical Appendices (page 18) plus your article (1989 Presidential Lecture of the Psychometric Society) … I couldn’t work out what EXACT test statistic we should use in order to verify group differences when we have categorical (i.e. binary as well as ordinal) outcome variables.
Questions regarding Program Code
Q1. I’m interested to check the differences in the coefficients (inclusive of the thresholds) of the structural part of the SEM for the two groups … below is my code … could you please tell me how/what to write in the group-specific “Model command”.
(Y1 is 0/1, R1-R3 and B1-B3 are 5-scale ordinal)
DATA: FILE IS d:\datatotal.txt;
VARIABLE: ;
MISSING ARE .;
GROUPING IS Fitem (1=comA 0=comB);
R BY R1-R3;
B by B1-B3;
Y1 on R B x1 x2 x3 x4;
R on B x2 x5;
B on R x1 x2 x3;
MODEL comB:
I need to check whether or not the regression coefficients of Y1 on R and B, as well as the corresponding threshold, are different ... WHAT should I write in this portion of the model statement?
Q2. Do we need DIFFTEST in order to verify the group-specific difference?
Thanks and regards
bmuthen posted on Monday, May 16, 2005 - 6:40 pm
These questions are a bit too far-reaching to thoroughly answer, and I need to answer briefly. A key difference between having groups represented by covariates versus multiple-group analysis is that
the former approach cannot represent group differences in variances or factor loadings. You can read more about that in standard SEM texts such as Bollen's book (I thought I also treated it in the
1989 Psychometrika article). For multiple-group analysis setups in Mplus, see examples in the Version 3 User's Guide. DIFFTEST can be used whenever you want to test nested models in the WLSMV
Sanjoy posted on Wednesday, May 18, 2005 - 4:27 pm
Thanks Professor ... I think I could not make my point clear ... I am interested to check whether there is a SIGNIFICANT group effect (say I have two-grouped data and the regression coefficients vary
significantly over the two groups or not)
By default MPlus runs two models for Two groups, assuming factor loadings are SAME and regression coefficients are all DIFFERENT across the groups ... hence this is the UNRESTRICTED model ... is it
right? … I suppose this is also the “Baseline Model” as it's been reported in MPlus output?
Now is NOT “holding factor loadings equal across groups” a RESTRICTION by itself! Actually that’s what made me slightly confused.
Again following MPlus example 5.16 and 5.17, there it did NOT do any testing of group specific significance under categorical outcome scenario. These two examples tell us how to specify group
specific factor loading.
Kindly let me get it clear once more, in order to Test the Significance of group specific difference (for my case, the dependent variables are ordinal and the restriction is - putting SAME regression
coefficients for both the groups)
Step 1. Run the model with the grouping command but WITHOUT using the group-specific model command … and save the derivatives with SAVEDATA: DIFFTEST.
Step 2. Open another file, use DIFFTEST under ANALYSIS, write the restrictions in the model command, and run it … is that correct?
Linda K. Muthen posted on Wednesday, May 18, 2005 - 4:36 pm
See the description of multiple group analysis in Chapter 13. It is explained here which parameters are held equal in Mplus as the default and why.
If you want to test whether regression coefficients differ across groups, you would run the model where they are free first and then the model where they are constrained to be equal.
It sounds like what you are saying is correct. The best way to find out is to try it.
Sanjoy posted on Wednesday, May 18, 2005 - 7:12 pm
Thank you madam ... 2-3 quick questions once again :-)
this is what I have done ... running the free model first; that is our 1st file for the unrestricted model (set as the MPlus default setting)
DATA: FILE IS d:\datatotal.txt;
VARIABLE: ;
MISSING ARE .;
GROUPING IS Fitem (1=comA 0=comB);
R BY R1-R3;
B by B1-B3;
Y1 on R B x1 x2 x3 x4;
R on B x2 x5;
B on R x1 x2 x3;
SAVEDATA: DIFFTEST is d:\grouptest.txt;
therefore by default Mplus is giving us two sets of results for the two groups
Now I want to TEST Y1 on R and B are significantly different across the group or not ... I open a new file ...and write the following (pasting only the model section here)
ANALYSIS: DIFFTEST is d:\grouptest.txt;
R BY R1-R3;
B by B1-B3;
Y1 on R (1)
B (2)
x1 x2 x3 x4;
R on B x2 x5;
B on R x1 x2 x3;
LOOK, I did NOT write any group-specific model command like "MODEL comB:" etc ... however I found this way of coding works; in the output we have the same regression coefficients for Y1 on R and B for both
groups .... DO YOU THINK my code is OK?
Q2. after running DIFFTEST in MPlus Output we get this
Chi-Square Test for Difference Testing
Value 3.386
Degrees of Freedom 2**
P-Value 0.1812
What does this result mean? ... I suppose the difference is considered not statistically significant.
Q3. Can we check a NON-equality (linear) constraint on parameters in MPlus .... say e.g. (y1 is 0/1)
y on x1 x2 x3
and we are interested to check whether the regression coefficient of Y on X1 is greater than the regression coefficient of Y on X2.
Linda K. Muthen posted on Thursday, May 19, 2005 - 6:39 am
If you ask for TECH1, you can compare the matrices and see whether the pattern of fixed and free parameters is what you want. It is not a good idea to try to decide this looking at the MODEL command
alone given that Mplus defaults vary across models.
A chi-square value of 3.386 is not significant for two degrees of freedom.
You can read about MODEL CONSTRAINT in the Mplus User's Guide.
Sanjoy posted on Thursday, May 19, 2005 - 2:48 pm
Thanks Madam ... I have never thought of it in this way before
Dynamic theory of the nuclear collective model (1964)
Michael Danos Walter Greiner
The rotation-vibration model and the hydrodynamic dipole-oscillation model are unified. A coupling between the dipole oscillations and the quadrupole vibrations is introduced in the adiabatic
approximation. The dipole oscillations act as a "driving force" for the quadrupole vibrations and stabilize the intrinsic nucleus in a nonaxially symmetric equilibrium shape. The higher dipole
resonance splits into two peaks separated by about 1.5-2 MeV. On top of the several giant resonances occur bands due to rotations and vibrations of the intrinsic nucleus. The dipole operator is
established in terms of the collective coordinates and the γ-absorption cross section is derived. For the most important 1- levels the relative dipole excitation is estimated. It is found
that some of the dipole strength of the higher giant resonance states is shared with those states in which one surface vibration quantum is excited in addition to the giant resonance.
Nuclear models and the osmium isotopes (1964)
Amand Faessler Walter Greiner Raymond K. Sheline
The energies of, and transition probabilities involving, the ground-state rotation bands of Os186, Os188, and Os190 are compared with a diagonalized rotation-vibration theory in which vibrations
are considered to three phonon order. Agreement even in the Os transition region is found to be excellent. The theory appears to be particularly successful in predicting two phonon states in
Magnetic dipole transitions and gR factors in deformed even-even nuclei (1965)
Walter Greiner
Shell-model treatment of nuclear reactions (1965)
Michael Danos Walter Greiner
A method is developed for the calculation of resonant nuclear states which preserves as many features of the shell model as possible. It is an extension of the R-matrix theory. The necessary
formulas are derived and a detailed description of the computational procedure is given. The method is valid up to the two-particle emission threshold. With the assumption of consecutive decay of
the nucleus, the two-particle emission process can also be described. The treatment is antisymmetrized in all particles.
Damping of the giant resonance in heavy nuclei (1965)
Michael Danos Walter Greiner
In heavy nuclei the damping of the giant resonance is due to thermalization of the energy rather than to direct emission of particles; the latter process is strongly inhibited by the
angular-momentum barrier. The thermalization proceeds via inelastic collisions leading from the particle-hole state to two-particle-two-hole states. In heavy nuclei, several hundred such states
are available at the energy of the giant dipole resonance. The rather large width of the giant resonance arises from the addition of many small partial widths of channels leading to the different
two-particle-two-hole states. Both the density of the two-particle-two-hole states and the mean value of the interaction matrix elements between the particle-hole and two-particle-two-hole states
are evaluated in a simplified square-well shell model. In a given nucleus the energy dependence of the widths is determined mainly by the density of states; the A dependence is determined mainly
by the size of the matrix elements. For A≈200, we find 0.5 MeV≤Γ≤2.5 MeV. The uncertainty in this value comes mostly from the uncertainty in the strength of the
interaction. Representing the energy dependence of the width by a power law we find for the exponent the value ∼1.8.
Dynamic collective theory of odd-A nuclei (1965)
Michael Danos Walter Greiner C. Byron Kohr
The unified model and the collective giant-dipole-resonance model are unified. The resulting energy spectrum and the transition probabilities are derived. A new approximate selection rule
involving the symmetry of the γ vibrations is established. It is verified that the main observable features in the photon-absorption cross section are not influenced by the odd particle,
despite the considerably richer spectrum of states as compared to even-even nuclei.
Collective treatment of the giant resonances in spherical nuclei (1966)
M. G. Huber Michael Danos H. J. Weber Walter Greiner
In a collective treatment the energies of the giant resonances are given by the boundary conditions at the nuclear surface, which is subject to vibration in spherical nuclei. The general form of
the coupling between these two collective motions is given by angular-momentum and parity conservation. The coupling constants are completely determined within the hydrodynamical model. In the
present treatment the influence of the surface vibrations on the total photon-absorption cross section is calculated. It turns out that in most of the spherical nuclei this interaction leads to a
pronounced structure in the cross section. The agreement with the experiments in medium-heavy nuclei is striking; many of the experimental characteristics are reproduced by the present
calculations. In some nuclei, however, there seem to be indications of single-particle excitations which are not yet contained in this work.
Photonuclear effect in heavy deformed nuclei (1966)
Hartmuth Arenhövel Michael Danos Walter Greiner
The theory of Raman scattering is extended to include electric-quadrupole radiation. The results obtained are used to compute the elastic and Raman scattering cross sections of heavy deformed
nuclei. The dipole and quadrupole resonances are described by a previously developed theory which includes surface vibrations and rotations. The computed cross sections are compared with
experimental data for all those nuclei where both absorption and scattering cross sections are available. Some discrepancies still exist in certain details; however, the over-all agreement between
theory and experiment is very good.
Collective correlations in C12 (1966)
Dieter Drechsel J. B. Seaborn Walter Greiner
The strong coupling of the giant resonance to the surface vibrations in C12 results in the splitting of the single one-particle, one-hole, 1- collective state into several components, thus
improving the agreement between theory and experiment to a very large extent.
Static theory of the giant quadrupole resonance in deformed nuclei (1966)
Michael Danos Walter Greiner C. Byron Kohr
The modes and frequencies of the giant quadrupole resonance of heavy deformed nuclei have been calculated. The quadrupole operator is computed and the absorption cross section is derived. The
quadrupole sum rule is discussed, and the relevant oscillator strengths have been evaluated for various orientations of the nucleus. The giant quadrupole resonances have energies between 20 and
25 MeV. The total absorption cross section is about 20% of the giant dipole absorption cross section. Of particular interest is the occurrence of the quadrupole mode which is sensitive to the
nuclear radius in a direction of approximately θ=1/4π from the symmetry axis. This may give information on the details of the nuclear shape.
[SciPy-User] help interpreting univariate spline
Elliot Hallmark permafacture@gmail....
Fri Apr 30 15:14:36 CDT 2010
> (I'll point out that you lose accuracy - possibly a lot of it -
> by converting polynomials from one representation to another.)
The error between evaluating the interpolation and evaluating the polynomial
is consistently 10^-16. I don't know if this is huge in the computer
world, but it is quite reasonable to me. 10^-16 is about zero in my view.
> If all
> you want to do is solve the polynomials, though, scipy already
> provides root-finding functionality in its splines
But I will be using c code through cython to solve for roots, which is
why I want a transparent way to solve the interpolation.
> If you really want the polynomials, though, the tck representation
> scipy uses is semi-standard for representing splines; you may find the
> non-object-oriented interface (splrep, splev, etc.) somewhat less
> opaque in this respect. If you do decide to decipher the results, keep
> in mind that with the knots held fixed, it's a linear representation
> of the space of piecewise-cubic functions, so if you can find
> representations for your basis functions (e.g. 1, x, x**2, x**3) you
> can easily work out the conversion. And since the interpolating spline
> for each of those functions is itself, all you need to do is four
> interpolations on a fixed set of knots.
um, I think this is what I have already done? But the "semi standard"
spline representation in tck is completely undocumented as far as I
can tell. The only way to get the polynomial coefficients, as far as
I can tell, is through evaluating the derivatives.
Do you know the meaning of the coefficients splrep generates?
How could such a small set of coefficients represent all the
information of a cubic function?
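For what it's worth, later SciPy versions answer both questions directly: the c array in tck holds coefficients of the B-spline basis functions (one per basis function, not four per interval, which is why the set looks so small), and PPoly.from_spline converts them to ordinary per-interval polynomial coefficients without evaluating derivatives. A sketch (assumes a SciPy new enough to provide PPoly, which postdates this 2010 thread):

```python
import numpy as np
from scipy.interpolate import splrep, PPoly

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)

tck = splrep(x, y)            # (knots t, B-spline coefficients c, degree k)
pp = PPoly.from_spline(tck)   # per-interval power-basis coefficients

# pp.x holds the breakpoints; pp.c[j, i] is the coefficient of
# (x - pp.x[i])**(k - j) on the i-th interval.
xs = np.linspace(0, 2 * np.pi, 200)
max_err = float(np.max(np.abs(pp(xs) - np.sin(xs))))
```

The piecewise polynomial reproduces the data at the knots exactly, and pp.c gives the transparent polynomial pieces one could then hand to C root-finding code.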
At the Box Office
Date: 03/08/99 at 08:42:06
From: Michael Jacks
Subject: Probability (a "random walk" problem)
I was wondering if you could give me some help on answering the
following question.
A queue of n + m people is waiting at a box office; n of them have 5-
pound notes and m have 10-pound notes. The tickets cost 5 pounds each.
When the box office opens there is no money in the till. If each
customer buys just one ticket, what is the probability that none of
them will have to wait for change?
Date: 03/09/99 at 07:47:47
From: Doctor Anthony
Subject: Re: Probability (a "random walk" problem)
This problem can be modelled as a random walk.
We imagine walking in the direction of the positive x axis (x increases
by 1 for each customer), with the y value increasing by 1 for each 'n'
type customer (a 5-pound note goes into the till) and decreasing by 1
for each 'm' type customer (a 5-pound note leaves the till as change).
At each moment, y is the number of 5-pound notes in the till, so nobody
has to wait for change exactly when the path never goes below the
x-axis (touching it is fine: a 10-pound customer causes trouble only
when serving him would carry the path below the axis). For this to be
possible we must have n >= m, because the destination point has
coordinates (n + m, n - m), having started at (0, 0).

The number of possible paths is the number of ways of choosing n
positive steps from a total of n + m steps. This is given by
C(n + m, n).

We now need the number of such paths that never go below the x-axis.
By the 'reflection principle', the 'bad' paths (those that touch the
line y = -1) are in one-to-one correspondence with the paths from
(0, -2) to (n + m, n - m): reflect the initial segment of a bad path,
up to its first visit to y = -1, in the line y = -1. A path from
(0, -2) to (n + m, n - m) must rise by n - m + 2 in n + m steps, so it
has n + 1 upward steps, and the number of bad paths is therefore
C(n + m, n + 1).

Hence the number of admissible paths is

     C(n + m, n) - C(n + m, n + 1)

Now

     C(n + m, n + 1)        (n + m)!          n! m!       m
     ---------------  =  ----------------- x --------  = -----
       C(n + m, n)       (n + 1)! (m - 1)!   (n + m)!    n + 1

and the probability that nobody has to wait for change is

     C(n + m, n) - C(n + m, n + 1)            m       n - m + 1
     -----------------------------  =  1 - -----  =  ---------
              C(n + m, n)                  n + 1       n + 1

As a check, with n = m = 1 the two equally likely orders are (5, 10),
which works, and (10, 5), which does not, giving probability
1/2 = (1 - 1 + 1)/(1 + 1).
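For small queues, any closed form can also be checked by brute force. Here is a short Python sketch (not part of the original answer; the helper name is my own) that enumerates every possible queue order and counts those in which nobody waits for change:

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def p_no_wait(n, m):
    """Exact probability that no customer waits, by enumerating which
    of the n + m queue positions hold the 5-pound customers."""
    good = 0
    for fives in combinations(range(n + m), n):
        fives = set(fives)
        till = 0
        ok = True
        for i in range(n + m):
            till += 1 if i in fives else -1
            if till < 0:              # a 10-pound customer found no change
                ok = False
                break
        good += ok
    return Fraction(good, comb(n + m, n))
```

For example, p_no_wait(1, 1) returns Fraction(1, 2) and p_no_wait(2, 1) returns Fraction(2, 3).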
- Doctor Anthony, The Math Forum
Talk Titles and Abstracts
Liming Feng (Illinois)
Hilbert transform approach to options valuation
A Hilbert transform approach to options valuation will be presented in this talk. For many popular option pricing models with known analytic characteristic functions for the underlying driving
stochastic processes, the Hilbert transform approach exhibits remarkable speed and accuracy, with errors decaying exponentially in terms of the computational cost. The pricing of discrete
barrier, lookback and Bermudan options will be illustrated. Applications in applied probability will also be discussed.
Peter Forsyth (Waterloo)
Analysis of A Penalty Method for Pricing a Guaranteed Minimum Withdrawal Benefit (GMWB)
The no arbitrage pricing of Guaranteed Minimum Withdrawal Benefits (GMWB) contracts results in a singular stochastic control problem which can be formulated as a Hamilton Jacobi Bellman (HJB)
Variational Inequality (VI). Recently, a penalty method has been suggested for solution of this HJB variational inequality (Dai et al, 2008). This method is very simple to implement. In this
talk, we present a rigorous proof of convergence of the penalty method to the viscosity solution of the HJB VI. Numerical tests of the penalty method are presented which show the experimental
rates of convergence, and a discussion of the choice of the penalty parameter is also included. A comparison with an impulse control formulation of the same problem, in terms of generality and
computational complexity, is also presented.
Jim Gatheral (Merrill Lynch)
Optimal order execution
In this talk, we review the models of Almgren and Chriss, Obizhaeva and Wang, and Alfonsi, Fruth and Schied. We use variational calculus to derive optimal execution strategies in these models,
and show that static strategies are dynamically optimal, in some cases by explicitly solving the HJB equation. We present general conditions under which there is no price manipulation in models
with linear market impact. Finally, we present some new generalizations of the Obizhaeva and Wang model given in a recent paper by Gatheral, Schied and Slynko, again deriving explicit closed-form
optimal execution strategies.
This is partially joint work with Alexander Schied and Alla Slynko.
Kay Giesecke (Stanford)
Asymptotically Optimal Importance Sampling For Dynamic Portfolio Credit Risk
Dynamic intensity-based point process models, in which a firm default is governed by a stochastic intensity process, are widely used to model portfolio credit risk. In the context of these
models, this paper develops, analyzes and evaluates an importance sampling scheme for estimating the probability of large portfolio losses, portfolio risk measures including value at risk and
expected shortfall, and the sensitivities of these quantities with respect to the portfolio constituent names. The scheme is shown to be asymptotically optimal. Numerical experiments demonstrate
the advantages of the algorithm for several standard model specifications.
Mike Giles (Oxford)
Progress with multilevel Monte Carlo methods
The multilevel Monte Carlo path simulation method combines simulations with different levels of resolution to reduce the computational cost for achieving a prescribed Mean Square Error.
In this talk I will describe the latest progress with this technique, with new applications to jump-diffusion models, multi-dimensional SDEs, the calculation of Greeks, and a stochastic PDE
arising in credit modelling. I will also outline joint work with Kristian Debrabant and Andreas Rossler on the numerical analysis of the multilevel method using the Milstein discretisation.
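For context on the technique (not taken from the talk itself): a minimal multilevel Monte Carlo sketch for E[S_T] under geometric Brownian motion, with Euler time-stepping and grid doubling between levels. All parameter values and names here are illustrative assumptions.

```python
import numpy as np

def mlmc_estimate(s0=100.0, r=0.05, sigma=0.2, T=1.0,
                  levels=5, n_samples=20_000, seed=0):
    """Toy multilevel Monte Carlo estimate of E[S_T] for GBM
    dS = r*S dt + sigma*S dW, using Euler steps.  Level l uses 2**l
    time steps; each correction term couples a fine path with a coarse
    path driven by the same Brownian increments, so the differences
    have small variance."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for level in range(levels + 1):
        n_fine = 2 ** level
        dt = T / n_fine
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
        # Fine path at this level.
        s_fine = np.full(n_samples, s0)
        for k in range(n_fine):
            s_fine = s_fine * (1.0 + r * dt + sigma * dW[:, k])
        if level == 0:
            estimate += s_fine.mean()              # base estimator
        else:
            # Coarse path: half the steps, Brownian increments paired up.
            s_coarse = np.full(n_samples, s0)
            for k in range(n_fine // 2):
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
                s_coarse = s_coarse * (1.0 + r * (2 * dt) + sigma * dWc)
            estimate += (s_fine - s_coarse).mean()  # level correction
    return estimate
```

The telescoping sum means the expensive fine levels only have to estimate small corrections; a full implementation would also choose per-level sample sizes to balance cost against variance, which this sketch omits.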
Garud Iyengar (Columbia)
A behavioral finance based tick-by-tick model for price and volume
We propose a model for jointly predicting stock price and volume at the tick-by-tick level. We model the investor preferences by a random utility model that incorporates several important
behavioral biases such as the status quo bias, the disposition effect, and loss-aversion. The resulting model is a logistic regression model with incomplete information; consequently, we are
unable to use the maximum likelihood estimation method and have to resort to Markov Chain Monte Carlo (MCMC) to estimate the model parameters. Moreover, the constraint that the volume predicted
by the MCMC model exactly match observed volume introduces serial correlation in the stock price; consequently, standard MCMC techniques for calibrating parameters do not work well. We develop
new modifications of the Metropolis-within-Gibbs method to estimate the parameters in our model. Our primary goal in developing this model is to predict the market impact function and VWAP
(volume weighted average price) of individual stocks.
Petter Kolm (Mathematics in Finance M.S. Program, Courant Institute, New York University)
Algorithmic Trading: A Buy-Side Perspective
The traditional view of portfolio construction, risk analysis, and execution holds that these three functions of money management are separable. Portfolios are constructed without incorporating
the costs of execution, and execution is conducted without considering portfolio level risk. With the explosive growth of algorithmic trading, several mathematical and computational methodologies
have been proposed for unifying and improving traditional money management functions. This presentation addresses some important developments in this area, including incorporating market impact
costs into portfolio optimization, multi-period dynamic portfolio analysis, and high-frequency simulation for dynamic portfolio analysis.
Ralf Korn (TU Kaiserslautern)
Recent advances in option pricing via binomial trees
A survey on some new results obtained in joint work with S. Mueller is given. In particular, we present an optimized 1-D-scheme (the optimal drift model) that is based on overlaying a given
binomial scheme with an additional drift process and that achieves a higher order of convergence than advanced schemes such as the Tian or Chang-Palmer approach. Further, we introduce the orthogonal decoupling
approach to solve n-D-valuation problems. This approach is based on a non-linear transformation of the state space, always results in well-defined probabilities in the approximating n-D binomial
tree, and admits a regular convergence behaviour.
Ciamac Moallemi (Columbia)
A multiclass queueing model of limit order book dynamics
We model the limit order book as a system of two coupled multiclass queues. Specifically, each side of the book is modeled as a single-server multiclass queue operating under a strict priority
rule defined by the prices associated with each limit order. We describe the transient dynamics of this system, and formulate and solve the optimal execution problem for a block of shares over a
short time horizon.
This is joint work with Costis Maglaras.
Kumar Muthuraman (UT Austin)
Moving boundary approaches for solving free-boundary problems
Free-boundary problems arise when the solution of a PDE and the domain over which the PDE must be solved are to be determined simultaneously. Three classes of stochastic control problems (optimal
stopping, singular and impulse control) reduce to such free-boundary problems. Several classical examples including American option pricing and portfolio optimization with transaction costs
belong to these classes. This talk describes a computational method that solves free-boundary problems by converting them into a sequence of fixed-boundary problems that are much easier to
solve. We will illustrate the application on a set of classical problems of increasing difficulty, and will also see how the method can be adapted to efficiently handle problems in large dimensions.
Phillip Protter (Cornell)
Absolutely Continuous Compensators
Often in applications (for example Survival Analysis and Credit Risk) one begins with a totally inaccessible stopping time, and then one assumes the compensator has absolutely continuous paths.
This gives an interpretation in terms of a "hazard function" process. Ethier and Kurtz have given sufficient conditions for a given stopping
time to have an absolutely continuous compensator, and this condition was extended by Yan Zeng to a necessary and sufficient condition. We take a different approach and make a simple hypothesis
on the filtration under which all totally inaccessible stopping times have absolutely continuous compensators. We show such a property is stable under changes of measure, and under the expansion
of filtrations; and we detail its limited stability under filtration shrinkage. The talk is based on research performed with Sokhna M'Baye and Svante Janson.
Chris Rogers (Cambridge)
Convex regression and optimal stopping
There are many examples, particularly in finance, of optimal stopping problems where the state variable is some point in Euclidean space, and the value function is convex in the state variable.
This then permits approximation of the value function as the maximum of a sequence of linear functionals, an approach which has various advantages. The purpose of this paper is to present the
methodology and explore its consequences.
Birgit Rudloff (Princeton)
Hedging and Risk Measurement under Transaction Costs
We consider a market with proportional transaction costs and want to hedge a claim by trading in the underlying assets. The superhedging problem is to find the set of d-dimensional vectors of
initial capital that allow one to superhedge the claim. We will show that in analogy to the frictionless case, the superhedging price in a market with proportional transaction costs is a (set-valued)
coherent risk measure, where the supremum in the dual representation is taken w.r.t. the set of equivalent martingale measures. To do so, we extend the notion of set-valued risk measure to the
case of random solvency cones. Connections to recent results about efficient use of capital when there are multiple eligible assets are drawn. When starting with a vector of initial capital that
does not allow superhedging, a shortfall at maturity is possible. For an investor who finds a hedging error that is 'small enough' still acceptable, good-deal bounds under transaction costs can
be defined.
Georgios Skoulakis (Maryland)
Solving Consumption and Portfolio Choice Problems: The State Variable Decomposition Method
This paper develops a new solution method for a broad class of discrete-time dynamic portfolio choice problems. The method efficiently approximates conditional expectations of the value function
by using (i) a decomposition of the state variables into a component observable by the investor and a stochastic deviation; and (ii) a Taylor expansion of the value function. The outcome of this
State Variable Decomposition (SVD) is an approximate problem in which conditional expectations can be computed efficiently without sacrificing precision. We illustrate the accuracy of the SVD
method in handling several realistic features of portfolio choice problems such as intermediate consumption, multiple risky assets, multiple state variables, portfolio constraints,
non-time-separable preferences, and nonredundant endogenous state variables. We finally use the SVD method to solve a realistic large-scale life-cycle portfolio choice and consumption problem
with predictable expected returns and recursive preferences.
Jeremy Staum (Northwestern)
Déjà Vu All Over Again: Efficiency when Financial Simulations are Repeated
Many computationally intensive financial simulation problems involve running the same simulation model repeatedly with different values of its inputs. Such tasks include pricing exotic options of
the same type but of different strikes and maturities, valuation of options given different values of the model's parameters during calibration, and measuring a portfolio's risk as the markets
move. The basic approach is to run the simulation model using each of the input values in which one is interested. In this talk, we explore generic methods for solving a suite of repeated
simulation problems more efficiently, by estimating the answer given one value of the inputs using information generated while running the simulation model with different values of the inputs.
Nizar Touzi (Ecole Polytechnique)
A Probabilistic Numerical Method for Fully Nonlinear Parabolic PDEs
We suggest a probabilistic numerical scheme for fully nonlinear PDEs, and show that it can be introduced naturally as a combination of Monte Carlo and finite difference schemes without appealing
to the theory of backward stochastic differential equations. Our first main result provides the convergence of the discrete-time approximation and derives a bound on the discretization error in
terms of the time step. An explicit implementable scheme requires approximating the conditional expectation operators involved in the discretization. This induces a further Monte Carlo error.
Our second main result is to prove the convergence of the latter approximation scheme, and to derive an upper bound on the approximation error. Numerical experiments are performed for two and
five-dimensional (plus time) fully-nonlinear Hamilton-Jacobi-Bellman equations arising in the theory of portfolio optimization in financial mathematics.
| {"url":"http://www.fields.utoronto.ca/programs/scientific/09-10/finance/computational/abstracts.html","timestamp":"2014-04-18T11:15:24Z","content_type":null,"content_length":"27631","record_id":"<urn:uuid:358c5dd6-ca78-46c7-8694-0ea606d49e51>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
CONICS, that part of the higher geometry, or geometry of curves, which considers the cone, and the several curve lines arising from the sections of it.
CONJUGATE Axis, or Diameter, in the Conic Sections, is the axis, or a diameter parallel to a tangent to the curve at the vertex of another axis, or diameter, to which that is a conjugate. Indeed the
two are mutually conjugates to each other, and each is parallel to the tangent at the vertex of the other.
Conjugate Hyperbolas, also called Adjacent Hyperbolas, are such as have the same axes, but in the contrary order, the first or principal axis of the one being the 2d axis of the other, and the 2d
axis of the former, the 1st axis of the latter. See art. 17 of Conic SECTIONS. | {"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/c/conics.html","timestamp":"2014-04-20T10:50:41Z","content_type":null,"content_length":"5879","record_id":"<urn:uuid:94d4b113-3602-4984-bf53-1e7ddd465704>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Column Sums and the Conditioning of the Stationary Distribution for a Stochastic Matrix
Kirkland, Steve (2010) Column Sums and the Conditioning of the Stationary Distribution for a Stochastic Matrix. Operators and Matrices, 4. pp. 431-443. ISSN 1846-3886
For an irreducible stochastic matrix T, we consider a certain condition number κ(T), which measures the sensitivity of the stationary distribution vector to perturbations in T, and study the extent
to which the column sum vector for T provides information on κ(T). Specifically, if c^T is the column sum vector for some stochastic matrix of order n, we define the set S(c) = {A | A is an n × n
stochastic matrix with column sum vector c^T}. We then characterise those vectors c^T such that κ(T) is bounded as T ranges over the irreducible matrices in S(c); for those column sum vectors c^T
for which κ(T) is bounded, we give an upper bound on κ(T) in terms of the entries in c^T, and characterise the equality case.
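For context, a stationary distribution of a row-stochastic matrix T (rows summing to 1, the convention assumed here) is the probability vector π with πT = π. A minimal sketch computing it by power iteration, assuming the chain is also aperiodic so the iteration converges; this only finds π and does not reproduce the paper's condition-number analysis:

```python
import numpy as np

def stationary_distribution(T, tol=1e-12, max_iter=10_000):
    """Stationary distribution of an irreducible, aperiodic
    row-stochastic matrix T: the probability vector pi with pi @ T = pi,
    found by power iteration starting from the uniform distribution."""
    n = T.shape[0]
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        nxt = pi @ T
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    raise RuntimeError("power iteration did not converge")
```

For the 2-state chain T = [[0.9, 0.1], [0.5, 0.5]] this returns π = (5/6, 1/6); the condition number studied in the paper measures how much such a π can move when T is perturbed.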
Item Type: Article
Keywords: Stochastic matrix; Stationary distribution; Condition number;
Subjects: Science & Engineering > Hamilton Institute
Item ID: 2194
Depositing User: Professor Steve Kirkland
Date Deposited: 15 Oct 2010 11:07
Journal or Publication Title: Operators and Matrices
Publisher: Elements d.o.o. Publishing House
Refereed: Yes
| {"url":"http://eprints.nuim.ie/2194/","timestamp":"2014-04-20T19:00:36Z","content_type":null,"content_length":"21305","record_id":"<urn:uuid:e2a6888a-9f70-4c02-af16-5b09dfb55e8a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
has anbody taken flvs geometry in here???
i have taken geometry does that count?
flvs geometry?
yes i need help with a problem but no one seems to know how to do it
why does it have to be specifically someone who has taken flvs geometry? after all according to the law of multiple proportions, flvs geometry is just the same as my and other people's geometry..
oh okay do u think u can help?
hmm show me
there two separate questions
i can't read your doc
k hold on ill post it up
soooo who's taking flvs geometry again? lol jk
i wonder what flvs stand for?
Since you're looking to prove using side-side-side, your best bet is to pick the option where you have sides in the answer (that's for question 1)
yeah this can definitely be solved by anyone who has studied geometry..not just flvs geometry...however most of us tend to forget proving after we're through with it heh
flvs is some sort of online school i believe
oh ok
Which fact could you use to help prove that ΔABD ∼ ΔACB using Side-Side-Side?
∠ACB ≅ ∠DBA
∠BAC + ∠BCA = 90˚
DB/AD = BC/AB
AD ≅ CB
im lost wat would the answer be???
For question 1, the second answer tells you that at least 2 sides are proportional, which is a very good start in the direction of S-S-S. I mean, you'd still need to prove that the 3rd side is
proportional for a full proof, but that one is one of the steps (which is what they seem to want).
For question 2, it looks a lot harder than it actually is. You don't need to go any further than the first 3 lines. If in the figure you're looking at all 3 statements are true, then you're all
set! For example, we know it won't be figure 1 because AB and AC are not perpendicular.
so the answer would be the first one?
No, since the first given statement is not true in that figure (AB is not perpendicular to AC)
| {"url":"http://openstudy.com/updates/50057ca3e4b0fb991134bc70","timestamp":"2014-04-20T18:50:15Z","content_type":null,"content_length":"82481","record_id":"<urn:uuid:8bd57207-9f68-47e2-9541-afc8400e4b70>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Antioch, CA Math Tutor
Find an Antioch, CA Math Tutor
...Open Source) to support applications including the Java (and J2ME) and Bluetooth communication stacks.I have recent Hon. Algebra I students in the Pleasanton school district. I could explain
the main points precisely and concisely (Algebra is my PhD area). I also provide extra practice and drill problems.
15 Subjects: including prealgebra, SQL, differential equations, algebra 1
...I also worked one on one with students or did group tutoring (usually four kids or less). In total I have completed around 50 hours of recorded tutoring. In addition to this experience I
worked as an after school tutor in Cooper elementary school in Vacaville. There I helped kids from K-6th grade.
34 Subjects: including discrete math, linear algebra, logic, Microsoft Windows
...This approach demands an understanding the needs of each student from a holistic perspective. I believe the essence of teaching is not telling; it’s listening. I have experience and training
in administering and evaluating educational assessment.
10 Subjects: including prealgebra, reading, writing, grammar
...I can help you with a variety of topics in Excel. If you are completely new to Excel, I will teach you the Excel environment and elementary concepts and neat features of Excel, including
entering data versus formulas, absolute versus relative cell referencing, populating cells easily, formatting...
23 Subjects: including algebra 1, Microsoft Word, Microsoft PowerPoint, HTML
For over 20 years, I have been teaching, training & tutoring in many different forms. Working in small groups and one-on-one, I love interacting with students, asking them questions & getting
them to ask questions of their own. That give-and-take is where true learning really happens, but the grea...
17 Subjects: including algebra 1, public speaking, grammar, geometry
Related Antioch, CA Tutors
Antioch, CA Accounting Tutors
Antioch, CA ACT Tutors
Antioch, CA Algebra Tutors
Antioch, CA Algebra 2 Tutors
Antioch, CA Calculus Tutors
Antioch, CA Geometry Tutors
Antioch, CA Math Tutors
Antioch, CA Prealgebra Tutors
Antioch, CA Precalculus Tutors
Antioch, CA SAT Tutors
Antioch, CA SAT Math Tutors
Antioch, CA Science Tutors
Antioch, CA Statistics Tutors
Antioch, CA Trigonometry Tutors
Nearby Cities With Math Tutor
Berkeley, CA Math Tutors
Brentwood, CA Math Tutors
Concord, CA Math Tutors
Danville, CA Math Tutors
Elk Grove Math Tutors
Fairfield, CA Math Tutors
Hayward, CA Math Tutors
Oakland, CA Math Tutors
Oakley, CA Math Tutors
Pittsburg, CA Math Tutors
Pleasanton, CA Math Tutors
Richmond, CA Math Tutors
Stockton, CA Math Tutors
Vallejo Math Tutors
Walnut Creek, CA Math Tutors | {"url":"http://www.purplemath.com/antioch_ca_math_tutors.php","timestamp":"2014-04-18T08:25:53Z","content_type":null,"content_length":"23800","record_id":"<urn:uuid:62af2825-3f81-4c14-b34f-a54b35aa11b2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
heeeeeelp please easy for u????
Digital Logic Design Lab
(do this project with circuit maker design (7485 comparator))
Design an 8-bit adder/subtractor (A + B or A - B) with the following:
1. Inputs: two operands A, B (8 bits each);
load (1 = load new data, 0 = perform the operation).
2. Outputs: 9 bits (8 bits will be stored in A again).
3. Modes of operation:
a. Check if B is 0; then the output will be A.
b. Check if A is 0; then the output will be B or B', depending
on the operation.
c. Otherwise, perform the addition or subtraction.
4. Display the result on two seven-segment displays + 1 LED.
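The three modes can be sketched behaviourally before wiring the CircuitMaker design (a hypothetical Python model, not the 7485-based circuit itself; B' is interpreted here as subtraction giving -B modulo 2^9, which is an assumption):

```python
def alu8(a, b, subtract):
    """Behavioural model of the 8-bit adder/subtractor: returns a 9-bit
    value (8 result bits plus a carry/borrow bit in position 8)."""
    assert 0 <= a < 256 and 0 <= b < 256
    if b == 0:                  # mode a: B is 0 -> output is A
        return a
    if a == 0:                  # mode b: A is 0 -> B, or -B when subtracting
        return b if not subtract else (-b) & 0x1FF
    # mode c: ordinary addition or subtraction, kept to 9 bits
    result = a - b if subtract else a + b
    return result & 0x1FF
```

In hardware the zero checks would come from comparators (e.g. a 7485 comparing an operand against 0), with the 9th bit driving the LED.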
please if u know the answer help me and i will be thankful
i did it but there is a problem and this is my attachment please do it for me?? | {"url":"http://www.physicsforums.com/showthread.php?t=72629","timestamp":"2014-04-19T22:51:32Z","content_type":null,"content_length":"19784","record_id":"<urn:uuid:5e844122-a1dd-4ab6-939b-7301f7ae13ce>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Helena Rasiowa
Born: 20 June 1917 in Vienna, Austria
Died: 9 August 1994 in Warsaw, Poland
Although Helena Rasiowa was born in Vienna, her parents were Polish. In 1918 Poland regained its status as an independent nation and Rasiowa's parents moved to Warsaw. She was educated there,
obtaining a good secondary school education with music lessons taken at a special music school. After completing her school studies she took a course in business management before entering
Rasiowa entered the University of Warsaw in 1938 but, after the German invasion of Poland in 1939, the university closed. Rasiowa and her parents moved to Lvov but the Poles were trapped between the
Soviets and the Germans and Lvov came under Soviet control. Life there seemed even more difficult than under German occupation, so after a year the family returned to Warsaw.
There was an impressive collection of mathematicians at the University of Warsaw at this time including Borsuk, Lukasiewicz, Mazurkiewicz, Sierpinski, Mostowski and others. They had organised an
underground version of the university which was strongly opposed by the Nazi authorities. Borsuk, for example, was imprisoned after the authorities found that he was helping to run the underground university.
In this dangerous situation Rasiowa learnt mathematics, knowing that the penalties for being discovered were extreme. Yet in this environment Rasiowa studied for her Master's Degree under Lukasiewicz
's supervision.
When the Soviet forces came close to Warsaw in 1944, the Warsaw Resistance rose up against the weakened German garrison. However German reinforcements arrived and put down resistance. Around 160,000
people died in the Warsaw Uprising of 1944 and the city was left in a state of almost total devastation. Rasiowa's time during the Uprising is described in [1]:-
In 1944 the Warsaw Uprising broke out and in consequence Warsaw was almost completely destroyed, not only because of warfare but also because of the systematic destruction which followed the
uprising after it had been squashed down. Rasiowa's thesis burned together with the whole house. She herself survived with her mother in a cellar covered by ruins of the demolished building.
After the war Rasiowa taught in a secondary school while her supervisor Lukasiewicz left Poland after the terrible suffering he had gone through. Mostowski however remembered Rasiowa's impressive
work and persuaded her to return to the University of Warsaw to complete a second Master's Thesis under his supervision.
In 1946, having obtained her Master's degree, she was appointed as an assistant at the University of Warsaw and continued to work for her doctorate under Mostowski's supervision. Her thesis,
presented in 1950, was on algebra and logic, Algebraic treatment of the functional calculus of Lewis and Heyting, and these topics would be the main areas of her research throughout her life.
Rasiowa was promoted steadily, reaching the rank of Professor in 1957 and Full Professor in 1967. She led the Foundations of Mathematics Section from 1964 and the Mathematical Logic Section after its
creation in 1970.
Her main research was in algebraic logic and the mathematical foundations of computer science. In algebraic logic she continued work by Post, Stone, Tarski and Lukasiewicz [1]:-
... aimed at finding a precise description for the mathematical structure of formalised logical systems.
Of course Rasiowa's work on algebraic logic was in precisely the right area to make her a natural contributor to theoretical computer science. However it is one thing to be in the right area and yet
another to have the ability to see the importance of a new subject such as computer science. Her contributions are described in [1]:-
Her contribution to theoretical computer science stems from her conviction that there are deep relations between methods of algebra and logic on the one side and essential problems of foundations
of computer science on the other. Among these problems she clearly distinguished inference methods characteristic of computer science and its applications. This conviction of hers had been
supported by her results on many-valued and non-classical logics, especially on applications of various generalisations of Post algebras to logics of programs and approximation logics.
In fact in 1984 Rasiowa introduced an important concept of inference where the basic information was incomplete. This led to approximate reasoning and approximate logics which are now central to the
study of artificial intelligence.
Rasiowa wrote over 100 papers, books and monographs. She also supervised the doctoral dissertations of more than 20 students. However her contributions were not restricted to research. She helped set
up the journal Fundamenta Informaticae, of which she was editor-in-chief from its founding in 1977 until her death. In addition to these editorial duties she was also Collecting Editor of Studia Logica
from 1974 and, from 1986, an associate editor of the Journal of Approximate Reasoning.
She also played a major role in the mathematical life of Poland. A member of the Polish Mathematical Society, she was its secretary in 1955-57 and its vice-president in 1958/59. She served on the
Committee on Mathematics of the Polish Academy of Sciences and chaired various committees of the Polish Ministry of Science and Higher Education. It was partly through her endeavours that the Polish
Society for Logic and the Philosophy of Science was set up.
Rasiowa remained active right up to her death, having completed eight chapters of a new monograph Algebraic analysis of non-classical first order logics before entering hospital with her final illness.
Article by: J J O'Connor and E F Robertson
A Reference (One book/article)
A Poster of Helena Rasiowa Mathematicians born in the same country
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
History Topics Societies, honours, etc. Famous curves
Time lines Birthplace maps Chronology Search Form
Glossary index Quotations index Poster index
Mathematicians of the day Anniversaries for the year
JOC/EFR © June 1997 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
| {"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Rasiowa.html","timestamp":"2014-04-18T00:13:56Z","content_type":null,"content_length":"16050","record_id":"<urn:uuid:0091a099-80bc-4e33-9c0d-6ea4ae27959e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
University Park, MD Algebra 2 Tutor
Find an University Park, MD Algebra 2 Tutor
...A. Analysis of graphs. B.
21 Subjects: including algebra 2, calculus, statistics, geometry
...I was very successful as their tutor. I enjoy math and I am very patient. Physics was always one of my favorite subjects.
15 Subjects: including algebra 2, chemistry, physics, calculus
...With several years of experience teaching math and tutoring, I know how to help students build both their conceptual understanding of mathematics and their confidence in problem solving. Most
of my experience both tutoring and teaching in the classroom was with high school or middle school subje...
16 Subjects: including algebra 2, English, writing, calculus
...I'm also professionally teaching Pre-GED math and have had two students already receive their GEDs, with four students registered. No students have failed from my class. I also had three
students score 100, 99, and 98 recently on their tests. *For those who might wonder about the few lower rat...
31 Subjects: including algebra 2, reading, English, writing
...My strategy generally includes confidence building, organization, and study skills in addition to helping students with the math material itself. Calculus is my favorite and one of my most
popular tutoring subjects. In addition to my B.A. in mathematics, I have been tutoring Algebra 1 through Calculus for about 5 years with great success.
17 Subjects: including algebra 2, English, reading, calculus
Related University Park, MD Tutors
University Park, MD Accounting Tutors
University Park, MD ACT Tutors
University Park, MD Algebra Tutors
University Park, MD Algebra 2 Tutors
University Park, MD Calculus Tutors
University Park, MD Geometry Tutors
University Park, MD Math Tutors
University Park, MD Prealgebra Tutors
University Park, MD Precalculus Tutors
University Park, MD SAT Tutors
University Park, MD SAT Math Tutors
University Park, MD Science Tutors
University Park, MD Statistics Tutors
University Park, MD Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Berwyn Heights, MD algebra 2 Tutors
Bladensburg, MD algebra 2 Tutors
Brentwood, MD algebra 2 Tutors
College Park algebra 2 Tutors
Colmar Manor, MD algebra 2 Tutors
Cottage City, MD algebra 2 Tutors
Edmonston, MD algebra 2 Tutors
Green Meadow, MD algebra 2 Tutors
Hyattsville algebra 2 Tutors
Landover Hills, MD algebra 2 Tutors
Mount Rainier algebra 2 Tutors
North Brentwood, MD algebra 2 Tutors
Riverdale Park, MD algebra 2 Tutors
Riverdale Pk, MD algebra 2 Tutors
Riverdale, MD algebra 2 Tutors | {"url":"http://www.purplemath.com/University_Park_MD_Algebra_2_tutors.php","timestamp":"2014-04-19T09:58:22Z","content_type":null,"content_length":"24267","record_id":"<urn:uuid:dc20eb1f-1599-484c-9f90-771e7478e441>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is probably really simple but they didn't word it right :(
September 16th 2007, 03:48 PM
Okay I don't get this:
The diagram above shows a flat surface containing a line and a circle with no points in common. Can you visualize moving the line and/or circle so that they intersect at exactly one point? Two
points? Three points? Explain each answer and illustrate each with an example when possible.
The perimeter (the sum of the side lengths) of the triangle above is 52 units. Write and solve an equation based on the information in the diagram. Use your solution for x to find the measures of
each side of the triangle. Be sure to confirm that your answer is correct.
Please and thank you.
September 16th 2007, 06:33 PM
I don't believe a line can intersect a circle at three points, but they sure can at two. When a line touches a circle at one point it is called a tangent. When a line touches at two points it is
called a secant. See the attached diagram (thanks wikipedia). Don't confuse secant with chord. A chord is a line segment, while a secant is a line. So the secant keeps on going while a chord
stops at the circumference.
To solve for x in:
Multiply both sides by 4,
Then divide both sides by 3,
For the triangle, we know the perimeter is 52, so
September 16th 2007, 07:53 PM
Thanks for the thorough answers genius! Love ya! | {"url":"http://mathhelpforum.com/geometry/19048-probably-really-simple-but-they-didnt-word-right-print.html","timestamp":"2014-04-21T07:28:28Z","content_type":null,"content_length":"9436","record_id":"<urn:uuid:5184e12a-7a59-4167-9ddd-7d83555cb8e4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about electret microphone on Gas station without pumps
The second half of the mic lab went fairly well, but there were a couple of overly ambitious requests in the handout that I’ll have to trim out for next year. Because we have not gotten to complex
impedance yet (tomorrow, I swear!), the students were unable to choose a reasonable size for the DC-blocking capacitor, and guessing was not good enough. The 10MΩ input impedance of an oscilloscope
with a 10× probe makes for too long a time constant with the 0.1µF capacitor I initially suggested, at least with the digital scopes—they did not manage to get the DC offset removed even after a
minute, which surprised me. Students got decent results with a 0.022µF capacitor, though. I even got some of the students to be able to make measurements with the Tektronix digital scopes (always a
feat, since they have mind-bogglingly complex menu systems).
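A first-order RC model predicts how quickly the DC offset should die away through the blocking capacitor and the scope's 10 MΩ input; a quick sketch (treating the capacitor and the scope input as a simple high-pass, which is why the minute-long settling with the 0.1 µF capacitor was surprising):

```python
def tau_seconds(r_ohms, c_farads):
    """RC time constant in seconds."""
    return r_ohms * c_farads

R_SCOPE = 10e6  # 10 MΩ scope input impedance with a 10x probe

for c_farads, label in [(0.1e-6, "0.1 uF"), (0.022e-6, "0.022 uF")]:
    tau = tau_seconds(R_SCOPE, c_farads)
    # ~5 time constants to settle within about 1% of the final value
    print(f"{label}: tau = {tau:.2f} s, ~1% settling in {5 * tau:.1f} s")
```

By this model even the larger capacitor should settle in about five seconds, so whatever the digital scopes were doing was not a simple first-order decay.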
I did tell the students not to bother with the last question on the handout and just to write up what they actually did.
It took the students longer than I had expected to come up with a reasonable value for the pullup resistor for the mic. But I was careful not to be too helpful, so that I’m reasonably sure that at
least one in each pair of students knew how they got their answer. I did have them add load lines to their i-vs-v plots of the electret microphones, corresponding to rounding their desired pullup up
or down to the nearest value they had in their kits. That probably added a little time over a simple rounding, especially since I suggested to a couple of the students that they think about which
resistor would give higher sensitivity.
I did have one student ask what a “pullup” resistor was—I had used that term in the handout without ever explaining it! I gave a one-minute lecture explaining that a pullup was a resistor to the
positive power supply and a pulldown a resistor to ground (we had examples of each already on the whiteboard). Speaking of things on the board, I’ll have to remember to bring markers to the lab on
Tuesday, as the ones in there are all dead. A spray bottle of alcohol and some rags for cleaning the year-old buildup off the boards would also be good.
Even the pair of students who had run over on Tuesday finished on time today, despite collecting all the data that the other students collected on Tuesday, so I’m feeling a bit better about the size
of the labs.
Next week may get a bit hectic, though, with two unrelated labs: hysteresis and relaxation oscillators on Tuesday and sampling and aliasing on Thursday. I’ll have to remember on Tuesday to upload
the hysteresis oscillator code to all the machines in the lab.
Protein essentials and second gnuplot demo
I gave two lectures back-to-back today, which I found a little stressful.
The first lecture was a guest lecture in “Molecular biomechanics” on the basics of protein structure. I spent some time earlier this week picking out protein structures to show the students; digging
out my old Darling models protein chain, which I last used for assigning homework in Spring 2011 (see also my instructions for building protein chains with the Darling models); and trying to boil
down the basics of protein structures to one 70-minute talk. I was up at 3 in the morning setting up the proteins I wanted to show on my laptop, even though I planned to rely mainly on the chalkboard
and the Darling model kit.
The protein talk went ok. I covered such basics as primary=covalent, secondary=H-bond, tertiary≈packing, quaternary=multiple chains; hydrogen bonding patterns for helices, antiparallel sheets, and
parallel sheets; supersecondary structure; domains; CO-R-N mnemonic for chirality; s-twisted and z-twisted helices (left- and right-handed in the confusing nomenclature used by biochemists); SCOP and
PFAM; and maybe a handful of other topics. I only showed two structures on the screen: a TIM-barrel and alpha-hemolysin, and I pointed them to the PDB education pages, which are actually quite good.
The most exciting thing during the lecture was that we had the fire alarm go off, and had to vacate the room for 10–15 minutes. I might have covered a little more if there had been more time, but I
did not have a set topic list I had to cover, so it didn’t really matter—I was just giving an extemporaneous dump of protein structure information. I certainly told them all the stuff I had decided
ahead of time was essential—I’ve no idea what else would have come out if I’d had 10–15 more minutes. I also managed to get in a plug for the library—they have Darling model kits that the students
can check out—and for the information sessions that I arranged for the library to run for bioengineering majors next week.
Right after that talk, I went to the classroom for my applied circuits class and set up for another gnuplot demo. I had about 10 minutes to get some candy from the vending machine also.
In the applied circuits class, I started out by showing them the result of fitting models to (some of) the data I had collected:
I had to lend my data to a couple of the students who had not managed to finish the lab yesterday—I assured them that they could continue the lab tomorrow, and that I would stay until everyone had
completed both parts of the lab. They will have to use their own data in the design report. While they were copying the data, I took some time to talk about the “zone of proximal development”,
imposter syndrome, and how my goal in the class was not to “weed them out”, but to help them achieve difficult success. My goal is to maximize their learning, which means that they will often be
struggling with concepts or skills that seem just a bit too difficult. I’ll help them, but it might be such “unhelpful” help as telling them “your breadboard doesn’t match your schematic” and leaving
them to find where the mismatch is, rather than debugging for them. Sometimes I’ll have to do more, when the problems are beyond reasonable expectations (like finding the blown fuses in the
multimeter and the broken clipleads yesterday). I promised to stay in the lab until everyone finished, even if it took them a long time.
Sometimes I’ll goof on the difficulty of a homework or lab, and they’ll be pushed into frustration rather than just being challenged, but my goal is to get them to persevere and to achieve that very
rewarding feeling of finally accomplishing something that seemed too difficult when they started. This course is only 3% of their college education, but I’m going to try to make it accomplish a lot
more than that share.
Fortuitously, in my email today I got a story about someone (a marketing manager for an electronics parts company) learning to solder for the first time. I shared that story with my class by e-mail,
as I thought that they could sympathize with him (having just learned to solder themselves last week), but also recognize the symptoms of imposter syndrome.
Getting back to the main material for the day, I started the guts of the gnuplot lesson. Building up the plot a little at a time, we first plotted the raw data from one PteroDAQ, then scaled the y
values first to amps, then to microamps. Because each group used a different resistor in their test setup, they couldn’t blindly copy what I was doing, but understand at least enough to put in the
correct resistance value. I showed them how to switch between linear and log scales on each axis with the plot-window keyboard shortcuts (“L” toggles the scaling on the nearer axis) and we noticed
that the data from the first data set (the red one above) was rather sparse at the low end.
I then showed them how to get two plots on the same set of axes, and I managed to get them to tell me what the plot would look like if we had been testing a resistor instead of the electret mic. We
then fit a simple resistor model to the low end (the resistive region) of the curve.
I then took a break from gnuplot to explain how an electret mic works. They were a little astonished at how many transformations of the information occurred in a simple device like a microphone:
pressure to force to displacement to capacitance to gate-source voltage to drain-source current (and I promised that we would convert back to voltage in tomorrow’s lab). We managed to follow the
transformations and see that they were all linear (well, displacement to capacitance to gate-source voltage was a pair of inversions, and I had to wave my hands at $\frac{d I_{DS(sat)}}{d V_{GS}} \propto I_{DS(sat)}$, since we don't have a model of FETs yet, and may not get to one complicated enough to derive that this quarter).
I then went back to gnuplot and showed them how to fit the I[sat] model to the data, first deliberately trying to fit the whole curve (which gives an obviously wrong result). I had deliberately
omitted the amp-to-microamp scaling, and gotten a straight line at zero for my fit—I had them debug that as a group before we got a constant line that was in the middle of the graph. I got them to
figure out what went wrong there also. Once they realized I was fitting the whole curve, I showed them how to limit the range of their fitting, and got a reasonable I[sat] value. My goal here was not
to “show them how to use gnuplot”, but to show them that they could debug mistakes that they were likely to make, and that they should not shut down when things went wrong. (A lot of today’s class
was this sort of meta-cognition stuff, while still getting in a reasonable amount of technical material.)
I then gave them the blended model, with $R_{DS} = \sqrt{R^2 + (V_{DS}/I_{SAT})^2}$, converting it to a current model on the board, then trying to fit the data. The result was (unexpectedly) a
terrible fit—I did not have them try to debug this, because we were almost out of time, and just showed them that the problem was one of units: my initial guess for R was in Ω, but my currents were
all in µA, so I needed to make the resistance in MΩ to have consistent units. After scaling the guess for R by 1e-6, I reran the fit and it worked fine. This gave me a chance to talk about the
importance of starting fitting procedures with reasonable guesses, since they might not otherwise converge.
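The unit problem is easy to reproduce numerically: with voltages in volts and currents in µA, the model's resistance parameter must be in MΩ (since V/MΩ = µA). A sketch of the blended model using hypothetical parameter values, not the actual fitted ones:

```python
import math

def i_blended(v, r, i_sat):
    """Blended model I = V / sqrt(R^2 + (V/Isat)^2):
    resistive (I ~ V/R) at low V, saturating (I ~ Isat) at high V."""
    return v / math.sqrt(r ** 2 + (v / i_sat) ** 2)

# Hypothetical values: R = 2 kΩ, Isat = 200 µA. Working in V and µA,
# R must be expressed in MΩ for the units to be consistent.
r_megaohm = 2e3 * 1e-6   # 2 kΩ in MΩ
i_sat_ua = 200.0         # µA

print(i_blended(0.01, r_megaohm, i_sat_ua))  # low V: ≈ 0.01/0.002 = 5 µA
print(i_blended(9.0, r_megaohm, i_sat_ua))   # high V: ≈ 200 µA

# Passing R in Ω (2000) with currents still in µA leaves the starting
# guess six orders of magnitude off — the symptom described above.
```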
I’ve been getting good participation from the class, only occasionally having to get them to speak up (I’m getting a bit deaf, and when they mumble a guess, I can’t hear them). I’ve been trying not
to suppress students who provide incomplete or wrong answers, but encourage them to amplify on or correct each other. So far it seems to be working.
We did not have time to fit the 3-parameter empirical model, but they have the model in the lab assignment and they should be able to fit it using gnuplot now. One student asked me if I was going to
distribute a script, as I did last week. I assured him that I was not going to do that. The goal was to build up their gnuplot scripting capabilities, not provide them with crutches. There was very
little “new” in this week’s lesson as far as gnuplot was concerned, and gnuplot does have an adequate help system for the sort of stuff they need (though I find the hierarchical help system to be
quite poor for finding out about features you’ve not been exposed to—you have to know precisely what names things are hidden under).
I still haven’t gotten to complex impedance, but that will have to be Friday’s lecture. I also wanted to get to load lines today, so that they could select an appropriate size for their load
resistors that do the current-to-voltage conversion, but I’ll do that at the beginning of lab tomorrow, when they’ll be scratching their heads about how to choose the resistor. Just-in-time teaching
can be a powerful motivator, if they get concepts just after they realize the need for them, rather than months earlier in anticipation of need. I may ask at the beginning of lab whether anyone figured out on
their own how to choose the resistor, and get them to present first.
First mic lab slightly too long
The first half of the microphone lab took a little longer than anticipated. I had expected it to take about 2.5 hours, with some groups taking the full 3 hours, but it took more like 3–4 hours.
I have two conjectures about reasons for the extra time:
• I had the students label all their bags of capacitors. This had originally been planned for a week ago, but the capacitors had not been ordered in time, and Thursday’s lab had been way too
packed already, so this was the first opportunity we had. I probably should have waited until this Thursday, when the lab time is less packed.
• The group that fell the furthest behind had really terrible luck, having sat at a bench where both multimeters had blown fuses and two sets of multimeter leads had open circuits. I helped them
debug their setup, but we did not initially suspect the test equipment, and the delay in finding the problem cost them at least half an hour. It also cut into their confidence in debugging their
own circuitry later in the lab. Problems with the equipment are one of the difficulties of using a shared lab—a lot of the courses are taught by EE TAs who do not bother to teach students proper
use of the lab equipment (if they even know it themselves), so there is often damage of this sort to deal with.
A number of the students in the class are suffering from “imposter syndrome”—not confident of their abilities to master this new material. I’ll have to reassure them that they are doing fine—this
class is intended to be pushing them into unfamiliar territory. I may take a moment in today’s class to mention both “imposter syndrome” and “zone of proximal development”, so that they are aware
both that it is ok to be uncomfortable and that I’m trying to maximize what they are learning.
Students had a lot of trouble wiring up their breadboards accurately. Most of the lab time was taken up with students asking for my help and my taking a quick look at the breadboard and telling them
that it didn’t match the schematic they had copied. I eventually had the students write on each wire of the schematic what row (or rows) of the breadboard it was on, so that debugging the connections
was easier. I’ll have to try to remember to put that in the instructions for next year, as a way to get the students to learn to debug their breadboard wiring more independently. I should also add
a picture of the trimpot and an explanation of what a potentiometer does—students had a little trouble figuring out what the 3 pins on the package were for.
I’m pleased with the latest version of PteroDAQ, as students had no trouble getting their measurements. Adding a patterned light sequence to the reset sequence to let students know that they had the
latest version of the software installed was very useful.
Many of the groups managed to look at their data in the lab using gnuplot, and collecting more data as a result of what they saw. The students got 1000s of data points that fall nicely along a curve,
and they were able to superimpose different data sets that had different scaling for the current measurements. We’ll have excellent data to use in class today for fitting models to. I’m not going to
give them real FET models for the FET in the electret mics, though. Instead we’ll use some simple empirical models:
• current source. This is the same as a saturation current model.
• resistance. This is essentially the same as the linear-region model for FETs.
• blended model with $R_{FET} = \sqrt{R^2 + (V_{DS}/I_{SAT})^2}$. This is a simpler blend than is usually used in FET models, I think, but it fits the data fairly well. This blend is
mathematically very similar to the ones that compute the gain in RC filters (where we take the magnitude of a complex number and either the real or the imaginary component provides most of the
contribution). Using the same function for rounding the corner when we join two straight lines in different contexts reduces the math burden on the students.
• blended model with $R_{FET} = \sqrt{R^2 + (V_{DS}/I_{SAT})^{2\rho}}$. The extra parameter here is to handle the increase in saturation current with increasing drain-to-source voltage. Normally,
that is modeled with a fairly complicated “channel-length” model, and is not even mentioned in intro circuits classes. But the phenomenon is very obvious in the data, and can be adequately
modeled in the electret mic for our purposes with this 3-parameter empirical model.
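The similarity to RC-filter gain mentioned above can be made concrete: both the blended $R_{FET}$ and the magnitude of a first-order filter response are $\sqrt{a^2+b^2}$ blends of two straight-line asymptotes. A sketch with hypothetical component values:

```python
import math

def rc_lowpass_gain(f, r, c):
    """|1/(1 + j*2*pi*f*R*C)|: flat below the corner, rolls off as 1/f above."""
    return 1.0 / math.sqrt(1.0 + (2 * math.pi * f * r * c) ** 2)

def r_fet(v_ds, r, i_sat):
    """sqrt(R^2 + (V_DS/Isat)^2): R at low V_DS, V_DS/Isat at high V_DS."""
    return math.sqrt(r ** 2 + (v_ds / i_sat) ** 2)

f_corner = 1.0 / (2 * math.pi * 1e3 * 0.1e-6)  # 1 kΩ, 0.1 µF filter
print(rc_lowpass_gain(f_corner, 1e3, 0.1e-6))  # ≈ 0.707, i.e. 1/sqrt(2)
print(r_fet(0.0, 2e3, 200e-6))                 # 2000.0 Ω: purely resistive limit
print(r_fet(10.0, 2e3, 200e-6))                # ≈ 50040 Ω: dominated by V/Isat
```

In both cases one term dominates far from the corner, and at the corner the curve sits a factor of sqrt(2) away from the asymptote, which is the same rounding in both contexts.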
I will have to give them one more concept about FETs: that the derivative of the saturation current with respect to the gate voltage is proportional to the saturation current. I’m not going to derive
that for them from some more general model, because we have no way (in the mic) of actually measuring the gate voltage. Later in the quarter, when we look at FETs again before doing the class-D
amplifier, I may give them a slightly more detailed model of an FET.
In addition to gnuplot tutorial today, I want to give them an intro to complex impedance, but I doubt that we’ll get far enough for them to choose the right size for a DC-blocking capacitor for the
AC mic lab tomorrow. I may have to suggest that they try one of the biggest sizes of ceramic capacitor that they have (either the 4.7µF or the 0.1µF). We’ll need to get to RC time constants and
corner frequencies before next week’s lab though.
New modeling lab for electret microphone
Last year, in Mic modeling lab rethought, I designed the DC measurement of an electret microphone around the capabilities of the Arduino analog-to-digital converters:
• The highest voltage allowed is 5v and the lowest is 0v.
• The resolution is only 10 bits (1024 steps).
• The steps seem to be more uniformly spaced at the low end of the range than the high end (so differences at the high end are less accurate than differences at the low end).
• The external reference voltage AREF must be at least 0.5v (this is not in the data sheet, but when I tried lower AREF voltages, the reading was always 1023).
This year we’ll be using the KL25Z boards, which have different constraints:
• The highest voltage is 3.3v and the lowest is 0v.
• The resolution is 16 bits (15 bits in differential mode).
• Differential mode only works if you stay away from the power rails—clipping occurs if you get too close.
• The external reference must be at least 1.13v. With less than a 3-fold range for the external reference, varying the external reference to get different ranges seems rather limited.
I think I’ll still have the students start with using the multimeter and the bench power supply to measure voltage and current pairs for 1 V to 10 V in steps of 1 V. But then I’ll have them wire up a
different test fixture. The resistor R2 is one that students will have to choose to get an appropriate measuring range. Resistors R3 and R4 keep the voltages for the differential measurements E20–E21
and E22–E23 away from the power rails. I tried using smaller values, but 200Ω was not enough—I still got clipping problems. So 63mV is too close to the rails, but 275mV seems fine. I suspect that the
limit is around 100mV to 150mV, but I did not try to narrow it down.
I found that the differential measurements had less noise than single-ended measurements, despite having a resolution of about 100µV rather than the 50µV of single-ended measurements. Doing 32×
hardware averaging also helped keep the noise down. (Note: the data sheet for the KL25 chip does claim a higher effective number of bits for differential measurement than for single-ended
measurement, perhaps because of reduction in common-mode noise.)
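The resolution numbers above follow directly from the 3.3 V reference; a quick check (treating the differential range as ±Vref, which is my assumption about the converter's configuration):

```python
import math

V_REF = 3.3  # volts

lsb_single = V_REF / 2 ** 16        # 16-bit single-ended step
lsb_diff = (2 * V_REF) / 2 ** 16    # 15 bits of magnitude over a ±Vref span

print(f"single-ended LSB ≈ {lsb_single * 1e6:.1f} µV")   # ≈ 50.4 µV
print(f"differential LSB ≈ {lsb_diff * 1e6:.1f} µV")     # ≈ 100.7 µV

# Uncorrelated noise drops roughly as sqrt(N) with N-sample averaging:
print(f"32x averaging → ≈ {math.sqrt(32):.1f}x noise reduction")
```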
I was able to get fairly clean measurements with just two different resistor sizes, to which I fit 4 different models:
• linear resistance: $I = V /R$
• constant current $I = I_{sat}$
• blended: $I = V/ \sqrt{ R^2 + (V/I_{sat})^2)}$
• blended with exponent: $I = V/ \sqrt{ R^2 + (V^p/I_{sat})^2)}$
Because of the large characters used for the data points, the lines look fat, but the noise level is fairly small—about ±300µV. Some of that may be due to the movement of the potentiometer, as the
voltage and current aren’t measured at precisely the same time, but I suspect most is electrical noise in the processor itself.
Microphone sensitivity exercise
I’ve been thinking a bit about improving the microphone lab for the Applied Circuits course. Last year, I had the students measure DC current vs. voltage for an electret microphone and then look at
the microphone outputs on the oscilloscope (see Mic modeling lab rethought). I still want to do those parts, but I’d like to add some more reading of the datasheet, so that students have a better
understanding of how they will compute gain later in the quarter.
The idea for the change in this lab occurred to me after discussing the loudness detector that my son wanted for his summer engineering project. He needed to determine what gain to use to connect a
silicon MEMS microphone (SPQ2410HR5H-PD) to an analog input pin of a KL25 chip. He wanted to use the full 16-bit range of the A-to-D, without much clipping at the highest sound levels. Each bit
provides an extra 6.021dB of range, so the 16-bit ADC should have a 96.3dB dynamic range. The sound levels he is interested in are about 24dB to 120dB, so the gain needs to be set so that a 120dB
sound pressure level corresponds to a full-scale signal.
He is running a 3.3v board, so his full-scale is 3.3v peak-to-peak, or 1.17v RMS (for a sine wave). That conversion relies on understanding the difference between RMS voltage and amplitude of a sine
wave, and between amplitude and peak-to-peak voltage. The full-scale voltage is 20 log10(1.17), or about 1.3dB(V).
Microphone sensitivity is usually specified in dB (V/Pa), which is 20 log10 (RMS voltage) with a 1 pascal RMS pressure wave (usually at 1kHz). The microphone he plans to use is specified at –42±3dB
(V/Pa), which is fairly typical of both silicon MEMS and electret microphones. The conversion between sound pressure levels and pascals is fairly simple: at 1kHz a 1Pa RMS pressure wave is a sound
pressure level of about 94dB.
Scaling amplitude is equivalent to adding in the logarithmic scale of decibels, so for a sound pressure level of 120dB, the microphone output would be about 120–94–42±3=–16±3dB(V), but we want 1.3dB,
so we need a gain of about 17.3dB, which would be about 7.4×. Using 10× (20dB gain) would limit his top sound pressure level to 117dB, and using 5× would allow him to go to 123dB.
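The chain of conversions in the paragraphs above can be checked numerically (sine-wave RMS assumption; the numbers are the ones from the text):

```python
import math

def db(ratio):
    """Voltage ratio in decibels."""
    return 20 * math.log10(ratio)

# Full-scale: 3.3 V peak-to-peak sine → RMS → dB(V)
v_rms_full_scale = 3.3 / (2 * math.sqrt(2))   # ≈ 1.167 V RMS
full_scale_dbv = db(v_rms_full_scale)         # ≈ 1.3 dB(V)

# Mic output at 120 dB SPL: 94 dB SPL ≈ 1 Pa RMS, sensitivity -42 dB(V/Pa)
mic_out_dbv = 120 - 94 - 42                   # = -16 dB(V)

gain_db = full_scale_dbv - mic_out_dbv        # ≈ 17.3 dB
gain = 10 ** (gain_db / 20)                   # ≈ 7.4x
print(f"gain ≈ {gain_db:.1f} dB = {gain:.1f}x")

# Each ADC bit is worth 20*log10(2) ≈ 6.021 dB, so 16 bits ≈ 96.3 dB.
print(f"16-bit dynamic range ≈ {16 * db(2):.1f} dB")
```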
One can do similar analysis to figure out how big a signal to expect at ordinary conversational sound pressure levels (around 60dB): 60–94–42=–76dB(V). That corresponds to about a 160µV RMS or
450µV peak-to-peak signal.
I tried checking this with my electret mic, which is spec’ed at –44±2dB, so I should expect 60–94–44±2=–78±2dB, or 125µV RMS and 350µV peak-to-peak. Note that the spec sheet measures the sensitivity
with a 2.2kΩ load and 3v power supply, but we can increase the sensitivity by increasing the load resistance. I’m seeing about a 1mV signal on my scope, so (given that I’m not measuring the loudness
of my voice), that seems about right.
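The same arithmetic, run for conversational speech, recovers both estimates above (sine-wave conversion between RMS and peak-to-peak; sensitivities from the spec sheets quoted in the text):

```python
import math

def dbv_to_vrms(dbv):
    return 10 ** (dbv / 20)

def vrms_to_vpp(v_rms):
    """Peak-to-peak of a sine wave with the given RMS value."""
    return v_rms * 2 * math.sqrt(2)

for sens_db, label in [(-42, "silicon MEMS mic"), (-44, "electret mic")]:
    out_dbv = 60 - 94 + sens_db         # 60 dB SPL speech, 94 dB SPL = 1 Pa
    v_rms = dbv_to_vrms(out_dbv)
    print(f"{label}: {out_dbv} dB(V), "
          f"{v_rms * 1e6:.0f} µV RMS, {vrms_to_vpp(v_rms) * 1e6:.0f} µV pp")
```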
I’ll have to have students read about sound pressure level, loudness, and decibels for them to be able to understand how to read the spec sheet, so these calculations should be put between the
microphone lab and the first amplifier lab. I’ll have them measure peak-to-peak amplitude for speech, and we’ll compare it (after the lab) with the spec sheet. This could be introduced as part of a
bigger lesson on reading spec sheets—particularly how reading and understanding specs can save a lot of empirical testing. | {"url":"http://gasstationwithoutpumps.wordpress.com/tag/electret-microphone/","timestamp":"2014-04-19T12:08:56Z","content_type":null,"content_length":"106786","record_id":"<urn:uuid:41bebadb-e477-4029-bc1c-564c011c553d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Nobel Prize for quantum optics
Quantum mechanics predicts the bizarrest things. Tiny particles like electrons can simultaneously be in two places, or, more generally, in two states that would seem mutually exclusive in our
everyday experience of physics. Similarly weirdly, particles that have once interacted can remain entangled even when they're moved far apart and then influence each other instantaneously, something
which Einstein called "spooky action at a distance". These seemingly magical properties could be exploited for exciting real-world applications, if it wasn't for another strange consequence of
quantum mechanics: that by simply looking at a quantum system you destroy many of its properties. (Find out more in this Plus article.)
The 2012 Nobel Prize for Physics has been awarded to Serge Haroche and David J. Wineland for (independently) finding ways of observing certain aspects of quantum systems without destroying them.
Haroche, of the Collège de France and Ecole Normale Supérieure in Paris, found a way of trapping individual photons (particles of light) for a record-breaking amount of time. Using extremely
reflective mirrors which bounce the photons back and forth, Haroche was able to keep the photons "alive" for almost a tenth of a second, during which time they would have travelled around 40,000km.
Cleverly devised experiments then allowed him to measure and count individual photons without destroying them. They also allowed him to use quantum entanglement to trace how a quantum system changes
from a state of superposition — being in two states at once — to the state of definite existence we expect based on our everyday experience.
David Wineland, from the University of Colorado, Boulder, used carefully tuned laser pulses to put electrically charged atoms in a state of superposition, for example occupying two different energy
levels at once.
Haroche and Wineland's work is interesting to theorists and experimentalists alike. On the theoretical side, it gives some insight into one of the greatest mysteries of quantum mechanics: exactly how
the act of measuring interferes with a quantum system, so that a particle which is in a state of superposition collapses into a single state.
On the practical side, their work may result in superfast quantum computers. While ordinary computers store information in bits which take on either the value 0 or the value 1, a quantum computer
would exploit the phenomenon of superposition to allow a quantum bit to take on both values at once. If a single quantum bit can simultaneously take on two values, then two of them can simultaneously
take on four values, three can simultaneously take on eight values, and so on. In general, n quantum bits can simultaneously take on 2^n values. It's this increased capacity to represent information
that may one day lead to computers much faster than anything around today. Wineland and his team were the first to show that a quantum operation involving two quantum bits is possible, thus paving
the way towards the superfast computers of the future.
Wineland has also used his lab techniques to build a clock that's 100 times more accurate than the clocks currently setting our time standards. Time can be defined in terms of the frequencies of
electromagnetic radiation emitted by atoms. Wineland's clock measures radiation that's within the visible light range of the spectrum, and it's therefore called an optical clock. Optical clocks are
incredibly accurate: if you had set one running at the moment of the Big Bang, it would now only be out by about five seconds.
According to Royal Swedish Academy of Sciences, who awards the Nobel Prizes, Haroche and Wineland have "opened the door to a new era of experimentation with quantum mechanics". Their methods for
probing the physical world at the smallest scales may one day help lift the veil on some of the biggest mysteries in physics.
You can find out more in this excellent write-up on the Nobel Prize website and read more about quantum mechanics on Plus. | {"url":"http://plus.maths.org/content/nobel-prize-quantum-optics","timestamp":"2014-04-19T15:35:13Z","content_type":null,"content_length":"28148","record_id":"<urn:uuid:2ce9ab6c-11df-427b-9142-7d7fe96a8631>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Re: your papers on constructive math
Neil Tennant neilt at mercutio.cohums.ohio-state.edu
Wed Jun 14 13:02:13 EDT 2000
I read both your papers this morning, and much enjoyed them. (I'm
referring to the first two in the list on your website).
You ought, by the way, to have your name on them as the author!
Also, be aware of posting stuff on the web in html format, if you don't
want to have your ideas stolen by unscrupulous operators.
The whole question of constructivity in physics (and the rest of science)
can, it seems to me, be approached from a more global methodological
perspective. Your local strategy is to constructivize various results as
needed. But what about the following argument? (This was first put forward
in my BJPS paper "Minimal logic is adequate for Popperian Science"
(1984), and in my book Anti-realism and Logic, OUP 1987.)
In all scientific applications, mathematical theorems (the ones that `get
applied') are *cut sentences*. First there is the mathematical proof of
the theorem, strictly within mathematics, of a sequent that we can write
Axioms : Theorem
When such a theorem finds application within science, it is used, in
conjunction with scientific hypotheses, auxiliary assumptions, boundary
conditions and initial conditions, to make a prediction (or retrodiction).
Thus, using obvious abbreviations, there is a proof of a sequent of the form
Theorem, SHs, AAs, BCs, ICs : Prediction
We run an experiment to test the SHs---that is, we rig our apparatus so
that the BCs and ICs hold, and, so we assume, the AAs hold also. We
observe and measure the results: the Observations. If there is sufficient
discrepancy between Prediction and Observations, we have Absurdity:
Prediction, Observations : emptyset
Now of course all this has to be put together using Cut on Theorem, and
Cut on Prediction:
Axioms : Theorem     Theorem, SHs, AAs, BCs, ICs : Prediction
------------------------------------------------------------- (Cut on Theorem)
Axioms, SHs, AAs, BCs, ICs : Prediction

Axioms, SHs, AAs, BCs, ICs : Prediction     Prediction, Observations : emptyset
------------------------------------------------------------------------------- (Cut on Prediction)
Axioms, SHs, AAs, BCs, ICs, Observations : emptyset
This last sequent tells us that our observations refute our scientific
hypotheses, given the truth of our mathematical *axioms* (not: theorems),
the AAs, BCs, and ICs. (Of course, we now have to address the Quine-Duhem
problem of what to give up. If we're surer about the latter, then the SHs
will be rejected.)
Thus the applicable *theorems* of mathematics are needed only as deductive
halfway-houses on the way to refutations of scientific theories modulo the
*axioms* of mathematics.
Two more moves, and we're home. Note that for the classicist, it doesn't
matter if you write "every F is G" as "no F is not G" (in fact, in a
famous essay Popper urged that all universal hypotheses be understood as
the denial of the existence of counterexamples). Thus there is no need for
the universal quantifier anywhere in the picture above.
Secondly, note that logic is being used only to derive (ultimately) a
contradiction---i.e., a sequent whose succedent is emptyset. The
Gödel-Glivenko theorem says that any classically inconsistent set of
sentences (not involving the universal quantifier) is intuitionistically inconsistent.
Thus, provided a constructive mathematician can accept the
existential-quantifiers-only version of the *axioms* (or at least, *some*
such version that is *classically* equivalent to any acceptable version),
he is free to claim, quite generally, that one does not *need* anything
more than constructive reasoning in order to "do science".
Back-up point: (virtually?) every testable prediction is *decidable*.
Hence intuitionistic logic will match classical logic in the generation of
testable predictions.
This way of seeing things also provides an explanation of why,
nevertheless, mathematicians (and scientists) are worried about forsaking
classical methods (i.e. not being able to "help themselves to" strictly
classical mathematical theorems). Because those theorems are cut-formulae
as pointed out above, they will---when judiciously chosen---effect a huge
reduction in deductive work. Any intuitionistic proof of the final sequent
that results from those two applications of cut above will, in all
probability, be horrendously, if not unfeasibly, long. So it *pays* to
work classically, especially when the applicability and net effect of doing
so is underwritten by a guarantee that, in principle, it could all be done constructively.
PS Having written this, it seems it might be interesting to others on fom,
so I'll send fom a copy. That way too, others might have their interest
provoked in reading your two papers.
Neil W. Tennant
Professor of Philosophy and Adjunct Professor of Cognitive Science
230 North Oval
The Ohio State University
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2000-June/004085.html","timestamp":"2014-04-21T03:10:58Z","content_type":null,"content_length":"7360","record_id":"<urn:uuid:3af79cbf-2e56-44e1-b36f-862778bdf5af>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measurement unit conversion: cubic feet per second
The SI derived unit for volume flow rate is the cubic meter/second. 1 cubic meter/second is equal to 35.3146662127 cubic feet per second.
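The stated factor makes conversion a one-line multiplication; a sketch (the constant comes from the sentence above, the function name is illustrative):

```python
CFS_PER_M3S = 35.3146662127   # cubic feet per second in one cubic meter per second

def m3s_to_cfs(flow_m3s):
    """Convert a volumetric flow rate from m^3/s to cubic feet per second."""
    return flow_m3s * CFS_PER_M3S

assert abs(m3s_to_cfs(1.0) - 35.3146662127) < 1e-9
```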
Valid units must be of the volume flow rate type.
A cubic foot per second (also cfs, cu ft/s, cusec and ft³/s) is an Imperial unit / U.S. customary unit volumetric flow rate, which is equivalent to a volume of 1 cubic foot flowing every second. | {"url":"http://www.convertunits.com/info/cubic+feet+per+second","timestamp":"2014-04-17T21:43:16Z","content_type":null,"content_length":"29369","record_id":"<urn:uuid:e8a5368f-235f-4d31-8048-e76e5f8fd606>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00605-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magic trick based on deep mathematics
I am interested in magic tricks whose explanation requires deep mathematics. The trick should be one that would actually appeal to a layman. An example is the following: the magician asks Alice to
choose two integers between 1 and 50 and add them. Then add the largest two of the three integers at hand. Then add the largest two again. Repeat this around ten times. Alice tells the magician her
final number $n$. The magician then tells Alice the next number. This is done by computing $(1.61803398\cdots) n$ and rounding to the nearest integer. The explanation is beyond the comprehension of a
random mathematical layman, but for a mathematician it is not very deep. Can anyone do better?
popularization soft-question
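The trick in the question is easy to simulate: the sequence is Fibonacci-like, so the ratio of consecutive terms converges to the golden ratio, and after ten additions round(φ·n) already hits the true next term exactly. A sketch (starting values are arbitrary, listed smaller first):

```python
phi = (1 + 5 ** 0.5) / 2       # 1.61803398...

a, b = 17, 42                  # Alice's two integers (smaller one first)
for _ in range(10):            # keep adding the largest two numbers at hand
    a, b = b, a + b

# Alice reports only n = b; the magician announces the next number:
assert a + b == round(phi * b)
```

For starting values in the stated range 1–50, ten iterations leave the rounding error below 1/2, which is why the trick never misses.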
6 Please make this community wiki? – Theo Johnson-Freyd Dec 25 '09 at 22:47
29 I am informed that Persi Diaconis is the correct person to answer this question. – Sam Nead Dec 26 '09 at 0:09
15 I have discussed this question with Persi. He could not come up with anything significant (though he did not think about it very long). – Richard Stanley Dec 26 '09 at 16:30
11 I've also heard Persi talk about this subject, and my guess is that he would say that the requirements of "deep mathematics" and "would actually appeal to a layman" are nearly incompatible in
practice. – Mark Meckes Dec 27 '09 at 13:54
2 I don't think they should be incompatible: the deep mathematics are the reason the trick works; you don't have to understand them to be stunned by the trick! – Sam Derbyshire Jan 17 '10 at 17:06
41 Answers
Place $K$ face-down cards on a table, blindfold yourself, and ask him/her for a number $1 < n < K$. Allow him/her to flip $n$ random cards face up. Cover the cards with an opaque box that has two holes for you to put your hands in, and claim that you can split the cards into 2 stacks, each with the same number of face-up cards.

Based on a well-known logic puzzle: http://usna.edu/Users/physics/mungan/_files/documents/Scholarship/CoinPuzzle.pdf I modified the process to make it harder for the audience to figure out what you did, and used cards so that they will not think that you did it by differentiating the surface of the coins.
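A sketch of the classic solution to the linked puzzle (not spelled out in the answer itself): blindfolded, count off any n cards into a second stack and flip that entire stack.

```python
import random

K, n = 20, 7
cards = [True] * n + [False] * (K - n)    # True = face up
random.shuffle(cards)

# Blindfolded: take any n cards and flip every card in that stack.
stack_a = [not c for c in cards[:n]]
stack_b = cards[n:]
assert sum(stack_a) == sum(stack_b)       # equal face-up counts, always
```

If k of the n taken cards were face up, flipping leaves n − k face up — the same count as in the remaining stack.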
This is a trick that I designed years ago and I have used it on many different occasions, for amusement only or educational purpose or both. It is indeed the finite difference method to find a polynomial. Ask the person to write down a polynomial without you knowing the polynomial or even the degree of the polynomial. To keep your life easy, it would be better to keep the degree less than or equal to 3. (It wouldn't be hard to let a layman know what a polynomial is just by giving two or three examples.) Then you ask for some information that is essentially the value of the polynomial at 0, 1, 2, 3. As soon as you take one of the values you should calculate the difference. And in a few seconds after taking the last information, you announce not only the degree of the polynomial but also the exact polynomial.

Note 1: Finding the degree is a very important part of this trick, since it convinces more knowledgeable persons that you are not just solving simultaneous equations quickly.

Note 2: I used this trick in my Calculus classes to give this seemingly paradoxical idea that "if you don't know what the function is, try to figure out how it changes."

Note 3: Of course, one can use it in many different classes for different purposes.

Note 4: I've just searched the internet to see if Martin Gardner ever introduced this trick. Damn it! The answer was yes, here: "The calculus of finite differences". However, I still love to keep the credit of telling the degree for myself :)
1 It is similar to Shamir's secret sharing: en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing – Margaret Friedland May 15 '13 at 20:50
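The bookkeeping behind this trick is just a difference table: difference the values p(0), p(1), p(2), … repeatedly, and the number of steps until the row goes constant is the degree (the constant itself is d! times the leading coefficient). A sketch, with a made-up example polynomial:

```python
def degree_and_constant(values):
    """Repeatedly take finite differences until the row is constant;
    returns (degree, constant d-th difference)."""
    deg = 0
    while len(set(values)) > 1:
        values = [b - a for a, b in zip(values, values[1:])]
        deg += 1
    return deg, values[0]

p = lambda x: 2 * x ** 3 - x + 5          # hypothetical secret polynomial
deg, const = degree_and_constant([p(x) for x in range(6)])
assert (deg, const) == (3, 12)            # degree 3, and 3! * 2 = 12
```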
Here's a couple of well-known simple topology tricks:
Tie ends of a long enough piece of rope to your wrists, while wearing a loosely fitting jacket or sweatshirt. With your arms tied like that, take the jacket off your back and put it back
on inside out. It's easier to figure out how to do it than to explain it in words, so I'll skip the explanation. The more risque version is to tie the ankles and do the trick with pants.
The other one I haven't tried, but maybe it can be done at a party if you have a stick and some plasticine around.
If you are not mathematically inclined, this game can drive you crazy. http://www.transience.com.au/pearl.html
A variant on Anton Geraschenko's answer above: say you are in a fourth-grade school that for some reason lets these poor kids use calculators. You ask them to pick for themselves a 3-digit number, say abc. Tell them to write it twice in their calculator, i.e., abcabc, and then divide by 77. Then by 13. What did you get? Do it again with 143 and then by 7. What did you get? Again with...

It teaches them about prime decomposition, about the decimal structure, about consecutive division, etc.

I learnt it from Avraham Arcavi.
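Why the arithmetic works: writing a 3-digit number twice multiplies it by 1001 = 7 × 11 × 13, so dividing by 77 and then by 13 (or by 143 and then by 7) hands the original number back. A quick check:

```python
for abc in (123, 407, 999):
    abcabc = int(str(abc) * 2)            # "write it twice in the calculator"
    assert abcabc == abc * 1001 == abc * 7 * 11 * 13
    assert abcabc // 77 // 13 == abc      # divide by 77, then by 13
    assert abcabc // 143 // 7 == abc      # or by 143, then by 7
```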
Here is a trick much in the spirit of the original number-adding example; moreover I'm sure Richard will appreciate the type of "deep mathematics" involved.
On a rectangular board of a given size $m\times n$, Alice places (in absence of the magician) the numbers $1$ to $mn$ (written on cards) in such a way that rows and columns are increasing but
otherwise at random (in math terms she chooses a random rectangular standard Young tableau). She also chooses one of the numbers, say $k$, and records its place on the board. Now she removes
the number $1$ at the top left and fills the empty square by a "jeu de taquin" sequence of moves (each time the empty square is filled from the right or from below, choosing the smaller
candidate to keep rows and columns increasing, and until no candidates are left). This is repeated for the number $2$ (now at the top left) and so forth until $k-1$ is gone and $k$ is at the
top left. Now enters the magician, looks at the board briefly, and then points out the original position of $k$ that Alice had recorded. For maximum surprise $k$ should be chosen away from
the extremities of the range, and certainly not $1$ or $mn$ whose original positions are obvious.
All the magician needs to do is mentally determine the path the next slide (removing $k$) would take, and apply a central symmetry with respect to the center of the rectangle to the final square of that path.
In fact, the magician could in principle locate the original squares of all remaining numbers (but probably not mentally), simply by continuing to apply jeu de taquin slides. The fact that
the tableau shown to the magician determines the original positions of all remaining numbers can be understood from the relatively well known properties of invertibility and confluence of jeu
de taquin: one could slide back all remaining numbers to the bottom right corner, choosing the slides in an arbitrary order. However that would be virtually impossible to do mentally. The
fact that the described simple method works is based on the less known fact that the Schützenberger dual of any rectangular tableau can be obtained by negating the entries and applying
central symmetry (see the final page of my contribution to the Foata Festschrift).
Destination Unknown is a magic trick that makes use of Combinatorics. It really fools people.
See http://themagicwarehouse.com/cgi-bin/findit.pl?x_item=SP2453&keyword=DESTINATION
How about the "Flash Mind Reader"
1 No, that one is dumb. – Harry Gindi Jan 17 '10 at 22:42
1 No, that one isn't dumb, but it's more about psychology than about mathematics (like most tricks with cards). – Konrad Voelkel Feb 7 '10 at 13:58
Lay out 21 cards face up in three vertical lines. Have a friend pick out any card without telling you which card he/she has chosen. Have your friend tell you which line of cards the selected card is in, and make three stacks of cards, each stack being made from each line of cards. Stack the three stacks on top of each other, placing the stack with the selected card between the other two stacks (IMPORTANT!). Lay out the cards again in the exact same setup (3 lines of 7, all face up), but here is the trick: when laying out the cards, flip them face up in a line every time. In other words, don't make one line at a time, but put a card in every line one at a time. Have your friend again tell you which line has the selected card. Stack the cards again, the exact same way you did the first time. One more time, lay out the cards the exact same way as the last time, one card per line, and again have your friend tell you which line has the selected card. Stack all the cards again one last time, again placing the line with the selected card between the other stacked cards. Now lay out all the cards face down, one at a time. While you're doing this, remember to count, because the 11th card you place down is the selected card. From this point you can do whatever you can think of to make the trick "magical" and shock your friend by suddenly coming up with his/her card.
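The whole routine can be simulated; the only assumption beyond the description is that each line is picked up with its first-dealt card on top. After three tell-and-stack rounds the chosen card always ends up 11th from the top:

```python
import random

def deal_and_stack(deck, secret):
    lines = [deck[i::3] for i in range(3)]   # one card per line at a time
    c = next(j for j in range(3) if secret in lines[j])
    rest = [lines[j] for j in range(3) if j != c]
    return rest[0] + lines[c] + rest[1]      # chosen line goes in the middle

deck = list(range(21))
random.shuffle(deck)
secret = random.choice(deck)
for _ in range(3):
    deck = deal_and_stack(deck, secret)
assert deck.index(secret) == 10              # the 11th card laid out
```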
Start with a deck of 32 cards. The player should take a card and tell you a number $n$ between 1 and 32. Then you divide the stack into 2 smaller stacks, and the player has to tell you which of the stacks contains his chosen card. According to a rule dependent on that number, you put that stack above or below the other stack. After repeating this 5 times, the chosen card should be exactly at position $n$. The rule has to depend on the way you want to deal cards (whether you turn around the deck and start dealing from the bottom, or you deal from the top and turn each single card around, or you deal at first and then turn both stacks around). In one of the cases the rule was: take $N-11$, find the representation in the system with base $-2$, and revert that representation. ($0$ tells you to put the stack containing the chosen card on top, etc.) I don't remember this trick properly; it should not be too difficult to express the final position depending on the choices in some formula, but it is the only situation I know of in which the $-2$-system is useful.
The "casting out nines" sanity check of calculations is dead simple to use (a small child can do it), but the proof requires a deeper knowledge of mathematics (more precisely, of arithmetic; my own students don't have access to it even though they know what series are and can diagonalize matrices!).
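The check itself, for anyone who wants to try it: a number is congruent to its digit sum mod 9, so the digital root of a product must match the digital root of the product of the factors' digital roots. A sketch:

```python
def digital_root(n):
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

a, b = 3874, 5219
assert digital_root(a * b) == digital_root(digital_root(a) * digital_root(b))
# an off-by-one error in the product is always caught (1 is not 0 mod 9):
assert digital_root(a * b + 1) != digital_root(digital_root(a) * digital_root(b))
```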
| {"url":"http://mathoverflow.net/questions/9754/magic-trick-based-on-deep-mathematics/100363","timestamp":"2014-04-17T18:27:59Z","content_type":null,"content_length":"98863","record_id":"<urn:uuid:fa0d83fb-ccca-4a06-9ef0-bbce9dacb632>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Penllyn, PA Statistics Tutor
Find a Penllyn, PA Statistics Tutor
...During those experiences, I developed curriculum for special needs children who had not been diagnosed and provided assistance to keep them on track with the regular classroom material. I
taught social studies, science, math, language arts as well as helped them with special projects that requir...
51 Subjects: including statistics, English, reading, geometry
...Tutoring for one or two weeks with the goal of passing a test or completing a single assignment does not usually prepare the student to continue their study of mathematics. When teaching
calculus at The Rochester Institute of Technology, I ran into students who struggled due to a lack of confide...
18 Subjects: including statistics, calculus, geometry, GRE
...I have private notes, past exams, for study and practice. I find that many students lack the basic skills of Algebra I to succeed in Geometry. I first assess the student's Algebra skills, and
if necessary do a review to bring the student up to the necessary level.
35 Subjects: including statistics, English, reading, chemistry
...As the entrance exam for graduate school, I scored in the 96th percentile. I have several resources which allowed me to perform to a high level in this exam and I will share these resources
with my students. I have assisted several individuals prepare for their GMAT and GRE exams, with results ...
19 Subjects: including statistics, calculus, geometry, GRE
...Two of those degrees were from Teachers College, Columbia University. In addition, I have been a practicing psychologist for more than two decades, and have taught psychology at the university
level in several different schools. In the past 3 years I have edited 3 books.
23 Subjects: including statistics, reading, writing, English
Related Penllyn, PA Tutors
Penllyn, PA Accounting Tutors
Penllyn, PA ACT Tutors
Penllyn, PA Algebra Tutors
Penllyn, PA Algebra 2 Tutors
Penllyn, PA Calculus Tutors
Penllyn, PA Geometry Tutors
Penllyn, PA Math Tutors
Penllyn, PA Prealgebra Tutors
Penllyn, PA Precalculus Tutors
Penllyn, PA SAT Tutors
Penllyn, PA SAT Math Tutors
Penllyn, PA Science Tutors
Penllyn, PA Statistics Tutors
Penllyn, PA Trigonometry Tutors
Nearby Cities With statistics Tutor
Broad Axe, PA statistics Tutors
Center Square, PA statistics Tutors
Fair Oaks, PA statistics Tutors
Foxcroft Square, PA statistics Tutors
Foxcroft, PA statistics Tutors
Gulph Mills, PA statistics Tutors
Gwynedd Valley statistics Tutors
Jarrettown, PA statistics Tutors
Lower Gwynedd, PA statistics Tutors
North Hills, PA statistics Tutors
Plymouth Valley, PA statistics Tutors
Prospectville, PA statistics Tutors
Roslyn, PA statistics Tutors
Spring House statistics Tutors
Upper Dublin, PA statistics Tutors | {"url":"http://www.purplemath.com/Penllyn_PA_Statistics_tutors.php","timestamp":"2014-04-20T08:38:53Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:5e9bb77d-b880-4ff9-a93b-ec53f2db6134>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00142-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus: Chain Rules and Implicit Differentiation
May 21st 2007, 03:35 PM #1
May 2007
Calculus: Chain Rules and Implicit Differentiation
I've actually already done the first 2 problems I'm posting, but I'd like to see how someone else works them out because I'm not sure about my work. Also, I definitely need to see how to work out
the last two for I'm also not sure about them. You'll definitely be repped (thanked).
1. Use the quotient rule to find the derivative of
You do not need to expand out your answer.
2. a) Find the derivative of: 6e^(-4x)·cos(9x). [Hint: use product rule and chain rule!]
b) find the equation of the tangent line to the curve at x=0. Write your answer in mx+b format.
3. a) Given the equation below, find dy/dx.
10x^10 + 8x^30·y + y^2 = 19
b) Find the equation of the tangent line to the curve at (1, 1). Write your answer in mx+b format.
4. A fence 24 feet tall runs parallel to a tall building at a distance of 6 ft from the building.
/ | |
/ | |
/ | |
/ 24ft| | (bad pic, sorry)
We wish to find the length of the shortest ladder that will reach from the ground over the fence to the wall of the building.
[A] First, find a formula for the length of the ladder in terms of θ. (Hint: split the ladder into 2 parts.)
L(theta) = ?
[b] Now, find the derivative, L'(θ).
L'(theta) = ?
[C] Once you find the value of θ that makes L'(θ)=0, substitute that into your original function to find the length of the shortest ladder. (Give your answer accurate to 5 decimal places.)
L(theta(min)) is about ____?___ ft
Last edited by tyuolio; May 21st 2007 at 06:42 PM.
$y = 6e^{-4x}\cos 9x$
$y' = (6e^{-4x})'\cos 9x + 6e^{-4x}(\cos 9x)'$
$y' = -24e^{-4x}\cos 9x - 6e^{-4x} \cdot 9\sin 9x =-24e^{-4x}\cos 9x - 54 e^{-4x}\sin 9x$
b) find the equation of the tangent line to the curve at x=0. Write your answer in mx+b format.
$y'(0)=m = -24 e^0 \cos 0 - 6\cdot e^0 \cdot 0 = -24$
$y(0) = 6e^0 \cos 0 = 6$
$y-y(0) = y'(0)(x - 0) \to y-6 = -24x \to y = -24 x + 6$
$10x^{10}+8x\cdot 30y+y^2=19$
$100x^9 + \frac{d (8x)}{dx} \cdot 30 y + 8x \cdot 30\frac{dy}{dx} + 2y\frac{dy}{dx} = 0$
$100x^9 + 240 y + 240x \frac{dy}{dx} + 2y \frac{dy}{dx} = 0$
$\frac{dy}{dx} \big( 240x + 2y \big) = - 100 x^9 - 240y$
$\frac{dy}{dx} = \frac{-100 x^9 - 240 y}{240x+2y}$
For this first one, I am getting y' = (-100x^9 - 240y)/(2y + 240x). Can someone check this and see if they get the same thing? I'd be very appreciative!
And then in b, I got an answer also, but it depends if the first one is right.
1. a) Given the equation below, find dy/dx.
b) find the equation of the tangent line to the curve at (1, 1). Write your answer in mx+b format
we want to differentiate implicitly with respect to x. whenever we find the derivative of some x term we add dx/dx (that is, we took the derivative of x with respect x, but since derivative
notations can function as fractions, we just cancel the dx's and get 1, so we don't write it). whenever we differentiate some y term, we attach dy/dx to it. note that if we are differentiating a
product of x and y terms, we must use the product rule.
10x^10 + 8x^30*y + y^2 = 19 ......differentiating implicity w.r.t x, we get:
100x^9 + 240x^29*y + 8x^30 dy/dx + 2y dy/dx = 0
=> 8x^30 dy/dx + 2y dy/dx = - 100x^9 - 240x^29*y
=> dy/dx(8x^30 + 2y) = - 100x^9 - 240x^29*y ...........factored out the dy/dx
=> dy/dx = (- 100x^9 - 240x^29*y)/(8x^30 + 2y)
so yeah, i got the same as you did (you should really use parenthesis to make what you are saying clear)
(b) at (1,1)
dy/dx = (- 100(1)^9 - 240(1)^29*(1))/(8(1)^30 + 2(1))
........= (-100 - 240)/(8 + 2)
........= -34
can you take it from here?
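The slope of −34 at (1, 1) can be sanity-checked numerically: implicit differentiation amounts to dy/dx = −F_x/F_y, so central differences on F(x, y) = 10x^10 + 8x^30·y + y^2 − 19 should reproduce it (a quick sketch, reading the equation with the x^30 interpretation used just above):

```python
F = lambda x, y: 10 * x ** 10 + 8 * x ** 30 * y + y ** 2 - 19

h = 1e-6
Fx = (F(1 + h, 1) - F(1 - h, 1)) / (2 * h)   # ~ 100 + 240 = 340
Fy = (F(1, 1 + h) - F(1, 1 - h)) / (2 * h)   # ~ 8 + 2 = 10
assert F(1, 1) == 0                          # (1, 1) lies on the curve
assert abs(-Fx / Fy - (-34)) < 1e-3          # matches dy/dx = -34
```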
2. A fence 24 feet tall runs parallel to a tall building at a distance of 6 ft from the building.
We wish to find the length of the shortest ladder that will reach from the ground over the fence to the wall of the building.
[A] First, find a formula for the length of the ladder in terms of θ. (Hint: split the ladder into 2 parts.)
L(theta) = ?
[b] Now, find the derivative, L'(θ).
L'(theta) = ?
[C] Once you find the value of θ that makes L'(θ)=0, substitute that into your original function to find the length of the shortest ladder. (Give your answer accurate to 5 decimal places.)
L(theta(min)) is about ____?___ ft
Now i could swear i've seen this question posted around here some time ago. but nevermind.
For problems like these, ALWAYS DRAW A DIAGRAM. see the diagram below:
I labeled the appropriate angle t for theta. Note that we form two similar triangles, i called one A and one B. The hypotenuse of A is a and the hypotenuse of B is b. so we have the length of the
ladder as a + b.
using trig ratios we will realize that:
in triagle A, the side opposite to t is 24, therefore we have:
sin(t) = 24/a
=> a = 24/sin(t)
in triangle B, the base is 6, therefore we have:
cos(t) = 6/b
=> b = 6/cos(t)
therefore, the length of the ladder is given by:
L(t) = a + b = 24/sin(t) + 6/cos(t)
i think you can take it from here
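Finishing the ladder problem numerically: L'(t) = −24 cos(t)/sin²(t) + 6 sin(t)/cos²(t), so L'(t) = 0 gives tan³(t) = 24/6 = 4. A quick sketch checking this against the standard closed form (a^(2/3) + b^(2/3))^(3/2):

```python
import math

t = math.atan((24 / 6) ** (1 / 3))        # tan^3(t) = 4 at the minimum
L = 24 / math.sin(t) + 6 / math.cos(t)

assert abs(L - (24 ** (2 / 3) + 6 ** (2 / 3)) ** 1.5) < 1e-9
print(round(L, 5))                        # shortest ladder length, ~39.6 ft
```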
awesome. thank you very much jhevon.
| {"url":"http://mathhelpforum.com/calculus/15227-calculus-chain-rules-implicit-differentiation.html","timestamp":"2014-04-19T18:37:38Z","content_type":null,"content_length":"63016","record_id":"<urn:uuid:8ab8c6e2-3e0a-47d4-b2fb-c8b16fa013aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
After 100 Years, Ramanujan Gap Filled
May 1, 2013
Posted by
A century ago, Srinivasa Ramanujan and G. H. Hardy started a famous correspondence about mathematics so amazing that Hardy described it as “scarcely possible to believe.” On May 1, 1913, Ramanujan
was given a permanent position at the University of Cambridge. Five years and a day later, he became a Fellow of the Royal Society, then the most prestigious scientific group in the world. In 1919
Ramanujan was deathly ill while on a long ride back to India, from February 27 to March 13 on the steamship Nagoya. All he had was a pen and pad of paper (no Mathematica at that time), and he wanted
to write down his equations before he died. He claimed to have solutions for a particular function, but only had time to write down a few before moving on to other areas of mathematics. He wrote the
following incomplete equation with 14 others, only 3 of them solved.
Within months, he passed away, probably from hepatic amoebiasis. His final notebook was sent by the University of Madras to G. H. Hardy, who in turn gave it to mathematician G. N. Watson. When Watson
died in 1965, the college chancellor found the notebook in his office while looking through papers scheduled to be incinerated. George Andrews rediscovered the notebook in 1976, and it was finally
published in 1987. Bruce Berndt and Andrews wrote about Ramanujan’s Lost Notebook in a series of books (Part 1, Part 2, and Part 3). Berndt said, “The discovery of this ‘Lost Notebook’ caused roughly
as much stir in the mathematical world as the discovery of Beethoven’s tenth symphony would cause in the musical world.”
In his book analyzing Ramanujan’s results, Berndt notes the existence of a solution for
What does the equation mean? We start by comparing arithmetic sequences to geometric sequences.
Arithmetic: 1 + 2 + 3 + … + n.
Geometric: a^1 + a^2 + a^3 + … + a^n.
For each type, we can predict behaviors with such things as partial sum formulas. Another form of arithmetic progression, in the realm of continued fractions, is the following:
where the symbol corresponds to the Mathematica function ContinuedFractionK.
The geometric version of continued fractions is known as the Rogers–Ramanujan function R. There is a related Rogers–Ramanujan function S (after Leonard James Rogers, who published papers with
Ramanujan in 1919). In the lost notebook, F(q) represents S(q).
R(q) is a continued fraction of the form:
and similarly for S(q). (The presence of the prefactor
These functions are related by S(q) = -R(-q), but that’s incorrect due to branch cuts. We can also define R and S in a way that can be evaluated more quickly through q-Pochhammer symbols.
Here are pictures of the behavior of the R function on the unit disk in the complex plane. Values returned can be complex, so these pictures show the imaginary, real, argument, and absolute values
(Im, Re, Arg, and Abs) of the function R(q). The unit circle itself is the natural boundary of analyticity and has a dense set of singularities of the function R(q). As one can see, the
Roger–Ramanujan functions are beautiful, not just due to their mathematical properties, but also visually.
The functions R and S are two of the few named functions devoted to continued fractions. Recently, we’ve been collecting theorems and formulas for R and S, including the uncompleted ones in this
piece of Ramanujan’s original “lost” notebook. That line at the end is equivalent to
Many of these have been found since Ramanujan wrote them down. All of these are readily solved with Mathematica. We list the values together with the first known solvers, with solutions by Oleg
Marichev being first realized by Mathematica.
Bruce Berndt noted, “The value of R(q^5) with R(q). We do not record the value here, because it is not particularly elegant.”
With Simplify, RootReduce, and many other Mathematica functions, large equations can be boiled down to their most elegant form. Ramanujan used chalk and his mind to simplify most of his results—the
long results he erased from his slate, but the elegant results he wrote down. It seems likely to us that Ramanujan actually did know the elegant solution, or at least a method to find it, he just
didn’t have the time to write it down. Here’s a method we used. First, calculate a numerical value for the point of interest. Second, conjecture a closed algebraic form for this number. Third,
express the algebraic number as nested radicals. Finally, check the conjectured form with many digits of accuracy.
Then we check that the numerical value of the conjectured form is the same as the value of the function. The values agree to at least 10000 places.
Since both of these are algebraic numbers with elegant representations, this is a rather convincing check. And the method can easily be generalized to find many more, so far unknown, values for S(q),
and similarly for R(q).
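The numeric half of this procedure is easy to reproduce outside Mathematica (a double-precision sketch, nowhere near the 10000 digits used above). It checks the famous value R(e^(−2π)) = √((5+√5)/2) − φ from Ramanujan's first letter to Hardy:

```python
import math

def rogers_ramanujan_R(q, depth=40):
    """R(q) = q^(1/5) / (1 + q/(1 + q^2/(1 + q^3/(1 + ...)))), evaluated bottom-up."""
    acc = 1.0
    for n in range(depth, 0, -1):
        acc = 1.0 + q ** n / acc
    return q ** 0.2 / acc

q = math.exp(-2 * math.pi)
phi = (1 + math.sqrt(5)) / 2
closed_form = math.sqrt((5 + math.sqrt(5)) / 2) - phi   # ~0.2840790...

assert abs(rogers_ramanujan_R(q) - closed_form) < 1e-10
```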
An actual proof can be accomplished using modular equations. This is the modular equation of order 5 for S:
We use the previously known value for S(q^5) and solve for S(q) to obtain a value for
Clearing denominators, we obtain the above form of the result.
Ramanujan’s equations are related to work we’ve done recently to add a lot of continued fraction knowledge to Wolfram|Alpha. In a future blog we will expand on the new capabilities, such as the input
continued fraction K (1, n, {n, 1, inf}).
We also put together a list of hundreds of exact values in the “Ramanujan R and S” interactive Demonstration.
“Not particularly elegant”—never a good thing to say about Ramanujan. We’re glad we were able to show that Ramanujan had something elegant in mind.
Download this post as a Computable Document Format (CDF) file.
6 Comments
It is a shame that I am from the same state where he was born (Tamil Nadu, India) and none of us embrace what a mathematical genius he was; only a very few people know his work. Not even our school books cared about it :(
Posted by Karthick May 1, 2013 at 5:44 pm Reply
That’s sadly and unfortunately true :-( . I wish his works are respectfully recognized here in India too and not just in the mathematics community. What a genius he was..
Posted by Anamika January 3, 2014 at 12:03 am Reply
Wolfram’s computation technology beautifully unravels the profound relationship between Math and arts. It’s high time for Mathematicians and Artists to appreciate each other’s world and understand
the underlying ‘oneness’ between the two. Thank you!
Posted by Ankur Gupta May 11, 2013 at 2:31 pm Reply
Maths is always beautiful; every function has its own sexy curve. What one needs to see it is a mathematically aesthete mind. Well, I know one thing: whenever Ramanujan's legacy is solved, the universe will not be a mystery. God knows if we can use it for teleporting or time travelling. The theory of relativity may come out of papers and many more… I wish to see it in my life.
Posted by Akash Sen October 8, 2013 at 11:39 am Reply
Ramanujan is not human, he is a beast.
Posted by Victor Kamat October 10, 2013 at 4:54 pm Reply
Galois field multiplication system and method - Patent # 7526518 - PatentGenius
Inventor: Zhang, et al.
Date Issued: April 28, 2009
Application: 10/964,850
Filed: October 13, 2004
Inventors: Zhang; Ming (San Jose, CA)
Nemat; Awais Bin (Santa Clara, CA)
Bliss; David Edward (Loomis, CA)
Assignee: Cisco Technology, Inc. (San Jose, CA)
Primary Examiner: Ngo; Chuong D
Attorney Or Agent: Stolowitz Ford Cowger LLP
U.S. Class: 708/492
Field Of Search: 708/492; 380/28
International Class: G06F 7/00
U.S. Patent Documents:
Other References: www.ka9q.net, Phil Karn, "Introduction and Executive Summary," 8 pages total, Jan. 7, 2002, www.ka9q.net/papers/ao40tlm.html. cited by other.
www.portal.acm.org, Christof Paar, Peter Fleischmann, Pedro Soria-Rodriguez, "Fast Arithmetic for Public-Key Algorithms in Galois Fields with Composite Exponents," 5 pages total, Oct. 1999, ACM Inc., portal.acm.org/affiliated/citation.cfm?id=323224&dl=guide&cull=acm&cfid=1-5151515&cftoken=6184618. cited by other.
C. Grabbe et al., "FPGA Designs of Parallel High Performance GF(2^233) Multipliers", Proceedings of the IEEE International Symposium on Circuits and Systems, vol. II, pp. 268-271, 2003. cited by other.
A. Halbutoğulları et al., "Mastrovito Multiplier for General Irreducible Polynomials", Oregon State University, 10 pages, 2000. cited by other.
Yongfei Hang et al., "Fast Algorithms for Elliptic Curve Cryptosystems over Binary Finite Field", Proceedings of the International Conference on the Theory and Applications of Cryptology and Information Security: Advances in Cryptology, pp. 75-85, Springer-Verlag, 1999. cited by other.
Berk Sunar, "Fast Galois Field Arithmetic of Elliptic Curve Cryptography and Error Control Codes", Oregon State University, 81 pages, 1998. cited by other.
Tong Zhang et al., "Systematic Design Approach of Mastrovito Multipliers over GF(2^m)", Proceedings of the 2000 IEEE Workshop on Signal Processing Systems, pp. 507-516, 2000. cited by other.
Abstract: A present invention Galois field multiplier system and method utilize lookup tables to generate one partial product term and one feedback term in one clock cycle. In one embodiment, a Galois field multiplier system includes a plurality of shift registers, a plurality of exclusive OR components, a partial product lookup table, and a feedback lookup table. The plurality of shift registers perform shift multiplication operations and are coupled to the plurality of exclusive OR components, which perform addition operations. The partial product lookup table and feedback lookup table are selectively coupled to the exclusive OR components, and values from the partial product lookup table and feedback lookup table are fed into the selectively coupled exclusive OR components. Coefficients of the partial product term and feedback term are utilized as indexes to the partial product lookup table and feedback lookup table respectively.
Claim: What is claimed is:
1. A Galois field multiplier system comprising: a plurality of shift registers for performing shift multiplication operations; a plurality of exclusive OR components coupled to said plurality of shift registers, said plurality of exclusive OR components for performing addition operations; a partial product lookup table selectively coupled to said exclusive OR components, wherein values from said partial product lookup table are fed into said exclusive OR components; and a feedback lookup table selectively coupled to said exclusive OR components, wherein values from said feedback lookup table are fed into said exclusive OR components; wherein the Galois field multiplier system produces a final result of a Galois field multiplication operation to be used in error detection in data communication, error correction in data communication, data encryption, data decryption, data encoding, or data decoding, or combinations thereof.
2. The Galois field multiplier system of claim 1 wherein coefficients of a partial product term are utilized as indexes to said partial product lookup table.
3. The Galois field multiplier system of claim 1 wherein the Galois field multiplier system is further operable to reduce clock cycles required for producing the final result of the Galois field multiplication operation using a serial bit implementation.
4. The Galois field multiplier system of claim 1 wherein coefficients of a feedback term are utilized as indexes to said feedback lookup table.
5. The Galois field multiplier system of claim 1 wherein the Galois field multiplier system generates one partial product term and one feedback term in one clock cycle.
6. The Galois field multiplier system of claim 1 wherein the final result is a syndrome or a codeword.
7. The Galois field multiplier system of claim 1 wherein a value field of said partial product lookup table is the sum of partial products from non-zero index bits.
8. The Galois field multiplier system of claim 1 wherein a value field of said feedback lookup table is the sum of feedback terms from non-zero index bits.
9. The Galois field multiplier system of claim 1 wherein an index address of said feedback table is calculated by a look ahead method.
10. A Galois field multiplier method comprising: utilizing a partial product lookup table to provide resulting partial product coefficient values of a polynomial; utilizing a feedback lookup table to provide resulting feedback coefficient values of the polynomial; performing an exclusive OR operation on said resulting partial product coefficient values and said resulting feedback coefficient values; shifting results of said exclusive OR operation; and producing a final result of a Galois field multiplication operation to be used in error detection in data communication, error correction in data communication, data encryption, data decryption, data encoding, or data decoding, or combinations thereof.
11. The Galois field multiplier method of claim 10 wherein utilizing the partial product lookup table further comprises utilizing multiple bits of the polynomial term.
12. The Galois field multiplier method of claim 11 further comprising utilizing the multiple bits as an index to said partial product lookup table to generate one partial product term
in one clock cycle.
13. The Galois field multiplier method of claim 11 wherein the multiple bits of the polynomial term belong to a partial product term.
14. The Galois field multiplier method of claim 10 wherein utilizing the feedback lookup table further comprises utilizing multiple bits of the polynomial term.
15. The Galois field multiplier method of claim 14 further comprising utilizing the multiple bits as an index to said feedback lookup table to generate one feedback term in one clock cycle.
16. The Galois field multiplier method of claim 14 wherein the multiple bits of the polynomial term belong to a feedback term.
17. A Galois field multiplier apparatus comprising: means for multiplying terms; means for adding terms; means for supplying one partial product value from multiple bits of a partial product multiplier wherein said means for supplying one partial product value includes means for looking up said partial product value; means for supplying one feedback value from multiple bits of a feedback multiplier wherein said means for supplying one feedback value includes means for looking up said feedback value; and means for producing a final result of a Galois field multiplication operation to be used in error detection in data communication, error correction in data communication, data encryption, data decryption, data encoding or data decoding, or combinations thereof.
18. The Galois field multiplier apparatus of claim 17 further comprising means for reducing clock cycles to produce the final result wherein said means for reducing clock cycles
includes means for reducing exclusive OR iterations.
19. A computer usable medium having a computer readable program code embodied therein that when executed causes a computer to: create a partial product lookup table; create a feedback lookup table; identify a partial product multiplier coefficient as an index to said partial product lookup table; identify a feedback multiplier coefficient as an index to said feedback lookup table; utilize the identified partial product multiplier coefficient to return a partial product value from said partial product lookup table; utilize the identified feedback multiplier coefficient to return a feedback value from said feedback lookup table; and produce a final result that is associated with Galois field multiplication and to be used in error detection in data communication, error correction in data communication, data encryption, data decryption, data encoding or data decoding, or combinations thereof.
20. The computer usable medium of claim 19 having the computer readable program code embodied therein that when executed further causes the computer to perform an exclusive OR
operation on said partial product values and said feedback values.
21. The computer usable medium of claim 20 having the computer readable program code embodied therein that when executed further causes the computer to shift results of said exclusive
OR operation.
22. The computer usable medium of claim 19 having the computer readable program code embodied therein that when executed further causes the computer to reduce a number of exclusive OR iterations required in a Galois field multiplication operation by utilizing multiple feedback multiplier coefficients as indexes to said feedback lookup table and utilizing multiple partial product multiplier coefficients as indexes to said partial product lookup table in a serial bit implementation.
23. The computer usable medium of claim 19 having the computer readable program code embodied therein that when executed further causes the computer to: parse a Galois field multiplication expression; and extract partial product coefficient values and feedback coefficient values.
Description: FIELD OF THE INVENTION
This invention relates to the field of computational systems. In particular, the present invention relates to a Galois field multiplication system and method.
BACKGROUND OF THE INVENTION
Electronic systems and circuits have made a significant contribution towards the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Numerous electronic technologies such as digital computers, calculators, audio devices, video equipment, and telephone systems facilitate increased productivity and cost reductions in analyzing and communicating data, ideas and trends in most areas of business, science, education and entertainment. These results are often provided by systems that perform complicated computational operations, including operations associated with coding and cryptography techniques utilizing Galois fields. Many coding and cryptography applications require the operations to be completed quickly, and slow results can have adverse impacts on performance. However, traditional Galois field computation approaches often involve a relatively significant amount of resources and/or a large number of computation iterations that require a relatively long time to complete.
As various information processing systems advance and proliferate, reliable and efficient information communication and/or storage systems are becoming more important and often critical. Emerging large-scale and/or fast systems (e.g., storage devices, data networks, etc.) often require effective and dependable handling of vast amounts of information. Traditional systems often utilize Galois field multiplication and division to facilitate accurate and dependable exchange, processing and retention of information in a variety of applications. For example, Galois field computations are often used to ensure information is secure and/or accurately reproduced.
Reliably "reproducing" data without errors (e.g., introduced by a signal "corrupted" during communication) is one typical major concern in both the communication and storage of information. Error detection and correction schemes typically involve encoding/decoding information "messages". For example, a message is typically divided into blocks of k information bits each, with a total of 2^k different possible messages. An encoder transforms each message into an n-tuple vector of discrete symbols called a code word or code vector. Code words and/or encrypted messages are often encoded and decoded utilizing Galois field manipulation. For example, codewords of cyclic codes are conveniently represented as Galois field polynomials that are encoded and decoded using multiplication and division of the Galois field representation.
Galois field multiplication and division can also be utilized in cryptography operations by encoding messages to permit secure communication over an otherwise insecure communication channel. Cryptography systems usually involve manipulations of a message in accordance with a "key" and encryption/decryption rules. Traditionally, keys are utilized in both symmetric cryptographic systems (e.g., based on a secret "key") and asymmetric cryptographic systems (e.g., based upon a public-private key pair). The underlying operations involved in these cryptography systems often involve Galois field multiplication and division of the messages and keys to encrypt/decrypt cyphertext.
Traditional Galois field GF(2^m) multiplication and division approaches often utilize single bitwise serial or parallel operations. Traditional parallel GF(2^m) multipliers typically generate n partial products and n feedback terms, each n bits wide. Then an exclusive OR (XOR) of the n partial products and n feedback terms is performed to get a single final product. Bit parallel multipliers generally provide results in one clock cycle (output bits are calculated in one clock cycle) and involve significant hardware (e.g., silicon area) requirements. The significant hardware requirements of bit parallel multipliers often make them impractical for a number of large degree applications (e.g., a 2^128 multiplier for authentication in CPS GCM mode). Serial bit multipliers typically require less hardware and are usually well suited for normal basis representation, allowing efficient squaring.
While serial bit multipliers typically require less hardware, serial bit multipliers usually require multiple clock cycles to provide a final output. In conventional GF(2^m) serial approaches, a single bit is usually entered for each iteration (e.g., one output bit is calculated per clock cycle). For example, in a typical traditional serial multiplier approach one bit of a multiplier generates one partial product. Since the inputs (e.g., multiplier, quotient, etc.) usually have a rather large number of bits, conventional approaches produce a relatively large number of partial products that require a significant number of XOR iterations, which can take a relatively significant amount of time to complete (e.g., due to large iteration delays).
A Galois field multiplier system and method for reducing exclusive OR iterations in Galois field multiplication are presented. A present invention Galois field multiplier system and method utilize lookup tables to facilitate simplified and expedited Galois field multiplication. The lookup tables permit the use of multiple bits of a multiplier to generate one partial product term and one feedback term in one clock cycle. In one embodiment, a Galois field multiplier system includes a plurality of shift registers, a plurality of exclusive OR components, a partial product lookup table, and a feedback lookup table. The plurality of shift registers perform shift multiplication operations. The plurality of exclusive OR components are coupled to the plurality of shift registers and perform addition operations. The partial product lookup table is selectively coupled to the exclusive OR components, and values from the partial product lookup table are fed into the selectively coupled exclusive OR components. Similarly, the feedback lookup table is selectively coupled to the exclusive OR components, and values from the feedback lookup table are fed into the selectively coupled exclusive OR components. The coefficients of the partial product term and feedback term are utilized as indexes to the partial product lookup table and feedback lookup table respectively.
DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a Galois field multiplier system in accordance with one embodiment of the present invention.
FIG. 2 is a block diagram of an exemplary partial product lookup table segment in accordance with one embodiment of the present invention.
FIG. 3 is a block diagram of an exemplary feedback selection input component in accordance with one embodiment of the present invention.
FIG. 4 is a flow chart of a Galois field multiplier method in accordance with one embodiment of the present invention.
FIG. 5 is a flow chart of a finite field lookup table method in accordance with one embodiment of the present invention.
DETAILED DESCRIPTION
Reference will now be made in detail to the preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. While the invention will be described in conjunction with the preferred embodiments, it will be understood that they are not intended to limit the invention to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one ordinarily skilled in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the current invention.
In one embodiment, a present invention Galois field multiplier system and method utilizes multiple coefficient bits of a partial product term and feedback term as indexes to a partial product term lookup table and a feedback term lookup table respectively. The partial product term lookup table and the feedback term lookup table return partial product values and feedback values for use in Galois field multiplication operations. The lookup tables permit the use of multiple bits of a multiplier to generate one partial product term and one feedback term in one clock cycle.
In one exemplary implementation, a Galois field GF(2^n) multiplier in standard basis is used, in which

A = Σ_{i=0}^{n-1} a_i·α^i, B = Σ_{i=0}^{n-1} b_i·α^i, C = A·B mod f(x),

where α is a root of the irreducible polynomial f(x) and C is the product of A and B. The operation for C can be written as:

C = Σ_{i=0}^{n-1} b_i·(A·α^i mod f(x)) = b_0·A + b_1·(A·α mod f(x)) + ... + b_{n-1}·(A·α^{n-1} mod f(x))

in a least significant bit (LSB) first implementation. Alternatively, the equation for C can be written as:

C = (...((A·b_{n-1}·α + A·b_{n-2})·α + A·b_{n-3})·α + ... + A·b_0) mod f(x)

in a most significant bit (MSB) first implementation. Using an MSB first example, for step iteration k, where k is greater than zero and less than n+1 (e.g., 0 < k < n+1), then C(k) = C(k-1)·α + A·b_{n-k} mod f(x) for each C(k).
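As an illustration (ours, not part of the patent text), the MSB-first recursion above maps directly onto a bit-serial GF(2^m) multiplier in software; the function name and the integer encoding of polynomials are our own conventions:

```python
def gf_mult_msb_serial(a, b, f, m):
    """MSB-first bit-serial multiply in GF(2^m): C(k) = C(k-1)*alpha + A*b_{m-k} mod f(x).
    Polynomials are encoded as integers; f includes the x^m term."""
    c = 0
    for k in range(m - 1, -1, -1):
        c <<= 1                  # shift: multiply the accumulator by alpha (x)
        if c & (1 << m):         # feedback term: reduce modulo f(x)
            c ^= f
        if (b >> k) & 1:         # partial product term: add A when this b bit is set
            c ^= a
    return c
```

For example, with the AES field GF(2^8) and f(x) = x^8 + x^4 + x^3 + x + 1 (0x11B), `gf_mult_msb_serial(0x57, 0x83, 0x11B, 8)` gives 0xC1, matching the worked multiplication example in FIPS-197.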
In one embodiment, the operations for determining C(k) include shift operations (e.g., a left shift by one), feedback operations and partial product operations. When C(k), A and g are treated as n-element arrays and using the definition

A[n-1:0]·α mod f(x) = {A[n-2:0], 1'b0} + (A[n-1]·g),

the operations for determining C(k) can be expressed as:

C(k) = {C(k-1)[n-2:0], 1'b0} + PP(n-k) + FB(k), with PP(n-k) = b_{n-k}·A and FB(k) = C(k-1)[n-1]·g,

where PP(n-k) is a partial product term and FB(k) is a feedback term.

In one embodiment, more than one bit is utilized to generate one partial product term for PP(n-k), and the total number of partial products that would otherwise be performed is reduced, which in turn reduces the number of partial product XOR iterations that would otherwise be performed. Similarly, more than one bit can be utilized to generate one feedback term for FB(k). Again, the total number of feedback XOR iterations that would otherwise be performed can be reduced. Multiple bits of the partial product PP(n-k) coefficient are utilized to generate one partial product term and multiple bits of the feedback FB(k) coefficient are utilized to generate one feedback term.
In one exemplary implementation, two bits are utilized to generate one partial product term PP(n-k) and one feedback term FB(k). The operations for performing a Galois field multiplication using two bits to generate the partial product term PP(n-k) and the feedback term FB(k) can be expressed as:

C(k+1) = C(k)·α + A·b_{n-k-1} mod f(x) = {C(k-1)[n-3:0], 2'b00} + C(k-1)[n-1]·{g[n-2:0], 1'b0} + C(k)[n-1]·g + b_{n-k}·{A[n-2:0], 1'b0} + b_{n-k-1}·A.

This can be rewritten as C(k+1) = {C(k-1)[n-3:0], 2'b00} + PP*[n-k] + FB*(k), with partial product term PP*[n-k] and feedback term FB*(k). The partial product operations can be expressed as:

PP*[n-k] = b_{n-k}·{A[n-2:0], 1'b0} + b_{n-k-1}·A.

The feedback operations can be expressed as:

FB*(k) = C(k-1)[n-1]·{g[n-2:0], 1'b0} + (C(k-1)[n-2] + C(k-1)[n-1]·g[n-1] + A[n-1]·b_{n-k})·g,

wherein C(k-1)[n-1] and C(k-1)[n-2] + C(k-1)[n-1]·g[n-1] + A[n-1]·b_{n-k} are coefficients of the feedback term. The coefficients of the partial product term and feedback term are utilized to generate one partial product term PP(n-k) and one feedback term FB(k).

In the present implementation, the partial product term is defined by two bits. The first partial product term PP(n-k) coefficient bit is defined as b_{n-k} and the second partial product term PP(n-k) coefficient bit is defined as b_{n-k-1}, wherein the first and second partial product term bits are coefficients of the partial product term. The first feedback term FB(k) coefficient bit is defined as C(k-1)[n-1] and the second feedback term FB(k) coefficient bit is defined as C(k-1)[n-2] + C(k-1)[n-1]·g[n-1] + A[n-1]·b_{n-k}, wherein the first and second feedback term bits are coefficients of the feedback term.
FIG. 1 is a block diagram of Galois field multiplier system 100 in accordance with one embodiment of the present invention. Galois field multiplier system 100 includes shift registers 110 through 112, exclusive OR gates 121 through 124, feedback lookup table 191 and partial product lookup table 192. The XOR gates 121 through 124 are sequentially coupled to interleaved respective shift registers 110 through 112. The XOR gates 121 through 124 are also coupled to feedback lookup table 191 and partial product lookup table 192. Feedback lookup table 191 is coupled to feedback selection input 181. Partial product lookup table 192 is coupled to partial product selection input 130. The XOR gate 121 forwards output 141.
The components of Galois field multiplier system 100 cooperatively operate to perform Galois field multiplication operations. Shift registers 110 through 112 perform shift "multiplication" operations. Exclusive OR components 121 through 124 perform logical exclusive OR "addition" operations. Partial product lookup table 192 forwards partial product values into the selectively coupled exclusive OR components 121 through 124 in accordance with information from partial product selection input 130. Feedback lookup table 191 forwards feedback values into the selectively coupled exclusive OR components 121 through 124 in accordance with selection information from feedback selection input 181.
In one embodiment of the present invention, the coefficients of a partial product term are utilized as indexes to partial product lookup table 192 and coefficients of a feedback term are utilized as indexes to feedback lookup table 191. For example, a first partial product coefficient 131 (e.g., b_{n-i}) and a second partial product coefficient 132 (e.g., b_{n-i-1}) are utilized as indexes to partial product lookup table 192. First feedback coefficient 182 (e.g., C(k-1)[n-1]) and second feedback coefficient 183 (e.g., C(k-1)[n-2] + C(k-1)[n-1]·g[n-1] + A[n-1]·b[n-k]) are utilized as indexes to feedback lookup table 191.
In one embodiment, a value field of the partial product lookup table is the sum of partial products from non-zero partial product index bits and a value field of the feedback lookup table is the sum of feedback terms from non-zero feedback index bits. In one exemplary implementation, the index address of the feedback table is calculated by a look ahead method. It is appreciated that the present invention lookup tables can have a variety of configurations and implementations. In one embodiment of the present invention, lookup tables include multiplexers that forward an output based upon the coefficient bits of a Galois field polynomial term. For example, partial product lookup table component 192 includes partial product multiplexers 151 through 154 and feedback lookup table component 191 includes feedback multiplexers 171 through 174. Partial product multiplexers 151 through 154 and feedback multiplexers 171 through 174 are coupled to exclusive OR gates 121 through 124 respectively. Partial product multiplexers 151 through 154 are also coupled to partial product selection input 130. Feedback multiplexers 171 through 174 are coupled to feedback selection input 181. Partial product multiplexers 151 through 154 forward a partial product term value based upon partial product selection input 130. Feedback multiplexers 171 through 174 forward a feedback term value based upon feedback selection input 181.
In one embodiment of the present invention, partial product multiplexer configurations are dependent on the number of multiple bits that are utilized as coefficient inputs. FIG. 2 is a block diagram of an exemplary partial product lookup table segment 200 in accordance with one embodiment of the present invention. Partial product lookup table segment 200 includes partial product multiplexer 221 (e.g., similar to partial product multiplexers 151 through 154) coupled to inputs first partial product value 211, second partial product value 212, third partial product value 213, and fourth partial product value 214. Partial product multiplexer 221 forwards one of the inputs as the final partial product value 231 based upon partial product selection input 251 (e.g., similar to partial product selection input 130). The partial product values can be calculated on the fly from input 205. For example, given input A, first partial product value 211 can be set to 0000, second partial product value 212 is set equal to A, and third partial product value 213 is set equal to A left shifted once. Fourth partial product value 214 is set equal to A exclusive ORed with the value of A left shifted once.
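A software sketch (ours, not the patent's circuit) of the two-bit-per-iteration scheme uses the same four-entry tables: a partial product table holding {0, A, A·x, A + A·x} and a feedback table holding the reductions of the two overflow positions x^m and x^(m+1). Unlike the hardware, this sketch reduces A·x and the shifted accumulator directly rather than folding A[n-1] into the feedback coefficient, and it assumes m is even:

```python
def gf_mult_2bit_lut(a, b, f, m):
    """Two multiplier bits per iteration via 4-entry lookup tables (assumes m even).
    Polynomials are encoded as integers; f includes the x^m term."""
    mask = (1 << m) - 1
    g = f & mask                                         # x^m mod f(x)
    x_m1 = (g << 1) ^ (f if (g >> (m - 1)) & 1 else 0)   # x^(m+1) mod f(x)
    fb_table = [0, g, x_m1, g ^ x_m1]                    # indexed by the two overflow bits
    a2 = (a << 1) ^ (f if (a >> (m - 1)) & 1 else 0)     # (A * x) mod f(x)
    pp_table = [0, a, a2, a ^ a2]                        # indexed by two bits of b
    c = 0
    for k in range(m - 2, -1, -2):
        c <<= 2                                          # shift by two positions
        c = (c & mask) ^ fb_table[c >> m]                # one feedback lookup per cycle
        c ^= pp_table[(b >> k) & 3]                      # one partial product lookup per cycle
    return c
```

It returns the same results as a bit-serial multiplier while halving the iteration count, which is the point of the lookup tables.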
FIG. 3 is a block diagram of an exemplary feedback coefficient determination component 300 in accordance with one embodiment of the present invention. In one exemplary implementation, feedback coefficient determination component 300 is utilized to determine (e.g., on the fly) a feedback coefficient for utilization as an index value to a feedback lookup table. Feedback coefficient determination component 300 includes a first logical AND gate 321 and a second logical AND gate 322 coupled to logical XOR gate 331. The first coefficient determination input 311 (e.g., C(k-1)[n-2]) and the outputs of first logical AND gate 321 and second logical AND gate 322 are coupled to logical XOR gate 331. Second coefficient determination input 312 (e.g., C(k-1)[n-1]) and third coefficient determination input 313 (e.g., g[n-1]) are coupled to the input of first logical AND gate 321. Fourth coefficient determination input 314 (e.g., A[n-1]) and fifth coefficient determination input 315 (e.g., b[n-k]) are coupled to the input of second logical AND gate 322.
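The gate-level computation in FIG. 3 is ordinary GF(2) arithmetic: addition is XOR and multiplication is AND. A one-line sketch (the helper name is our own) of the second feedback coefficient C(k-1)[n-2] + C(k-1)[n-1]·g[n-1] + A[n-1]·b[n-k]:

```python
def feedback_coeff_bit(c_n2, c_n1, g_n1, a_n1, b_nk):
    """Second feedback coefficient over GF(2):
    two AND gates feeding one XOR gate, as in FIG. 3."""
    return c_n2 ^ (c_n1 & g_n1) ^ (a_n1 & b_nk)
```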
FIG. 4 is a flow chart of Galois field multiplier method 400 in accordance with one embodiment of the present invention. Galois field multiplier method 400 facilitates reduction of XOR iterations in Galois field multiplication operations. Galois field multiplier method 400 can be utilized with irreducible polynomial operations that are fixed or programmable.
In step 410, a lookup table is utilized to provide resulting coefficient values of polynomial terms. In one embodiment, the resulting coefficient values are resulting partial product coefficient values and resulting feedback coefficient values. Multiple bits of a multiplier can be utilized as an index to the lookup table to generate one term (e.g., one partial product term, one feedback term, etc.). In one exemplary implementation, the term is generated in one clock cycle.
In step 420, an exclusive OR operation is performed on the resulting coefficient values. In one embodiment of the present invention, the exclusive OR functions as a Galois field
addition operation. By utilizing the coefficient values from the lookup table of step 410, fewer XOR iterations are performed than would otherwise be required.
In step 430, the results of the exclusive OR operation are shifted. In one exemplary implementation the XOR operation results are shifted to the left and the shifting functions as a
Galois field multiplication.
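Steps 410 through 430 can be combined into a rough software analogue. The sketch below is an assumption-laden illustration, not the patented circuit: the field GF(2^8), the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B), and the two-bits-per-iteration schedule are our own choices, and the per-digit terms are computed inline where the described hardware would read them from lookup tables.

```python
def gf_mul(a, b, poly=0x11B, width=8):
    """GF(2^width) multiply, consuming two multiplier bits per iteration."""
    top = 1 << width
    acc = 0
    for i in reversed(range(0, width, 2)):
        for _ in range(2):          # step 430: left shift = multiply by x
            acc <<= 1
            if acc & top:           # reduce modulo the irreducible polynomial
                acc ^= poly
        digit = (b >> i) & 0b11
        term = (0, a, a << 1, a ^ (a << 1))[digit]  # step 410: one term per digit
        acc ^= term                 # step 420: Galois field addition is XOR
        if acc & top:               # the shifted terms can overflow by one bit
            acc ^= poly
    return acc
```

Because each iteration consumes two multiplier bits and XORs in a single term, only width/2 XOR accumulation steps are needed, which is the reduction in XOR iterations the method aims for.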
The present invention can be implemented in a variety of systems (e.g., computers, data processing systems, communication networks) and/or diverse applications. For example, a present
invention system and/or method can be implemented in a variety of storage devices (e.g., random access memories, hard drive memories, flash memories, etc.) and communication equipment
(e.g., routers, switches, etc.). A present invention system and/or method can be utilized to perform Galois field multiplication and/or division associated with checking for errors in
information (e.g., ECC encoding/decoding, etc.). A present invention system and/or method can also be utilized to facilitate security schemes and/or message authentication in a
variety of applications, for example, communication of confidential information (e.g., information associated with financial transactions, national security, etc.) over a network (e.g.,
over the Internet). It is appreciated that a present invention system and/or method can be implemented in a variety of cryptography and/or encoding implementations. For example, Galois
field multiplier system 100 can be utilized to produce a codeword, a syndrome, ciphertext, etc. It is also appreciated that the present invention can be implemented in hardware,
software, firmware and/or combinations thereof. For example, instructions for causing a processor to perform Galois field multiplier method 400 can be stored on a computer readable
medium and executed by a processing system.
FIG. 5 is a flow chart of a finite field lookup table method 500, in accordance with one embodiment of the present invention.
In step 510, create a finite field lookup table. In one embodiment, the finite field lookup table is a partial product lookup table (e.g., partial product lookup table 192). In
another embodiment, the finite field lookup table is a feedback lookup table (e.g., feedback lookup table 191). In one exemplary implementation the finite field lookup table maps
finite field lookup table values to indexes associated with multiplier coefficients.
In step 520, identify a multiplier coefficient as an index to the finite field lookup table. In one embodiment of the present invention, a Galois field multiplication expression is
parsed and coefficient values are extracted. For example, partial product coefficient values (e.g., b[n-k], b[n-k-1]) are extracted. Feedback coefficient values (e.g., C(k-1)[n-1]
and C(k)[n-1]) are extracted.
In step 530, the identified multiplier coefficient is utilized to return a value from the finite field lookup table.
Thus, the present invention is a system and method that facilitates expedited Galois field multiplication results. A present invention Galois field multiplication system and method
provides results faster than a traditional serial bit implementation without requiring the significant hardware resources (e.g., in terms of silicon area) of traditional parallel
multipliers. The lookup tables permit the use of multiple bits of a multiplier to generate one partial product term and one feedback term in one clock cycle.
The foregoing descriptions of specific embodiments of the present invention have been presented for purposes of illustration and description. They are not intended to be exhaustive or
to limit the invention to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and
described in order to best explain the principles of the invention and its practical application, to thereby enable others skilled in the art to best utilize the invention and various
embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the Claims appended hereto and
their equivalents.
their equivalents.
* * * * *
please help with excel exercises
November 1st 2007, 05:49 AM #1
Junior Member
Oct 2006
please help with excel exercises
Hello, how are you.
could you please help me to solve these excel exercises, they are so difficult for me, I don't know what formulas to use.
A standard auto loan. For each of the 48 monthly payments compute the items in the table: Beginning Balance, Payment,
Interest portion of the Payment, etc ... Use Excel's Payment function (PMT), and be careful about relative and absolute
cell references. The payment, of course, will be the same for all months.
A detail: interest rates are usually quoted on an annual basis. But here the payments are monthly. How will you account for
this when you reference the interest rate?
If you do this cleverly, you can specify the first two rows, then make one big copy for the other 46 rows.
Amount $25,000.00
Interest Rate 6%
Months 48
Beginning Balance Payment Amount Interest Principal Reduction Ending Balance
1 25,000.00
Total Interest Total Principal
================================================== ==========
Prepare a Purchases Budget for Grunk that reflects a Cost of Merchandise Sold (Cost of Goods Sold, Cost of Sales)
of 60% and a required ending inventory of 65% of the next month's sales.
Beginning inventory is $50,000.
Sales for January of next year are estimated to be $365,000
Jan Feb
Sales $300,000
Cost of
Mrch Sold
Inventory 50,000
Prepare a Cash Receipts Budget for Grunk based on its historical cash collection patterns:
50% in the month of sale.
40% in the month following the sale.
10% two months following.
January collections will include $80,000 from the previous December and $20,000 from November.
February collections will include $20,000 from the previous December.
Jan Feb
Sales $300,000
Cash Collections:
Month of
Month $80,000
2 Months $20,000 20,000
================================================== ==========
Create a Cash Disbursements Budget for Grunk based on historical patterns:
40% of payments are made in the month of purchase.
60% of payments are made in the month following.
January payments are expected to include $60,000 from December purchasing activity.
Jan Feb
Cash Payments:
Month of
Month 60,000
thank you.
First, please notice that Excel has documentation all along the way. Just type "=pmt(". As soon as you type the left parenthesis, you should get a picture of the available parameters. The same goes for
many others. Try "=fv(". It's great stuff and should help you on your way.
The determination of what to do with mismatched interest crediting and payments MUST be specified in the problem statement, or you should have a very good argument if you get it wrong on an exam.
Generally, unless you are an insurance company, interest rates are reported as the rather useless NOMINAL annual rate. That's all fine, but you must know about compounding in order to get the
right answer. It also helps to know a little about common practices. For example, auto loans often quote a nominal annual rate and take payments monthly. For auto loans, simply dividing the nominal
annual rate by 12 will do. If this sort of thing is not given in the problem statement, state it clearly before proceeding. This will give you power when you argue the point with the grad
student who marked it wrong.
In your case, enter simply "=pmt(0.06/12,48,25000,0,0)"
0.06/12 is the monthly interest.
48 is the number of months. Notice that this is 4*12 = 48
25000 is the beginning balance
0 is the final balance. This is good for "salvage" questions. If there is no salvage value, or if the loan is completely paid off, you can use "0" or simply omit the value.
0 is an option switch. Usually you can forget about it, as a "0" (or nothing) means payments are at the end of each period. Sometimes you want payments at the beginning of the period, and this
requires a "1" in the fifth parameter.
thank you.
Thank you so much tkhunny, I appreciate your help, but I still don't understand, I suck at excel, could you please tkhunny go to this website Untitled Document and solve for me the exercises
titled "special ll" and mail the answers to my email camachi@hotmail.com I solved special problems I, but I just can't solve the second set of exercises, they are the same exercises I already
typed. But probably you could understand them better if you see them in an excel sheet. Please Tkhunny help me.
As opposed to learning how to do it and doing the problems yourself?? Sheesh!
R K Bansal Book Of Fluid Mechanics
AE3T4 Fluid Mechanics and Hydraulic Machinery Credits: 4 Lecture: 4 ... Machines”, Standard Book House, 2002 2. R.K.Bansal “A Textbook of Fluid Mechanics and Hydraulic Machines”, Laxmi Publications,
Ltd, 2005 Reference s: 1.
Subject Code ME 2204 Semester III UNIT No. I Proposed Actual 10.07.13 Units and Dimensions A Text book of Fluid Mechanics And Hydraulic Machines, R.K.Bansal &
A Text Book of Fluid Mechanics and Hydraulic Machines: R. K. Bansal(Lax mi publication) 2. Fluid Mechanics & Machinery – C.S.P Ojha, R.Berndtsson, P.N. Chandramouli, (Oxford university press) 2.
Primary Manufacturing Process Theory 1.
Bansal, R.K., Fluid Mechanics and Hydraulic Machines, Laxmi publications, New Delhi, 1998. 2. Modi, P.N. & Seth, S.M., “A Text book of Fluid Mechanics and Hydraulic Machines”, Standard Book House,
New Delhi, 10th Edition, 1991. Title:
R.K. Bansal Text Book of Fluid Mechanics and Hydraulic Machines R.K. Bansal 11 Heat transfer Heat and mass transfer: (SI units) D.S. Kumar Heat Transfer (Si Units) Sie Holman Heat & Mass Transfer 2E
P.K. Nag Fundamentals of
solid and fluid mechanics Dr.r.k.Bansal & laxmi publications (p) LTD ... Specify Book,Author & Publication ...
Dr. R.K.Bansal, Fluid Mechanics and Hydraulic Machines, Laxmi Publications, Delhi ... R K Rajput, Fluid Mechanics and Fluid Machines, Laxmi ... Higgins R. A., Hodder, Engineering Metallurgy I and II,
English language Book 4) A.K. Sinha, Powder Metallurgy, Dhanpatrai & Sons ...
2. Rajput, R.K.,” A Text book of Fluid Mechanics and ... Bansal, R.K., Fluid Mechanics and Hydraulic Machines, Laxmi Publications, New Delhi, 2005. 2. Som,S.R, & Biswas, “Introduction to Fluid
Mechanics and Fluid Machines”, Tata McGraw Hill, 1998. 3. Agarwal, S.K., Fluid Mechanics and ...
TRIDENT ACADEMY OF TECHNOLOGY, BHUBANESWAR CATALOGUE OF BOOKS Mechanics Sl.No. Book Name Author Publisher QUANTITY 41 A Text Book of Engineering Mechanics 5th ed. R.K.Bansal Laxmi 20
Any of the book listed below are more than adequate for this syllabus. Fluid Mechanics and Machinery R.K.Bansal Fluid Mechanics, Douglas J F, Gasiorek J M, and Swaffield J A, Longman. Mechanics of
Fluids, Massey B S., Van Nostrand Reinhold. Fluid Mechanics Shiv Kumar. FMM / KRG ...
Bansal, R.K. (1998). Fluid Mechanics and Hydraulic Machines. Laxmi Publications, Mad ras. 2. Frank M White ... Modi, P.M. and Seth, S.M. (1991). Hydraulics and Fluid Mechanics. Standard Book Hous e,
New Delhi. 3. Shames, I. (1982). Mechanics of Fluids (II ed.). Mc Graw Hill International. 4 ...
... Fluid Mechanics and Machines - Dr. R.K. Bansal (Laxmi Publications) Name of Reference Books: Fluid Mechanics - Dr. P.N. Modi (Standard Book House) ... (Prentice Hall India) Fluid Mechanics - R.J.
Garde (New Age International Publication) Fluid Mechanics - Streeter V.L. & Wylie E.B. (Tata ...
A text book of fluid mechanics and hydraulic machines by Dr. R. K. Bansal J. Reference Books 1. Fluid Mechanics V.L Streeter, and E.B Wylie,., , McGraw Hill, 1985, NewYork 2. Theory and Applications
of Fluid Mechanics, K Subramanya , Tata McGraw Hill Publishing Co, 1993, New ...
Fluid Mechanics and Hydraulic machines. R.K. Bansal ... casting ... Tata McGraw ... book presents the fundamentals of numerical simulation and describes the various available solution ... it provides
the background necessary for solving complex problems in fluid mechanics ...
Text Books: 1. P.N. Modi & S.M. Seth, Hydraulics and Fluid Mechanics including Hydraulic Machines, Standard Book House, New Delhi. 2. R.K.Bansal, A text book of Fluid Mechanics and Hydraulic
machinery, Laxmi Publications (P)
U.K. Shrivastav PEURIFOY 4. Soil Mechanics & Foundation Engineering Gopal Ranjan & Rao,Venkat Ramaiha, S. K. Garg, B.C. Punamia 5. Fluid Mechanics and Fluid Machines Modi & Seth, R. K. Bansal, A.K
.Jain K.Subramanyam, Jagdish Lal. 6. Environmental Engineering B.C. Punamia(Part I
Book Title List SI No Title Author No. of Copies 0 A course in mechanical measurements and instrumentation Sawhney, A K 5 1 A Text Books of Fluid Mechanics Bansal, R K 1
Even Semester Book Bank ECE ( 8 Sems) EE (8 Sems) ... 4 Strength of MATERIALS-1 Strength of Materials R.K.Bansal 5 Fluid Mechanics Fluid Mechanics D.S.Kumar (Fluid Mechanics & Hydraulic Machines)
Text book R K Bansal, Text Book fluid mechanics and machinery, Laxmi Publication D S Kumar, fluid mechanics fluid power Egg, kaitson publication EAG-302 : Farm Machinery 4 (3+1) Unit-1 Objectives of
farm mechanization. ...
... FLUID MECHANICS AND MACHINERY Course Instructor : D. MADESH ... Text book(s) [TB] 1. Bansal, R.K., “Fluid Mechanics and Hydraulics Machines”, Laxmi Publications (P) Ltd., ... 2 Properties of
fluids To understand the fluid properties 4 3 Specific gravity – Specific weight ...
Engineering Fluid Mechanics by K. L. Kumar, Eurasia Publishing House, New Delhi. 4. Fluid Mechanics and Hydraulic Machines by Dr. R. K. Bansal. 5. Dr. D. S. Kumar, Fluid Mechanics and Fluid Power
Engg. Katson Publishing House. ... c. R. Venkataramana, sapna book house, bangalore, 2001.
... Bansal, R.K., “Fluid Mechanics and Hydraulics Machines”, Laxmi publications (P) Ltd, New Delhi, 1995 Reference Books: R1 : Ramamirtham, S., "Fluid Mechanics and Hydraulics and Fluid ... 1998. R2
: Rajput, R.K., “A text book of Fluid Mechanics in SI Units” Title: Microsoft Word
New Arrival Book List of 2011 – 2012 at Central Library, BAU. Sl. Title Author 1. A T B Fluid mechanics Dr. R.K. Bansal
“Fluid Mechanics and Hydraulic Machines”- R. K. Bansal, Laxmi Pub., Delhi. 3. “Fluid Mechanics”- Streeter and Victor, McGraw Hill. ... Hydraulics & Fluid Mechanics - , K.R. Arora, Standard Book
house, New Delhi. 5. Fluid Mechanics & Machinery - Raghunath. H M., CBS Publishers
bb28077 fluid mechanics and hydraulics / bansal,r.k. jag pranesh kumar - 120366 12 03 1. bb6025 computer proging ... bb42879 a text book of fluid mechanics & hydraulic machines / rajput, r.k. rajeev
kumar ...
... (Khanna Publications) Fluid Mechanics and Machines - Dr. R.K. Bansal (Laxmi Publications) Name of Reference Books: Fluid Mechanics - Dr. P.N ... Fluid Machines - Dr. Jagdish Lal (Metropolitan
Book Company Private Ltd.) Fluid Machines - John P. Douglas (Pearson Publication) National Institute ...
R.K Bansal,A Text book of Fluid mechanics and Hydraulic machines, Laxmi Publications 7. Douglas,”Fluid mechanics” 4/e Pearson Education. 8. K Subramanya, Fluid Mechanics&Hydraulic Machines, Tata Mc
Graw Hill, Education Private Limited NewDelhi 9.
Bansal, R.K., "Fluid Mechanics and Hydraulic Machines", Laxmi ... 1. Jain, A.K., "Fluid Mechanics (including Hydraulic Machines)", 8th Edition, Khanna Publishers, 1995. 2. Ranga Raju, K.G ... Modi,
P.N. and Seth, S.M., “Hydraulic and Fluid Mechanics”, Standard Book House, 2000 ...
Recommended Books: 1. Modi P.N. and Seth S.M.: Hydraulics and Fluid Mechanics, Standard Book House 2. R K Bansal: Fluid Mechanics and Hydraulic Machines, Laxmi publications.
... “Fluid Mechanics and Machinery”, Tata ... Machines”, Tata Mc Graw Hill Publishing Company Ltd., New Delhi. 3. Bansal, Dr. R.K. “A Text Book of Fluid Mechanics and Hydraulic Machines”, Luxmi
Publications (P) Ltd., New Delhi. 4. Rajput, R.K. “A Text Book of Hydraulics ...
... Fluid Mechanics and Machinery ” ,Modi P N & Seth S N, Standard Book House ,New Delhi. 2. “Theory of Hydraulic Machinary”, V.P. Vasandani, Khanna Publishers, Delhi. 3. “A text book of fluid
Mechanics & hydraulic Machines”, Dr.R.K. Bansal, Laxmi Publications Ltd. 4. “Hydraulic ...
Fluid Mechanics and Machines – Dr. R.K. Bansal (Laxmi Publications) 3. Fluid Mechanics – Dr. P.N. Modi (Standard Book House) Reference Books: 1. Mechanics of Fluid – Irving H. Shames (McGraw Hill) 2.
Introduction to Fluid Mechanics – James A. Fay (Prentice Hall India) 3.
R K Bansal, “A Textbook of Fluid Mechanics and Hydraulic Machines”, 9. th. ed. Laxmi Publications, New Delhi, 2004 . 5. Modi, L.P., Seth, S.M., “Hydraulics and Fluid Mechanics”, Standard Book House,
New Delhi,2002 6. Bird R.B., Stewart W.E., Lightfoot E.N. “Transport phenomena” 2ed., ...
SI No Title Author Edition No 0 A course in mechanical measurements and instrumentation Sawhney, A K 12 1 A Text Books of Fluid Mechanics Bansal, R K
TEXT BOOK 1. Kumar, K.L., “Engineering Fluid Mechanics”, Eurasia Publishing House (P) Ltd., New Delhi (7 th edition), 1995. REFERENCES 1. Bansal, R.K., “Fluid Mechanics and Hydraulics Machines”, (5
th edition),
Fluid Mechanics and Mechanical Operations (12BT202) Microbiology (12BT203) ... A Text Book of Fluid Mechanics and Hydraulic Machines, Dr. R. K. Bansal, Latest Edition. 2010 30 Work done and Power
required, ...
A text book of fluid mechanics by R. K. Bansal ( Luxmi publication) 4. A text book of fluid mechanics and Hydraulic mechanics in SI Units by R. K. Rajput(S. Chand and company) MATS UNIVERSITY GULLU,
R.K.Bansal “A Text Book of Fluid Mechanics and Hydraulic Machines” Laxmi Publications, 2006. 3. Onkar Singh “Applied Thermodynamics” New Age International, 2006 4. R.K.Rajput “ A Text Book of
Hydraulic Machines” S. Chand & Co.,2008.
Dr R.K.Bansal Reference Book: 1. Engineering Fluid Mechanics by R.J. Garde and A.G. Mirajgaoker, Nem chand & Bros Publication Roorkee. 2. Fluid Mechanics by Dr. A.K.Jain, Khanna, Publishers- 2-B,
Nath Market, Nai Sark, Delhi-6. 3.
S. No. Subject Book Name Author 1 Power Electronics Power Electronics P.S. Bhimbra Micro ... 12 Fluid Mechanics Fluid Mechanics Modi & Seth, R.K. Bansal 13 Engg. Thermodynamics Engg. Thermodynamics
P.K. Nag 14 I.C. Engine I.C. Engine M.L. Mathur, R.P. Sharma . Title ...
... by R.K.Bansal, Laxmi publications. 2. ... Fluid Mechanics and its Applications, by S.K.Gupta and A.K.Gupta, Tata McGraw Hill, New Delhi. 4. Fluid Mechanics and Hydraulic Machines by R.K.Rajput,
S.Chand & Co. 5. ... Book , Inc. 2. Heat Engineering, I.T. Shvets et al, MIR Pub Moscow.
Engineering Fluid Mechanics by R.J.Garde & A.G ... Swaffield JP; Pitman Fluid Mechanics : Streetes VL & Wylie EB; Mcgraw Hill book company. Fluid Mechanics by White Introduction to Fluid Mechanics
... 8. Fluid Mechanics : Dr. R.K. Bansal; Laxmi Publications 9. Fluid Mechanics : Dr ...
Fluid Mechanics, Dr. R.K. Bansal, Laxmi Publication (P) Ltd. New Delhi 2. ... Applications by K. S. Trivedi. 6. A text book of Engineering Mathematics by N. P. Bali & M. Goyal, Laxmi Publication.
Rashtrasant Tukdoji Maharaj Nagpur University, Nagpur
Fluid Mechanics and Machines – Dr. R.K. Bansal (Laxmi Publications) Name of Reference Books: Fluid Mechanics – Dr. P.N. Modi (Standard Book House) Mechanics of Fluid – Irving H. Shames (McGraw Hill)
Introduction to Fluid Mechanics – James A. Fay (Prentice Hall India)
Fluid Mechanics & Hydraulics Machines-R.K.Bansal-Laxmi Publications.,Delhi 2. Engineering Fluid Mechanics –K.L. Kumar, Eurasia Publication House, Delhi 3. Mechanics of Fluid – B.S. Massey – English
Language Book Society (U.K.) 4. Fluid Mechanics- Yunush A. Cengel, ...
REFERECNE BOOK 1) Fluid Mechanics : Dr. A.K.Jain 2) Hydraulic and Fluid Mechanics : Dr. P.N.Modi , Dr. S.M. Seth. 3) Hydraulic and Fluid Mechanics : R.K.Bansal. 4) ... Theory and applications of
Fluid Mechanics : Dr. K. Subramanya. 6) Fluid Mechanics : Dr.Grade and Mirakgaokar. 7) Fluid ...
41 jain,R.K. Production technology Khanna Publishers,Delhi 93105 42 Bansal,R.K. Text book of fluid mechanics and hadraulic machines Laxmi pub,Banglore 93106 43 Tanenbaum,A.S /Woodhull,A.S. Operating
systems design and implementation 3 Dorling
Engineering Mechanics/ R K Bansal, Laxmi Publications 3. Engineering Mechanics/ K.L Kumar, TMH Publishers . Page 6 of 30 ... Standard book house. 2. Introduction to Fluid Machines by S.K.Som&G.Biswas
(Tata McGraw-Hill publishers Pvt. Ltd.) 3.
Bansal R. K. (2010), “Fluid Mechanics and Hydraulic Machines”, Laxmi Publishers, New Delhi. ... “Hydraulics and Fluid Mechanics” Standard Book House, New Delhi. 2. Jain A.K., (2002), “Fluid Mechanics
”, Khanna Publishers, New Delhi. 3. Streeter V.L and Wiley E.B., (1998) “Fluid ...
and biotechnology K R Aneja New Age Int'l, New Delhi ... 17 Food Chemistry Lillian Hoagland Meyer Reinhold Pub. Corp 2 304 366 18 A Text Book of Fluid Mechanics and Hydraulic Machines (S.I. Units) Dr
R.K.Bansal Laxmi Publications (P) Ltd., New Delhi 2 720 840 19 The Technology of Food ...
pKa and Ka
Author: Anders Nielsen
Background information
The Ka value is used to describe the tendency of compounds or ions to dissociate. The Ka value is also called the dissociation constant, the ionisation constant, and the acid constant.
The pKa value is related to the Ka value in a logarithmic way. pKa values are easier to remember than Ka values, and in many cases they are easier to use for quick
approximations of the concentrations of compounds and ions in equilibria.
Definition of pKa and Ka
The definition of Ka is: Ka = [H+]·[B] / [HB], where B is the conjugate base of the acid HB.
The pKa value is defined from Ka, and can be calculated from the Ka value by the equation pKa = -log10(Ka).
Example of how pKa and Ka values are used
An example with ammonium and ammonia, showing how Ka and pKa values are used, is given below. The Ka value of NH4+ is 5.75·10^-10 under ideal conditions at 25 degrees Celsius.
This Ka value is used to determine how much of the NH4+ is dissociated into its conjugate base NH3 by the reaction NH4+ => NH3 + H+.
Do also notice that the reaction NH4+ <==> NH3 + H+ can go either way, depending on conditions.
By introducing the parameter TAN (Total Ammonium Nitrogen), defined as [NH4+] + [NH3], it can be calculated how much of the TAN is in the form NH4+ and how much is in the form NH3 at any given pH.
The calculations allowing this are a bit complicated, but once you have been through them, they are really simple:
TAN = [NH4+] + [NH3] = [NH3]·(1 + [NH4+]/[NH3])
Notice that the normal rules, such as multiplication before addition, apply.
By removing the middle part of the above equation, only:
TAN = [NH3]·(1 + [NH4+]/[NH3])
remains. This equation can be rearranged to:
[NH3] = TAN / (1 + [NH4+]/[NH3])
By multiplying both the numerator and the denominator of the fraction [NH4+]/[NH3] by [H+] one gets:
[NH3] = TAN / (1 + [H+]·[NH4+] / ([NH3]·[H+]))   (*)
The definition of Ka said that Ka = [H+]·[B] / [HB]. Written in the context of the above example, the Ka of ammonium, NH4+, is [H+]·[NH3] / [NH4+]. This is not in the right form for a smooth
substitution into (*), so we instead use 1/Ka = [NH4+] / ([H+]·[NH3]),
which can be substituted into (*):
[NH3] = TAN / (1 + [H+]·([NH4+] / ([NH3]·[H+]))) <==> [NH3] = TAN / (1 + [H+]/Ka)   (#)
From this equation [NH3] can be calculated when TAN, [H+] and Ka are known.
Let's say that TAN is 0.1 M, pH is 8.24 and the Ka value is 5.75·10^-10, equivalent to a pKa value of 9.24. If we put these values into (#) we get:
[NH3] = 0.1 / (1 + 5.75·10^-9 / 5.75·10^-10) = 0.1/11 ≈ 0.009 M
The trick with using pKa values is that, in equilibria like the ammonium/ammonia equilibrium, you can always tell that if the pH value is 1 unit lower than the pKa value, the concentration of
ammonia [NH3] is 1/11 of the total TAN concentration, because of the base-10 log relationship between Ka and pKa. Had the pH value been 7.24, or 2 units less than the pKa value, 1/101 of the TAN
would have been [NH3].
With a little experience one can give rough estimates of the concentrations of ions and compounds in equilibrium without a calculator, just by looking at the pKa value and the type of equilibrium.
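Equation (#) is easy to put into code. The short sketch below is our own illustration; it reproduces the numbers from the example above.

```python
def nh3_concentration(tan, pH, pKa=9.24):
    """[NH3] from equation (#): TAN / (1 + [H+]/Ka)."""
    h_plus = 10.0 ** -pH    # [H+] from pH
    ka = 10.0 ** -pKa       # Ka from pKa
    return tan / (1.0 + h_plus / ka)

nh3 = nh3_concentration(0.1, 8.24)   # pH one unit below pKa -> TAN/11, about 0.009 M
```

Because [H+]/Ka = 10^(pKa - pH), a pH one unit below the pKa gives a ratio of exactly 10, and two units below gives 100, matching the 1/11 and 1/101 fractions discussed above.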
Posts from September 2012 on Normal Deviate
The Remarkable k-means++
September 30, 2012 – 8:19 pm
1. The Problem
One of the most popular algorithms for clustering is k-means. We start with ${n}$ data vectors ${Y_1,\ldots, Y_n\in \mathbb{R}^d}$. We choose ${k}$ vectors — cluster centers — ${c_1,\ldots, c_k \in \mathbb{R}^d}$ to minimize the error
$\displaystyle R(c_1,\ldots,c_k) = \frac{1}{n}\sum_{i=1}^n \min_{1\leq j \leq k} ||Y_i - c_j||^2.$
Unfortunately, finding ${c_1,\ldots, c_k}$ to minimize ${R(c_1,\ldots,c_k)}$ is NP-hard. The usual iterative method, Lloyd's algorithm, is easy to implement but it is unlikely to come close to minimizing the objective function. So finding
$\displaystyle \min_{c_1,\ldots, c_k}R(c_1,\ldots,c_k)$
isn’t feasible.
To deal with this, many people choose random starting values, run the ${k}$-means clustering algorithm then rinse, lather and repeat. In general, this may work poorly and there is no theoretical
guarantee of getting close to the minimum. Finding a practical method for approximately minimizing ${R}$ is thus an important practical problem.
2. The Solution
David Arthur and Sergei Vassilvitskii came up with a wonderful solution in 2007 known as k-means++.
The algorithm is simple and comes with a precise theoretical guarantee.
The first step is to choose a data point at random. Call this point ${s_1}$. Next, compute the squared distances
$\displaystyle D_i^2 = ||Y_i - s_1||^2.$
Now choose a second point ${s_2}$ from the data. The probability of choosing ${Y_i}$ is ${D_i^2/\sum_j D_j^2}$. Now recompute the distance as
$\displaystyle D_i^2 = \min\Bigl\{ ||Y_i - s_1||^2, ||Y_i - s_2||^2 \Bigr\}.$
Now choose a third point ${s_3}$ from the data where the probability of choosing ${Y_i}$ is ${D_i^2/\sum_j D_j^2}$. We continue until we have ${k}$ points ${s_1,\ldots,s_k}$. Finally, we run ${k}$-means clustering using ${s_1,\ldots,s_k}$ as starting values. Call the resulting centers ${\hat c_1,\ldots, \hat c_k}$.
Arthur and Vassilvitskii prove that
$\displaystyle \mathbb{E}[R(\hat c_1,\ldots,\hat c_k)] \leq 8 (\log k +2) \min_{c_1,\ldots, c_k}R(c_1,\ldots,c_k).$
The expected value is over the randomness in the algorithm.
There are various improvements to the algorithm, both in terms of computation and in terms of getting a sharper performance bound.
This is quite remarkable. One simple fix, and an intractable problem has become tractable. And the method comes armed with a theorem.
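A minimal sketch of the ${D^2}$-weighted seeding step, in plain Python (our own illustration, not the authors' code):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeanspp_seed(points, k, seed=None):
    """k-means++ seeding of Arthur and Vassilvitskii (2007)."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]                # first center: uniform at random
    d2 = [dist2(p, centers[0]) for p in points]
    for _ in range(k - 1):
        # choose Y_i with probability D_i^2 / sum_j D_j^2
        nxt = rng.choices(points, weights=d2)[0]
        centers.append(nxt)
        d2 = [min(d, dist2(p, nxt)) for p, d in zip(points, d2)]
    return centers

points = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.0, 10.1), (-10.0, 10.0)]
centers = kmeanspp_seed(points, 3, seed=1)
```

Running ordinary k-means from these centers is what carries the ${8(\log k + 2)}$ approximation guarantee in expectation.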
3. Questions
1. Is there an R implementation? It is easy enough to code the algorithm but it really should be part of the basic k-means function in R.
2. Is there a version for mixture models? If not, it seems like a paper waiting to be written.
3. Are there other intractable statistical problems that can be solved using simple randomized algorithms with provable guarantees? (MCMC doesn’t count because there is no finite sample guarantee.)
4. Reference
Arthur, D. and Vassilvitskii, S. (2007). k-means++: The advantages of careful seeding. Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms. 1027–1035.
By normaldeviate | Posted in Uncategorized | Comments (12)
Machine Learning In New York
September 27, 2012 – 11:53 am
I wanted to let you know that, on October 19, the New York Academy of Sciences is hosting its
7th Annual Machine Learning Symposium.
For details: see here:
Screening: Everything Old is New Again
September 22, 2012 – 4:23 pm
Screening is one of the oldest methods for variable selection. It refers to doing a bunch of marginal (single covariate) regressions instead of one multiple regression. When I was in school, we were
taught that it was a bad thing to do.
Now, screening is back in fashion. It’s a whole industry. And before I throw stones, let me admit my own guilt: see Wasserman and Roeder (2009).
1. What Is it?
Suppose that the data are ${(X_1,Y_1),\ldots, (X_n,Y_n)}$ with
$\displaystyle Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_d X_{id} + \epsilon_i.$
To simplify matters, assume that ${\beta_0=0}$, ${\mathbb{E}(X_{ij})=0}$ and ${{\rm Var}(X_{ij})=1}$. Let us assume that we are in the high dimensional case where ${n < d}$. To perform variable
selection, we might use something like the lasso.
But if we use screening, we instead do the following. We regress ${Y}$ on ${X_1}$, then we regress ${Y}$ on ${X_2}$, then we regress ${Y}$ on ${X_3}$. In other words, we do ${d}$ one-dimensional
regressions. Denote the regression coefficients by ${\hat\alpha_1,\hat\alpha_2,\ldots}$. We keep the covariates associated with the largest values of ${|\hat\alpha_j|}$. We then might do a second
step such as running the lasso on the covariates that we kept.
What are we actually estimating when we regress ${Y}$ on the ${j^{\rm th}}$ covariate? It is easy to see that
$\displaystyle \mathbb{E}(\hat\alpha_j) = \alpha_j$
where

$\displaystyle \alpha_j = \beta_j + \sum_{s \neq j} \beta_s \rho_{sj}$
and ${\rho_{sj}}$ is the correlation between ${X_j}$ and ${X_s}$.
2. Arguments in Favor of Screening
If you miss an important variable during the screening phase you are in trouble. This will happen if ${|\beta_j|}$ is big but ${|\alpha_j|}$ is small. Can this happen?
Sure. You can certainly find values of the ${\beta_j}$‘s and the ${\rho_{js}'s}$ to make ${\beta_j}$ big and make ${\alpha_j}$ small. In fact, you can make ${|\beta_j|}$ huge while making ${\alpha_j=
0}$. This is sometimes called unfaithfulness in the literature on graphical models.
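A tiny simulation (my own toy construction with ${d=2}$) makes the cancellation concrete: choose ${\beta_2 = -\beta_1/\rho}$ so that ${\alpha_1 = \beta_1 + \beta_2\rho = 0}$ exactly, even though ${\beta_1}$ is huge:

```python
import numpy as np

# Exact unfaithfulness by construction (toy, d = 2): pick
# beta_2 = -beta_1 / rho so that alpha_1 = beta_1 + beta_2 * rho = 0.
# Screening on X_1 alone would miss an enormous beta_1.
rng = np.random.default_rng(1)
n, rho, beta1 = 200_000, 0.5, 10.0
beta2 = -beta1 / rho
Z = rng.standard_normal((n, 2))
X1 = Z[:, 0]
X2 = rho * X1 + np.sqrt(1 - rho**2) * Z[:, 1]   # corr(X1, X2) = rho
Y = beta1 * X1 + beta2 * X2 + rng.standard_normal(n)

alpha1 = (X1 @ Y) / (X1 @ X1)   # marginal slope: near 0
betahat = np.linalg.lstsq(np.column_stack([X1, X2]), Y, rcond=None)[0]
print(alpha1, betahat)          # alpha1 near 0; betahat near (10, -20)
```

The multiple regression recovers ${(\beta_1,\beta_2)}$ while the marginal regression sees nothing.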
However, the set of ${\beta}$ vectors that are unfaithful has Lebesgue measure 0. Thus, in some sense, unfaithfulness is “unlikely” and so screening is safe.
3. Arguments Against Screening
Not so fast. In order to screw up, it is not necessary to have exact unfaithfulness. All we need is approximate unfaithfulness. And the set of approximately unfaithful ${\beta}$‘s is a non-trivial
subset of ${\mathbb{R}^d}$.
But it’s worse than that. Cautious statisticians want procedures that have properties that hold uniformly over the parameter space. Screening cannot be successful in any uniform sense because of the
unfaithful (and nearly unfaithful) distributions.
And if we admit that the linear model is surely wrong, then things get even worse.
4. Conclusion
Screening is appealing because it is fast, easy and scalable. But it makes a strong (and unverifiable) assumption that you are not unlucky and have not encountered a case where ${\alpha_j}$ is small
but ${\beta_j}$ is big.
Sometimes I find the arguments in favor of screening to be appealing but when I’m in a more skeptical (sane?) frame of mind, I find screening to be quite unreasonable.
What do you think?
Wasserman, L. and Roeder, K. (2009). High dimensional variable selection. Annals of statistics, 37, 2178.
By normaldeviate | Posted in Uncategorized | Comments (6)
High Dimensional Undirected Graphical Models
September 17, 2012 – 7:51 pm
High Dimensional Undirected Graphical Models
Larry Wasserman
Graphical models have now become a common tool in applied statistics. Here is a graph representing stock data:
Here is one for proteins: (Maslov and Sneppen, 2002).
And here is one for the voting records of senators: (Banerjee, El Ghaoui, and d’Aspremont ,2008).
Like all statistical methods, estimating graphs is challenging in high dimensional problems.
1. Undirected Graphical Models
Let ${X=(X_1,\ldots,X_d)}$ be a random vector with distribution ${P}$. A graph ${G}$ for ${P}$ has ${d}$ nodes, or vertices, one for each variable. Some pairs of vertices are connected by edges. We
can represent the edges as a set ${E}$ of unordered pairs: ${(j,k)\in E}$ if and only if there is an edge between ${X_j}$ and ${X_k}$.
We omit an edge between ${X_i}$ and ${X_j}$ to mean that ${X_i}$ and ${X_j}$ are independent, given the other variables which we write as
$\displaystyle X_i \amalg X_j \,|\, {\rm rest}$
where “rest” denotes the rest of the variables. For example, in this graph
$\displaystyle X_1 \ \ \rule{2in}{1mm} \ \ X_2 \ \ \rule{2in}{1mm}\ \ X_3$
we see that ${X_1\amalg X_3 | X_2}$ since there is no edge between ${X_1}$ and ${X_3}$.
Now suppose we observe ${n}$ random vectors
$\displaystyle X^{(1)},\ldots, X^{(n)} \sim P.$
The goal is to estimate ${G}$ from the data. As an example, imagine that ${X^{(i)}}$ is a vector of gene expression levels for subject ${i}$. The graph gives us an idea of how the genes are related.
2. The Gaussian Case
Things are easiest if we assume that ${P}$ is a multivariate Gaussian. Suppose ${X\sim N(\mu,\Sigma)}$. In this case, there is no edge between ${X_j}$ and ${X_k}$ iff ${\Omega_{jk}=0}$ where ${\Omega
= \Sigma^{-1}}$. If the dimension ${d}$ of ${X}$ is smaller than the sample size ${n}$, then estimating the graph is straightforward. We can use the sample covariance matrix
$\displaystyle S = \frac{1}{n}\sum_{i=1}^n (X^{(i)}-\overline{X})(X^{(i)}-\overline{X})^T$
to estimate ${\Sigma}$ and we use ${S^{-1}}$ to estimate ${\Omega}$. It is then easy to test ${H_0: \Omega_{jk}=0}$. We put an edge between ${j}$ and ${k}$ if the test rejects. The R package SIN will
do this for you.
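In the ${d<n}$ Gaussian case, the whole pipeline fits in a few lines of NumPy. This is a sketch with a made-up chain graph, and a crude threshold stands in for a formal test of ${H_0: \Omega_{jk}=0}$:

```python
import numpy as np

# Low-dimensional (d < n) sketch: edges correspond to nonzero entries
# of the precision matrix Omega = inverse(Sigma). Chain graph 1 - 2 - 3:
Omega = np.array([[2.0, 0.8, 0.0],
                  [0.8, 2.0, 0.8],
                  [0.0, 0.8, 2.0]])
Sigma = np.linalg.inv(Omega)
rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(3), Sigma, size=50_000)
S = np.cov(X, rowvar=False)
Omega_hat = np.linalg.inv(S)
# Crude edge rule (a stand-in for a formal test of H0: Omega_jk = 0):
edges = {(j, k) for j in range(3) for k in range(j + 1, 3)
         if abs(Omega_hat[j, k]) > 0.3}
print(edges)   # expect the chain edges (0,1) and (1,2), no (0,2) edge
```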
When ${d>n}$, the simple approach above won’t work. Perhaps the easiest approach is the method due to Meinshausen and Bühlmann (2006), which John Lafferty has aptly named parallel regression. The
idea is this: you regress ${X_1}$ on all the other variables, then you regress ${X_2}$ on all the other variables, etc. For each regression, you use the lasso which yields sparse estimators. When you
regress ${X_j}$ on the others, there will be a regression coefficient for ${X_k}$. Call this ${\hat\beta_{jk}}$. Similarly, when you regress ${X_k}$ on the others, there will be a regression
coefficient for ${X_j}$. Call this ${\hat\beta_{kj}}$. If ${\hat\beta_{jk}\neq 0}$ and ${\hat\beta_{kj}\neq 0}$ then you put an edge between ${X_j}$ and ${X_k}$.
An alternative, the glasso (graphical lasso), is to maximize the penalized Gaussian log-likelihood
$\displaystyle \ell(\mu,\Omega) - \lambda \sum_{j\neq k} |\Omega_{jk}|.$
The resulting estimator ${\hat\Omega}$ is sparse: many of its elements are 0. The non-zeroes denote edges. A fast implementation in R due to Han Liu can be found here.
3. Relaxing Gaussianity
The biggest drawback of the glasso is the assumption of Gaussianity. With my colleagues John Lafferty and Han Liu and others, I have done some work on more nonparametric approaches.
Let me briefly describe the approach in Liu, Xu, Gu, Gupta, Lafferty and Wasserman (2011). The idea is to restrict the graph to be a forest, which is a graph with no cycles.
When ${G}$ is a forest, the density ${p}$ can be written as
$\displaystyle p(y) = \prod_{(j,k)\in E} \frac{p(y_j,y_k)}{p(y_j)p(y_k)}\prod_{s=1}^d p(y_s)$
where ${E}$ is the set of edges. We can estimate the density by inserting estimates of the univariate and bivariate marginals:
$\displaystyle \hat{p}(y) = \prod_{(j,k)\in E} \frac{\hat p(y_j,y_k)}{\hat p(y_j)\hat p(y_k)}\prod_{s=1}^d \hat p(y_s).$
But to find the graph we need to estimate the edge set ${E}$.
Any forest ${F}$ defines a density ${p_F(y)}$ by the above equation. If the true density ${p}$ were known, we could choose ${F}$ to minimize the Kullback-Leibler distance
$\displaystyle D(p,p_F) = \int p(y) \log\left(\frac{p(y)}{p_F(y)} \right) dy.$
Minimizing this distance is equivalent to maximizing the sum of the mutual informations over the edges of the forest, and the maximization can be done by Kruskal’s algorithm (Kruskal 1956), also known, in this context, as the Chow-Liu algorithm (Chow and Liu 1968). It works like this.
For each pair ${(Y_j,Y_k)}$ let
$\displaystyle I(Y_j,Y_k) = \int p(y_j,y_k) \log \frac{p(y_j,y_k)}{p(y_j)p(y_k)} dy_j\, dy_k$
be the mutual information between ${Y_j}$ and ${Y_k}$. We start with an empty tree and add edges greedily according to the value of ${I(Y_j,Y_k)}$. First we connect the pair of variables with the
largest ${I(Y_j,Y_k)}$ and then the second largest and so on. However, we do not add an edge if it forms a cycle.
Of course, we don’t know ${p}$ so, instead, we use the data to estimate the mutual information ${I(Y_j,Y_k)}$. If we simply run the Chow-Liu algorithm, we get a tree, that is, a fully connected
forest. But we also use a hold-out sample to decide on when to stop adding edges. And that’s it.
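The greedy step can be sketched with a union-find cycle check. This is my own minimal illustration: it uses the Gaussian plug-in ${I = -\frac{1}{2}\log(1-\rho^2)}$ as the mutual information estimate and grows a full spanning tree, without the hold-out stopping rule described above:

```python
import numpy as np

def chow_liu(I):
    """Greedy Kruskal/Chow-Liu: add edges in decreasing order of
    mutual information I[j, k], skipping any edge that makes a cycle."""
    d = I.shape[0]
    parent = list(range(d))
    def find(a):                      # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    pairs = sorted(((I[j, k], j, k) for j in range(d)
                    for k in range(j + 1, d)), reverse=True)
    E = []
    for _, j, k in pairs:
        rj, rk = find(j), find(k)
        if rj != rk:                  # no cycle: safe to add
            parent[rj] = rk
            E.append((j, k))
    return E

# Toy Markov chain Y1 -> Y2 -> Y3; Gaussian plug-in for I.
rng = np.random.default_rng(3)
n = 20_000
y1 = rng.standard_normal(n)
y2 = 0.9 * y1 + np.sqrt(1 - 0.81) * rng.standard_normal(n)
y3 = 0.9 * y2 + np.sqrt(1 - 0.81) * rng.standard_normal(n)
R = np.corrcoef(np.column_stack([y1, y2, y3]), rowvar=False)
I = -0.5 * np.log(1 - np.clip(R**2, 0, 1 - 1e-12))
np.fill_diagonal(I, 0.0)
E = chow_liu(I)
print(sorted(E))   # expect the chain edges, not the (0, 2) shortcut
```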
There are other ways to relax the Gaussian assumption. For example, here is an approach that uses rank statistics and copulas.
4. The Future?
Not too long ago, estimating a high dimensional graph was unthinkable. Now such graphs are estimated routinely. The biggest thing missing is a good measure of uncertainty. One can do some obvious things, such as
bootstrapping, but it is not clear that the output is meaningful.
5. References
Banerjee, O. and El Ghaoui, L. and d’Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning
Research, 9, 485-516.
Chow, C. and Liu, C. (1968). Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14, 462-467.
Kruskal, J.B. (1956). On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical society, 7, 48-50.
Liu, H., Xu, M., Gu, H., Gupta, A., Lafferty, J. and Wasserman, L. (2011). Forest Density Estimation. Journal of Machine Learning Research, 12, 907-951.
Maslov, S. and Sneppen, K. (2002). Specificity and stability in topology of protein networks. Science, 296, 910.
Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, 34, 1436-1462.
By normaldeviate | Posted in Uncategorized | Comments (5)
Hunting for Manifolds
September 8, 2012 – 11:18 am
Hunting For Manifolds
Larry Wasserman
In this post I’ll describe one aspect of the problem of estimating a manifold, or manifold learning, as it is called.
Suppose we have data ${Y_1,\ldots, Y_n \in \mathbb{R}^D}$ and suppose that the data lie on, or close to, a manifold ${M}$ of dimension ${d< D}$. There are two different goals we might have in mind.
The first goal is to use the fact that the data lie near the manifold as a way of doing dimension reduction. The popular methods for solving this problem include isomap, local linear embedding,
diffusion maps, and Laplacian eigenmaps among others.
The second goal is to actually estimate the manifold ${M}$.
I have had the good fortune to work with two groups of people on this problem. With the first group (Chris Genovese, Marco Perone-Pacifico, Isa Verdinelli and me) we have focused mostly on certain
geometric aspects of the problem. With the second group (Sivaraman Balakrishnan, Don Sheehy, Aarti Singh, Alessandro Rinaldo and me) we have focused on topology.
It is this geometric problem (my work with the first group) that I will discuss here. In particular, I will focus on these two papers: here and here.
1. Statistical Model
To make some progress, we introduce a statistical model for the data. Here are two models.
Model I (Clutter Noise): With probability ${\pi}$ we draw ${Y_i}$ from a distribution ${G}$ supported on ${M}$. With probability ${1-\pi}$ we draw ${Y_i}$ from a uniform distribution on some compact
set ${K}$. To be concrete, let’s take ${K = [0,1]^D}$. The distribution ${P}$ of ${Y_i}$ is
$\displaystyle P = (1-\pi) U + \pi G$
where ${U}$ is uniform on ${[0,1]^D}$.
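Model I is straightforward to simulate. In this sketch ${M}$ is a circle in ${[0,1]^2}$ (so ${d=1}$, ${D=2}$); the center, radius, and ${\pi = 0.7}$ are arbitrary choices of mine:

```python
import numpy as np

# Model I (clutter noise): with probability pi draw a point from G on M,
# otherwise draw uniform clutter on K = [0, 1]^2. Here M is a circle.
rng = np.random.default_rng(5)
n, pi_manifold = 1000, 0.7
on_M = rng.uniform(size=n) < pi_manifold
t = rng.uniform(0.0, 2.0 * np.pi, size=n)
circle = 0.5 + 0.25 * np.column_stack([np.cos(t), np.sin(t)])
clutter = rng.uniform(size=(n, 2))
Y = np.where(on_M[:, None], circle, clutter)
print(on_M.mean())   # roughly 0.7 of the points lie on M
```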
Model II (Additive Noise): We draw ${X_i}$ from a distribution ${G}$ supported on ${M}$. Then we let ${Y_i = X_i + \epsilon_i}$ where ${\epsilon_i}$ is a ${D}$-dimensional Gaussian distribution. In
this case, ${P}$ has density ${p}$ given by
$\displaystyle p(y) = \int_M \phi(y-z) dG(z)$
where ${\phi}$ is a Gaussian. We can also write this as
$\displaystyle P = G \star \Phi$
where ${\star}$ denotes convolution.
There are some interesting complications. First, note that ${G}$ is a singular distribution. That is, there are sets with positive probability but 0 Lebesgue measure. For example, suppose that ${M}$
is a circle in ${\mathbb{R}^2}$ and that ${G}$ is uniform on the circle. Then the circle has Lebesgue measure 0 but ${G(M)=1}$. Model II has the complication that it is a convolution and convolutions
are not easy to deal with.
Our goal is to construct an estimate ${\hat M}$.
2. Hausdorff Distance
We will compare ${\hat M}$ to ${M}$ using the Hausdorff distance. If ${A}$ and ${B}$ are sets, the Hausdorff distance between ${A}$ and ${B}$ is
$\displaystyle H(A,B) = \inf \Bigl\{ \epsilon:\ A \subset B\oplus \epsilon\ {\rm and}\ B \subset A\oplus \epsilon\Bigr\}$
where
$\displaystyle A\oplus \epsilon = \bigcup_{x\in A} B_D(x,\epsilon)$
and ${B_D(x,\epsilon)}$ is a ball of radius ${\epsilon}$ centered at ${x}$.
Let’s decode what this means. Imagine we start to grow the set ${A}$ by placing a ball of size ${\epsilon}$ around each point in ${A}$. We keep growing ${A}$ until it engulfs ${B}$. Let ${\epsilon_1}$ be the size of ${\epsilon}$ needed to do this. Now grow ${B}$ until it engulfs ${A}$. Let ${\epsilon_2}$ be the size of ${\epsilon}$ needed to do this. Then ${H(A,B) = \max\{\epsilon_1,\epsilon_2\}}$. Here is an example:
There is another way to think of the Hausdorff distance. If ${A}$ is a set and ${x}$ is a point, let ${d_A(x)}$ be the distance from ${x}$ to ${A}$:
$\displaystyle d_A(x) = \inf_{y\in A}||y-x||.$
We call ${d_A}$ the distance function. Then
$\displaystyle H(A,B) = \sup_x | d_A(x) - d_B(x)|.$
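For finite point sets both characterizations give the same answer, and the distance is a few lines of NumPy:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point clouds A (m x D), B (k x D)."""
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    eps1 = d.min(axis=0).max()   # grow A until it engulfs B
    eps2 = d.min(axis=1).max()   # grow B until it engulfs A
    return max(eps1, eps2)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0]])
print(hausdorff(A, B))   # 3.0: B's extra point is distance 3 from A
```

Note the asymmetry of the two one-sided terms: growing ${A}$ must cover B's far point, while growing ${B}$ covers ${A}$ immediately.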
Using ${H}$ as our loss function, our goal is to find the minimax risk:
$\displaystyle R_n = \inf_{\hat M}\sup_{P\in {\cal P}} \mathbb{E}_P [ H(\hat M,M)]$
where the infimum is over all estimators ${\hat M}$ and the supremum is over a set of distributions ${{\cal P}}$. Actually, we will consider a large set of manifolds ${{\cal M}}$ and we will
associate a distribution ${P}$ with each manifold ${M}$. This will define the set ${{\cal P}}$.
3. Reach
We will consider all ${d}$-dimensional manifolds ${M}$ with positive reach. The reach is a property of a manifold that appears in many places in manifold estimation theory. I believe it first
appeared in Federer (1959).
The reach ${\kappa(M)}$ is the largest number ${\epsilon}$ such that each point in ${M\oplus \epsilon}$ has a unique projection onto ${M}$. A set of non-zero reach has two nice properties: first, it is smooth and, second, it is self-avoiding.
Consider a circle ${M}$ in the plane with radius ${R}$ . Every point on the plane has a unique projection on ${M}$ with one exception: the center ${c}$ of the circle is distance ${R}$ from each point
on ${M}$. There is no unique closest point on ${M}$ to ${c}$. So the reach is ${R}$.
A plane has infinite reach. A line with a corner has 0 reach. Here is an example of a curve ${M}$ in two dimensions. Suppose you put a line, perpendicular to ${M}$, at each point on ${M}$. If the length of the lines is less than ${{\rm reach}(M)}$, then the lines won’t cross. If the length of the lines is larger than ${{\rm reach}(M)}$, then the lines will cross.
Requiring ${M}$ to have positive reach is a strong condition. There are ways to weaken the condition based on generalizations of reach, but we won’t go into those here.
The set of manifolds we are interested in is the set of ${d}$-manifolds contained in ${[0,1]^D}$ with reach at least ${\kappa >0}$. Here, ${\kappa}$ is some arbitrary, fixed positive number.
4. The Results
Recall that the minimax risk is
$\displaystyle R_n = \inf_{\hat M}\sup_{P\in {\cal P}} \mathbb{E}_P [ H(\hat M,M)].$
In the case of clutter noise it turns out that
$\displaystyle R_n \approx \left(\frac{1}{n}\right)^{\frac{2}{2+d}}.$
Note that the rate depends on ${d}$ but not on the ambient dimension ${D}$. But in the case of additive Gaussian noise,
$\displaystyle R_n \approx \left(\frac{1}{\log n}\right).$
The latter result says that ${R_n}$ approaches 0 very slowly. For all practical purposes, it is not possible to estimate ${M}$. This might be surprising. But, in fact, the result is not unexpected.
We can explain this by analogy with another problem. Suppose you want to estimate a density ${p}$ from ${n}$ observations ${Y_1,\ldots, Y_n}$. If ${p}$ satisfies standard smoothness assumptions, then the minimax risk has the form
$\displaystyle \left(\frac{1}{n}\right)^{\frac{4}{4+d}}$
which is a nice, healthy, polynomial rate. But if we add some Gaussian noise to each observation, the minimax rate of convergence for estimating ${p}$ is logarithmic (Fan 1991). The same thing
happens if we add Gaussian noise to the covariates in a regression problem. A little bit of Gaussian noise makes these problems essentially impossible.
Is there a way out of this? Yes. Instead of estimating ${M}$ we estimate a set ${M'}$ that is close to ${M}$. In other words, we have to live with some bias. We can estimate ${M'}$, which we call a
surrogate for ${M}$, with a polynomial rate.
We are finishing a paper on this idea and I’ll write a post about it when we are done.
5. Conclusion
Estimating a manifold is a difficult statistical problem. With additive Gaussian noise, it is nearly impossible. And notice that I assumed that the dimension ${d}$ of the manifold was known. Things
are even harder when ${d}$ is not known.
However, none of these problems are insurmountable as long as we change the goal from “estimating ${M}$” to “estimating an approximation of ${M}$,” which, in practice, is usually quite reasonable.
6. References
Fan, J. (1991). On the optimal rates of convergence for nonparametric deconvolution problems. Ann. Statist. 19 1257-1272.
Federer, H. (1959). Curvature measures. Trans. Amer. Math. Soc. 93 418-491.
Genovese, C., Perone-Pacifico, M., Verdinelli, I. and Wasserman, L. (2012). Minimax Manifold Estimation. Journal of Machine Learning Research, 13, 1263–1291.
Genovese, C., Perone-Pacifico, M., Verdinelli, I. and Wasserman, L. (2012). Manifold estimation and singular deconvolution under Hausdorff loss. The Annals of Statistics, 40, 941-963.
By normaldeviate | Posted in Uncategorized | Comments (13)
Robins and Wasserman Respond to a Nobel Prize Winner Continued: A Counterexample to Bayesian Inference?
September 2, 2012 – 5:28 pm
Robins and Wasserman Respond to a Nobel Prize Winner Continued: A Counterexample to Bayesian Inference?
This is a response to Chris Sims’ comments on our previous blog post. Because of the length of our response, we are making this a new post rather than putting it in the comments of the last post.
Recall that we observe ${n}$ iid observations ${O=\left( X,R,RY\right)}$, where ${Y}$ and ${R}$ are Bernoulli and independent given ${X}$.
Define ${\theta \left( X\right) \equiv E \left[ Y|X\right]}$ and ${\pi \left( X\right) \equiv E\left[ R|X\right]}$. We assume that ${\pi \left( \cdot \right) }$ is a known function. Also the marginal
density ${p\left( x\right)}$ of ${X=\left( X_{1},...,X_{d}\right)}$ (with ${d=100,000}$) is known and uniform on the unit cube in ${R^{d}}$. Our goal is estimation of
$\displaystyle \psi \equiv E\left[ Y\right] =E\left\{ E\left[ Y|X\right] \right\} = E\left\{E\left[ Y|X,R=1\right] \right\} =\int_{[0,1]^d} \theta \left( x\right) dx.$
The likelihood
$\displaystyle \prod_{i=1}^{n}p(X_{i})p(R_{i}|X_{i})p(Y_{i}|X_{i})^{R_{i}}=\left\{ \prod_{i}\pi (X_{i})^{R_{i}}(1-\pi (X_{i}))^{1-R_{i}}\right\} \left\{ \prod_{i}\,\theta (X_{i})^{Y_{i}R_{i}}(1-\theta (X_{i}))^{(1-Y_{i})R_{i}}\right\}$
factors into two parts – the first depending on ${\pi \left( \cdot \right)}$ and the second on ${\theta \left( \cdot \right)}$.
1. Selection Bias
This is a point of agreement. There is selection bias if and only if
$\displaystyle Cov\left\{ \theta \left(X\right) ,\pi \left( X\right) \right\} \neq 0.$
Note that
$\displaystyle E\left[ Y\right] =E\left[ Y|R=1\right] - \frac{Cov\left\{ \theta \left(X\right) ,\pi \left( X\right) \right\} }{E[R]}.$
Hence, if ${Cov\left\{ \theta \left(X\right) ,\pi \left( X\right) \right\}=0}$ then
the sample average of ${Y}$ in the subset whose ${Y}$ is observed (R=1) is unbiased for
$\displaystyle \psi \equiv E\left[ Y\right] =\int_{[0,1]^d}\ \theta \left( x\right) dx.$
In this case, inference is easy for the Bayesian and the frequentist and there is no issue. So we all agree that the interesting case is where there is selection bias,
that is, where ${Cov\left\{ \theta \left(X\right) ,\pi \left( X\right) \right\} \neq 0}$.
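The displayed identity is easy to check by simulation; in this sketch ${\theta(x)=x}$ and ${\pi(x)=0.9x+0.1}$ are arbitrary toy choices (with ${X}$ uniform on ${(0,1)}$, so ${Cov\{\theta(X),\pi(X)\} = 0.9/12 = 0.075 > 0}$ and there is selection bias):

```python
import numpy as np

# Monte Carlo check of E[Y] = E[Y | R=1] - Cov(theta(X), pi(X)) / E[R],
# with toy choices theta(x) = x and pi(x) = 0.9 x + 0.1, X ~ Unif(0, 1).
# Then E[Y] = 1/2 while E[Y | R=1] = 0.35/0.55 (selection bias).
rng = np.random.default_rng(4)
n = 1_000_000
X = rng.uniform(size=n)
theta, pi = X, 0.9 * X + 0.1
Y = (rng.uniform(size=n) < theta).astype(float)   # Y | X ~ Bern(theta(X))
R = (rng.uniform(size=n) < pi).astype(float)      # R | X ~ Bern(pi(X))
lhs = Y.mean()
rhs = Y[R == 1].mean() - np.cov(theta, pi)[0, 1] / R.mean()
print(lhs, rhs)   # both close to 0.5
```

The raw mean of the observed ${Y}$'s, about 0.64 here, is biased upward exactly by ${Cov/E[R]}$.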
2. Posterior Dependence on ${\pi}$
If the prior ${W}$ on the functions ${\pi(\cdot)}$ and ${\theta(\cdot)}$ is such that ${W(\pi,\theta)= W(\pi)W(\theta)}$ then the posterior does not depend on ${\pi}$ and the posterior for ${\psi}$
will not concentrate around the true value of ${\psi}$. Again, we believe we all agree on this point.
We note that no one, Bayesian or frequentist, has ever proposed using an estimator that does not depend on ${\pi \left( \cdot \right) }$ in the selection bias case, i.e., when ${Cov\left\{ \theta \left( X\right) ,\pi\left( X\right) \right\} }$ is non-zero. (See addendum for more on this point.)
3. Prior Independence Versus Selection Bias
Reading Chris’ comments, the reader might get the impression that prior independence rules out selection bias, that is,
$\displaystyle W(\pi,\theta) =W(\pi)W(\theta)\ \ \ \ {\rm implies \ that\ }\ \ \ Cov\left\{ \theta \left(X\right) ,\pi \left( X\right) \right\}=0.$
Therefore, one might conclude that if we want to discuss the interesting case where there is selection bias, then we cannot have ${W(\pi,\theta) =W(\pi)W(\theta)}$.
But this is incorrect. ${W(\pi,\theta) =W(\pi)W(\theta)}$ does not imply that ${Cov\left\{ \theta \left(X\right) ,\pi \left( X\right) \right\}=0}$. To see this, consider the following example.
Suppose that ${X}$ is one dimensional and a Bayesian’s prior ${W}$ for ${\left( \theta \left( \cdot \right) ,\pi \left(\cdot \right) \right)}$ depends only on the two parameters ${\left( \alpha _{\theta },\alpha _{\pi }\right) }$ as follows:
$\displaystyle \theta \left( x\right) =\alpha _{\theta }x,\ \ \ \pi \left( x\right) =\alpha _{\pi}x+1/10$
with ${\alpha _{\theta }}$ and ${\alpha _{\pi }}$ a priori independent, where ${\alpha _{\theta }}$ is uniform on ${(0,1)}$ and ${\alpha_{\pi }}$ is uniform on ${(0,9/10)}$.
Then, clearly ${\theta\left( \cdot \right)}$ and ${\pi \left( \cdot \right)}$ are independent under ${W}$. However, recalling that ${X}$ is uniform so ${p\left( x\right) \equiv 1}$, we have that for any fixed ${\left( \alpha _{\theta },\alpha _{\pi }\right)}$,
$\displaystyle Cov\left\{ \theta \left( X\right) ,\pi \left( X\right) \right\} =\int_{0}^{1}\theta \left( x\right) \pi \left( x\right) dx-\int_{0}^{1}\pi \left( x\right) dx\int_{0}^{1}\theta \left( x\right) dx =\alpha _{\theta }\alpha _{\pi }\left( \int_{0}^{1}x^{2}dx-\left\{ \int_{0}^{1}xdx\right\} ^{2}\right) =\alpha _{\theta }\alpha _{\pi }/12 .$
Hence
$\displaystyle W({\rm there\ exists\ selection\ bias})=W\Biggl( Cov\left\{ \theta \left( X\right) ,\pi \left( X\right) \right\} >0 \Biggr) =1$
since ${\alpha _{\theta }}$ and ${\alpha_{\pi }}$ are both positive with ${W}$-probability 1.
4. Other Justifications For Prior Dependence?
Since prior independence of ${\pi}$ and ${\theta}$ does not imply “no selection bias,” one might instead argue that it is practically unrealistic to have ${W(\theta,\pi)=W(\theta)W(\pi)}$. But we now
show that it is realistic.
Suppose a new HMO needs to estimate the fraction ${\psi }$ of its patient population that will have an MI ${(Y)}$ in the next year, so as to determine the number of cardiac unit beds needed. Each HMO member has had 300 potential risk factors ${X=(X_{1},...,X_{300})}$ measured: age, weight, height, blood pressure, multiple tests of liver, renal, pulmonary, and cardiac function, good and bad
cholesterol, packs per day smoked, years smoked, etc. (We will get to 100,000 once routine genomic testing becomes feasible). A general epidemiologist had earlier studied risk factors for MI
by following 5000 of the 50,000 HMO members for a year. Because MI is a rare event, he oversampled subjects whose ${X}$, in his opinion, indicated a
smaller probability ${\theta \left( X\right) }$ of an MI (${Y=1)}$. Hence the
sampling fraction ${\pi \left( X\right) =P\left( R=1|X\right) }$ was a known, but complex, function chosen so as to try to make ${\theta \left( X\right) }$ and ${\pi \left( X\right) }$ negatively correlated.
The world’s leading heart expert, our Bayesian, was hired to estimate ${\psi =\int \theta \left( x\right) p\left( x\right) dx}$ based on the known distribution ${ p\left( x\right) }$ of ${X}$ in HMO members and the data ${\left(X_{i},R_{i},R_{i}Y_{i}\right)}$, ${i=1,...,5000}$, from the study.
As the world’s expert, his beliefs about the risk function ${\theta \left( \cdot \right) }$ would not change upon learning ${\pi \left( \cdot \right) }$, as ${ \pi \left( \cdot \right) }$ only reflects a
nonexpert’s beliefs. Hence ${ \theta \left( \cdot \right) }$ and ${\pi \left( \cdot \right) }$ are a priori independent. Nonetheless, knowing that the epidemiologist had carefully read the expert
literature on risk factors for MI, he also believes with high probability that epidemiologist succeeded in having the random variables ${ \theta \left( X\right) }$ and ${\pi \left( X\right) }$ be
negatively correlated.
What’s more, Robins and Ritov (1997) showed that, if before seeing the data, any Bayesian, cardiac expert or not, thoroughly queries the epidemiologist
(who selected ${\pi \left( \cdot \right) }$) about the epidemiologist’s reasoned opinions concerning ${\theta (\cdot )}$ (but not about ${\pi (\cdot )}$ ), the Bayesian will then have independent
priors. The idea is that once you are satisfied that you have learned from the epidemiologist all he knows about ${\theta (\cdot )}$ that you did not, you will have an updated prior for ${\theta \left( \cdot \right) }$. Your prior for ${\theta \left( \cdot \right) }$ (now updated) cannot then change if you subsequently are told ${\pi \left( \cdot \right) }$. Hence, we could take as
many Bayesians as you please and arrange it so all had ${\theta \left( \cdot \right) }$ and ${\pi \left( \cdot \right) }$ a priori independent. This last argument is quite general, applying to many settings.
5. Alternative Interpretation
An alternative reading of Chris’s third response and his subsequent post is that, rather than placing a joint prior ${W}$ over the functions
${\theta\left( \cdot \right) =\left\{ \theta \left( x\right) ;x\in \left[ 0,1\right]^{d}\right\} }$ and ${\pi \left( \cdot \right) =\left\{ \pi \left( x\right);x\in \left[ 0,1\right] ^{d}\right\}}$
as above,
his prior is placed over the joint distribution of the random variables ${\theta \left( X\right) }$ and ${\pi \left( X\right)}$.
If so, he is then correct that making ${\theta \left(X\right) }$ and ${\pi \left( X\right) }$ independent with prior probability one
also implies ${Cov\left\{ \theta \left( X\ \right) ,\pi \left( X\ \right)\right\} =0}$ and thus no selection bias.
However, it appears that from this, he concludes that selection bias, in itself, licenses the dependence of his posterior on ${\pi \left( \cdot \right)}$.
This is incorrect. As noted above, it is prior dependence of ${\theta \left( \cdot \right) }$ and ${\pi\left( \cdot \right) }$ that licenses posterior dependence on ${\pi \left(\cdot \right)}$ – not
prior dependence of ${\theta \left( X\right) }$ and
${\pi\left( X\right)}$. Were he correct, our Bayesian cardiac expert’s prior on ${\theta \left( \cdot \right)}$ could have changed upon learning the epidemiologist’s ${\pi \left( \cdot \right)}$.
6. What If We Do Use a Prior That Depends on ${\pi}$?
In the above scenario, ${W(\theta)}$ should not depend on ${\pi}$. But suppose, for whatever reason, one insists on letting ${W(\theta)}$ depend on ${\pi}$.
That still does not mean the posterior will concentrate. Having an estimator that depends on ${\pi}$ is necessary, but not sufficient, to get consistency and fast rates. It is not enough to use a
prior ${W(\theta)}$ that is a function of ${\pi}$. The prior still has to be carefully engineered to ensure that the posterior for ${\psi}$ will concentrate around the truth.
Chris hints that he can construct such a prior but does not provide an explicit algorithm nor an argument as to why the estimator would be expected to be locally semiparametric efficient. However, it
is simple to construct a ${n^{1/2}}$-consistent
locally semiparametric efficient Bayes estimator ${\hat{\psi}_{Bayes}}$ as follows.
We tentatively model ${\theta(x) =P(Y=1|X=x)}$ as a finite dimensional parametric function ${b\left( x;\eta_{1},\ldots ,\eta_{k},\omega \right) }$
with either a smooth or noninformative prior on the parameters ${\left( \eta_{1},\ldots ,\eta_{k},\omega \right)}$, where we take
$\displaystyle b\left( x;\eta_{1},\ldots ,\eta _{k},\omega \right) = \mathrm{expit}\left( \sum_{m=1}^{k}\eta_{m}\phi _{m}(x)+\frac{\omega }{\pi (x)}\right) ,$
where ${\mathrm{expit}(a)=e^{a}/(1+e^{a})}$ and the ${\phi _{m}\left( x\right)}$ are basis functions. Then the posterior mean
${\hat{\psi}_{\rm Bayes}}$ of ${\psi =\int \theta \left( x\right) dx}$ will have the same asymptotic distribution as the locally semiparametric efficient regression estimator of Scharfstein et al.
(1999) described in our original post. Note that the estimator is ${n^{1/2}}$ consistent, even if the model ${\theta(x) =P(Y=1|X=x)=b\left( x;\eta_{1},\ldots ,\eta_{k},\omega \right)}$ is wrong.
Of course, this estimator is a clear case of frequentist pursuit Bayes.
7. Conclusion
Here are the main points:
1. If ${W(\theta,\pi) = W(\theta)W(\pi)}$ then the posterior will not concentrate.
Thus, if a Bayesian wants the posterior for ${\psi}$ to concentrate around the true value,
he must justify having a prior ${W(\theta)}$ that is a function of ${\pi}$.
2. ${W(\theta,\pi) = W(\theta)W(\pi)}$ does not imply an absence of selection bias.
Therefore, an argument of the form: “we want selection bias so we cannot have prior independence” fails.
3. One can try to argue that prior independence is unrealistic. But as we have shown, this is not the case.
4. But, if after all this, we do insist on letting ${W(\theta)}$ depend on ${\pi}$,
it is still not enough. Dependence on ${\pi}$ is necessary but not sufficient.
We conclude Bayes fails in our example unless one uses a special prior designed just to mimic the frequentist estimator.
8. Addendum: What Happens If The Estimator Does Not Depend on ${\pi}$?
The theorem of Robins and Ritov, quoted in our initial post, says that no uniformly consistent estimator that does not depend on ${\pi \left( \cdot \right) }$ can exist in the model ${\mathcal{P}}$
which contains all measurable ${\pi \left( x\right) }$ and ${\theta \left( x\right) }$ subject
to ${\pi \left( X\right) >\delta >0}$ with probability 1. Take ${\delta =1/8}$ for
concreteness. In fact, even when we assume ${\theta \left( \cdot \right) }$
and ${\pi \left( \cdot \right) }$ are quite smooth, there will be little
improvement in performance.
Given ${X}$ has 100,000 dimensions, we can ask how many derivatives ${\beta _{\theta }}$ and ${\beta _{\pi }}$ must ${\theta \left( \cdot \right) }$
and ${\pi \left( \cdot \right) }$ have so that it is possible to construct an estimator of ${\psi }$, not depending on ${\pi \left( \cdot \right) ,}$ that
converges at rate ${n^{-\frac{1}{2}}}$ uniformly to ${\psi }$ over a submodel ${\mathcal{P}_{smooth}}$. Robins et al. (2008) show that it is necessary and sufficient that ${\beta _{\theta }+\beta _{\pi }\geq 50,000}$ and provide an explicit estimator. More generally, if ${\beta _{\theta }+\beta _{\pi }=s}$ with ${0<s<50,000}$, the optimal rate is ${n^{-\frac{s/50,000}{1+s/50,000}}}$, which is approximately ${n^{-s/50,000}}$ when ${s}$ is small compared to ${50,000}$. An explicit estimator is constructed in Robins et al. (2008); Robins et al. (2009) prove that the rate cannot be improved on. Given these asymptotic mathematical results, we
doubt any reader can exhibit an estimator, not depending on ${\pi \left(\cdot \right) ,}$ that will have reasonable finite sample performance under
model ${\mathcal{P}}$ or even ${\mathcal{P}_{smooth}}$ with, say, ${s=25,000}$
and a sample size of 5,000. By reasonable finite sample performance, we mean an interval estimator that will cover the true ${\psi }$ at least 95% of the time and that has average length less than or equal to interval estimators centered on the ${n^{1/2}}$-consistent improved HT estimators. Nonetheless, we await any candidate estimators, accompanied by at least some simulation evidence backing up your claim.
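For reference, the simplest estimator that does depend on ${\pi(\cdot)}$ is the Horvitz-Thompson estimator ${\hat\psi = n^{-1}\sum_i R_iY_i/\pi(X_i)}$, which is exactly unbiased when ${\pi(\cdot)}$ is known. A quick simulation sketch, where ${\theta(x)=x}$ and ${\pi(x)=0.9x+0.1}$ are arbitrary toy choices of mine:

```python
import numpy as np

# Horvitz-Thompson estimator psi_hat = (1/n) sum_i R_i Y_i / pi(X_i),
# unbiased when pi(.) is known. Toy choices: theta(x) = x,
# pi(x) = 0.9 x + 0.1, X ~ Unif(0, 1), so psi = E[Y] = 1/2.
rng = np.random.default_rng(6)
n = 500_000
X = rng.uniform(size=n)
theta, pi = X, 0.9 * X + 0.1
Y = (rng.uniform(size=n) < theta).astype(float)
R = (rng.uniform(size=n) < pi).astype(float)
psi_hat = np.mean(R * Y / pi)
naive = Y[R == 1].mean()       # ignores pi: biased under selection
print(psi_hat, naive)          # near 0.5 versus near 0.64
```

The "improved HT estimators" referred to above reduce the (often large) variance of this raw version while keeping its dependence on ${\pi(\cdot)}$.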
9. References
1. Robins, J.M., Tchetgen, E., Li, L. and van der Vaart, A. (2009). Semiparametric minimax rates. Electronic Journal of Statistics, 3.
2. Robins, J.M., Li, L., Tchetgen, E. and van der Vaart, A. (2008). Higher order influence functions and minimax estimation of nonlinear functionals. Probability and Statistics: Essays in Honor of David A. Freedman, 2, 335-421.
3. Robins, J.M. and Ritov, Y. (1997). Toward a curse of dimensionality appropriate (CODA) asymptotic theory for semi-parametric models. Statistics in Medicine.
By normaldeviate | Posted in Uncategorized | Comments (13) | {"url":"https://normaldeviate.wordpress.com/2012/09/","timestamp":"2014-04-19T23:19:16Z","content_type":null,"content_length":"152268","record_id":"<urn:uuid:0b704011-684f-4903-9ba0-eb0dd66c8177>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00440-ip-10-147-4-33.ec2.internal.warc.gz"} |
June 13-15, 2003
Mathematics as Story Symposium
to be held at the University of Western Ontario
Sponsored by The Fields Institute
June 12, 2003
Speakers: Juris Steprans (York), Raymond Laflamme (Waterloo), Moshe Milevsky (York)
June 2 - 20, 2003
Fields Institute Summer School
Logic and Foundations of Computation
to be held at the University of Ottawa
May 24 - 30, 2003
Conference in Number Theory in Honour of Professor H.C. Williams
to be held in Banff, Alberta
Sponsored by The Fields Institute
May 23, 2003 -- 3:30 pm
Special Lecture -- Manindra Agrawal
A Polynomial-time Algorithm for Primality Testing
Audio of Talk
May 3, 2003
Ottawa-Carleton Discrete Mathematics Day 2003
to be held at the University of Ottawa
Supported by The Fields Institute
May 2-3, 2003
9th Great Lakes K-theory Conference
to be held at Northwestern University
May 2, 2003
Southern Ontario Numerical Analysis Day (SONAD 2003)
to be held at McMaster University
Supported by The Fields Institute
May 1-2, 2003
12th Ontario Combinatorics Workshop
to be held at the University of Ottawa
Supported by The Fields Institute
April 23 and 24, 2003
Distinguished Lecture Series in Statistical Science -- Don Dawson, Carleton University and McGill University
Probabilistic Phenomena in Mathematics and Science
(January 25-26, 2003 and March 22-23, 2003)
Workshop on Arithmetic and Geometry of Higher Dimensional Varieties with Special Emphasis on Calabi-Yau Varieties and Mirror Symmetry
November 2, 2002
Graduate School Information Day
October 25, 2002
Workshop on Industry, Mathematics and Computer Algebra
October 22, 2002
CRM -Fields Prize Lecture -- Professor John Friedlander
October 19, 2002
New FRSCs Day
-celebrating the achievements of this year's new Fellows in the mathematical and physical sciences
September 20-21, 2002
AD-HOC NetwOrks and Wireless (ADHOC-NOW)
Program Co-chairs: Michel Barbeau and Evangelos Kranakis
Supported by MITACS
September 23-28, 2002
Workshop on Categorical Structures for Descent and Galois Theory, Hopf Algebras and Semiabelian Categories
Organizers: G. Janelidze, B. Pareigis, Walter Tholen
July 15 - August 10, 2002
International Conference on Representations of Algebras and Related Topics (ICRA X)
Organizers: S. Berman, Y. Billig, R.-O. Buchweitz, V. Dlab, E. Neher, S. Liu
August 7 - 11, 2002
Workshop on Geometry, Dynamics, and Mechanics in Honour of the 60th birthday of J.E. Marsden
Organizers: A. Bloch, P. Newton, T. Ratiu, S. Shkoller, A. Weinstein
July 8-12, 2002
Workshop on Nonself-adjoint Operator Algebras
Organizer: Kenneth R. Davidson
#include <algorithm>
iterator set_symmetric_difference( iterator start1, iterator end1, iterator start2, iterator end2, iterator result );
iterator set_symmetric_difference( iterator start1, iterator end1, iterator start2, iterator end2, iterator result, StrictWeakOrdering cmp );
The set_symmetric_difference() algorithm computes the symmetric difference of the two sets defined by [start1,end1) and [start2,end2), i.e. the elements present in exactly one of the two ranges, and stores them starting at result.
Both of the sets, given as ranges, must be sorted in ascending order.
The return value of set_symmetric_difference() is an iterator to the end of the result range.
If the strict weak ordering comparison function object cmp is not specified, set_symmetric_difference() will use the < operator to compare elements.
Fossies" - the Fresh Open Source Software archive
Member "jpgraph-3.5.0b1/docs/chunkhtml/ch22s03.html" of archive jpgraph-3.5.0b1.tar.gz:
By using mesh interpolation it is possible to obtain a "smoother" looking matrix plot by creating "in-between" values in the original matrix by linear interpolation.
This is also used in contour plots. See Understanding mesh interpolation for a more thorough discussion of mesh interpolation and the implications for CPU usage.
The interpolation factor specifies how many times, recursively, the interpolation should be done. Practical values range from 2-6. While it is possible to specify values larger than 6, the time it takes to do the interpolation grows exponentially in the interpolation factor. It is also important to remember that this interpolation does not create any "more" information than what is already available in the matrix. In addition, it needs to be verified that such a linear interpolation is at all valid for the underlying data in the matrix.
As an example the following figures show the effect of doing a 1-5 times interpolation of the original data (interpolation = 1 being the original data). With the chosen graph size there is no point in interpolating further, since a 5 times interpolation already forces each module to be 1x1 pixel in order to fit within the constraints of the graph. (The original data was 8x11, and interpolating it 5 times creates a 113x161 matrix.)
The different sizes of the plots are due to the fact that each cell in the matrix must occupy an integer number of pixels. In the graphs above we have used the largest module size that still fits in the image, hence the different appearances.
There are two ways of doing this interpolation.
1. When the matrix plot is created by specifying the interpolation factor as the second argument to the plot constructor, i.e.
$matrixplot = new MatrixPlot($data,4); // 4 times interpolation
2. If many plots share the same data it is more efficient to do it once in the beginning instead of doing the interpolation each time a new matrix plot object is created. This can be done by using
the utility function
As can be seen from its declaration, this is a call-by-reference method: the data is replaced by new data that has been interpolated the specified number of times. This avoids unnecessary data copying for large matrices.
Those familiar with Matlab (tm) will recognize a similar mesh interpolation in the interp2() function.
Geometry: Two column proof for this.
December 7th 2012, 09:43 PM
Geometry: Two column proof for this.
I need a two-column proof for this asap. (Statements & their corresponding reasons)
Given: JKLM is a parallelogram
<M is a right angle (It is read as angle M)
MJ = JK (MJ is congruent to JK)
Prove: Parallelogram JKLM is a square
Appreciated. ♥
December 8th 2012, 01:09 AM
Re: Geometry: Two column proof for this.
1) JKLM is a rectangle --- If a parallelogram contains one right angle, then all its angles are right angles, and it is a rectangle.
2) JM = LK, LM = JK ---- Opposite sides of a parallelogram are congruent.
3) JM = JK = LM = LK ---- Transitive property (together with the given MJ = JK).
4) JKLM is a square ----- Definition of a square (4 right angles and all sides congruent)
Posts by Gee
Total # Posts: 57
college physics
A cruise ship of seniors is moving due west with an engine speed of 60km/hr when it encounters a current from the North at 20Km/hr. What is its relative velocity due to this current? (speed and
angle) What course correction does it need to continue due west? (new angle and res...
college physics
A cannonball shot at an angle of 60 Degrees had a range of 196m. What was its initial speed?
college physics
A 3rd baseman is throwing a ball to 1st base which is 39.0m away. It leaves his hand at 38.0m/s at a height of 1.50m from the ground and makes an angle of 20 Degrees. How high will it be when it gets
to 1st base.
college physics
A ball is thrown from a 20.0m high building and hits the ground 6.0m from the building. With what horizontal velocity was it thrown?
physics college
A dog sled is pulled at an angle of 30 degrees with the horizon using a tension of 450N. If the sled has a mass of 50kg, then find the recoil, forward force and acceleration (if any).
physics college
A rocket booster fires in space with a force of 500N and moves the 10kg satellite from rest to motion in 2.1 seconds. Find the acceleration? What is the final speed of the satellite?
physics college
A lamp has a mass of 10kg and is supported by two wires that make an angle of 35 degrees between them. Find the tension in each wire?
A dog is pulled at an angle of 30 degrees with the horizon using a tension of 450N. If the sled has a mass of 50kg, then find the recoil, forward force and acceleration (if any)?
Help, my name is Gee and I have posted several questions on here and I've waited for help and no one will answer. Can someone please assist me? My posts were on Sept 13 and 18, 2012. Thank you
Can someone please answer my question help
Can someone please answer my question help
Healthy breakfast contains the rating of 77 cereals and the number of grams of sugar contained in each serving. A simple linear regression model considering "sugar" as the explanatory variable and
"rating" as the response variable produced the following reg...
Can someone please answer my question please
Can someone please answer my question please
Pre-study scores versus post-study scores for a class of 120 college freshman English students were considered. The residual plot for the least squares regression line showed no pattern. The least squares regression line was y = 0.2 + 0.9x with a correlation coefficient r = 0...
Can someone please answer my question thank u
Use the following information to calculate the lower and upper fence: A teacher was interested in how many text messages teenagers got per day. He randomly selected 70 students and asked them how
many text messages they received the previous day. The mean (x)=23.1 standard dev...
If a distribution is skewed, should the mean must be smaller than the median?
Can categorical data be graphed using a boxplot?
A correlation of 0 always means there is no relationship between x and y true or false?
Is another name for an explanatory variable an independent variable?
standard deviation is not resistant to outliers true or false?
Is weight an example of a discrete variable?
The box plots below summarize the distributions of SAT verbal and math scores among students at an upstate New York high school. [axis ticks 300-800 omitted] Which of the following statements is false?
1. The range of the math scores equals the range of the verbal scores. 2. The...
On a given day the NYSE opening price per share for the 100 largest U.S. corporations has a mean of $385 and a median of $92. We can conclude A. The distribution of the price per share is symmetric,
B. The distribution of the price per share is skewed to the right or is it ske...
A teacher was interested in how many text messages teenagers got per day. He randomly selected 70 students and asked them how many text messages they received the previous day?
Pre-study scores versus post-study scores for a class of 120 college freshman English students were considered. The residual plot for the least squares regression line showed no pattern. The least squares regression line was y^ = 0.2 + 0.9x with a correlation coefficient r...
The data set healthy breakfast contains the ratings of 77 cereals and the number of grams of sugar contained in each serving. A simple linear regression model considering Sugar as the explanatory
variable and Rating as the response variable produced the following regression l...
I want to examine the relationship between gas mileage of cars and the engine size ( displacement in cubic inches). The explanatory variable is?
The ABC Company has been evaluating the performance of two advertising agencies it deals with. They produce the following scatterplot of sales against advertising expenditures.
The data in the scatterplot below are an individual's weight and the time it takes (in seconds) on a treadmill to raise their pulse rate to 140 per minute. The o's correspond to females and the +'s to
I wish to determine the correlation between the height ( in inches) and weight ( in pounds) of 21 year old males. To do this , i measure the height and weight of two 21 year old men. The measured
values are?
The lifetime of a 2- volt non- rechargeable battery in constant use has a normal distribution with a mean of 516 hours and a standard deviation of 20 hours. What is the z-score for batteries with
lifetimes of 600 hours?
The lifetime of a 2-volt non-rechargeable battery in constant use has a normal distribution with a mean of 516 hours and a standard deviation of 20 hours. What is the z-score for batteries with lifetimes of 520 hours?
Research reports a correlation of +0.8 between math achievement and math aptitude it also reports a correlation of -0.8 between math achievement and a math anxiety. Which interpretations is most c?
A student computes the correlation between two variables in a spreedsheet and finds r= 0.06.Is there a relationship between the variables?
If a certain species of female lizard mate males that are .75 years younger what would the correlation between the ages of the males and female lizard be?
The cholesterol levels of young women aged 20 to 34 vary approximately normally with mean 185 milligrams per deciliter and standard deviation 39 mg/dl. Cholesterol levels for men vary with mean 222 mg/dl and standard deviation 37 mg/dl. Sandy's cholesterol level is 220; her fat...
The lifetime of a 2 volt non- rechargeable battery has a normal distribution with a mean of 516 hours and a standard deviation of 20 hours.What is the z- score for batteries with lifetimes of 600
The most common intelligence quotient (IQ) scale is normally distributed with mean 100 and standard deviation 15. What score would put a child 3 standard deviations above the mean
The cholesterol levels of young women aged 20 to 34 vary with a mean of 185 milligrams per deciliter and standard deviation 39 mg/dl. What cholesterol level would a young woman in this age group have if she were 2 standard deviations below the mean?
The lifetime of a 2 volt non-rechargeable battery has a normal distribution with a mean of 516 hours and a standard deviation of 20 hours. What is the z- score for batteries with lifetimes of 520
h(x)=2x-5 h(0)= h(-8)=
I have been trying to figure this out for hours and I still can't figure it out! Please help me.. The question says to find the value of theta when sin(theta) = 0. The answer choices include expressions like pi/2 + 2*pi*k, 2*pi*k, pi/2 + pi*k, pi*k
A large wooden wheel of radius R and moment of inertia I is mounted on an axle so as to rotate freely. A bullet of mass m and speed v is shot tangential to the wheel and strikes its edge, lodging in the wheel's rim. If the wheel was originally at rest, what is its rota...
authenticity is the right answer just checking thought>>>
A minor league baseball team plays 84 games in a season. If the team won 15 more than twice as many games as they lost, how many wins and losses did the team have?
a car is moving at 40km per hour what is the velocity in meters per second?
Why does the US need a single banking system to help organize its financial structure?
I need help with this question: Estimate the product by rounding to the nearest one: 6.38 times 18.716. I said on my assignment 120 and my teacher marked it wrong. She said it's 114, and I do not see how it's 114. Any input would certainly be welcome. Thank you! By round...
How do I solve this? 2 {2/3 } / 4/5 / 1/3 {4/9} / 4/4 / 1/3 How do I cross multiply the numbers? Is the (2/3) supposed to be squared? Is the (4/5)/(1/3) in a denominator below (2/3)? You need to use more parentheses or brackets to clarify the order of operations. (4/5)/(1/3) i...
There also is a problem with the faster train's distance. Make sure to divide by two. Dividing in this instance makes the d of the faster train 800 instead of 1600. These different numbers don't change the fact that they collided, though, considering d1+d2=total, which...
(1) v_f^2 = v_o^2 + 2ad, so d = (v_f^2 - v_o^2)/(2a). For the red train v_f = 0 m/s, v_o = 20 m/s and a = -1 m/s^2, then d = (0-400)/-1 m = 400 m. This statement is wrong: you failed to multiply the acceleration by 2 as the equation states. So (vx^2 - vx0^2)/(2*ax) = x, and therefore (0-400)/(2*-...
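The original post is cut off above; as a hedged completion, the corrected substitution simply finishes as follows (this only carries out the arithmetic the poster started):

```latex
d \;=\; \frac{v_f^2 - v_0^2}{2a}
  \;=\; \frac{0 - (20\ \mathrm{m/s})^2}{2 \times (-1\ \mathrm{m/s^2})}
  \;=\; \frac{-400}{-2}\ \mathrm{m}
  \;=\; 200\ \mathrm{m}.
```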
Paradoxes Resolved, Origins Illuminated - Requiem for Relativity
Joe Keller
Posted - 20 May 2007 : 15:42:02
quote: Originally posted by Stoat
Hi Joe, I just downloaded a pdf file of a paper by Tifft and gave it quick skim read. That's pretty amazing stuff
Now, I've never liked the idea of time as simply another spatial dimension but have no great problems with thinking of time as having a metric of its own. Suppose that we are
forced, as matter, to walk the "high road" along the hillls and dales of a sine wave. Light has to do the same but its sine wave is of a much lower amplitude. Lazy fat gravity,
just walks along the x axis. We all arrive at the same time, to find that we, matter, have walked a thousand miles, light has walked five miles and gravity has walked five yards
Surely there are many unimagined possibilities. Thanks for your comment!
Joe Keller
Posted - 20 May 2007 : 16:20:06
Twelve light-years away, is a star system like the Sun, Barbarossa & Frey:
"Epsilon Indi Ba,Bb: The nearest binary brown dwarf", McCaughrean et al, Astronomy & Astrophysics 413:1029-1036, 2004.
The primary, Eps Indi A, is a main-sequence Type K star thought to be roughly 1.3 Gyr old. At a distance of 1500 AU, it is orbited by a pair of brown dwarfs (Ba & Bb) which orbit each
other about 2.65 AU apart. There is extreme observational bias in favor of hotter brown dwarfs that are self-illuminated in infrared, and in favor of resolvable brown dwarfs farther
than 1000 AU from their primaries. So, presumably this pair of brown dwarfs is unusually massive and hot, and unusually distant from the primary.
The dimmer one, Eps Indi Bb, is estimated to have 0.027 solar mass and surface temperature 854K. I estimate Barbarossa to have 1/3 this mass and, like the Sun, 3.5x this age. So, it's
plausible that Barbarossa would be too cold to be self-illuminated in infrared.
From luminosity and temperature, the diameter of the brighter one, Eps Indi Ba (which has est. 0.045 solar mass) is estimated as 53,000 mi.; this is only 72% of "the
minimum...predicted by structural models..." (Op. cit., p. 1034). So, Barbarossa likewise might be much smaller than the Jupiter size predicted by structural models that assume a
Jupiter-like composition.
Joe Keller
Posted - 20 May 2007 : 18:55:25
quote: Originally posted by Bill_Smith
Looks like the forum has gone a little haywire with duplication.
About the magnitudes, so this is why the object is at or near the limit of Bobs images. I think I quoted mag 18.5 to Bob when I measured them so it leaves a large gap between 18.5
and 20.
If the object is a brown dwarf would you be relying on albedo only?
Hi Bill!
My May 20 posts address the self-illumination issue. Thanks for mentioning it!
- Joe
Joe Keller
Posted - 20 May 2007 : 19:18:15
Because IRAS did not find it, Barbarossa must be much colder than Burrows' theoretical temperature (or else much less massive than believed). Burrows et al, Reviews of Modern Physics 65:301+, gives Table I (p. 316) and formula 2.58 (p. 312; see also p. 305 for a very brief explanation of the "Rosseland mean opacity") for theoretical brown dwarf temperature as a
function of age and mass. Because Barbarossa's presumed 0.0090 solar mass is outside Table I, I used Burrows' formula to find (assuming age 4.6 Gyr) that Barbarossa's surface
temperature should be 229K. By Wien's law, the peak emission would be at 12.65 microns.
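As a check on that figure, here is a sketch using Wien's displacement law with $b \approx 2898\ \mu\mathrm{m\,K}$:

```latex
\lambda_{\max} \;=\; \frac{b}{T}
  \;=\; \frac{2898\ \mu\mathrm{m\,K}}{229\ \mathrm{K}}
  \;\approx\; 12.65\ \mu\mathrm{m},
```

in agreement with the 12.65 micron peak quoted above.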
The limit for deuterium fusion, is said to be 0.013 solar mass (Oppenheimer et al, in: Mannings et al, "Protostars & Planets IV", 2000), but deuterium fusion lasts only briefly, early
in the life of the brown dwarf. At a Gyr or more old, Burrows' temperature-mass chart changes little across the 0.013 solar mass boundary (Burrows et al, 1997, in: Mannings, op. cit.,
Color Plate 22); so the above surface temperature formula remains fairly accurate. (Gravitational potential energy is about as important as deuterium fusion.) This chart shows that
the absolute luminosity of Barbarossa should be 7.25 log10 units less than the sun's, thus its apparent (full spectrum) luminosity 7.25 + 4.6 = 11.85 log10 units less.
The infrared radiation measured by IRAS from a 229K Barbarossa (assuming "Burrows size" for its mass, i.e., approx. Jupiter size) would be roughly the same, in spectrum and intensity,
as that measured from an asteroid 3 AU from the sun and 1000 mi in diameter. IRAS cataloged many asteroids much smaller than this.
"Asteroids and comets moving more slowly than 1' per hour would hours-confirm and thus reside in the Working Survey Database."
- NASA IRAS Asteroid & Comet Survey webpage
From a 229K Burrows-size (approx. Jupiter-size) Barbarossa, IRAS would have measured a 5900 Jansky peak at 12 microns. Within one degree of Barbarossa's expected position, the main
IRAS catalog's brightest object at 12 microns, measured 6 Jansky; within 10 degrees, 82 Jy.
If a Burrows-size Barbarossa had a rather Neptune-like surface temperature of 48K, its radiation peak would be 55 Jy at 60 microns. Within one degree of Barbarossa's expected
position, the brightest object at 60 microns is 1 Jy; within 10 deg, 27 Jy.
With no internal heat at all, the surface temperature would be about 18K (equilibrium at Barbarossa's distance from the sun; only slightly warmer than the cosmic far infrared
background). Barbarossa either would be indistinguishable from background cold interstellar dust; or, if Barbarossa were seen against a 3K background, IRAS would record only 2.1 Jy
even at 100mu, 0.5 Jy at 60mu, and 0 at 25 & 12mu. The lowest readings ever found are about 0.7 Jy @ 100mu, 0.3 Jy @ 60mu, and 0.2 Jy @ 25 & 12mu. So, I looked for readings within a
factor of two, of 2.8 Jy @ 100mu, 0.8 Jy @ 60mu, and 0.2 Jy @ 25 & 12mu. There were 76 such IRAS objects within 10 degrees, but none within one degree, of Barbarossa's expected
position. If Barbarossa's diameter were half the Burrows value, none of the brightnesses should be much above noise.
So, only a completely cold (equilibrium with solar radiation), smallish (~ 1/2 Jupiter diam) Barbarossa is readily consistent with IRAS' negative detection, given my coordinates. If
my coordinates are wrong, then many IRAS objects (about 0.2 per sq degree) are consistent with even a Jupiter-size Barbarossa, if it is 18K. Some IRAS objects have been correlated
with known objects and some haven't.
Burrows assumes that cooling is limited by the rate of radiation through the degenerate matter, and that convection is insignificant. On the contrary: most of the gravitational energy
will be released at the boundary where the nondegenerate mantle is collapsing onto the degenerate core. The nondegenerate overlying mantle would melt and convection would carry the
heat to the surface, as on Earth. Thus Burrows' temperatures might be accurate for dwarfs that have burned deuterium in their cores, but might drastically overestimate the temperature
of sub-dwarfs like Barbarossa whose heat is only gravitational.
Joe Keller
Posted - 22 May 2007 : 18:10:05
Pursuant to Kozai's and similar expressions, I found arcsin(rms sin(i)), the "rms sin(i) inclination to the ecliptic" of KBOs: it is 12° 13.5' (standard error of mean, 41') (n=220) (SEM questionable due to skew distribution). I used the top third (i.e., discovery years 2003-2006) of the "general ephemeris" list on ifa.hawaii.edu. The bottom quarter of the list,
KBOs with special names, also contained some 2003-2006 discoveries but I omitted those. The absolute magnitudes of the KBOs I used were mostly +6 to +8 (vs. +3 for Varuna).
This agrees with the orbital inclinations of Barbarossa (12° 9.7') and Lescarbault/LeVerrier's Vulcan (12° 10'). Iowa State Univ. has LeVerrier's Compte Rendus communication (January
2, 1860; LeVerrier entered Lescarbault's letter to him on p. 40 and his own comment on p. 45). I can't read even scientific French well, but it seems to me that LeVerrier discussed no
other sightings; he simply without fanfare corrected Lescarbault's orbital inclination calculation. The subsequent two years of Compte Rendus contained many communications by
LeVerrier, but only about other subjects.
I found that the ascending nodes, omega, of the same KBOs, cluster near 14.00° and 194.00° (n=175). I omitted KBOs with i < 3° because for these, the ascending node on the ecliptic correlates poorly with the ascending node on Jupiter's or Saturn's orbit. Otherwise all KBOs were weighted equally. The best fit, theta, was defined as minimizing the sum of abs(sin(omega - theta)); the criterion should emphasize the difference between 10° & 20° error more than the difference between 40° & 50°, or 70° & 80°, error. This agrees with the ascending nodes of Lescarbault/LeVerrier's Vulcan (named by Babinet) (12.98°) and, except for a 90 degree shift, of Barbarossa (283.69°).
The eccentric (e > 0.1) subset of KBOs (these also tend to be the ones with larger inclination) significantly clusters near omega = 20° and 200°. Again omitting those with i < 3°, 40/117 had omega (rounded to the nearest degree) within the inclusive intervals [1,40] or [181,220] (p = 0.076, Poisson test). This clustering occurs despite the tendency, for small i,
for omega to cluster near the ascending nodes of Jupiter & Saturn, which are at right angles to this. That is, the clustering would be seen to be even more significant if reference
were made to the orbital plane of Jupiter/Saturn instead of to the ecliptic.
Using the Jupiter/Saturn reference plane instead of the ecliptic, would increase Barbarossa's inclination almost two degrees, and move the ascending nodes of Vulcan and the clustered
KBOs, backward about seven degrees. Even that, would be close enough agreement to suggest that Barbarossa influenced the orbits of Vulcan (whatever Vulcan was) and the KBOs. All this
is more reason to turn a big telescope there and look.
Joe Keller
Posted - 23 May 2007 : 16:51:47
From the chart of KBO semimajor axes on Prof. Jewitt's Kuiper Belt website on ifa.hawaii.edu, one sees that some KBOs, like Pluto (hence called plutinos), cluster at 39.4 AU where their orbits have 3:2 resonance with Neptune's orbit. On this chart, KBOs scarcely occur with semimajor axes 47.8 AU where there would be 2:1 resonance with Neptune. Also the 5:3
resonance distance, 42.3 AU, intersects the distribution well off to one side: nothing peaks there.
On Prof. Jewitt's chart, the median and the mode of the distribution of semimajor axes, of the non-plutino majority set of KBOs (and of both the low- and high-eccentricity subsets
separately) is about 43.5 AU. The eccentricity minimum of the high-eccentricity subset, is about 43 AU. (The high-eccentricity subset becomes more eccentric with distance; if sqrt(1-e
^2) increases linearly with distance, then e=1 at about 50.5 AU.) If not resonance, what is at 43.5 AU, to attract the KBOs?
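The resonance distances quoted above follow from Kepler's third law; a sketch, taking Neptune's semimajor axis $a_N \approx 30.1$ AU:

```latex
a_{p:q} \;=\; a_N \left(\tfrac{p}{q}\right)^{2/3}:\qquad
a_{3:2} = 30.1 \times 1.310 \approx 39.4\ \mathrm{AU},\quad
a_{5:3} = 30.1 \times 1.406 \approx 42.3\ \mathrm{AU},\quad
a_{2:1} = 30.1 \times 1.587 \approx 47.8\ \mathrm{AU}.
```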
The distance at which the Barbarossa system torques KBO orbits, as effectively as the remainder of the solar system torques them, is 43.75 AU. The strength of the CMB dipole implies,
from my theory above, that the Barbarossa+Frey system is 0.0104 solar mass. I totaled the torque due to all significant known solar system mass, including Pluto and Charon but
excluding any other KBOs. The effect of Pluto is such that if plutinos equal to 100x Pluto's mass were added, the distance would change to 43.2 AU (for Pluto & plutinos I approximated
them as circular orbits at their semimajor axis).
I used Gauss' idea of calculating (by Romberg trapezoidal numerical integration) simply the torque on Ring A due to Ring B, where Ring B is the orbit of a planet or Barbarossa. When
Ring A is at 43.75 AU, then Barbarossa, vs. the rest of the solar system, cause equal torques, per degree of tilt.
Suppose all the KBOs originally lay in the Jupiter/Saturn plane. If Barbarossa's mass were negligible compared to Jupiter, then all the KBOs would have i=0 in the J/S plane (i.e., i=0
to 2 in the ecliptic plane) forever. If Jupiter's mass were negligible compared to Barbarossa, then the KBOs would precess about Barbarossa's plane, which is inclined 14° to the J/S
plane, giving i=0 to 28. Because really the effect of J/S/N et al, at 43.75 AU, equals that of Barbarossa/Frey, the observed situation is a compromise. Roughly half the non-plutino
KBOs are found near i=0 and roughly half are spread out between i=0 to 28.
On my list of 220 unnamed KBOs discovered 2003-2006 (it includes some plutinos), I counted 15 at inclinations [18,21], 11 at [22,25], 6 at [26,29], 4 at [30,33], and one apiece at 36,
37, & 48 (this latter surprisingly with eccentricity only 0.13). This hints at the 39 degree limit beyond which chaotic variation of eccentricity and inclination theoretically occur.
It also shows that an exponential dropoff in population begins at about i=25.
In the above sense, Jupiter is an order of magnitude more effective than Barbarossa, at torquing Saturn. Thus Saturn's inclination to Jupiter is an order of magnitude less than its
inclination to Barbarossa. Uranus should, on average, have larger inclination than it does, but this could be a chance time in the precession cycle when the inclination is rather
small. Neptune is a paradox. Barbarossa should torque Neptune about as effectively, in the above sense, as do J/S/U. Yet Neptune is only 0.9° from the J/S plane. (Like a good KBO,
Neptune's ascending node on the J/S plane is near right angles to Barbarossa's.)
Maybe the aberrant axial rotation of Uranus, allows Uranus to exchange orbital angular momentum with Neptune by some new physics. Uranus would be the linchpin holding Neptune in the
plane of the ecliptic. Alternatively Neptune might simply be analogous to the low-inclination half of KBOs that seem to be influenced by J/S rather than by Barbarossa: dynamically
there seems to be an all-or-nothing choice of influences.
Posted - 24 May 2007 : 06:05:37
United Kingdom Hi Joe, have you looked at Eris and Sedna yet? Their perihelion looks to be about 180 degrees from our brown dwarf. Huge eccentricities, so we can find them now as they come in close.
964 Posts That suggests that there are others, of about the same mass way way out there on highly eccentric orbits.
Joe Keller
Posted - 24 May 2007 : 12:26:06
USA quote:Originally posted by Stoat
944 Posts
Hi Joe, have you looked at Eris and Sedna yet? Their perihelion looks to be about 180 degrees from our brown dwarf. Huge eccentricities, so we can find them now as they come in
close. That suggests that there are others, of about the same mass way way out there on highly eccentric orbits.
Thanks again for your input! One of my posts above cites a plot of Trans-Neptunian Objects known as of 1998. Reviewing my notes, I see that these were shifted, in my best analysis,
roughly 5.9 AU toward ecliptic longitude 182. That is, the average perihelion was at 002, 78 degrees from Barbarossa's ascending node, 284.
Joe Keller
Posted - 24 May 2007 : 14:37:42
USA Response to a Ph.D. physicist on another messageboard:
944 Posts
> Are u claiming this theory is correct? If yes, please provide some basis
> for it, starting with how a grav field can produce or influence (other than
> negligible blueshifting) incoming microwaves.
Thanks for your good questions!
The fullest writeup is posted by "Joe Keller" in the "requiem for relativity" thread of Dr. Van Flandern's metaresearch.org messageboard. Here's a paraphrase:
My theory is that the microwaves are produced not by the "big bang" nor even in intergalactic space, but at an interface at which the sun's gravitational force is of a certain
strength. The vacuum isn't empty; rather, it's like a pot of water. Water evaporates at an interface at which intermolecular forces are of a certain strength. Just as the infrared
wavelength coming from boiling water is determined by the *pressure* at that interface (the surface of the water), the wavelength of the CMB is determined by the gravitational
*potential* at the interface where the gravitational force is of a certain strength.
> Feel free to post details including mathematical formulae. You're talking
> to a Ph.D. physicist, other well-qualified folk are watching. Your claims
> seem outlandish without details.
I'm sitting here with my notebook of formulas. Why don't I post them all? Maybe I will tomorrow. Maybe I'll post my BASIC solution program too. The formulas aren't useful without a
computer program to do the numerical integrations and successive approximation solutions. So it isn't proof unless someone checks the correctness of my program, and no one is going to
take time to do that. It would be easier just to aim a telescope, than to check a computer program written in BASIC by a stranger. Furthermore, the attitude I've been encountering
elsewhere is so adverse, that if I were to post a formula with a minor error, I'd never hear the end of it, and my case might be lost forever. So, here's how to write the equations
yourself ("Teach a man to fish..."):
Consider the gravitational field of the sun at 52.6 AU: call its strength "a0". (Several pages of circumstantial evidence of why that distance is important, are on Dr. Van Flandern's
website.) Now add a point mass m0 << msun at distance r0 > 52.6, at the pole of polar coordinates with the sun at the center. For every polar angle phi, there is some distance r, near
52.6 AU because m0 << msun, at which the field strength exactly equals a0; let E(phi) be the gravitational potential at (r,phi). For each m0, there is an r0 such that, the dipole
(first order Legendre term), of E(phi), is in proportion to the CMB dipole observed.
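The construction just described can be sketched numerically. Everything below is a hedged toy version: units with G = Msun = 1, a hand-rolled bisection for the r where the field strength equals a0, and an assumed perturber mass and distance (0.01 solar mass at 198 AU) standing in for the fitted values; the dipole coefficient is c1 = (3/2) times the integral of E(mu)*mu over mu = cos(phi).

```python
import math

G = M = 1.0
R0 = 52.6
A0 = G * M / R0**2          # the "certain strength" defining the interface

def field_strength(r, phi, m0, r0):
    """|g| at polar point (r, phi); perturber m0 sits on the polar axis at r0."""
    x, z = r * math.sin(phi), r * math.cos(phi)
    s2 = x * x + (z - r0)**2
    gx = -G * M * x / r**3 - G * m0 * x / s2**1.5
    gz = -G * M * z / r**3 + G * m0 * (r0 - z) / s2**1.5
    return math.hypot(gx, gz)

def dipole(m0, r0, n=361):
    """First-order Legendre coefficient of E(phi) on the surface |g| = A0,
    by trapezoidal quadrature in phi."""
    total, prev = 0.0, None
    for i in range(n):
        phi = math.pi * i / (n - 1)
        lo, hi = 40.0, 70.0             # |g| decreases with r in this range
        for _ in range(60):             # bisection for |g| = A0
            mid = 0.5 * (lo + hi)
            if field_strength(mid, phi, m0, r0) > A0:
                lo = mid
            else:
                hi = mid
        r = 0.5 * (lo + hi)
        s = math.hypot(r * math.sin(phi), r * math.cos(phi) - r0)
        E = -G * M / r - G * m0 / s     # potential at (r, phi)
        f = E * math.cos(phi) * math.sin(phi)   # E * P1(cos phi) * sin phi
        if prev is not None:
            total += 0.5 * (prev + f) * (math.pi / (n - 1))
        prev = f
    return 1.5 * total

# illustrative: an assumed 0.01-solar-mass perturber at 198 AU
c1 = dipole(0.01, 198.0)
```

Matching c1 to the observed CMB dipole amplitude would then pick out an (m0, r0) pair, which is the step the text describes.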
I get r0 from the circular orbit displayed by the aforementioned 1954, 1986 & 2007 centers of mass. Within the measurement error of the progression period of the 5:2 Jupiter:Saturn
resonance, this is the same thing as the r0 which would give that period. Then I know m0.
Some of your other questions are addressed in previous posts to this messageboard. Right now, theoretical discussion is less important than campaigning for observations to be made
with more powerful instruments.
Joe Keller
Posted - 24 May 2007 : 14:57:13
USA (previous reply to the same correspondent, Grant Hallman, a Ph.D. physicist)
944 Posts
Hi Grant,
I didn't see your questions because they weren't near the top! I'm used to getting responses on other messageboards that only amount to a string of derogatory adjectives, just
namecalling, not substantive.
I didn't notice that *your* response, by contrast, really said something! Thanks for your input!
- Joe Keller
(Keller) > >For reasonable masses, correction for the tidal gravity of the objects, reduces the variation of the Pioneer Anomalous Acceleration. The net Anomalous Acceleration becomes
fairly smoothly decreasing with distance from the sun. I think that despite the likely 0.01 solar mass for the combined objects,...
(Hallman) > How was this figure reached? For comparison, that would be about 10x Jupiter's mass.
(Keller) I've posted the theory for that in detail on the metaresearch.org messageboard. Basically the theory is that the symmetry of the CMB arises because of the symmetry of the
sun's gravitational field, and that the sun's gravitational field somehow produces the CMB! (That theory didn't originate with me.) The CMB dipole, according to my extension of this
theory, arises from the planets. This gives a formula for the mass needed at Barbarossa's distance. In turn, Barbarossa's period, hence distance, is implied by the rate of progression
of the 5:2 Jupiter:Saturn resonance.
(Keller) > >they fail to disrupt the solar system, because small shifts in the orbital planes of the known planets, counter the torque.
(Hallman) > Ok, that statement does not make sense. First, what causes the "small shifts"...
(Keller) It's just a system of interacting torques. The details are a mess (many-body problem) but the basic concept is simple. Basically, when a torque acts, it tilts an orbit, and
the new angle produces another torque. Eventually enough adjustment occurs in the strong torques (e.g., Jupiter-Saturn) to neutralize the disruptive effects of the weaker torques
(e.g., Saturn-Barbarossa). That's why, say, Alpha Centauri doesn't destroy the solar system through precession around the plane of Alpha Centauri's orbit, even after infinite time (or
so it's believed). Jupiter and Saturn are more linked by torque than are Saturn and Barbarossa. When you go out as far as Neptune or the Kuiper Belt, the situation is more complicated
because Barbarossa's torque becomes about as important as Jupiter's. (These torques are like the sun's torque on the moon, which causes the precession of the lunar orbit.)
(Hallman) > and second, the statement is inconsistent with the assertion that the object(s) are at a "Jupiter:Saturn resonance point", which requires that Jupiter and Saturn affect
the object, whereas the object, which is alleged to be 10x heavier than Jupiter, does not affect Jupiter and Saturn.
(Keller) I'm not saying J/S have no effect, just that there's an effect on J/S, which we are observing when we observe the progression of the 5:2 resonance. The stablest situation is
for the big outside object (Barbarossa) to be at one of the resonance points. The resonance point follows the big outside object because that's the stablest scenario. It's an
extension of the basic idea of resonance.
Joe Keller
Posted - 25 May 2007 : 15:57:59
USA It seems that Lescarbault assumed a parabolic orbit (with argument of perihelion = 270°) for Vulcan, and LeVerrier a circular one. My rough calculation for such a parabolic orbit
944 Posts gives, in heliocentric ecliptic coordinates, delta(z)/delta(sqrt(x^2+y^2)) = tan(7.6°), agreeing with Lescarbault. This assumes Vulcan was observed at the descending node; more
accurately, 13° - (March 26 - March 21) = 8° before the descending node, gives 8.8°. For argument of perihelion = 180°, no distance gives a slow enough apparent speed: 0.33 AU gives
the slowest apparent speed but it's still 20% too fast.
My plot of 85 KBOs from 2005-2006, shows that although eccentricity and inclination are correlated, it is eccentricity, not inclination per se, that correlates with ascending nodes
near 14° or 194°. The most eccentric of the 85, had e=0.97 & omega=197°; next most, e=0.83 & omega=192°. This suggests that Vulcan, omega=13°, was a very eccentric big KBO, not a
typical short-period comet. Vulcan would have had significant gravity; its surface might have resembled Mercury's, hence no cometary tail. When Lescarbault's report reached LeVerrier,
nine months after the sighting, Vulcan would have been in the asteroid belt and maybe as dim as Neptune. Vulcan's failure to return before 2007 indicates that its major axis and
aphelion are > 2*28 = 56 AU.
From a preliminary sample, the 2006 subset (n=35, excluding those with i<3) of my KBOs, I find only 4 with argument of perihelion within 45° of 270°, vs. 10 within 45° of 0°, 8 within
45° of 90°, and 13 within 45° of 180°. Thus 23/35 have argument of perihelion nearer 0/180; this might be explained without Barbarossa. On the other hand, the clustering of the
Edgeworth-Kuiper Belt Objects' ascending nodes seems to require a large mass on an inclined orbit, i.e., a Barbarossa.
The most significant clustering of the longitude (not argument) of perihelion is near 270°: 11 near 270° (e.g., Vulcan?), 10 near 0°, 4 near 90°, 10 near 180° (excluding i<3). Again,
symmetry makes this seem to require additional explanation. When the 5 objects with i<3 are restored to this sample (to give n=40), longitude 236° minimizes the sum of the absolute
angular deviations.
The gravitational field can be approximated as a central inverse-square term, plus an inverse-cube term which has a central and a non-central part. The central inverse-cube term
causes perihelion advancement (see Goldstein's Classical Mechanics, Ch. 3, Exer. 7); the non-central inverse-cube term causes regression of the nodes. Above I found that the
Barbarossa, and known solar system, contributions to the non-central inverse-cube term, have the same derivative w.r.t. z (cylindrical coordinates) at the classical Kuiper Belt. By
Poisson's equation, this also holds for the central inverse-cube term's derivative w.r.t. r (cylindrical coords). So the classical Kuiper Belt lies where, for both node regression and
perihelion advancement, Barbarossa's vs. the known solar system's influences, are equally strong. At 52.6 AU, Barbarossa's contribution to the derivatives of the inverse-cube term, is
stronger than the known solar system's, by a factor of about sqrt(4*pi).
Near the beginning of this discussion of the Kuiper Belt (and ultimately its importance in refuting the Big Bang and orthodox Relativity) Dr. Van Flandern questioned the existence of
any type of barrier at 52.6 AU. Now I can address this objection directly. Let a KBO have semimajor axis 46 AU and eccentricity 0.15. These figures are only slightly more than the
median for KBO samples I've seen. Aphelion would be 53 AU, and the KBO would spend almost twice as much time near aphelion as near perihelion. I think that this is the essence of Dr.
Van Flandern's objection: the rather sudden, drastic reduction in KBOs, reported beyond 52-53 AU, can be real only if major axis and eccentricity are somehow correlated.
In my sample of 85, are 18 with semimajor axis 45 or 46 AU (rounded to the nearest AU); these would require e=0.17 or 0.14, resp., to reach 52.6 AU at aphelion. Four of these have
large eccentricities ranging from 0.37 to 0.66. Seven have e=0.13, 0.14 or 0.15, bringing them almost to 52.6 AU. The remaining seven have e=0.09 or smaller. There is a significant
gap at e=0.10 through 0.12 and e=0.16 through 0.36. In my sample are 25 with semimajor axis 43 or 44; these require e=0.22 or 0.20 resp. to reach 52.6 AU. One has e=0.24, one e=0.29
(both these were 44 AU). The remaining 23 have e <= 0.16 (the sole e=0.16 & e=0.15 both were 43 AUs). Thus all fell at least 0.06 eccentricity units under 52.6 AU or else were at
least 0.04 over.
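The thresholds used above follow directly from aphelion Q = a(1+e), and the earlier "almost twice as much time near aphelion" remark follows from Kepler's second law (time per unit true anomaly scales as r squared). A quick check:

```python
def ecc_to_reach(a, Q=52.6):
    """Eccentricity at which aphelion a*(1+e) just reaches distance Q (AU)."""
    return Q / a - 1.0

# reproduces the thresholds in the text: a=43 -> 0.22, 44 -> 0.20,
#                                        a=45 -> 0.17, 46 -> 0.14
thresholds = {a: round(ecc_to_reach(a), 2) for a in (43, 44, 45, 46)}

# Kepler's 2nd law: d(theta)/dt ~ 1/r^2, so dwell time per unit angle ~ r^2;
# aphelion-to-perihelion dwell ratio for e = 0.15:
e = 0.15
dwell_ratio = ((1 + e) / (1 - e))**2   # about 1.83, i.e. "almost twice"
```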
In the same sample of 85, I shuffled the eccentricities by giving each KBO the eccentricity of the KBO 31,61 or 73 entries ahead. For each of the three shuffles, the aphelia histogram
decreased gradually throughout the range 48-57 AU; nothing special happened at 53 AU nor anywhere else. When I did not shuffle the eccentricities, the aphelia histogram declined only
slightly until almost exactly 52.6, then suddenly began declining by a factor of 2 for each additional AU of distance.
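The shuffle test described above is easy to restate in code. The toy sample below is illustrative only, not the actual 85-object list; the shifts 31, 61, 73 are the ones quoted in the text.

```python
def aphelia(kbos):
    """kbos: list of (semimajor_axis_AU, eccentricity) pairs."""
    return [a * (1 + ecc) for a, ecc in kbos]

def shuffled_aphelia(kbos, shift):
    """Give each object the eccentricity of the entry `shift` places
    ahead (cyclically), as in the shuffles with shift = 31, 61, 73."""
    n = len(kbos)
    return [kbos[i][0] * (1 + kbos[(i + shift) % n][1]) for i in range(n)]

# if a and e are correlated (aphelia piling up just under 52.6 AU), shuffling
# destroys the pile-up; if they are independent, the shuffled histogram of
# aphelia looks just like the real one
sample = [(44, 0.19), (46, 0.14), (43, 0.22), (45, 0.16), (40, 0.05)]
real = aphelia(sample)
mixed = shuffled_aphelia(sample, 2)
```

Comparing histograms of `real` and `mixed` is the whole test: a feature that survives the shuffle is not due to an a-e correlation.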
A theorem of Lagrange (see Poincaré, "New Methods of Celestial Mechanics, 1892-1899," vol. 13 in the AIP History of Modern Physics & Astronomy series, p. 407; also GW Hill, Astronomical
Journal 24(556):27+, 1904) says that in N-body motion, the major axis, i.e. energy, tends to be conserved. If so, then eccentricity changes only through change in the minor axis, i.e.
angular momentum. Prograde vortices of force (i.e. an acceleration vector with positive curl) near 52.6 AU, would impart angular momentum without energy, increasing the minor axis of
bodies approaching that barrier.
Posted - 29 May 2007 : 14:28:27
84 Posts Joe, have you posted since the 25th? The main header says there should be something from the 28th.
Joe Keller
Posted - 29 May 2007 : 17:36:17
USA quote:Originally posted by nemesis
944 Posts
Joe, have you posted since the 25th? The main header says there should be something from the 28th.
Dear Nemesis,
My "It seems that Lescarbault..." post is from the 25th but I revised it yesterday, the 28th. Thanks for checking!
Joe Keller
Joe Keller
Posted - 29 May 2007 : 18:29:52
USA Barbarossa & Frey also appear on the 1987 sky survey (I refer to this survey as "C"). So, Barbarossa & Frey are on the A, B, & C online survey scans, and on "G" (for J. Genebriera's
944 Posts photo). Barbarossa appears on the C plate as "C", the original object I had announced as Barbarossa. Frey appears as an object I had named "C6" in my own notes.
C (Barbarossa, 1987 La Silla red) RA 11 18 03.18 Decl -7 58 46.1
C6 (Frey, 1987 La Silla red) RA 11 17 43.1 Decl -7 48 38.5
The pair C/C6 can be added to the constant-speed great circle drawn through A2/A, B3/B, and Genebriera's Barbarossa/Frey of March 25, 2007. The heliocentric angular speeds from
1954-1986 and 1987-2007 become equal and consistent with observation, when Barbarossa's orbital distance from the sun is 197.664 AU, Barbarossa's orbital period 2810.03 yr, and the
mass ratio Barbarossa:Frey = 0.8936:0.1064 = 8.4:1. Then, the heliocentric angular speed from 1986 (the B plate) to 1987 (the C plate) differs from the speed before or after, by only
0.383%, equivalent to 1.56".
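The constant-speed test above uses the mass-weighted center of the pair's sky positions. A sketch of that one step, treating the small RA/Dec offsets as flat (adequate at roughly 10-arcminute separations) and using the 0.8936 mass fraction quoted from the fit:

```python
def hms_to_deg(h, m, s):
    """Right ascension h:m:s -> degrees."""
    return 15.0 * (h + m / 60.0 + s / 3600.0)

def dms_to_deg(d, m, s):
    """Declination d:m:s -> degrees; the sign rides on d."""
    sign = -1.0 if d < 0 else 1.0
    return sign * (abs(d) + m / 60.0 + s / 3600.0)

# the 1987 "C" plate positions quoted above
barb = (hms_to_deg(11, 18, 3.18), dms_to_deg(-7, 58, 46.1))   # Barbarossa
frey = (hms_to_deg(11, 17, 43.1), dms_to_deg(-7, 48, 38.5))   # Frey

w = 0.8936   # Barbarossa's mass fraction from the fit quoted in the text
com = tuple(w * b + (1 - w) * f for b, f in zip(barb, frey))
```

The fit itself then asks whether the four epoch centers of mass lie on a constant-speed great circle; only the weighting step is shown here.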
The angle between the 1954-1986 path and the 1987-2007 path is only 0.0023 radian. The angle between the 1986-1987 path and the paths before or after, is arcsin(0.11). This error is
consistent with a second Barbarossa satellite, Freya, with period between 2 and 50 years and mass between 1/15 and 1/8 Barbarossa's. I assume that the 1986-1987 path direction error
equals the maximum producible by Freya in circular orbit; the error is likely small between 1954 & 2007, if Freya makes at least one orbit. The relative smallness of the 1986-1987
path length error, suggests that Freya's orbital axis about Barbarossa, lies rather near Barbarossa's orbital plane about the sun, and that Freya's orbit is seen rather edge-on.
The relatively large and comparable masses of Frey & Freya, suggest a complicated three-body orbit. Such an orbit would be needed, because the four Barbarossa-Frey radius vectors do
not lie near any reasonable ellipse.
Joe Keller
Posted - 31 May 2007 : 16:48:51
USA I've found two objects consistent with Freya, Barbarossa's smaller planet:
944 Posts
B2 (1986) RA 11 16 57.0 Decl -7 53 29.6
C3 (1987) RA 11 18 37.6 Decl -7 54 09.5
If the mass ratios Barbarossa:Frey:Freya are adjusted to 14.3:2.04:1, the center-of-mass path ABCG (four time points, 1954-2007) is straight, and constant-speed to a precision
consistent with, assuming an average location on the orbit, eccentricity 0.015. In 1986, Freya appeared 71% as far from Barbarossa as was Frey, and in 1987, 86%.
As assigned, the points are inconsistent with an elliptical orbit for Frey. An alternative to chaotic orbits, is reassigning object "A" as Freya, not Frey. This only slightly affects
the overall fit, and gives three Freys & three Freyas, so elliptical orbits can be drawn.
I've looked at 15'x15' regions from 1954, 1986, 1987 and 2007. The regions chosen were, basically, those consistent with a Barbarossa orbit following that mean Jupiter:Saturn
resonance point nearest the CMB dipole. The above assignments as Barbarossa, Frey and Freya came from among 2400 possible assignments that I considered (a million different
assignments were possible but I considered only the brightest dots as Barbarossa or Frey). There were two more dependent than independent variables to be fit, and these were fit to
about one part in 40 (error / region width), i.e., 1 part in 40^2=1600 overall. (The main lack of perfect fit, is due to the uncertain contribution of the unknown 1954 & 2007 Freyas.)
The 2400 choices were far from stochastically independent. A well- or poorly-fitting choice of Barbarossa, Frey & Freya usually implies a good or poor fit by similar choices. In
effect there were far fewer than 2400 independent choices.
The J:S resonance points are 72° apart, but the (+) CMB dipole lies on Barbarossa's orbit only 2° behind Barbarossa. Because a causal lag is expected, this gives another factor of 36
in significance.
Additional significance arises from the smoothing of the Pioneer acceleration by subtracting Barbarossa's presumed tidal influence, and from the balance between Barbarossa and
planetary tidal (1/r^3) forces at the classic Kuiper Belt.
A week ago I sent another 30 emails to professional astronomers. Of the estimated 200 emails I've sent to professional astronomers about this (over several months' time) only one has
responded. The professional astronomer who responded didn't know my main purpose; I'd only asked him a trivial question.
I've read that Neptune first was observed by two "assistants". These would be the sociological equivalents of graduate students today. So, my new strategy will be to email graduate
students until one simply looks and finds out whether any of this is or is not there.
Joe Keller
Posted - 31 May 2007 : 22:30:23
USA (posted in response to an inquiry on another messageboard - JK)
944 Posts
Thanks for your interest! In the north especially, Leo is getting too far west for the best view. Amateur astronomer Steve Riley in California did get, I think, some photos showing
these objects with an 11", but the time of year was more favorable. I think he had a southern desert, maybe altitude, location somewhere in S. California without too much light
pollution, and he went to a lot of trouble to maximize his magnitude cutoff with the electronic camera (stacking, etc.). If you do make a photo, please send me a "private message";
I'd like to check it! The mass, diameter & albedo aren't really known. That's all theory. So, please look!
The recent image in which I have most confidence is Joan Genebriera's with an 18" and electronic camera on Tenerife, March 25, 2007. I showed it to the president of the Des Moines,
Iowa astronomy club. He did not think it was artifact. By comparison with a sky survey, I estimated the J2000 celestial coordinates as RA 11h 26m 22.2s, Decl -09° 04' 59". I theorize
that the distance from the sun is 198 AU. Here's how to correct for Earth parallax: Joan's photo was slightly after opposition. Your photo will be slightly before quadrature. The
difference in position between opposition & quadrature is 1/198 radian = 57.3/198° = 0.29° = 17 arcminutes ("retrograde motion" due to Earth's motion). So, it's moved less than that.
Also, Barbarossa is, I estimate, moving around the sun about 1/sqrt(198)=1/14 as fast as Earth, which partly compensates. So let's say 10 arcminutes. Find a star chart, find Joan's
point (coordinates given above), then move 10 arcminutes west, parallel to the ecliptic, and aim there. That will be good enough if your field is 15'.
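The pointing correction above is just small-angle arithmetic, with 198 AU as the theorized distance:

```python
import math

DIST_AU = 198.0   # the distance theorized in the post

# apparent opposition-to-quadrature shift from Earth's own ~1 AU displacement
parallax_deg = math.degrees(1.0 / DIST_AU)      # about 0.29 degrees
parallax_arcmin = parallax_deg * 60.0           # about 17 arcminutes

# the object's own orbital speed relative to Earth's, for a circular orbit
# (Kepler: circular speed scales as 1/sqrt(a))
own_speed_ratio = 1.0 / math.sqrt(DIST_AU)      # about 1/14
```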
Joe Keller
Posted - 31 May 2007 : 22:54:52
USA Pulsar constancy is said to rule out acceleration of the sun, relative to the pulsars, greater than about the equivalent of a Jupiter at 200 AU (Zakamska et al, Astronomical Journal
944 Posts 130:1939+, 2005; I got this citation from a member of the "Bad Astronomy" messageboard). My mass estimate for Barbarossa might be 10x too high, or the error of this pulsar method
might be 10x greater than believed.
The above article cites another, about dynamical detection of dark matter in the solar system (DW Hogg, AJ 101:2274+, 1991). Its Fig. 5 and accompanying text indicate that detection
by residual errors in planetary ephemerides would require a Barbarossa at least 0.5 Jupiter mass, likely 2 Jupiter mass or more (maybe much more if there are systematic errors).
Detection by "modeling", i.e., a prospective least-squares fit of ephemeris errors, theoretically (no one has done it with modern data) would require 1/3 that mass. (It might be
easier simply to look.) Localization of Barbarossa within 1 degree would require at least 1.5 Jupiter mass, likely more than 6 Jupiter mass.
Posted - 01 Jun 2007 : 08:32:39
United Kingdom When I think of the aether, I think of something billions of times more "rigid" than steel but I also think of it as a viscoelastic substance. Add to that the idea that half the
964 Posts energy of mass goes to make up the aether of a body. The sun's aether "atmosphere" is centred on the centre of mass of that body but the solar system's as a whole is off centred.
Perhaps there's a tiny variation in aether density, which alters the red shift of pulsars downwards.
Joe Keller
Posted - 01 Jun 2007 : 15:16:04
USA quote:Originally posted by Stoat
944 Posts
When I think of the aether, I think of something billions of times more "rigid" than steel but I also think of it as a viscoelastic substance. Add to that the idea that half the
energy of mass goes to make up the aether of a body. The sun's aether "atmosphere" is centred on the centre of mass of that body but the solar system's as a whole is off centred.
Perhaps there's a tiny variation in aether density, which alters the red shift of pulsars downwards.
Thanks for posting. I think these are excellent insights. I might be able to elaborate on them.
- JK
Joe Keller
Posted - 01 Jun 2007 : 17:49:10
USA Zakamska & Tremaine (op. cit., Astronomical Journal 130:1939+, 2005) say that pulsar timing shows that the sun is not accelerated by any pull as strong as a brown dwarf (e.g., a 10
944 Posts Jupiter-mass object at 600 AU). Because they did not always average large numbers of pulsars, their work shows also that (millisecond) pulsars are not accelerated by any pull as
strong as a brown dwarf. (They adjusted for the sun's planets, and for the pulsars' known stellar companions. Pulsars rarely have planets; apparently this is, in part, because
supernovas destroy planets, at least nearby ones.)
Brown dwarf companions are common, perhaps usual. Only unusually bright and separated brown dwarf companions have been detectable, but even so, a star twelve light-years away has been
found to have two of these. Brown dwarfs survive supernovas because they rarely orbit closer than 40 AU (often 1000 AU).
Contemporary theory is, that pulsars lack brown dwarf companions because pulsars receive at birth an impulse or "kick" which causes them to leave behind distant planets and
companions, e.g., brown dwarfs. Usually only close stellar companions are durable enough and tightly bound enough to remain. Without such a companion, the pulsar is "ordinary". On the
other hand, such a companion eventually ages out of the main sequence, massively interacts with the pulsar, and transforms the pulsar into a "recycled" or even a "millisecond" pulsar.
Let's challenge contemporary theory (see: Lorimer & Kramer, Handbook of Pulsar Astronomy, 2005, p. 30). "Millisecond" pulsars always should keep the white dwarf companions which,
as giants, provided the mass to spin up those millisecond pulsars. Yet sometimes millisecond pulsars are isolated, i.e., they lack Doppler evidence of any companion whatsoever. The
somewhat slower merely "recycled" pulsars whose companions went supernova (providing less total mass though in a much shorter time), often should lack said partners, because of the
"kick". Yet seldom are such recycled pulsars without a neutron-star companion.
A possible resolution is, that there is no "kick". The distance-age relationship of pulsars could be interpreted as constant small acceleration rather than constant velocity (see:
Lyne & Graham-Smith, Pulsar Astronomy, 2006, Fig. 8.7). The young neutron-star companions of "recycled" pulsars always are there, and almost always are detectable because they tend to
be close, at least in perihelion due to their eccentricity. The white dwarf companions of "millisecond" pulsars also always are there, but are undetectable by Doppler shift unless
closer than some limiting distance beyond which the Doppler shift does not occur ("ether iceberg"). The limiting distance happens to equal the distance at which escape speed equals
"kick" speed, and this limiting distance continually decreases. "Ordinary" pulsars usually have companions but these rarely are close enough to be on the "ether iceberg" and cause any
Doppler effect; otherwise they probably would have interacted and the pulsar would have become not ordinary. This amounts to a partial repudiation of orthodox special relativity
beyond a certain distance from the star.
New Pi Computation Record Using a Desktop PC
kdawson posted more than 4 years ago | from the more-digits-than-you dept.
hint3 writes "Fabrice Bellard has calculated Pi to about 2.7 trillion decimal digits, besting the previous record by over 120 billion digits. While the improvement may seem small, it is an
outstanding achievement because only a single desktop PC, costing less than $3,000, was used — instead of a multi-million dollar supercomputer as in the previous records."
Verification (3, Interesting)
Tukz (664339) | more than 4 years ago | (#30652224)
I didn't read the article, only the summary, but it made me wonder.
Do they verify these numbers somehow?
Anyone can write down a series of numbers and claim it's a specific sequence.
Not saying these numbers aren't correct, just a thought.
One thing to say (5, Informative)
DirtyCanuck (1529753) | more than 4 years ago | (#30652250)
From the FAQ
"How does your record compare to the previous one?
The previous Pi computation record of about 2577 billion decimal digits was published by Daisuke Takahashi on August 17th 2009. The main computation lasted 29 hours and used 640 nodes of a T2K Open
Supercomputer (Appro Xtreme-X3 Server). Each node contains 4 Opteron Quad Core CPUs at 2.3 GHz, giving a peak processing power of 94.2 Tflops (trillion floating point operations per second).
My computation used a single Core i7 Quad Core CPU at 2.93 GHz giving a peak processing power of 46.9 Gflops. So the supercomputer is about 2000 times faster than my computer. However, my computation
lasted 116 days, which is 96 times slower than the supercomputer for about the same number of digits. So my computation is roughly 20 times more efficient. It can be explained by the following facts:
* The Pi computation is I/O bound, so it needs very high communication speed between the nodes on a parallel supercomputer. So the full power of the supercomputer cannot really be used.
* The algorithm I used (Chudnovsky series evaluated using the binary splitting algorithm) is asymptotically slower than the Arithmetic-Geometric Mean algorithm used by Daisuke Takahashi, but it makes
a more efficient use of the various CPU caches, so in practice it can be faster. Moreover, some mathematical tricks were used to speed up the binary splitting. " ( http://bellard.org/pi/pi2700e9/
faq.html [bellard.org] )
Mathematical and Programming Ownage.
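The method named in the FAQ, the Chudnovsky series evaluated by binary splitting, can be sketched at toy scale with Python's arbitrary-precision integers and decimals. This is a minimal illustration of the algorithm's structure, not Bellard's implementation; record-scale runs add disk-based FFT multiplication and checkpointing, all omitted here.

```python
from decimal import Decimal, getcontext

def pi_chudnovsky(digits):
    """Chudnovsky series summed by binary splitting. Each series term adds
    about 14.18 digits; P, Q, T combine subranges so the huge rational sum
    is built by balanced multiplications instead of term-by-term division."""
    C3_OVER_24 = 640320**3 // 24

    def bs(a, b):
        # returns P, Q, T for terms a .. b-1 of the series
        if b - a == 1:
            if a == 0:
                P = Q = 1
            else:
                P = (6 * a - 5) * (2 * a - 1) * (6 * a - 1)
                Q = a * a * a * C3_OVER_24
            T = P * (13591409 + 545140134 * a)
            if a & 1:            # the series alternates in sign
                T = -T
            return P, Q, T
        m = (a + b) // 2
        P1, Q1, T1 = bs(a, m)
        P2, Q2, T2 = bs(m, b)
        return P1 * P2, Q1 * Q2, T1 * Q2 + P1 * T2

    n = digits // 14 + 2         # ~14.18 digits per term, with headroom
    P, Q, T = bs(0, n)
    getcontext().prec = digits + 10
    return Decimal(426880) * Decimal(10005).sqrt() * Q / T
```

`pi_chudnovsky(100)` agrees with pi to about 100 digits in well under a second; the binary-splitting tree is what lets the same recurrence scale to trillions of digits when paired with fast big-integer multiplication.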
Re:One thing to say (1, Insightful)
| more than 4 years ago | (#30652414)
Interesting, but it didn't really answer the question.
Re:One thing to say (0)
| more than 4 years ago | (#30652502)
They ran some verification tests, but admitted that since his memory doesn't have error correction there's some possibility of error. The initial verification test crashed one of the computers, and
would have taken quite some time to re-complete.
Re:One thing to say (3, Interesting)
HateBreeder (656491) | more than 4 years ago | (#30652804)
I would assume he only needs to verify the last 120 billion digits.
Assuming his algorithm can support serialization of its state into a check-point, he can simply recalculate the last 120 billion digits a couple of times and compare.
Assuming linear time to compute each digit: 120e9/2.7e12 * 116 =~ 5 days. not too bad.
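The parent's estimate is simple proportional scaling, and it assumes cost roughly linear per digit, which these algorithms only approximate:

```python
total_digits = 2.7e12     # size of the full computation
new_digits = 120e9        # digits beyond the previous record
full_run_days = 116       # wall time of the full run

# back-of-envelope: re-checking only the new digits at the same per-digit
# cost (an approximation; real per-digit cost is not constant)
est_days = new_digits / total_digits * full_run_days   # about 5.2 days
```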
Re:One thing to say (2, Interesting)
Lord Lode (1290856) | more than 4 years ago | (#30653228)
Hmm, for such a record attempt, do you actually have to calculate all these earlier digits? They're already known. Can anyone prove the computer calculated the already known digits first (instead of
getting them from a table) before finally getting to the 120 million new ones?
Re:One thing to say (1)
MrMr (219533) | more than 4 years ago | (#30653250)
Worse than that, I find it extremely unlikely that not a single bit failed in uncorrected memory during the whole 116 days...
see for instance:here [cnet.com]
Re:One thing to say (4, Funny)
| more than 4 years ago | (#30652436)
HE USED TRICKS!!!!1111Burn the witch...eehh communist...eeeh climate researcher...eeeeh....PI guy.
Re:One thing to say (1)
PingPongBoy (303994) | more than 4 years ago | (#30652440)
The Pi computation is I/O bound, so it needs very high communication speed between the nodes on a parallel supercomputer. So the full power of the supercomputer cannot really be used.
The algorithm I used (Chudnovsky series evaluated using the binary splitting algorithm) is asymptotically slower than the Arithmetic-Geometric Mean algorithm used by Daisuke Takahashi, but it makes a
more efficient use of the various CPU caches, so in practice it can be faster.
Before the next Top 500 list is published in June, maybe the benchmarking needs to be shaken up a little, huh? Intuition suggests the supercomputer could be reprogrammed to calculate more digits by a
few orders of magnitude. We still have a lot to learn when it comes to taking advantage of multiprocessors, as well as algorithms.
Re:One thing to say (4, Interesting)
PinkyGigglebrain (730753) | more than 4 years ago | (#30652468)
An answer is a reply but a reply is not always an answer.
Re:One thing to say (4, Insightful)
digitalhermit (113459) | more than 4 years ago | (#30652682)
In another thread someone had posted that there was no reason for any modern CPUs; the idea being that anything one could reasonably want to do with a computer was possible with decade old hardware.
This.. *This* article is why I enjoy the breakneck pace of processor speed improvements. The thought of being able to do some pretty serious computing on a relatively inexpensive bit of hardware --
even if it takes half a year to get results -- does what the printing press did. It allows the unwashed masses (of which I am one) a chance to do things that were once only the realm of researchers
in academia or the corporate world. Sure, all that you need to do some serious mathematics is a pen and paper, but more and more discoveries occur using methods that can only be performed with a computer.
There's always the argument that cheap computers and cheap access to powerful software pollutes the space with hacks and dilettantes. People have said this about desktop publishing, ray tracing, and
even the growth of Linux. But it's this ability to do some amazing things with computers that makes it all worthwhile.
Re:One thing to say (0)
| more than 4 years ago | (#30652872)
Well, it is a neat thing that he has done, but what are you going to compare those 2.7 trillion digits to? What kind of problem can you solve only if you know the 2,000,000,000,000th digit of pi?
But you are right about the CPUs. Anyone who thinks that CPUs are good enough would have to explain why most mobile devices run out of battery charge in less than a day even though their UIs are
sluggish and their apps not very advanced. We need a little bit more speed and a lot less energy consumption.
Re:One thing to say (1)
Ego_and_his_own (1704208) | more than 4 years ago | (#30653020)
There is something in Pi that is far beyond numbers. It hides an answer to some questions for sure. It puts math on the abstract side of reality. Pi is weird....
Re:One thing to say (3, Funny)
Le Marteau (206396) | more than 4 years ago | (#30653184)
> It hides an answer to some questions for sure.
Could be. I'm not sure the answer will be in base 10, though.
Maybe, in base 36, beginning at the trillionth digit, pi is:
That would be amazing.
Re:One thing to say (1)
Le Marteau (206396) | more than 4 years ago | (#30653242)
Seriously, though... rather than going off trying to push the limit on how many digits can be cranked out for pi, would it not be more interesting and perhaps more fruitful to search for
patterns, in numerous bases, in pi? Maybe there really IS "an ultimate answer" in some other base, just waiting for some geek in his mother's basement with an old Packard Bell to invest the energy.
Or maybe pi will end up just being a bunch of zeros or something after X many digits.
Either way, I'm sure it would make the front page of /.
Re:One thing to say (1)
dch24 (904899) | more than 4 years ago | (#30653118)
Come on... think a little.
What does large number theory, factorization, optimizations that offer 2000x speedups in this field, and specific information for desktop computers ... what is it about?
Go watch Sneakers [imdb.com] again.
Re:Verification (3, Informative)
msclrhd (1211086) | more than 4 years ago | (#30652412)
In TFA (especially the PDF), the verification method is to use another algorithm to check the output. The PDF on Fabrice's home page goes into more details.
NOTE: The machine they were using to generate the second result broke, so they used another (3rd) algorithm to generate the last digits.
Re:Verification (4, Interesting)
David Jao (2759) | more than 4 years ago | (#30652444)
I didn't read the article, only the summary, but it made me wonder.
Do they verify these numbers somehow? Anyone can write down a series of numbers and claim it's a specific sequence.
Not saying these numbers aren't correct, just a thought.
Perhaps this is why you should read the article. The press release [bellard.org] answers this question directly.
The binary result was verified with a formula found by the author with the Bailey-Borwein-Plouffe algorithm which directly gives the n'th hexadecimal digits of Pi. With this algorithm, the last
50 hexadecimal digits of the binary result were checked. A checksum modulo a 64 bit prime number done in the last multiplication of the Chudnovsky formula evaluation ensured a negligible
probability of error.
The conversion from binary to base 10 was verified with a checksum modulo a 64 bit prime number.
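The BBP formula mentioned in the press release lets you spot-check hexadecimal digits of pi directly, without computing any earlier ones. A minimal Python sketch (float precision limits it to modest positions; the real verification used careful fixed-point arithmetic, so this is illustrative only):

```python
def bbp_term(j, n):
    """Fractional part of sum_k 16**(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):
        d = 8 * k + j
        # modular exponentiation keeps only the fractional contribution
        s = (s + pow(16, n - k, d) / d) % 1.0
    for k in range(n + 1, n + 15):   # small tail where 16**(n-k) < 1
        s += 16.0 ** (n - k) / (8 * k + j)
    return s % 1.0

def pi_hex_digit(n):
    """The n-th hexadecimal digit of pi after the point (n >= 1)."""
    m = n - 1
    x = (4 * bbp_term(1, m) - 2 * bbp_term(4, m)
         - bbp_term(5, m) - bbp_term(6, m)) % 1.0
    return "0123456789abcdef"[int(x * 16)]

print("".join(pi_hex_digit(i) for i in range(1, 9)))
```

The trick is that 16^(n-k) mod (8k+j) can be computed cheaply with modular exponentiation, so the n-th hex digit costs roughly O(n log n) work and almost no memory.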
Re:Verification (1)
Xest (935314) | more than 4 years ago | (#30652528)
You don't need to verify that the number is correctly pi to the given digits, merely verify that the algorithm calculates the digits of pi correctly.
The algorithm can be proven correct in a number of relatively quick and easy ways.
The algorithm is really also arguably the most important part anyway rather than the digits themselves because it's the part of most use.
Re:Verification (1)
KowShak (470768) | more than 4 years ago | (#30652688)
The mathematical algorithm may be correct, but is the implementation of it correct and how do you verify it?
A computer program can not be proven to be correct in a number of relatively quick and easy ways.
Re:Verification (1)
SlothDead (1251206) | more than 4 years ago | (#30652972)
That does not really matter at all. It's not about getting a lot of correct digits of Pi.
Re:Verification (1)
dch24 (904899) | more than 4 years ago | (#30653124)
Breaking the record brings attention to the algorithm.
It means that the algorithm will get noticed.
Re:Verification (2, Interesting)
Xest (935314) | more than 4 years ago | (#30653196)
The implementation (compiled or uncompiled) is in itself an algorithm which can equally be checked, because the language follows pre-defined logical rules which may act as axioms; or, depending on the details of the algorithm, it may be trivial to just use induction.
It's not like we're checking a full operating system or office suite here, so size isn't a restrictive problem in such a proof.
It may be that the processor itself hasn't been checked, so the results of executing that algorithm aren't correct either, but again, when it's the algorithm that matters, who cares? We know the
specifications of the language, which may effectively act as axioms in a proof. The compiler may not be valid, certainly, but as long as the algorithm (yes, mathematical and implementation) is correct
then that is what matters.
It is down to anyone then using the algorithm to ensure the other layers are correct enough for their purposes.
Re:Verification (0)
| more than 4 years ago | (#30652936)
hrmph..., by your reasoning software bugs should not have to exist.
You would instantaneously receive a Turing Award and a Nobel prize for Economics if you are able to keep your promise.
Re:Verification (1)
Xest (935314) | more than 4 years ago | (#30653060)
No, because most software is large and complex, doing things like mathematical induction on all code is infeasible for this reason.
In contrast, code to calculate something like this is relatively extremely small.
Effectively, the reason your argument doesn't hold is that although we can fairly trivially prove some algorithms correct, the method doesn't scale to pretty much any piece of modern software.
Re:Verification (0)
| more than 4 years ago | (#30652738)
It'll hardly matter when it's rounded to two significant places.
Poster must work in banking (1)
dredwerker (757816) | more than 4 years ago | (#30652230)
if they think 1.2 billion is small
Thats nice and all... (2, Funny)
fliptw (560225) | more than 4 years ago | (#30652232)
But will it help us in getting flying cars?
Re:Thats nice and all... (5, Funny)
LostCluster (625375) | more than 4 years ago | (#30652284)
Only if you avoid the square routes.
Finally! (5, Funny)
pEBDr (1363199) | more than 4 years ago | (#30652266)
Now I can finally get somewhat reasonable precision when calculating the radius of stuff!
Re:Finally! (1)
Opportunist (166417) | more than 4 years ago | (#30652316)
It's still waaaaaaaay off.
Can't get precision anymore these days, everything's just rush-rush, nobody takes the time to do it right...
Re:Finally! (1)
WillyDavidK (977353) | more than 4 years ago | (#30652454)
Too bad you'll need to upgrade your TI-83 to about 1.2TB of storage space to manipulate the number! And I hope you have some AAs that can last a few years under load...
Re:Finally! (0)
| more than 4 years ago | (#30652558)
I use a Casio FX-9860G, You insensitive clod!
Re:Finally! (0)
| more than 4 years ago | (#30652776)
HP48GX FTW! (I know
Re:Finally! (2, Funny)
asc99c (938635) | more than 4 years ago | (#30652758)
You'd also need something to prop up the other end of the new extended screen to display the number - Venus should be in about the right place when it gets a bit closer!
Re:Finally! (2, Interesting)
| more than 4 years ago | (#30652960)
There is a program package for Linux called Sage math where you can get a lot of digits in your constants. For example, to accurately calculate the circumference of a circular table with diameter=
1000 mm you could type:
All you need now is a decent measuring tape...
These two also work, in case you're worried about not getting good enough accuracy when you calculate Fourier coefficients or something:
Since Sage sets up a web server on your computer you can even do this inside a decent phone web browser, so you can get that precision out in the field, where you need it. :-)
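The actual Sage commands did not survive in the comment above. An equivalent sketch using Python's standard-library decimal module instead (an assumption, not the poster's code; the 50-digit pi constant is hardcoded):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # work with 50 significant digits

# pi to 50 decimal places, as a literal
PI = Decimal("3.14159265358979323846264338327950288419716939937510")

diameter_mm = Decimal(1000)
circumference = PI * diameter_mm   # circumference of the table, in mm
print(circumference)
```

In Sage the equivalent one-liner would use its arbitrary-precision reals; the point either way is that the precision of the constant, not the formula, is the knob you turn.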
Re:Finally! (0)
| more than 4 years ago | (#30653008)
yeah, better not use the TI-83 then, because of the lack of a USB port. It would take some heavy cracking to get an external hard drive through that minijack. I'd recommend the TI-89 since it's got a
mini USB port.
Re:Finally! (1, Funny)
| more than 4 years ago | (#30653240)
I still use the "old math", where you "measure" the radius. Now you kids calculate the radius with pi?
BTW, get off my lawn ...
Wow... (1)
LostCluster (625375) | more than 4 years ago | (#30652274)
They figured this out... they post some, not all of the data, and therefore survive the slashdotting.
Re:Wow... (3, Interesting)
MichaelSmith (789609) | more than 4 years ago | (#30652288)
Plain html is a wonderful thing. And as he points out, it would be easy to write a cgi script which returns a specified block of digits.
I wonder if he has checked for the circle?
Re:Wow... (1)
kae_verens (523642) | more than 4 years ago | (#30652646)
lol! nice. and for those that don't get it, read Contact by Carl Sagan.
Re:Wow... (1, Funny)
| more than 4 years ago | (#30652686)
Plain html is a wonderful thing.
yes, but why not do it in javascript? generating those digits on the fly is much more efficient from a slashdotting perspective.
Re:Wow... (1)
CraterGlass (893417) | more than 4 years ago | (#30653080)
Someone should check it for ASCII codes. Somewhere in there it probably says "Help! I'm being held prisoner in a universe factory!". (kudos to XKCD)
Re:Wow... (1)
MichaelSmith (789609) | more than 4 years ago | (#30653198)
Though I don't know what we would do with such a message. Might be a bit late to help.
this guy has a pretty impressive track record (5, Informative)
Trepidity (597) | more than 4 years ago | (#30652280)
For those not previously familiar with Fabrice Bellard, he's known for:
• LZEXE [bellard.org], very popular in the early 1990s as the first EXE-shrinker for DOS, or at least the first widely available one
• ffmpeg [ffmpeg.org], video decoding library which he started and headed for a number of years
• QEMU [nongnu.org], dynamic-translating generic emulator
Re:this guy has a pretty impressive track record (5, Informative)
msclrhd (1211086) | more than 4 years ago | (#30652492)
He also wrote the Obfuscated Tiny C Compiler (http://bellard.org/otcc/) in 2002 for the Obfuscated C contest, where otcc could compile itself. This became the Tiny C Compiler (TCC), which was picked
up by Robert Landley (though subsequently dropped a while later) and is a capable, fast C90/C99 compiler.
His projects page (http://bellard.org/) and the older projects (http://bellard.org/projects.html) contain a lot of interesting projects.
Also of note: Fabrice achieved the record for Pi computation in 1997 as well:
http://bellard.org/pi/pi_hexa.html [bellard.org]
http://bellard.org/pi-challenge/announce220997.html [bellard.org]
http://bellard.org/pi/ [bellard.org]
I'm not impressed - Superman was faster than a (2, Funny)
anti-NAT (709310) | more than 4 years ago | (#30652540)
speeding bullet, and was able to leap tall buildings in a single bound. Fabrice needs to lift his game.
Specs from the PC in question (4, Informative)
c0mpliant (1516433) | more than 4 years ago | (#30652302)
• Core i7 at 2.93 GHz
• 6 GB RAM
• 5 × 1.5 TB hard drives (at least 7.2 TB needed to store the final result and base conversion)
He will be releasing the program he created for Windows (64bit only) and Linux
Re:Specs from the PC in question (2, Informative)
l0b0 (803611) | more than 4 years ago | (#30652970)
He will be releasing the program he created for Windows (64bit only) and Linux
PS: Not the source [bellard.org]
He needs some help... (5, Funny)
LostCluster (625375) | more than 4 years ago | (#30652320)
1 TB data files... somebody needs to help him with the compression! Oh, wait a minute.
Re:He needs some help... (1)
phantomfive (622387) | more than 4 years ago | (#30652636)
Wait, I have it right here! [wordpress.com] (could have gotten it inline, but slashdot doesn't do unicode, sorry).
silly (1, Flamebait)
dsanfte (443781) | more than 4 years ago | (#30652332)
There is an algorithm now for calculating the nth digit of Pi at a whim.
This is slightly retarded.
Re:silly (5, Informative)
Trepidity (597) | more than 4 years ago | (#30652348)
As he points out himself, he doesn't really care about calculating digits of Pi; it's a convenient hook on which to hang an interesting algorithms challenge. From the FAQ:
I am not especially interested in the digits of Pi, but in the various algorithms involved to do arbitrary-precision arithmetic. Optimizing these algorithms to get good performance is a difficult
programming challenge.
He also mentions elsewhere that of his code, "The most important part is an arbitrary-precision arithmetic library able to manipulate huge numbers stored on hard disks."
Re:silly (1)
i.of.the.storm (907783) | more than 4 years ago | (#30652512)
Yeah, it's more interesting how he exploited caching to good effect with his algorithm. This is not something we hear about all that often.
Re:silly (3, Insightful)
David Jao (2759) | more than 4 years ago | (#30652428)
There is an algorithm now for calculating the nth digit of Pi at a whim.
The algorithm [wikipedia.org] only works for hexadecimal digits. There is no known formula or algorithm for calculating the n-th decimal digit directly.
Having said that, the existence or non-existence of an n-th digit algorithm does not have any relevance on the silliness or non-silliness of computing trillions of digits of pi, unless the algorithm
is extremely trivial (i.e. computing the digit takes less CPU time than a byte of I/O), which is not the case here.
Re:silly (4, Informative)
Ambiguous Puzuma (1134017) | more than 4 years ago | (#30652842)
There is no known formula or algorithm for calculating the n-th decimal digit directly.
What about this [arxiv.org]?
I present here a way of computing the nth decimal digit of pi (or any other base) by using more time than the [BBP] algorithm but still with very little memory.
Re:silly (2, Interesting)
Bacon Bits (926911) | more than 4 years ago | (#30652782)
Knowing how to calculate the nth digit of Pi itself is slightly retarded.
The observable universe is about 50 billion light years across, which is about 4.27 * 10^26 meters. If we take a ring of atoms each roughly 1 Angstrom (10^-10 meters) apart with a diameter the size
of the observable universe and want to determine the circumference of the resulting circle, then knowing Pi to 40 or so places is sufficient that the error caused by the atoms themselves is greater
than that introduced by using an approximate value for Pi. Knowing Pi to 40 or so places is sufficient that you can calculate the difference in circumferences of the inner diameter of the ring and
outer diameter of the ring.
Knowing Pi to 40 places is basically sufficient for describing our entire universe and anything you could put into it. We've known the first 35 for four hundred years, and we've never needed that
much information to describe our universe.
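The arithmetic in the comment above is easy to check with Python's decimal module (the 4.27e26 m diameter figure and the one-Angstrom atom spacing are taken from the comment; the pi constants are hardcoded):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60

# pi to 50 places, and pi truncated to 40 places
PI_50 = Decimal("3.14159265358979323846264338327950288419716939937510")
PI_40 = Decimal("3.1415926535897932384626433832795028841971")

d = Decimal("4.27e26")        # diameter of the observable universe, in metres
error = (PI_50 - PI_40) * d   # resulting error in the circumference, in metres
angstrom = Decimal("1e-10")
print(error < angstrom)       # the error is far smaller than one atom spacing
```

Truncating pi at 40 places costs less than 10^-40 in relative terms, so the circumference error comes out around 3e-14 m, a few thousand times smaller than the 1e-10 m atom spacing.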
Re:silly (0)
| more than 4 years ago | (#30652870)
Yes, 40 digits is good for everything. As long as you never do any sort of iterative process.
Re:silly (1)
AlecC (512609) | more than 4 years ago | (#30653142)
But pi seems to have to do with more than geometry. It seems to be embedded in the structure of space and particle physics in some way we do not really understand. That said, of course we will never
be able to make measurements whose total range is more than a few tens of orders of magnitude, so for physics, a short approximation is adequate. But mathematics exists in its own right - the fact
that it serves physics has always been secondary (to mathematicians).
Re:silly (1)
krou (1027572) | more than 4 years ago | (#30653234)
I thought it was 42 digits?
Re:silly (0)
| more than 4 years ago | (#30652920)
There is an algorithm now for calculating the nth digit of Pi at a whim.
Not in decimal, there isn't. The Borwein-Bailey-Plouffe algorithm [wolfram.com] only works on base 16. There are others [wolfram.com] for base 2, 64, and 729, but not 10.
Re:silly (1)
Tim C (15259) | more than 4 years ago | (#30652922)
That's like saying that the fact that we have horses and cars, etc, make hiking or cycling anywhere retarded.
Sometimes the point isn't the destination, it's the journey.
Re:silly (1)
AlecC (512609) | more than 4 years ago | (#30653134)
And he used that algorithm to verify his result by checking the last 50 digits. But, presumably, the algorithm he used for calculating the whole block is faster than this arbitrary-digit algorithm, so the latter would not have generated his billions of digits in the time he actually took. The achievement is to calculate all those digits on relatively low-powered hardware, not any particular bits.
So... umm... (-1)
Opportunist (166417) | more than 4 years ago | (#30652334)
Well, 'til now I saw the Pi-calculating e-peen waving as something like basic research. Ya know, where you build better computers and then you don't find anything sensible to do with them, so let's
have them, say, find the next big prime (ok, being in cryptography I can see an application for that...) or have them calculate Pi to another few more billion spaces. Ya know, something that takes
ages and is a great burn-in test.
This time it's not a better computer. He used an existing computer and (again, I have to say) found a better algorithm for something. A better algo that made better use of the architecture. And while
great, it does not really serve any purpose, unless knowing Pi to another few more billion spaces actually is a purpose.
Could someone fill me in what purpose that may be?
Re:So... umm... (3, Insightful)
MichaelSmith (789609) | more than 4 years ago | (#30652352)
Could someone fill me in what purpose that may be?
Re:So... umm... (4, Informative)
JasterBobaMereel (1102861) | more than 4 years ago | (#30653236)
Basic research ..... you know that stuff that has no useful application now .....especially maths
Like group theory, invented in 1832 by Évariste Galois, had no really useful application until the mid 20th century ... Now quantum mechanics and so most of modern electronics uses it ....
Re:So... umm... (3, Informative)
Trepidity (597) | more than 4 years ago | (#30652366)
He mentions in the "press release" page that the most important thing developed in his code is "an arbitrary-precision arithmetic library able to manipulate huge numbers stored on hard disks", which
sounds basic-research-y. There's some more on that in the technical-details PDF, although unfortunately he says he doesn't plan to release the code (somewhat unusual, since most of his projects are
free software).
Re:So... umm... (0, Insightful)
| more than 4 years ago | (#30653200)
although unfortunately he says he doesn't plan to release the code (somewhat unusual, since most of his projects are free software).
More than unusual - it also means that for all practical purposes, his record is worthless. If we cannot look at the program he used to calculate these digits and verify (i.e., prove) that it's
actually correct, what have we actually gained?
Without the program OR the data, all we really have is one guy's claim that he set a new world record, in secret, with the result not even available.
Now, I have no reason to distrust Bellard, and I don't really doubt he really did what he claims to have done; make no mistake about that. I don't think he's lying or anything, but I'd like to be
able to verify what he did for myself, or at least have the possibility to. That's what science works like.
Re:So... umm... (1)
msclrhd (1211086) | more than 4 years ago | (#30652550)
Improving the algorithms for arbitrary precision arithmetic -- that is the area that Fabrice is interested in, not necessarily computing X number of digits of pi. That, and (a) it is interesting, (b)
it is a challenge and (c) let's do it for fun.
Did he find a message? (1)
wisebabo (638845) | more than 4 years ago | (#30652424)
I believe in "Contact" (the book by Carl Sagan, not the movie), the travelers ask the superintelligent aliens "Do you believe in God?" To which they reply: "Yes." When asked why, they say "We have
proof" in the finding of a message in a transcendental number (pi?).
After reading the Wikipedia summary I understand that when the travelers come home and are accused of fabricating the whole thing, one of them tries to "find" this message by running their own
computer program. She finds a message, or does she? Is it just a (very unlikely?) statistical fluke? What is noise and what is message when you are dealing with a literally infinitely long string of
numbers? (Wasn't this also the plot behind one of Stanislaw Lem's books?).
I guess if he found a message the news would be all over the place by now so he didn't find a message (or maybe he's just keeping the insights to himself for stock market gains like in the movie
"Pi"). Anyway, how DO you go about finding patterns in a finite (if you can call 2.7 trillion finite!) string of numbers?
Re:Did he find a message? (1)
InlawBiker (1124825) | more than 4 years ago | (#30652486)
How could you not?
Re:Did he find a message? (2, Funny)
Dahamma (304068) | more than 4 years ago | (#30652570)
Exactly! And hence the discovery of our blessed lady of the grilled cheese sandwich...
Re:Did he find a message? (0)
| more than 4 years ago | (#30652848)
You must be a LISP programmer (or am I mistaken?)
So.... (0, Troll)
WillyDavidK (977353) | more than 4 years ago | (#30652474)
So what exactly has been accomplished here?
Re:So.... (2, Informative)
Fjodor42 (181415) | more than 4 years ago | (#30652596)
Read... The... Fine... (wait for it) Article!
Spoiler alert!
He developed a highly efficient library for arbitrary precision floating-point number calculations, capable of having a desktop machine best a supercomputer. Now go change your signature to "For lack
of a better question..." ;-)
2.7 trillion digits (1)
asterix_2k1 (781702) | more than 4 years ago | (#30652554)
..ought to be enough for everybody.
not something revolutionary (0)
| more than 4 years ago | (#30652562)
from the article :
Technologies relevant to the objectives of the TX program can be found in numerous disciplines and areas of research including: adaptive wing structures, ducted fan propulsion, lightweight composite
materials, advanced flight control technology for stable transition from vertical to horizontal flight, hybrid electric drive, advanced batteries, and others.
so no, they didn't waterboard the greenies :(
Pattern? (0, Redundant)
Silpher (1379267) | more than 4 years ago | (#30652584)
If you would put the outcomes in a graph would a pattern arise? If there is a predictable pattern perhaps computation could go a lot faster, but then I guess they would have figured that out by now.
( Or maybe I should call him? :P )
Re:Pattern? (2, Interesting)
Trepidity (597) | more than 4 years ago | (#30652626)
Depends on what you mean by "pattern", of course, but pi is conjectured to be normal [wikipedia.org], which would exclude many sorts of patterns. It's not proven, though.
Re:Pattern? (1)
Anubis IV (1279820) | more than 4 years ago | (#30652674)
It's pi, man. There are no patterns.
Re:Pattern? (1)
Silpher (1379267) | more than 4 years ago | (#30652764)
So if you would include pi in your fractal equation you'll get an infinite world with infinite diversity?
Re:Pattern? (1)
vorlich (972710) | more than 4 years ago | (#30652814)
There's a pattern to be found in everything provided you cut the logos and labels off your black 501's and never, ever enter a Tommy Hilfiger store.
Re:Pattern? (1)
PhunkySchtuff (208108) | more than 4 years ago | (#30653076)
Yeah, and if nothing else, watch out for the Michelin Man (Bibendum)
Too bad... (5, Funny)
hallux.sinister (1633067) | more than 4 years ago | (#30652628)
Only another four hundred billion decimal places, and they would have found the last one!
In others news, SuperPi 1M in less than 7 seconds (1)
majorme (515104) | more than 4 years ago | (#30652792)
Intel's new CPUs can calculate SuperPi 1M in less than 7 seconds [anandtech.com] when clocked at 6234.8MHz
fabrice BELLARD (1)
Antiocheian (859870) | more than 4 years ago | (#30652822)
Wasn't he the guy who developed lzexe ?
Anyway, what's with surnames spelled in caps ? Does he say "I am fabrice" and then he screams "BELLARD" when stating his name ?
Re:fabrice BELLARD (0)
| more than 4 years ago | (#30652944)
Because they understand that, in some cultures (such as the East Asian ones), surnames come first, and given names second. They just coded the information in capitalization.
Re:fabrice BELLARD (-1, Redundant)
| more than 4 years ago | (#30653040)
It's a French tradition to do so.
Re:fabrice BELLARD (-1, Redundant)
| more than 4 years ago | (#30653042)
It's just how the French do it. Kinda makes it easier to work out which is the surname.
Re:fabrice BELLARD (1, Informative)
| more than 4 years ago | (#30653066)
I read something recently on this, it's (apparently) a French convention, presumably to make it clear which name is the surname. I don't know much about it, just something I saw on the Planet Debian
http://gwolf.org/blog/internationalizing-your-local-customs [gwolf.org]
Re:fabrice BELLARD (1)
PhunkySchtuff (208108) | more than 4 years ago | (#30653098)
http://bellard.org/ [bellard.org]
He's also the guy who launched ffmpeg [ffmpeg.org] and is working on Qemu [nongnu.org], among other things...
Re:fabrice BELLARD (0, Redundant)
AlecC (512609) | more than 4 years ago | (#30653156)
It is just the customary way in France. You see it all the time on French technical sites, and if you get email from French academics. Just a national quirk.
Anyone tried this on Maple? (1, Interesting)
| more than 4 years ago | (#30652884)
Has anyone tried to calculate pi to an ungodly precision on Maple/Mathematica/Matlab/Macsyma/etc.?
I wonder if it is even possible on a computer of this guy's specs?
I think I've spotted an error (1)
PhunkySchtuff (208108) | more than 4 years ago | (#30653074)
On his page with extracts of the digits of Pi [bellard.org], in the third column of the 799,999,951st digits, he's got a 2 where I think it should be a 5.
easy (1)
Tjebbe (36955) | more than 4 years ago | (#30653090)
I have just calculated a digit that's much further. It's 7, and it's somewhere around the 8 trillionth decimal. Give or take a few.
I guess that's a (0)
Dunbal (464142) | more than 4 years ago | (#30653138)
Fundamental difference between pure mathematicians and physicists/engineers. Haven't these people ever heard of significance [wikipedia.org]? I mean, apart from sheer nerd value, this has absolutely
no worth to science or humanity.
So, how do you calculate Pi? Seriously (0)
| more than 4 years ago | (#30653252)
Pi = C/d, a circle's circumference divided by its diameter.
So, how do you calculate Pi to X digits?
You can't measure the circle's C and d to X digits (where X is sufficiently large).
You'll eventually hit a significant digits limitation if you try to compute Pi from the standard formula.
So, how do you compute Pi to X digits?
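One classical answer to the question: you don't measure at all; you evaluate a rapidly converging series in exact integer arithmetic. A sketch of Machin's formula, pi = 16 arctan(1/5) - 4 arctan(1/239) (record computations use much faster series such as Chudnovsky's, but the principle is the same; the 10 guard digits are an assumption of this sketch):

```python
def arctan_inv(x, digits):
    """arctan(1/x) scaled by 10**(digits+10), via the Taylor series."""
    one = 10**(digits + 10)          # 10 guard digits absorb rounding
    total = term = one // x
    x2, k, sign = x * x, 3, -1
    while term:
        term //= x2                  # next odd power of 1/x
        total += sign * term // k
        k, sign = k + 2, -sign
    return total

def machin_pi(digits):
    pi_scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return pi_scaled // 10**10       # drop the guard digits

print(machin_pi(30))
```

Every operation here is exact integer arithmetic, so the only error sources are the series truncation and the guard-digit floor divisions, both of which are controlled; that is how pi is computed to X digits without ever measuring a circle.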
calc2 problem
First, $p(x)\geq 0$. Next we need to find $\int_{-\infty}^{\infty} p(x)\,dx$. Subdivide the interval as $\int_{-\infty}^0 p(x)\,dx+\int_0^{\infty}p(x)\,dx$. Since the integrand is continuous except at possibly one point, we can ignore that point. Now, $\int_{-\infty}^0 p(x)\,dx=0$ because $p(x)=0$ for $x<0$. And $\int_0^{\infty} p(x)\,dx=\int_0^{\infty} \lambda e^{-\lambda x}\,dx$. Evaluating the antiderivative, $-e^{-\lambda x}\big|_0^{\infty}=1$. So it is indeed a probability density function.
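The same integral can be sanity-checked numerically; a sketch using composite Simpson's rule (the choice lambda = 2 and the cutoff at x = 40 are arbitrary; the tail beyond 40 is below 1e-34):

```python
import math

lam = 2.0
def pdf(x):
    """Exponential density: lam * e^(-lam x) for x >= 0, else 0."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

# Composite Simpson's rule on [0, 40] with n (even) subintervals
n, b = 100_000, 40.0
h = b / n
total = pdf(0.0) + pdf(b) + sum((4 if i % 2 else 2) * pdf(i * h)
                                for i in range(1, n))
total *= h / 3
print(total)   # should be very close to 1.0
```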
Trig in 3D Rotations?
Say I was going to rotate a 3D field... like a fighting arena, on its y axis. I know that the matrix for y rotations involves trigonometric functions... but for that to happen, you need a right triangle and a reference angle.
My question is: if it's not clearly a triangle... and it's something like a character, or an entire level... how do you determine its reference angle, and the size and location of the triangle?
Jeremy G
Actually you've got a little misconception, and the source of the error is considering things on too grand a scheme. You see, you think you are rotating a triangle, or a character; rest assured you are not! Actually you are rotating the VERTICES of the triangle or the character. Rotations all come down to the rotation of a point in space (2D, in cartesian xy; 3D, in cartesian xyz). Which also means you do, in fact, have the trig identities, because of the way points in space are depicted: the x and y (possibly z) components of a point are what form a right angle with the axis of reference and, presumably, the origin. I find it hard to explain without a chalkboard or whiteboard (which makes me kind of laugh when I think about how there's always a chalk/white board in math).
Anyway, the key is: you don't move/rotate/scale an object, you move/rotate/scale the vertices that make up an object.
Hope that helps.
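The point above can be made concrete: rotating a model about the y axis just means applying the standard y-rotation matrix to each vertex. A sketch in Python (the thread doesn't specify a language; this is illustrative):

```python
import math

def rotate_y(vertex, angle):
    """Rotate a 3D point about the y axis by `angle` radians.

    Applies the standard y-axis rotation matrix:
        [ cos   0  sin ] [x]
        [  0    1   0  ] [y]
        [-sin   0  cos ] [z]
    """
    x, y, z = vertex
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)

# Rotating a whole "object" is just rotating every vertex in its list:
triangle = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
rotated = [rotate_y(v, math.pi / 2) for v in triangle]
print(rotated)
```

No reference triangle has to be found on the model itself; the right triangle the trig functions need is formed by each vertex's own x and z coordinates relative to the rotation axis.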
Look into matrix math and matrix concatenation. You use 4x4 homogeneous matrices to represent the rotations, either as Euler angles or in quaternion form. Impossible to explain it all here.
Get a book.
- Mathematical Programming , 1998
Cited by 20 (2 self)
We consider the polyhedral approach to solving the capacitated facility location problem. The valid inequalities considered are the knapsack, flow cover, effective capacity, single depot, and
combinatorial inequalities. The flow cover, effective capacity, and single depot inequalities form subfamilies of the general family of submodular inequalities. The separation problem based on the
family of submodular inequalities is NP-hard in general. For the well-known subclass of flow cover inequalities, however, we show that if the client set is fixed, and if all capacities are equal,
then the separation problem can be solved in polynomial time. For the flow cover inequalities based on an arbitrary client set, and for the effective capacity and single depot inequalities we develop
separation heuristics. An important part of all these heuristics is based on constructive proofs that two specific conditions are necessary for the effective capacity inequalities to be facet
defining. The proofs show precisely how structures that violate the two conditions can be modified to produce stronger inequalities. The family of combinatorial inequalities was originally developed
for the uncapacitated facility location problem, but is also valid for the capacitated problem. No computational experience using the combinatorial inequalities has been reported so far. Here we
suggest how partial output from the heuristic identifying violated submodular inequalities can be used as input to a heuristic identifying violated combinatorial inequalities. We report on
computational results from solving 60 small and medium size problems.
- INFORMS J. COMPUT , 1996
Cited by 16 (3 self)
We study the two-level uncapacitated facility location (TUFL) problem. Given two types of facilities, which we call y-facilities and z-facilities, the problem is to decide which facilities of both
types to open, and to which pair of y- and z-facilities each client should be assigned, in order to satisfy the demand at maximum profit. We first present two multi-commodity flow formulations of
TUFL and investigate the relationship between these formulations and similar formulations of the one-level uncapacitated facility location (UFL) problem. In particular, we show that all nontrivial
facets for UFL define facets for the two-level problem, and derive conditions when facets of TUFL are also facets for UFL. For both formulations of TUFL, we introduce new families of facets and valid
inequalities and discuss the associated separation problems. We also characterize the extreme points of the LP-relaxation of the first formulation. While the LP-relaxation of a multi-commodity
formulation provi...
- Mathematical Programming , 1989
Cited by 9 (1 self)
Recently, several successful applications of strong cutting plane methods to combinatorial optimization problems have renewed interest in cutting plane methods, and polyhedral characterizations, of
integer programming problems. In this paper, we investigate the polyhedral structure of the capacitated plant location problem. Our purpose is to identify facets and valid inequalities for a wide
range of capacitated fixed charge problems that contain this prototype problem as a substructure. The first part of the paper introduces a family of facets for a version of the capacitated plant
location problem with constant capacity K for all plants. These facet inequalities depend on K and thus differ fundamentally from the valid inequalities for the uncapacitated version of the problem.
We also introduce a second formulation for a model with indivisible customer demand and show that it is equivalent to a vertex packing problem on a derived graph. We identify facets and valid
inequalities for this version of the problem by applying known results for the vertex packing polytope.
- TOP , 1995
Cited by 8 (2 self)
We introduce a generalization of the well-known Uncapacitated Facility Location Problem, in which clients can be served not only by single facilities but also by sets of facilities. The problem,
called Generalized Uncapacitated Facility Location Problem (GUFLP), was inspired by the Index Selection Problem in physical database design. We formulate GUFLP as a Set Packing Problem, showing that
our model contains all the clique inequalities (in polynomial number). Moreover, we describe an exact separation procedure for odd-hole inequalities, based on the particular structure of the problem.
These results are used within a branch-and-cut algorithm for the exact solution of GUFLP. Computational results on two different classes of test problems are given.
- Statistica Neerlandica , 1995
Cited by 5 (1 self)
The polyhedral approach is one of the most powerful techniques available for solving hard combinatorial optimization problems. The main idea behind the technique is to consider the linear relaxation
of the integer combinatorial optimization problem, and try to iteratively strengthen the linear formulation by adding violated strong valid inequalities, i.e., inequalities that are violated by the
current fractional solution but satisfied by all feasible solutions, and that define high-dimensional faces, preferably facets, of the convex hull of feasible solutions. If we have the complete
description of the convex hull of feasible solutions all extreme points of this formulation are integral, which means that we can solve the problem as a linear programming problem. Linear programming
problems are known to be computationally easy. In Part I of this article we discuss theoretical aspects of polyhedral techniques. Here we will mainly concentrate on the computational aspects. In
particular we ...
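The loop described in this abstract (solve the relaxation, search for violated inequalities, add them, repeat) can be sketched generically; every function name here is a hypothetical placeholder for a real LP solver and separation routine, not code from the survey:

```python
def cutting_plane(solve_lp_relaxation, separate, is_integral, max_rounds=100):
    """Iteratively strengthen an LP relaxation with violated valid inequalities.

    solve_lp_relaxation(cuts) -> fractional solution x of the relaxation
                                 plus all cuts found so far
    separate(x)               -> list of valid inequalities violated by x
                                 (empty if the separation routine finds none)
    is_integral(x)            -> True if x is integer feasible
    """
    cuts = []
    for _ in range(max_rounds):
        x = solve_lp_relaxation(cuts)
        if is_integral(x):
            return x, cuts      # the strengthened relaxation solved the problem
        violated = separate(x)
        if not violated:
            break               # no cut found; fall back to branching
        cuts.extend(violated)
    return x, cuts
```

In the papers above, `separate` is exactly the (often NP-hard) separation problem for a family such as the flow cover or submodular inequalities, which is why separation heuristics matter.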
- Discrete Applied Mathematics , 2004
Cited by 4 (0 self)
We examine the feasibility polyhedron of the uncapacitated hub location problem (UHL) with multiple allocation, which has applications in the fields of air passenger and cargo transportation,
telecommunication and postal delivery services. In particular we determine the dimension and derive some classes of facets for this polyhedron. We develop a general rule about lifting facets from the
uncapacitated facility location (UFL) problem to UHL. Using this lifting procedure we obtain a new class of facets for UHL which dominates the inequalities in the original formulation. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1451719","timestamp":"2014-04-17T01:03:51Z","content_type":null,"content_length":"28762","record_id":"<urn:uuid:7a9f0df4-dbdf-48af-bf69-1b2224b8a625>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Times Tables to 10, Timestables Tests
The following worksheets are multiplication fact tests. These tests should be completed by students in a minute once the multiplication facts are known. If not, much practice using a variety of strategies
is necessary to ensure students commit the multiplication facts to memory.
Print the PDF with Answers on the 2nd Page of 1 Minute Multiplication Facts to 10
I can't emphasize enough that memorizing multiplication facts is important even though our phones have quick access to calculators. It's as important to know the facts as it is to count. Use the many
resources on this site and give your child the strategies needed to commit the timestable facts to memory. | {"url":"http://math.about.com/od/addingsubtracting/ss/Timestables10.htm","timestamp":"2014-04-16T18:57:27Z","content_type":null,"content_length":"45204","record_id":"<urn:uuid:6c74510d-4bac-4147-8328-80fffb37ef82>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
I need a new rifle wanna help.. part 2 sorta..
After taking a look at the BC numbers using standard conditions at 1100 ft (28.35 BP, 55.1 deg F, 78% HUM), versus the standard sea level condition BC of .495, it jumps to a .5115 BC.
If you increase the BP to 28.53, it drops to a .5082 BC. But, when you raise the temp to just 80 deg F it jumps to a .5363 BC.
Raising the BP to 28.74 at 80 deg F = .5324 BC.
28.74 BP and raising temp to 100 deg F = .5576 BC.
Keeping temp at 100 deg F and lowering BP to 28.53 = .5615 BC
So, using standard sea level conditions (29.53 BP, 59 deg, 78% HUM) in a ballistics program, these corrected BC's (.532 - .562 BC) would be accurate, or the published .495 BC at your atmospheric
conditions, same result.
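The corrections quoted above are consistent with scaling the BC by the ratio of standard to local air density. A rough sketch, under the assumption that BC varies inversely with air density and treating air as an ideal gas (humidity ignored, which is why it lands slightly off the post's numbers); the function name is mine:

```python
def corrected_bc(bc_sea_level, pressure_inhg, temp_f,
                 std_pressure_inhg=29.53, std_temp_f=59.0):
    """Scale a sea-level BC to local atmospheric conditions.

    Assumes BC varies inversely with air density and treats air as an
    ideal gas (density ~ pressure / absolute temperature). Humidity is
    ignored, so results differ slightly from the post's values.
    """
    temp_r = temp_f + 459.67            # Fahrenheit -> Rankine
    std_temp_r = std_temp_f + 459.67
    density_ratio = (pressure_inhg / std_pressure_inhg) * (std_temp_r / temp_r)
    return bc_sea_level / density_ratio

# e.g. corrected_bc(0.495, 28.35, 55.1) is about 0.512 (the post's .5115),
# and corrected_bc(0.495, 28.35, 80.0) is about 0.536 (the post's .5363).
```

Either way round gives the same trajectory: feed the published BC with local conditions into a ballistics program, or feed the corrected BC with standard sea-level conditions.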
Without knowing the exact temp and BP at the time of the test, and measuring actual drop on the target to 600 yards or so, it leaves too many variables that can potentially skew the BC.
I use an 8' sheet of plywood standing upright covered with freezer paper, using the top edge as a vertical hold point, then fire groups at each 100 yard increment to 600 yards, recording "actual" drop
in inches with no turret adjustments during the test. The rifle is zeroed "perfectly" at 100 yards before the test, or at least verified beforehand. MV, temp and BP are recorded with the 4000 at this time before each test.
I believe this is probably the single most effective, and accurate method to determine a bullets BC, but I also use the Oehler M43 with screens at the muzzle, and downrange screens hooked to the
Oehler 35P for a two range MV determination of BC, or simply use the acoustic target and the M43.
Of the three ways of doing it though, I still prefer the plywood while measuring drop in inches over anything else. All have given me reliable numbers "if" all conditions are known exactly.
After BC and drops are known, shooting at these ranges again while "dialing" the zeros in will confirm your turrets accuracy, or point out a thread inconsistancy, uncalibrated pitch, etc, etc.
Twist rate will affect stability, and this will affect the BC; the other effects are probably unmeasurable, or unprovable, as the effects would be so small IMO. MV, temp and BP are major factors at
LR, as is the recoil effect from various rests or shooting positions.
The 308 is an efficient round, as is the 300 WSM, although I believe bullet inconsistencies and imbalances contribute to BC variations. Twist consistency, bore and groove quality and diameter, powder
selection for a specific cartridge, load density, bullet base shape, jacket thickness, core hardness, muzzle pressure, and barrel length and diameter are factors that contribute to bullet stability,
balance, and integrity, and thus its BC, all aside from the consistency of the bullet itself. | {"url":"http://www.longrangehunting.com/forums/f17/i-need-new-rifle-wanna-help-part-2-sorta-1086/index3.html","timestamp":"2014-04-18T07:32:54Z","content_type":null,"content_length":"81629","record_id":"<urn:uuid:76a37d16-6142-4c29-8b97-a543aa0db6a>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woburn ACT Tutor
Find a Woburn ACT Tutor
Experienced, dedicated, expert tutor specializing in math, any section of test prep, and high-level critical thinking, reading, and writing. I'm friendly, patient, eager to help you succeed - and
I guarantee you won't find anyone more knowledgeable or helpful, anywhere. (If you don't agree, I won't...
47 Subjects: including ACT Math, chemistry, English, reading
...I scored an 800 on this section of the SAT when I took it. I also have almost a year of experience tutoring in the SAT with an organization called Ivy Insiders and on my own. History was my
strongest subject in High School and I graduated with well over the required number of credits in the subject.
28 Subjects: including ACT Math, reading, English, writing
...For others, business and technology related concepts escape them. In my prior career I was often called upon to translate complex technical concepts into laymen's terms for a mass audience, or
as I liked to say, "I speak Geek." You are likely seeking a tutor because either you are struggling to keep up or trying to get ahead. I can help with that.
23 Subjects: including ACT Math, calculus, business, elementary math
...I teach high school through college students and can teach in person or, if convenient, via Skype. I don't want to take your tests or quizzes, so I may need to verify in some way that I'm not
doing that! If you happen to speak Mandarin Chinese, I know a little of your language: yi, ar, san, si... I've taught Discrete Mathematics for undergraduates at SUNY Cortland.
14 Subjects: including ACT Math, calculus, geometry, GRE
My name is Derek H. and I recently graduated from Cornell University's College of Engineering with a degree in Information Science, Systems, and Technology. I have a strong background in Math,
Science, and Computer Science. I currently work as software developer at IBM.
17 Subjects: including ACT Math, statistics, geometry, algebra 1 | {"url":"http://www.purplemath.com/woburn_act_tutors.php","timestamp":"2014-04-18T22:04:58Z","content_type":null,"content_length":"23592","record_id":"<urn:uuid:6dc3e233-a918-41c3-b6cf-7cabe1f55c6e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00092-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating alpha and power
Re: Calculating alpha and power
Calculating alpha and power
A box contains 20 balls, some red and some black. We want to test the hypotheses:
Null: there are exactly 4 red balls in the box.
Alternative: there are more than 4 red balls in the box.
Suppose we draw two balls without replacement, and our decision rule is to reject the null hypothesis if both balls are red.
i) calculate alpha, the significance level of the test.
ii) calculate the power of the test for the specific alternative situation where there are in fact 12 red balls and 8 black balls in the box.
A standard question with the definition of power/significance level.
The important thing in this example is that the sampling distribution is the hypergeometric distribution.
Re: Calculating alpha and power
Re 1): alpha, the Type 1 error, is not calculated. It is fixed (set in advance), say alpha=5%, alpha=10%. Then the test is said to be of size 5%, 10%.
Re: Calculating alpha and power
That's only true if we set alpha in advance and then choose our cutoff to reflect that. Here we're choosing a cutoff and then figuring out what alpha is.
Re: Calculating alpha and power
Hmmm... Alpha is constant by its very definition. It is chosen a priori. Do you mean we can figure out what the p-value is?
Also, what do you mean by cutoff here: "then choose our cutoff to reflect that". What is this cutoff?
Last edited by d21e7x11; 10-09-2011 at 11:06 AM.
Re: Calculating alpha and power
But it is a function of the cutoff value. It doesn't need to be chosen a priori. In practice that is what (typically) happens. But for a homework problem to illustrate these concepts it doesn't
get set a priori.
Re: Calculating alpha and power
No. What Dason (and you) mean is that in the classical hypothesis testing setup, the significance level is fixed in advance and the rejection rule is chosen to satisfy this condition. But here,
given a rejection rule, the question asks you what the original significance level is.
Of course in practice, if you want to do hypothesis testing, you do not go the reverse way. But that is just an exercise.
Re: Calculating alpha and power
I think it would be interesting to find out if it was written like this, "i) calculate alpha, the significance level of the test", in the textbook, or if it's drippydrop22's wording. Again, the
significance level of the test is set (ALWAYS!) a priori. Then after the data are collected, we can calculate the (observed) p-value. The p-value IS NOT a Type 1 error!
Re: Calculating alpha and power
Just to let you know... it's not always the significance level that is set a priori. We just need to set something a priori: the cutoff level (which implies a certain Type I error rate), or alpha
(which directly sets the Type I error rate), or the power against a certain alternative hypothesis (which again implies a certain Type I error rate but isn't the thing we're setting directly), or
the expected FDR.
In the case of the question for the OP, they're setting the cutoff level, which implies what the Type I error rate will be given that the null is true; but it isn't the thing we're setting
directly, it's just something we can calculate once we have the cutoff level.
Last edited by Dason; 10-09-2011 at 11:27 AM.
Re: Calculating alpha and power
Yes, thank you. Now here: "for the OP they're setting the cutoff level which implies what the type-I error rate given that the null is true will be", what is this cut off level? Cut off of what?
Re: Calculating alpha and power
Suppose we draw two balls without replacement, and our decision rule is to reject the null hypothesis if both balls are red.
It's right there in the OP
Re: Calculating alpha and power
Oh, so it's something problem-specific, and then in 1) it is being asked to translate it into what alpha is. Ok.
Re: Calculating alpha and power
That was the wording provided in the problem. I'm still not certain how to calculate the alpha given this problem. I was under the impression from lecture that alpha was typically set a priori.
We have never dealt with hypergeometric distributions, so I don't think that would be the way he would expect us to solve the problem. Is there any alternative way to solve this problem?
I have been playing around with the probability of having four red balls (.2) and the probability of drawing two back to back if there are actually four in the bag (.032). But it is just a guess
at this point.
Re: Calculating alpha and power
Well you can derive the hypergeometric distribution from a simple combinatorics argument so that's probably the route your professor was looking for.
Re: Calculating alpha and power
I will try to use that. Thank you all for your help.
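For reference, the combinatorics route suggested above can be checked in a few lines (my own sketch, using Python's math.comb; the decision rule rejects when both drawn balls are red):

```python
from math import comb

def p_all_red(red, total=20, draws=2):
    """P(all draws are red) when drawing without replacement:
    choose every draw from the red balls (a hypergeometric tail)."""
    return comb(red, draws) / comb(total, draws)

alpha = p_all_red(4)    # significance level under H0: exactly 4 red balls
power = p_all_red(12)   # power under the alternative of 12 red balls
# alpha = 6/190  which is about 0.0316
# power = 66/190 which is about 0.3474
```

The .032 in the post above is exactly this alpha, rounded: (4/20)(3/19) = 6/190.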
Thanked 0 Times in 0 Posts | {"url":"http://www.talkstats.com/showthread.php/20640-Calculating-alpha-and-power","timestamp":"2014-04-21T07:16:23Z","content_type":null,"content_length":"124545","record_id":"<urn:uuid:1ba99b00-e1bf-453a-b6fb-6d3f3c84c3c0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00304-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rigidity Theorems for Actions of Product Groups and Countable Borel Equivalence Relations
Results 1 - 10 of 20
Cited by 34 (7 self)
Abstract. We prove that if a countable discrete group Γ is w-rigid, i.e. it contains an infinite normal subgroup H with the relative property (T) (e.g. Γ = SL(2, Z) ⋉ Z^2, or Γ = H × H′ with H an
infinite Kazhdan group and H ′ arbitrary), and V is a closed subgroup of the group of unitaries of a finite separable von Neumann algebra (e.g. V countable discrete, or separable compact), then any
V-valued measurable cocycle for a measure preserving action Γ � X of Γ on a probability space (X, µ) which is weak mixing on H and s-malleable (e.g. the Bernoulli action Γ � [0,1] Γ) is cohomologous
to a group morphism of Γ into V. We use the case V discrete of this result to prove that if in addition Γ has no non-trivial finite normal subgroups then any orbit equivalence between Γ � X and a
free ergodic measure preserving action of a countable group Λ is implemented by a conjugacy of the actions, with respect to some group isomorphism Γ ≃ Λ. There has recently been increasing interest
in the study of measure preserving actions of groups on (non-atomic) probability spaces up to orbit equivalence (OE), i.e. up to isomorphisms of probability spaces taking the orbits of one action
onto the orbits of
- J. Amer. Math. Soc
Cited by 23 (3 self)
Abstract. We prove that if a countable group Γ contains a non-amenable subgroup with centralizer infinite and “weakly normal ” in Γ (e.g. if Γ is non-amenable and has infinite center or is a product
of infinite groups) then any measure preserving Γ-action on a probability space which satisfies certain malleability, spectral gap and weak mixing conditions is cocycle superrigid. We also show that
if Γ ↷ X is an arbitrary free ergodic action of such a group Γ and Λ ↷ Y = T^Λ is a Bernoulli action of an arbitrary infinite conjugacy class group, then any isomorphism of the associated II₁ factors
L^∞(X) ⋊ Γ ≃ L^∞(Y) ⋊ Λ comes from a conjugacy of the actions. 1.
- Systems , 2005
Cited by 17 (1 self)
Measure Equivalence (ME) is the measure theoretic counterpart of quasi-isometry. This field grew considerably during the last years, developing tools to distinguish between different ME classes of
countable groups. On the other hand, contructions of ME equivalent groups are very rare. We present a new method, based on a notion of measurable free-factor, and we apply it to exhibit a new family
of groups that are measure equivalent to the free group. We also present a quite extensive survey on results about Measure Equivalence for countable groups.
- Topology
Cited by 12 (0 self)
Abstract. For every hyperbolic group and more general hyperbolic graphs, we construct an equivariant ideal bicombing: this is a homological analogue of the geodesic flow on negatively curved
manifolds. We then construct a cohomological invariant which implies that several Measure Equivalence and Orbit Equivalence rigidity results established in [MSb] hold for all non-elementary
hyperbolic groups and their non-elementary subgroups. We also derive superrigidity results for actions of general irreducible lattices on a large class of hyperbolic metric spaces. 1.
Cited by 8 (1 self)
Abstract. Let Γ be a countable group and denote by S the equivalence relation induced by the Bernoulli action Γ ↷ [0,1]^Γ, where [0,1]^Γ is endowed with the product Lebesgue measure. We prove that
for any subequivalence relation R of S, there exists a partition {X_i}_{i≥0} of [0,1]^Γ into R-invariant measurable sets such that R|X_0 is hyperfinite and R|X_i is strongly ergodic (hence ergodic),
for every i ≥ 1. §1. Introduction and statement of results. During the past decade there have been many interesting new directions arising in the field of measurable group theory. One direction came
from the deformation/rigidity theory developed recently by S. Popa in order to study group actions and von Neumann algebras ([P5]). Using this theory, Popa obtained striking rigidity
- Int. Math. Res. Not. IMRN, (19):Art. ID rnm073
"... Abstract. These notes contain an Ergodic-theoretic account of the Cocycle Superrigidity Theorem recently discovered by Sorin Popa. We state and prove a relative version of the result, discuss
some applications to measurable equivalence relations, and point out that Gaussian actions (of “rigid ” grou ..."
Cited by 8 (0 self)
Add to MetaCart
Abstract. These notes contain an Ergodic-theoretic account of the Cocycle Superrigidity Theorem recently discovered by Sorin Popa. We state and prove a relative version of the result, discuss some
applications to measurable equivalence relations, and point out that Gaussian actions (of “rigid ” groups) satisfy the assumptions of Popa’s theorem. 1. Introduction and Statement
- In preparation
Cited by 7 (5 self)
Abstract. We present some applications of Popa’s Superrigidity Theorem to the theory of countable Borel equivalence relations. In particular, we show that the universal countable Borel equivalence
relation E ∞ is not essentially free. 1.
Cited by 7 (1 self)
Abstract. In this paper, we study the connections between properties of the action of a countable group Γ on a countable set X and the ergodic theoretic properties of the corresponding generalized
Bernoulli shift, i.e., the corresponding shift action of Γ on M^X, where M is a measure space. In particular, we show that the action of Γ on X is amenable iff the shift Γ ↷ M^X has almost invariant
sets. 1.
, 2007
Cited by 5 (4 self)
Abstract. Let G be a countable group. We prove that there is a model companion for the approximate theory of a Hilbert space with a group G of automorphisms. We show that G is amenable if and only if
the structure induced by countable copies of the regular representation of G is existentially closed. 1.
Cited by 4 (1 self)
Abstract. The title refers to the area of research which studies infinite groups using measure-theoretic tools, and studies the restrictions that group structure imposes on ergodic theory of their
actions. The paper is a survey of recent developments focused on the notion of Measure Equivalence between groups, and Orbit Equivalence between group actions. We discuss known invariants and
classification results (rigidity) in both areas. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=387347","timestamp":"2014-04-18T08:29:13Z","content_type":null,"content_length":"34867","record_id":"<urn:uuid:d16e5928-e1a7-4cf7-b0eb-f578a6937e00>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00323-ip-10-147-4-33.ec2.internal.warc.gz"} |
Russell’s mathematical logic
Results 1 - 10 of 22
- Bulletin of Symbolic Logic , 2002
Cited by 11 (6 self)
Abstract. In this article, we study the prehistory of type theory up to 1910 and its development between Russell and Whitehead’s Principia Mathematica ([71], 1910–1912) and Church’s simply typed
λ-calculus of 1940. We first argue that the concept of types has always been present in mathematics, though nobody was incorporating them explicitly as such, before the end of the 19th century. Then
we proceed by describing how the logical paradoxes entered the formal systems of Frege, Cantor and Peano concentrating on Frege’s Grundgesetze der Arithmetik for which Russell applied his famous
paradox, and this led him to introduce the first theory of types, the Ramified Type Theory (rtt). We present rtt formally using the modern notation for type theory and we discuss how Ramsey, Hilbert
and Ackermann removed the orders from rtt leading to the simple theory of types stt. We present stt and Church's own simply typed λ-calculus (λ→C) and we finish by comparing rtt, stt and λ→C. §1.
Introduction. Nowadays, type theory has many applications and is used in many different disciplines. Even within logic and mathematics, there are many different type systems. They serve several
purposes, and are formulated in various ways. But, before 1903 when Russell first introduced
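As an illustration of the simple theory of types discussed in this abstract, here is a minimal type checker for a λ→-style calculus (my own sketch; the representation is not from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:            # a base type, e.g. Base("o")
    name: str

@dataclass(frozen=True)
class Arrow:           # function type A -> B
    src: object
    dst: object

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:             # λx:T. body
    var: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:             # f applied to a
    fn: object
    arg: object

def typecheck(term, env=None):
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        body_ty = typecheck(term.body, {**env, term.var: term.ty})
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):
        fn_ty = typecheck(term.fn, env)
        if not isinstance(fn_ty, Arrow) or typecheck(term.arg, env) != fn_ty.src:
            raise TypeError("ill-typed application")
        return fn_ty.dst
    raise TypeError("unknown term")
```

The identity λx:o. x checks at type o → o, while self-application λx:o. x x is rejected, which is precisely how typing blocks Russell-style paradoxes.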
- Logic Journal of the IGPL
Cited by 6 (1 self)
The action of thought is excited by the initiation of doubt and ceases when belief is attained; so that the production of belief is the sole function of thought.
- International Journal of Human-Computer Studies
Cited by 3 (1 self)
2. Descriptive, formal and formalized ontologies 3. Variants of formalized ontology 4. Some data on formal ontologists 5. A note on Husserl’s conception of formal ontology
- 7, Centre de Recherches Sémiologiques, Université de Neuchâtel (Neuchâtel , 1992
"... In recent years a number of criticisms have been raised against the formal systems of ..."
, 1996
The Machine-Assisted Proof of Programming Language Properties Myra VanInwegen Advisor: Carl Gunter The goals of the project described in this thesis are twofold. First, we wanted to demonstrate that
if a programming language has a semantics that is complete and rigorous (mathematical), but not too complex, then substantial theorems can be proved about it. Second, we wanted to assess the utility
of using an automated theorem prover to aid in such proofs. We chose SML as the language about which to prove theorems: it has a published semantics that is complete and rigorous, and while not
exactly simple, is comprehensible. We encoded the semantics of Core SML into the theorem prover HOL (creating new definitional packages for HOL in the process). We proved important theorems about
evaluation and about the type system. We also proved the type preservation theorem, which relates evaluation and typing, for a good portion of the language. We were not able to complete the proof of
type prese...
, 1995
Mathematics is in a dramatic and massive process of changing, mainly due to the advent of computers and computer science. Our aim is to present a pocket image of this phenomenon; a "case study" will
give us the opportunity to describe some of these new ideas, problems, and techniques. Particularly, we will be concerned with foreseeable mutations in the interaction between deductive and
experimental trends.
We construct a logic-enriched type theory LTTw that corresponds closely to the predicative system of foundations presented by Hermann Weyl in Das Kontinuum. We formalise many results from that book
in LTTw, including Weyl’s definition of the cardinality of a set and several results from real analysis, using the proof assistant Plastic that implements the logical framework LF. This case study
shows how type theory can be used to represent a non-constructive foundation for mathematics.
Consumption Function
June 25th 2011, 05:52 PM #1
Jun 2011
Consumption Function
I'm stumped on this question because I can't figure out a (the y-intercept).
Question is... If your marginal propensity to save is always 20% and your break-even point is $10,000, then with $12,500 of disposable income, your consumption would be?
Using the consumption function formula I've got C = a + (.80)(12500). I'm stumped on how to determine a since his examples in class always gave us a. Appreciate any guidance!
Re: Consumption Function
Ok, so when $x = 10,000, \ y=0$
So $0=a+.8*10,000$
Re: Consumption Function
Great. Thanks so much that makes sense now. I'm getting the C=10,000
Re: Consumption Function
I would interpret "break even at 10000" to mean that you spend all of your income at that point, ie C(10000)=10000
$a + 0.2 \cdot 10000 = 10000$
$a = 8000$
(ie, positive, not negative).
Re: Consumption Function
Thank you both for your answers. Some people in my class got C = 12,500 and others got C= 10,000. Perhaps I should frame the question exactly as written.
The actual question is if your MPS is always 20% and your break-even point is $10,000, then with $12,500 of disposable income, your consumption would be:
Re: Consumption Function
oopsie, i got MPS and MPC the wrong way around. The MPC is 0.8 as per post #2 but i still think "break even" implies C(10000) = 10000, so:
a + 0.8(10000) = 10000
So C(12500) = 2000 + 0.8*12500 = 12000
Re: Consumption Function
I've never encountered "break even" in my economic studies, but my intuition is with SpringFan25. The first way presented just cannot work because the first term (i.e., "autonomous consumption")
is assumed to always be positive. Therefore, a value of -8000 doesn't make sense. We can write the consumption function as
$C(Y^d) = c_0 + c_1 \times Y^d$
Where $Y^d$ is disposable income (= income minus taxes and other transfers), $c_0$ is autonomous consumption (you always consume that much, equivalent to a fixed costs of a firm's production
function) and $c_1$ is the marginal propensity to consume (MPC = 1 - MPS). Then, as SpringFan25 pointed out, with the break even condition, we have
$C(10,000) = 10,000 = c_0 + (0.8)(10,000) = c_0 + 8000 \Rightarrow c_0 = 2,000$
Thus, this consumer always consumes $2,000 regardless of income. It is their "fixed consumption" we have to account for at any income. Thus,
$C(12,500) = 2,000 + (0.8)(12,500) = 12,000$
Therefore, although the marginal propensity to consume leads the consumer to spend 80% of their disposable income, this is only in addition to their $2,000 fixed consumption ("autonomous consumption").
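The reasoning above reduces to a few lines of arithmetic. Here it is as a hedged Python sketch (the function and parameter names are mine, not from the thread):

```python
def consumption(y_d, mps=0.20, break_even=10_000):
    """Keynesian consumption function calibrated from the break-even point,
    where consumption equals disposable income: C(break_even) = break_even."""
    mpc = 1 - mps                           # marginal propensity to consume = 0.8
    c0 = break_even - mpc * break_even      # autonomous consumption: 2000 here
    return c0 + mpc * y_d

print(consumption(12_500))                  # 12000.0, matching the answer above
```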
Starter for my Helles
I plan to repitch some 2206 that I saved from a Becks clone on 3/27 into my Helles this weekend. Mr. Malty says the viability is 60%, so I'm wondering how big of a starter I should make. I don't need my usual 1G monster, but what is the recommendation of the forum? I was thinking maybe a 2L at 1.038-1.040?
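For what it's worth, a rough pitch-target sketch using common rules of thumb. Every constant below is an assumption, not from the post: roughly 1.5 million cells per mL per degree Plato for lagers, and degrees Plato approximated as gravity points divided by 4.

```python
def lager_pitch_target_billions(volume_l, og):
    """Approximate lager pitch target in billions of cells, using the
    rule-of-thumb rate of ~1.5 million cells / mL / degree Plato."""
    plato = (og - 1.0) * 1000 / 4          # crude gravity-to-Plato conversion
    return volume_l * 1000 * plato * 1.5e6 / 1e9

# e.g. 19 L (about 5 gal) of 1.048 wort:
print(round(lager_pitch_target_billions(19, 1.048)))   # ~342 billion cells
```

Dividing a target like that by the viable cells on hand (60% of whatever was saved) gives the growth the starter has to supply.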
prove this:projectiles questions
June 1st 2011, 06:16 PM #1
prove this:projectiles questions
An object projected from point O passes through points A and B. The vertical distance from O to A is a m and the horizontal distance is b m; the vertical distance to B is b m and the horizontal distance is a m.
Show that the object's horizontal range is
What I did was, for point A:
$b = V\cos\theta \cdot t \Rightarrow t = \frac{b}{V\cos\theta}$
$a = V\sin\theta \cdot t - \frac{g}{2}t^2$
$a = V\sin\theta\left(\frac{b}{V\cos\theta}\right) - \frac{g}{2}\left(\frac{b}{V\cos\theta}\right)^2$
and for point B:
$a = V\cos\theta \cdot t \Rightarrow t = \frac{a}{V\cos\theta}$
$b = V\sin\theta \cdot t - \frac{g}{2}t^2$
$b = V\sin\theta\left(\frac{a}{V\cos\theta}\right) - \frac{g}{2}\left(\frac{a}{V\cos\theta}\right)^2$
I got two equations. But now what??
Thank you for your time
consider the parabolic trajectory of the projectile in the x-y plane ... it passes thru the points (0,0) , (b,a) , and (a,b)
mathematically, an inverted parabola that passes thru the origin can be written in the form
y = kx(R-x)
where k is a constant and R is the 2nd x-intercept (the "Range" in this case)
for point (b,a) ...
a = kb(R-b)
for point (a,b) ...
b = ka(R-a)
now use the two equations to solve for R in terms of a and b.
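Eliminating $k$ (divide one equation by the other and simplify) gives $R = \frac{a^2+ab+b^2}{a+b}$. A quick numerical sanity check of that closed form (my addition, not from the thread):

```python
def horizontal_range(a, b):
    # closed form for R obtained by eliminating k from
    # a = k*b*(R - b) and b = k*a*(R - a)
    return (a * a + a * b + b * b) / (a + b)

# check that both defining equations hold for a sample pair (a, b)
a, b = 1.0, 2.0
R = horizontal_range(a, b)                # 7/3
k = a / (b * (R - b))                     # recover k from the first equation
assert abs(b - k * a * (R - a)) < 1e-12   # the second equation then holds too
```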
ah I didn't think we have to use that kind of equations for this.
can't we do this using only kinematics equations?
you can, but you'll end up having to eliminate the parameter of time anyway to arrive at the range in terms of a and b ... you end up with essentially the same equation I derived, just a bit
messier w/ the constants.
Elliptic Regularity Theorem
March 29th 2011, 02:36 PM
Elliptic Regularity Theorem
Hi there,
I am following a proof of the elliptic regularity theorem, and I am having a few troubles in understanding where some of the workings come from.
I want to first prove that sing supp (u * v) is a subset of sing supp u + sing supp v.
The proof goes as follows:
Choose functions p (infinitely differentiable) and psi (infinitely differentiable with compact support) such that p = 1 on a neighbourhood of sing supp u and psi = 1 on a neighbourhood of sing
supp v. Then:
(u * v) = (pu + (1 - p)u) * (psiv + (1 - psi)v) = pu * psiv + pu * (1 - psi)v + (1 - p)u * psiv + (1 - p)u * (1 - psi)v.
Each of the convolutions other than pu * psiv has at least one infinitely differentiable factor, so is infinitely differentiable.
Then, apparently, it follows that:
sing supp (u * v) is a subset of supp pu + supp psiv which is a subset of supp p + supp psi.
How does this follow? I cannot see it.
Results 1 - 10 of 206
- Journal of Artificial Intelligence Research , 1994
Cited by 360 (14 self)
There has been substantial recent interest in two new families of search techniques. One family consists of nonsystematic methods such as gsat; the other contains systematic approaches that use a
polynomial amount of justification information to prune the search space. This paper introduces a new technique that combines these two approaches. The algorithm allows substantial freedom of
movement in the search space but enough information is retained to ensure the systematicity of the resulting analysis. Bounds are given for the size of the justification database and conditions are
presented that guarantee that this database will be polynomial in the size of the problem in question. 1 INTRODUCTION The past few years have seen rapid progress in the development of algorithms for
solving constraintsatisfaction problems, or csps. Csps arise naturally in subfields of AI from planning to vision, and examples include propositional theorem proving, map coloring and scheduling
problems. The probl...
- ACM COMPUTING SURVEYS , 2003
Cited by 169 (14 self)
The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for
the scientific as well as the industrial world. We give a survey of the nowadays most important metaheuristics from a conceptual point of view. We outline the different components and concepts that
are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the
two forces that largely determine the behaviour of a metaheuristic. They are in some way contrary but also complementary to each other. We introduce a framework, that we call the I&D frame, in order
to put different intensification and diversification components into relation with each other. Outlining the advantages and disadvantages of different metaheuristic approaches we conclude by pointing
out the importance of hybridization of metaheuristics as well as the integration of metaheuristics and other methods for optimization.
- IN PROCEEDINGS OF ECP-99 , 1999
Cited by 163 (14 self)
In the recent AIPS98 Planning Competition, the hsp planner, based on a forward state search and a domain-independent heuristic, showed that heuristic search planners can be competitive with state of
the art Graphplan and Satisfiability planners. hsp solved more problems than the other planners but it often took more time or produced longer plans. The main bottleneck in hsp is the computation of
the heuristic for every new state. This computation may take up to 85% of the processing time. In this paper, we present a solution to this problem that uses a simple change in the direction of the
search. The new planner, that we call hspr, is based on the same ideas and heuristic as hsp, but searches backward from the goal rather than forward from the initial state. This allows hspr to
compute the heuristic estimates only once. As a result, hspr can produce better plans, often in less time. For example, hspr solves each of the 30 logistics problems from Kautz and Selman in less
than 3 seconds. This is two orders of magnitude faster than blackbox. At the same time
- Knowledge Engineering Review
Cited by 94 (9 self)
Planning research in Artificial Intelligence (AI) has often focused on problems where there are cascading levels of action choice and complex interactions between actions. In contrast, Scheduling
research has focused on much larger problems where there is little action choice, but the resulting ordering problem is hard. In this paper, we give an overview of AI planning and scheduling
techniques, focusing on their similarities, differences, and limitations. We also argue that many difficult practical problems lie somewhere between planning and scheduling, and that neither area has
the right set of tools for solving these vexing problems. 1 The Ambitious Spacecraft Imagine a hypothetical spacecraft enroute to a distant planet. Between propulsion cycles, there are time windows
when the craft can be turned for communication and scientific observations. At any given time, the spacecraft has a large set of possible scientific observations that it can perform, each having some
value or priority. For each observation, the spacecraft will need to be turned towards the target and the required measurement or exposure taken. Unfortunately, turning to a target is a slow
operation that may take up to 30 minutes, depending on the magnitude of the turn. As a result, the choice of experiments and the order in which they are performed has a significant impact on the
duration of turns and, therefore, on how much can be accomplished. All this is further complicated by several things:
- In Proceedings of IJCAI-97 , 1997
Cited by 79 (1 self)
Many search trees are impractically large to explore exhaustively. Recently, techniques like limited discrepancy search have been proposed for improving the chance of finding a goal in a limited
amount of search. Depth-bounded discrepancy search offers such a hope. The motivation behind depth-bounded discrepancy search is that branching heuristics are more likely to be wrong at the top of
the tree than at the bottom. We therefore combine one of the best features of limited discrepancy search -- the ability to undo early mistakes -- with the completeness of iterative deepening search.
We show theoretically and experimentally that this novel combination outperforms existing techniques. 1 Introduction On backtracking, depth-first search explores decisions made against the branching
heuristic (or "discrepancies "), starting with decisions made deep in the search tree. However, branching heuristics are more likely to be wrong at the top of the tree than at the bottom. We would
like theref...
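For readers new to the baseline these papers build on, here is a minimal sketch (my own, not from any cited paper) that enumerates the paths of a depth-d binary tree in order of increasing discrepancy count, generating each path once, in the spirit of Korf's improved variant:

```python
from itertools import combinations

def lds_order(depth, max_disc=None):
    """Yield root-to-leaf paths of a binary tree as tuples, where 0 means
    "follow the heuristic" and 1 marks a discrepancy, grouped by increasing
    discrepancy count (limited discrepancy search's visiting order)."""
    if max_disc is None:
        max_disc = depth
    for k in range(max_disc + 1):                  # k discrepancies per pass
        for where in combinations(range(depth), k):
            path = [0] * depth
            for i in where:
                path[i] = 1
            yield tuple(path)

print(list(lds_order(2)))   # [(0, 0), (1, 0), (0, 1), (1, 1)]
```

Capping `max_disc` gives the incomplete-but-cheap behavior discussed in the abstracts: only paths that disagree with the heuristic at most that many times are visited.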
- In Proc. Third Int. Conf. on AI Planning Systems (AIPS-96 , 1996
Cited by 77 (6 self)
Means-ends analysis is a seemingly well understood search technique, which can be described, using planning terminology, as: keep adding actions that are feasible and achieve pieces of the goal.
Unfortunately, it is often the case that no action is both feasible and relevant in this sense. The traditional answer is to make subgoals out of the preconditions of relevant but infeasible actions.
These subgoals become part of the search state. An alternative, surprisingly good, idea is to recompute the entire subgoal hierarchy after every action. This hierarchy is represented by a greedy
regression-match graph. The actions near the leaves of this graph are feasible and relevant to a sub. . . subgoals of the original goal. Furthermore, each subgoal is assigned an estimate of the
number of actions required to achieve it. This number can be shown in practice to be a useful heuristic estimator for domains that are otherwise intractable. Keywords: planning, search, means-ends
analysis Reinven...
- Artificial Intelligence , 1999
Cited by 65 (2 self)
Classical planning is the problem of finding a sequence of actions to achieve a goal given an exact characterization of a domain. An algorithm to solve this problem is presented, which searches a
space of plan prefixes, trying to extend one of them to a complete sequence of actions. It is guided by a heuristic estimator based on regression-match graphs, which attempt to characterize the
entire subgoal structure of the remaining part of the problem. These graphs simplify the structure by neglecting goal interactions and by assuming that variables in goal conjunctions should be bound
in such a way as to make as many conjuncts as possible true without further work. In some domains, these approximations work very well, and experiments show that many classical-planning problems can
solved with very little search. 1 Definition of the Problem The classical planning problem is to generate a sequence of actions that make a given proposition true, in a domain in which there is
perfect informati...
- European Journal of Operational Research , 1998
Cited by 65 (2 self)
:- Due to the stubborn nature of the deterministic job-shop scheduling problem many solutions proposed are of hybrid construction cutting across the traditional disciplines. The problem has been
investigated from a variety of perspectives resulting in several analytical techniques combining generic as well as problem specific strategies. We seek to assess a subclass of this problem in which
the objective is minimising makespan, by providing an overview of the history, the techniques used and the researchers involved. The sense and direction of their work is evaluated by assessing the
reported results of their techniques on the available benchmark problems. From these results the current situation and pointers for future work are provided. KEYWORDS:- Scheduling Theory; Job-Shop;
Review; Computational Study; 1. INTRODUCTION Current market trends such as consumer demand for variety, shorter product life cycles and competitive pressure to reduce costs have resulted in the need
for zero i...
- Journal of Heuristics , 2002
Cited by 65 (25 self)
This paper presents a heuristic algorithm for solving RCPSP/max, the resource constrained project scheduling problem with generalized precedence relations. The algorithm relies, at its core, on a
constraint satisfaction problem solving (CSP) search procedure, which generates a consistent set of activity start times by incrementally removing resource conflicts from an otherwise temporally
feasible solution. Key to the effectiveness of the CSP search procedure is its heuristic strategy for conflict selection. A conflict sampling method biased toward selection of minimal conflict sets
that involve activities with higher-capacity requests is introduced, and coupled with a non-deterministic choice heuristic to guide the base conflict resolution process. This CSP search is then
embedded within a larger iterative-sampling search framework to broaden search space coverage and promote solution optimization. The efficacy of the overall heuristic algorithm is demonstrated
empirically on
- In Proceedings of AAAI-96 , 1996
Cited by 64 (3 self)
We present an improvement to Harvey and Ginsberg's limited discrepancy search algorithm. Our version eliminates much of the redundancy in the original algorithm, generating each search path from the
root to the maximum search depth only once. For a uniform-depth binary tree of depth d, this reduces the asymptotic complexity from O( d+2 2 2 d ) to O(2 d ). The savings is much less in a partial
tree search, or in a heavily pruned tree. We also show that the overhead of the improved algorithm on a uniform-depth b-ary tree is only a factor of b=(b\Gamma1) compared to depth-first search. This
constant factor is greater on a heavily pruned tree. Finally, we present empirical results showing the utility of limited discrepancy search, as a function of problem difficulty, on the NP-Complete
problem of number partitioning. 1 Introduction: Limited Discrepancy Search The best-known tree-search algorithms are breadth-first and depth-first search. Breadth-first search is rarely used in
[Numpy-discussion] What is the sign of nan?
David Cournapeau david@ar.media.kyoto-u.ac...
Mon Sep 29 22:54:44 CDT 2008
Charles R Harris wrote:
> On Mon, Sep 29, 2008 at 9:02 PM, David Cournapeau
> <david@ar.media.kyoto-u.ac.jp <mailto:david@ar.media.kyoto-u.ac.jp>>
> wrote:
> Charles R Harris wrote:
> >
> > So the proposition is, sign, max, min return nan when any of the
> > arguments is nan.
> Note that internally, signbit (the C function) returns an integer.
> That is the signature of the ufunc. It could be changed...
Nope, I am talking about the C99 signbit macro. man signbit tells me:
signbit - test sign of a real floating point number
#include <math.h>
int signbit(x);
Compile with -std=c99; link with -lm.
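For reference, this is how the question settled in NumPy itself; my addition reflecting the behavior of modern NumPy releases, not part of the original thread:

```python
import numpy as np

# signbit only inspects the IEEE-754 sign bit, so it is well defined even
# for NaN; sign/maximum/minimum instead propagate NaN rather than pick +-1.
print(np.signbit(np.copysign(np.nan, -1.0)))   # True
print(np.signbit(np.copysign(np.nan, 1.0)))    # False
print(np.sign(np.nan))                         # nan
print(np.maximum(1.0, np.nan))                 # nan
```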
More information about the Numpy-discussion mailing list
integrate ln function
Do exactly what galactus said but be careful like mr fantastic said! $\ln(x^2-1)$ is an even function so you really only need to integrate for x > 1, where galactus' method has no problem, and then use the same integral for x < -1.
Or you can go with integration by parts $\int{\ln(x^2-1)}\,dx$: sub. $\ln(x^2-1)=u \Rightarrow \frac{2x\, dx}{x^2-1}=du$; $dv = dx \Rightarrow v = x$; so we have $x\ln(x^2-1)-2\int{\frac{x^2}{x^2-1}dx} = x\ln(x^2-1)-2\int{\frac{x^2-{\color{red}1}+{\color{red} 1}}{x^2-1}dx} = x\ln(x^2-1)-2\int{\left (1+\frac{1}{x^2-1}\right )dx} =...$ correct me anyone if I did anything wrong.
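Completing the elided last step, for $x>1$: $\int\left(1+\frac{1}{x^2-1}\right)dx = x + \frac{1}{2}\ln\frac{x-1}{x+1}$, so $\int\ln(x^2-1)\,dx = x\ln(x^2-1) - 2x + \ln\frac{x+1}{x-1} + C$. A quick numerical check of that antiderivative (my completion, not from the post):

```python
import math

def F(x):
    # candidate antiderivative of ln(x^2 - 1), valid for x > 1
    return x * math.log(x * x - 1) - 2 * x + math.log((x + 1) / (x - 1))

# central-difference check that F'(2) equals ln(2^2 - 1) = ln 3
h = 1e-6
numeric = (F(2 + h) - F(2 - h)) / (2 * h)
print(abs(numeric - math.log(3)) < 1e-8)   # True
```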
Homotopy type of the complement to a subvariety of $\mathbb C^n$
Let $V^k\subset \mathbb C^n$ be a subvariety such that all its irreducible components have dimension $\ge k$. Is it true that $\mathbb C^n\setminus V^k$ has the homotopy type of a CW complex of dimension $\le 2n-k-1$?
Comments. 1)This is true for $k=n-1$, since in this case $\mathbb C^n\setminus V^{n-1}$ is affine. Case $k=0$ is trivial. 2) This question would help to answer: An analogue of Lefschetz hyperplane
theorem for complements to subvarieties in $\mathbb C^n$ ?
ag.algebraic-geometry at.algebraic-topology complex-geometry
1 Answer
Take $n=4$ and let $V = \{ z_1=z_2=0 \} \cup \{ z_3=z_4=0 \}$. I claim that $\mathbb{C}^4 \setminus V$ is homotopic to $S^3 \times S^3$, which has nontrivial homology in degree $6$,
contrary to your supposed bound, which is in degree $5$.
Note that $\mathbb{C}^4 \setminus V = \left( \mathbb{C}^2 \setminus \{ (0,0) \} \right)^2$. Taking the quotient by $\mathbb{R}_{+}$, we see that $\mathbb{C}^2 \setminus \{ (0,0) \}$
is homotopic to $S^3$, so $\mathbb{C}^4 \setminus V$ is homotopic to $S^3 \times S^3$.
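To make the degree-$6$ class explicit (a standard Künneth computation, added here for convenience):

$$H^6(S^3 \times S^3) \;\cong\; \bigoplus_{p+q=6} H^p(S^3) \otimes H^q(S^3) \;\cong\; H^3(S^3) \otimes H^3(S^3) \;\cong\; \mathbb{Z},$$

so $H^6(\mathbb{C}^4 \setminus V) \neq 0$, while a CW complex of dimension $\le 2n-k-1 = 2\cdot 4 - 2 - 1 = 5$ would have vanishing $H^6$.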
I can prove the required cohomology vanishing if you require that $V$ be Cohen-Macaulay.
Write $U$ for $\mathbb{C}^4 \setminus V$. We have the Hodge-de Rham spectral sequence: $H^q(U, \Omega^p) \implies H^{p+q}(U, \mathbb{C})$. Since $U$ is an open subset of $\mathbb{C}^n$, we have $\Omega^p \cong \mathcal{O}^{\oplus \binom{n}{p}}$, so $H^q(U, \Omega^p) \cong H^q(U, \mathcal{O})^{\oplus \binom{n}{p}}$.
We can identify $H^q(U, \mathcal{O})$ with a local cohomology module of $V$, which the Cohen-Macaulay condition should force to be $0$ for $q > n-k-1$. So $H^q(\Omega^p)$'s will be
zero for $q>n-k-1$. Then the spectral sequence immediately forces cohomology to vanish for $p+q > n+(n-k-1)$, as you desired.
I have no idea of how to get a statement in homotopy out of the Cohen-Macaulay condition.
Thank you very much! Nice counterexample! – aglearner May 11 '11 at 13:09
Salome [6.5.0] bug: Cannot create arc with 3 co-linear points!
Up to Use
Salome [6.5.0] bug: Cannot create arc with 3 co-linear points!
Posted by
at August 29. 2012
Hello developers,
It seems that Salome 6.5.0 has regressed! Try the following in a GUI:
Vertex_1 = geompy.MakeVertex(100, 0, 0)
Vertex_2 = geompy.MakeVertex(100, 10, 0)
Vertex_3 = geompy.MakeVertex(100, 20, 0)
Then using the GUI try to create an arc using Vertex_2 as the center and the 2 other points as the start and end of the arc. Salome 6.5.0 fails with an error complaining that all 3 points are
co-linear. Neither available icon (method) works. I believe (although I have not gone back to test with older Salome code) that one should be able to draw a semi-circle with a radius of 10.
Mathematically, there are two possible solutions to the problem. I am expecting it to give me at least one answer, based on which vertex is chosen as the starting point and the second possible
solution if I chose the other vertex.
Is anybody testing the newer Salome code for basic functionality at the 2D entity creation level? This seems to be an ongoing fundamental oversight in the code development process...
Re: Salome [6.5.0] bug: Cannot create arc with 3 co-linear points!
Posted by
Vadim SANDLER
at August 29. 2012
Hello JMB,
Indeed, this is NOT a regression - the same result is obtained on SALOME 6.4.0, 6.3.1 and 5.1.6 (previous versions have not been checked, but most likely the result would be the same).
Actually, in this specific case - when all three points lie on the same line - there are infinitely many solutions, not only two.
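To illustrate this point concretely (a plain-Python sketch added here, independent of Salome): each choice of unit vector perpendicular to the line through the three points defines a different valid semicircular arc, so there is a whole circle's worth of solutions.

```python
import math

# The three collinear points from the post (centre and the two arc endpoints).
C = (100.0, 10.0, 0.0)   # intended centre
A = (100.0, 0.0, 0.0)    # arc start
B = (100.0, 20.0, 0.0)   # arc end

def dist(p, q):
    return math.dist(p, q)

r = dist(A, C)
assert dist(B, C) == r == 10.0  # both endpoints lie at radius 10 from C

# v_hat points from the centre towards the start point. Any unit vector u
# perpendicular to v_hat (there is a whole circle of choices, hence infinitely
# many solutions) defines a valid semicircle from A to B:
#     P(theta) = C + r*(cos(theta)*v_hat + sin(theta)*u),  0 <= theta <= pi
v_hat = (0.0, -1.0, 0.0)
for phi in (0.0, 0.7, math.pi / 2):           # three of the infinitely many u's
    u = (math.cos(phi), 0.0, math.sin(phi))   # unit vector, perpendicular to v_hat
    mid = tuple(C[i] + r * u[i] for i in range(3))  # arc midpoint (theta = pi/2)
    assert abs(dist(mid, C) - r) < 1e-12      # midpoint lies on the circle
```

This is presumably why the MakeSketcher() route works where the 3-point arc cannot: the sketch's working plane supplies the missing perpendicular direction and picks out one of the infinitely many arcs.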
Re: Salome [6.5.0] bug: Cannot create arc with 3 co-linear points!
Posted by
Vadim SANDLER
at August 29. 2012
Hello JMB,
To obtain required result, you can use MakeSketcher() functionality.
Re: Salome [6.5.0] bug: Cannot create arc with 3 co-linear points!
Posted by
at August 30. 2012
Previously Vadim SANDLER wrote:
Actually, in this specific case - when all three points lie on the same line - there are infinitely many solutions, not only two.
Hello Vadim SANDLER,
You are ABSOLUTELY RIGHT, there are infinitely many solutions. The problems with using the sketcher are:
1. Once one dumps the study from a sketcher GUI session, it is more complicated to convert it into a parametric-type Python script.
2. Or I do not know how to do (1), since the sketcher's output is all in string format.
So I tried the sketcher and abandoned it. Thanks for the reply and refreshing my analytical geometry theory!
Regards, JMB
Trigonometry/The Gibbs Overshoot
On the previous page, it can be seen that the approximations to the square wave go slightly above the wave, and then come down again. As more terms are added, this overshoot gets slightly greater but
persists for a smaller range of values of x before dropping back to the right level. In the limit, as the number of terms tends to infinity, the overshoot does not die away: the partial sums exceed
the level of the square wave by about 9% of the size of the jump. For a square wave taking the values $\pm 1$, the precise limiting peak value is
$\frac{2}{\pi}\int_0^\pi \frac{\sin t}{t}\, dt \approx 1.178980$
This overshoot is known as the Gibbs effect, Gibbs phenomenon or Gibbs overshoot, after the mathematical physicist Josiah Willard Gibbs (1839-1903), who explained the phenomenon in 1899.
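As a numerical check (a Python sketch, not part of the original page), one can approximate the integral above by the midpoint rule, and also evaluate a high-order Fourier partial sum of the $\pm 1$ square wave at its first maximum, $x = \pi/(2N)$, to watch it overshoot to the same value:

```python
import math

# Gibbs constant (2/pi) * Si(pi), approximated by the midpoint rule
n = 100000
h = math.pi / n
si_pi = sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h) for k in range(n)) * h
gibbs = 2 / math.pi * si_pi
print(round(gibbs, 6))  # ≈ 1.178980

# Peak of the Fourier partial sum of the +/-1 square wave with N odd harmonics:
#   S_N(x) = (4/pi) * sum_{k=0}^{N-1} sin((2k+1)x)/(2k+1),
# whose first (highest) maximum sits at x = pi/(2N).
N = 400
x_peak = math.pi / (2 * N)
peak = 4 / math.pi * sum(math.sin((2 * k + 1) * x_peak) / (2 * k + 1)
                         for k in range(N))
print(round(peak, 5))  # close to the Gibbs constant above
```

The second computation makes the connection visible: $S_N(\pi/2N)$ is itself a midpoint Riemann sum for the integral, which is why the partial-sum peaks converge to exactly this constant.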
Last modified on 8 January 2011, at 18:29
Laplace transform and Fourier transform: what is the difference?
December 28th 2010, 06:51 PM
Laplace transform and Fourier transform: what is the difference?
I want to know what the difference is between the Laplace transform and the Fourier transform.
Can anyone show me, and help me with it?
December 28th 2010, 06:58 PM
Bruno J.
Well, for one, they don't have the same definition. What else do you want to know? You have to be specific.
December 28th 2010, 07:28 PM
Chris L T521
They have different kernels. In Laplace, the kernel is $e^{-st}$; in Fourier, the kernel is $e^{-2\pi i x t}$.
Also, they are defined over different domains. Laplace is calculated for $t>0$, whereas in Fourier, $t\in(-\infty,\infty)$.
As Bruno said, please be more specific in what you're asking. These are just a couple of the differences between these two transforms.
December 28th 2010, 11:30 PM
My supervisor told me they are both the same frequency domain, and I said no, they are different: the Fourier transform is the frequency domain!
What is the Laplace domain?
December 29th 2010, 01:35 AM
If $f(x)$ is an absolutely integrable real function which is zero for all $x<0$, then $(Ff)(\omega)=(Lf)(i \omega)$ (assuming I have done the algebra right, and give or take a multiplicative
constant due to how the FT is defined).
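This identity is easy to check numerically. Below is a Python sketch (not from the thread) for $f(t)=e^{-t}$, $t\ge 0$; it uses the angular-frequency convention $e^{-i\omega t}$, for which the multiplicative constant mentioned above is $1$:

```python
import cmath

# f(t) = e^{-t} for t >= 0, and f(t) = 0 for t < 0 (absolutely integrable).

def laplace(s):
    # closed form: L{e^{-t}}(s) = 1/(s + 1) for Re(s) > -1
    return 1 / (s + 1)

def fourier(w, T=40.0, n=200000):
    # midpoint-rule approximation of int_0^inf e^{-t} e^{-i w t} dt,
    # truncated at t = T (the tail beyond T has size e^{-T}, negligible)
    h = T / n
    return sum(cmath.exp(-(1 + 1j * w) * (k + 0.5) * h) for k in range(n)) * h

w = 2.0
print(abs(fourier(w) - laplace(1j * w)))  # tiny: (F f)(w) = (L f)(i w)
```

So for such one-sided functions the Fourier transform is literally the Laplace transform restricted to the imaginary axis $s = i\omega$.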
December 29th 2010, 03:29 AM
You are right!
But I need to show the difference.
In the Wikipedia article I read this:
The Laplace transform is related to the Fourier transform, but whereas the Fourier transform resolves a function or signal into its modes of vibration, the Laplace transform resolves a function
into its moments. Like the Fourier transform, the Laplace transform is used for solving differential and integral equations.
Can anyone explain it more for me?
December 29th 2010, 09:53 AM
Exactly what do you not understand in the phrase "the Fourier transform resolves a function or signal into its modes of vibration"?
And the phrase "the Laplace transform resolves a function into its moments"? (You can Google for the definition of the moments of a function if need be.)
Both are of use in differential equations because of the relationship between the FT and LT of a derivative and the function itself, and certain other properties which the two transforms have in
common such as the convolution theorems, ...
December 29th 2010, 07:55 PM
I cannot find exact definitions of "vibration" and "moments"!
December 29th 2010, 10:51 PM
The Fourier transform gives a representation of a function in terms of a linear superposition of sinusoids; that is, it analyses the function/signal into vibrational components.
The Laplace transform is related to the moment generating function, and allows the generation of the moments when they exist (in fact they both do).
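To make "resolves a function into its moments" concrete (a Python sketch, not from the thread): for $f(t)=e^{-t}$ on $t\ge 0$, the Laplace transform is $L(s)=1/(1+s)=\sum_n (-1)^n s^n$; comparing with the general expansion $L(s)=\sum_n (-1)^n m_n s^n/n!$ reads off the moments $m_n = n!$, which direct integration confirms:

```python
import math

# density f(t) = e^{-t} on t >= 0; Laplace transform L(s) = 1/(1+s).
# Expanding L(s) = sum_n (-1)^n m_n s^n / n! gives moments m_n = n!.

def moment(nn, T=60.0, n=120000):
    # midpoint-rule approximation of the nn-th moment int_0^inf t^nn e^{-t} dt
    h = T / n
    return sum(((k + 0.5) * h) ** nn * math.exp(-(k + 0.5) * h) * h
               for k in range(n))

for nn in range(4):
    print(nn, round(moment(nn), 3))  # ≈ 0!, 1!, 2!, 3! = 1, 1, 2, 6
```

So where the Fourier transform's independent variable indexes frequencies, the Taylor coefficients of the Laplace transform at $s=0$ index the moments of the function.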