ALEX Lesson Plan: The Real Number System
Lesson Plan ID: 11762
Title: The Real Number System
Overview/Annotation: This lesson focuses on a clear understanding of the real number system by using a variety of teaching strategies. A graphic organizer on a slideshow illustrates
how the classifications of numbers relate. A kinesthetic activity helps students compare the classifications.
Content Standard(s): MA2013 (8) 1. Know that numbers that are not rational are called irrational. Understand informally that every number has a decimal expansion; for rational numbers,
show that the decimal expansion repeats eventually, and convert a decimal expansion which repeats eventually into a rational number. [8-NS1]
Local/National Standards:
Primary Learning Objective(s): Students will classify numbers within the real number system. Students will identify numbers as rational or irrational. Students will classify rational numbers
as integers, whole numbers, or natural numbers.
Additional Learning Objective(s):
Approximate Duration of the Lesson: 31 to 60 Minutes
Materials and Equipment: 6 chairs, signs (Real, Rational, Irrational, Integer, Whole, Natural), container to draw numbers from, practice sheet for each student
Technology Resources Needed: PowerPoint presentation (see attached), desktop publishing software such as MS Word or Publisher for making signs and numbers (or use the attached file)
Background/Preparation: This lesson may be used as an introductory lesson where students have limited knowledge of the vocabulary and the teacher defines the classifications. This
lesson may also be used as reinforcement with students who have already defined the classifications.
Procedures/Activities: 1.)The teacher will need to define and describe each classification of the real number system. It is a good idea to illustrate the progression through time. For
example, when students were in kindergarten they had only an understanding of numbers that they could count, hence the term counting numbers (or natural
numbers). In first grade they understood zero and how it functioned as a placeholder. In seventh grade they learned to use integers. In Algebra II they will
learn an entirely new number system, complex numbers.
2.)Use the diagrams in the attached slideshow presentation to further illustrate the attributes of each classification.
3.)Select six students to sit in chairs in the front of the room facing the class. The first round should probably be with students who have a good understanding
of the concept.
4.)Give each student a sign to wear: (in this order) Real, Rational, Irrational, Integer, Whole, Natural.
5.)The remaining students will draw numbers from a container. As each student reads his/her number, the students in the front will stand if the number meets the
constraints of their classification.
6.)Teacher observation and assessment will determine how many rounds are played. When the teacher feels the connections have been made, a discussion will evolve.
Students will begin to see that Irrational and Rational never stand at the same time. They will also see that Real always stands. If Natural is standing, then
Whole and Integer will also. Allow students to make these observations. Do not rush the process.
7.)Give the students the practice sheet for classwork or homework.
8.)Be sure to give enough guided practice before assigning independent work.
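The containment pattern students discover in step 6 (Rational and Irrational never stand together; Real always stands; if Natural stands, Whole and Integer do too) can be sketched in a few lines of code. This is a Python sketch with a hypothetical helper, not part of the lesson; it assumes each drawn number is an exact fraction p/q, so irrational numbers are out of scope here:

```python
from fractions import Fraction

# Hypothetical helper: each drawn number is an exact fraction p/q,
# so every input here is rational by construction.
def who_stands(p, q):
    value = Fraction(p, q)
    standing = ["Real", "Rational"]      # every fraction is rational, hence real
    if value.denominator == 1:           # the fraction reduces to an integer
        standing.append("Integer")
        if value >= 0:
            standing.append("Whole")     # whole numbers: 0, 1, 2, ...
        if value >= 1:
            standing.append("Natural")   # counting numbers: 1, 2, 3, ...
    return standing

print(who_stands(12, 4))   # ['Real', 'Rational', 'Integer', 'Whole', 'Natural']
print(who_stands(-7, 1))   # ['Real', 'Rational', 'Integer']
print(who_stands(1, 2))    # ['Real', 'Rational']
```

Note that each classification implies the ones appended before it, which is exactly what the standing/sitting activity makes visible.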
Attachments: (Some files will display in a new window. Others will prompt you to download.)
Numbers.doc
Practice Sheet - Real numbers.doc
The Real Number System 2.ppt
Assessment Strategies: Re-teaching decisions will be made based on teacher observation of the kinesthetic activity and the follow-up practice. Students will also be assessed using a
unit test.
Extension: The lesson could be expanded by adding the complex number system to the kinesthetic activity. Students could also be asked to place various numbers in the
appropriate area on the diagram in the slideshow presentation.
Remediation: Special needs students may need to complete the practice sheet with peer pairing. The teacher may also provide the students with teaching aids such as the
diagram used in the slideshow or sets defining each number classification.
Each area below is a direct link to general teaching strategies/classroom accommodations for students with identified learning and/or behavior problems such as: reading or math performance below
grade level; test or classroom assignments/quizzes at a failing level; failure to complete assignments independently; difficulty with short-term memory, abstract concepts, staying on task, or
following directions; poor peer interaction or temper tantrums, and other learning or behavior problems.
Presentation of Material
Environment
Time Demands
Materials
Attention
Using Groups and Peers
Assisting the Reluctant Starter
Dealing with Inappropriate Behavior
Be sure to check the student's IEP for specific accommodations.
Variations Submitted by ALEX Users:
Probability Inequalities for the Sum in Sampling without Replacement. The Annals of Statistics, 2(1):39–48
Results 1 - 10 of 12
- Journal of Artificial Intelligence Research , 2004
"... Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction one attempts to solve the problem of using a labeled training set
to label a set of unlabeled points, which are given to the learner prior to learning. Although transduction ..."
Cited by 22 (3 self)
Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction one attempts to solve the problem of using a labeled training set to
label a set of unlabeled points, which are given to the learner prior to learning. Although transduction seems at the outset to be an easier task than induction, there have not been many provably
useful algorithms for transduction. Moreover, the precise relation between induction and transduction has not yet been determined. The main theoretical developments related to transduction were
presented by Vapnik more than twenty years ago. One of Vapnik’s basic results is a rather tight error bound for transductive classification based on an exact computation of the hypergeometric tail.
While being tight, this bound is given implicitly via a computational routine. Our first contribution is a somewhat looser but explicit characterization of a slightly extended PAC-Bayesian version of
Vapnik’s transductive bound. This characterization is obtained using concentration inequalities for the tail of sums of random variables obtained by sampling without replacement. We then derive error
bounds for compression schemes such as (transductive) support vector machines and for transduction algorithms based on clustering. The main observation used for deriving these new error bounds and
algorithms is that the unlabeled test points, which in the transductive setting are known in advance, can be used in order to construct useful data dependent prior distributions over the hypothesis
space. 1.
- Proc. 20th Annual Conference on Computational Learning Theory , 2007
"... Abstract. We present data-dependent error bounds for transductive learning based on transductive Rademacher complexity. For specific algorithms we provide bounds on their Rademacher complexity
based on their “unlabeled-labeled ” decomposition. This decomposition technique applies to many current and ..."
Cited by 13 (2 self)
Abstract. We present data-dependent error bounds for transductive learning based on transductive Rademacher complexity. For specific algorithms we provide bounds on their Rademacher complexity based
on their “unlabeled-labeled ” decomposition. This decomposition technique applies to many current and practical graph-based algorithms. Finally, we present a new PAC-Bayesian bound for mixtures of
transductive algorithms based on our Rademacher bounds. 1
- Journal of Machine Learning Research , 2002
"... We extend the VC theory of statistical learning to data dependent spaces of classifiers. ..."
, 2010
"... Estimation of distribution algorithms (EDAs) are widely used in stochastic optimization. Impressive experimental results have been reported in the literature. However, little work has been done
on analyzing the computation time of EDAs in relation to the problem size. It is still unclear how well ED ..."
Cited by 3 (3 self)
Estimation of distribution algorithms (EDAs) are widely used in stochastic optimization. Impressive experimental results have been reported in the literature. However, little work has been done on
analyzing the computation time of EDAs in relation to the problem size. It is still unclear how well EDAs (with a finite population size larger than two) will scale up when the dimension of the
optimization problem (problem size) goes up. This paper studies the computational time complexity of a simple EDA, i.e., the univariate marginal distribution algorithm (UMDA), in order to gain more
insight into EDAs complexity. First, we discuss how to measure the computational time complexity of EDAs. A classification of problem hardness based on our discussions is then given. Second, we prove
a theorem related to problem hardness and the probability conditions of
- in Proc. 2009 IEEE Congr. Evol. Comput. (CEC’09 , 2009
"... Abstract—Despite the wide-spread popularity of estimation of distribution algorithms (EDAs), there has been no theoretical proof that there exist optimisation problems where EDAs perform
significantly better than traditional evolutionary algorithms. Here, it is proved rigorously that on a problem ca ..."
Cited by 3 (3 self)
Abstract—Despite the wide-spread popularity of estimation of distribution algorithms (EDAs), there has been no theoretical proof that there exist optimisation problems where EDAs perform
significantly better than traditional evolutionary algorithms. Here, it is proved rigorously that on a problem called SUBSTRING, a simple EDA called univariate marginal distribution algorithm (UMDA)
is efficient, whereas the (1+1) EA is highly inefficient. Such studies are essential in gaining insight into fundamental research issues, i.e., what problem characteristics make an EDA or EA
efficient, under what conditions an EDA is expected to outperform an EA, and what key factors are in an EDA that make it efficient or inefficient. I.
, 2009
"... Marginal Distribution Algorithms (EDAs) which do not consider the dependencies among the variables. In this paper, on the basis of our proposed approach in [1], we present a rigorous proof for
the result that the UMDA with margins (in [1] we merely showed the effectiveness of margins) cannot find th ..."
Cited by 1 (1 self)
Marginal Distribution Algorithms (EDAs) which do not consider the dependencies among the variables. In this paper, on the basis of our proposed approach in [1], we present a rigorous proof for the
result that the UMDA with margins (in [1] we merely showed the effectiveness of margins) cannot find the global optimum of the TRAPLEADINGONES problem [2] within polynomial number of generations with
a probability that is super-polynomially close to 1. Such a theoretical result is significant in sheding light on the fundamental issues of what problem characteristics make an EDA hard/easy and when
an EDA is expected to perform well/poorly for a given problem.
"... Abstract. We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. • We show that for every code, the ratio
of its list decoding radius to its minimum distance stays unchanged under the tensor product operation (rat ..."
Cited by 1 (0 self)
Abstract. We design the first efficient algorithms and prove new combinatorial bounds for list decoding tensor products of codes and interleaved codes. • We show that for every code, the ratio of its
list decoding radius to its minimum distance stays unchanged under the tensor product operation (rather than squaring, as one might expect). This gives the first efficient list decoders and new
combinatorial bounds for some natural codes including multivariate polynomials where the degree in each variable is bounded. • We show that for every code, its list decoding radius remains unchanged
under m-wise interleaving for an integer m. This generalizes a recent result of Dinur et al. [6], who proved such a result for interleaved Hadamard codes (equivalently, linear transformations). •
Using the notion of generalized Hamming weights, we give better list size bounds for both tensoring and interleaving of binary linear codes. By analyzing the weight distribution of these codes, we
reduce the task of bounding the list size to bounding the number of close-by low-rank codewords. For decoding linear transformations, using rank-reduction together with other ideas, we obtain list
size bounds that are tight over small fields. Our results give better bounds on the list decoding radius than what is obtained from the Johnson bound, and yield rather general families of codes
decodable beyond the Johnson bound. 1.
, 2008
"... A new approach to Poisson approximation is proposed. The basic idea is very simple and based on properties of the Charlier polynomials and the Parseval identity. Such an approach quickly leads
to new effective bounds for several Poisson approximation problems. A selected survey on diverse Poisson ap ..."
Cited by 1 (0 self)
A new approach to Poisson approximation is proposed. The basic idea is very simple and based on properties of the Charlier polynomials and the Parseval identity. Such an approach quickly leads to new
effective bounds for several Poisson approximation problems. A selected survey on diverse Poisson approximation results is also given.
"... Abstract. We illustrate a process that constructs martingales from raw material that arises naturally from the theory of sampling without replacement. The usefulness of the new martingales is
illustrated by the development of maximal inequalities for permuted sequences of real numbers. Some of these ..."
Abstract. We illustrate a process that constructs martingales from raw material that arises naturally from the theory of sampling without replacement. The usefulness of the new martingales is
illustrated by the development of maximal inequalities for permuted sequences of real numbers. Some of these inequalities are new and some are variations of classical inequalities like those
introduced by A. Garsia in the study of rearrangement of orthogonal series.
Can anyone help me regarding
Author Can anyone help me regarding
Ranch Hand
Hello everybody..
Joined: Jan
18, 2007 Can anyone help me regarding the following question?
Posts: 110
Consider a machine that performs calculations 4 bits at a time. Eight-bit 2's complement numbers can be added by adding the four least significant bits, followed by the four most
significant bits. The leftmost bit is used for the sign, as usual. With 8 bits for each number, add -4 and -6, using 4-bit binary 2's complement arithmetic. Did overflow occur?
Did carry occur? Verify your numeric result.
Ranch Hand
Joined: Jan
18, 2007 Machine performance: (say) 0011 (i.e. 4 bits at a time)
Posts: 110 Eight-bit 2's complement numbers: (say) 0110 0111
0111 (first)
0110 (second)
"The leftmost bit is used for the sign, as usual." What does that mean?
"With 8 bits for each number, add -4 and -6, using 4-bit binary 2's complement arithmetic." What is the procedure?
How can I find out whether overflows occur or not?
And what is a carry?
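For reference, here is a sketch of the procedure the question describes (Python; the nibble-at-a-time addition with carry and overflow flags; the function and variable names are my own):

```python
# Numbers are held as 8-bit two's complement bit patterns and
# added four bits at a time, as the question describes.

def to_twos(n, bits=8):
    return n & ((1 << bits) - 1)            # e.g. -4 -> 0b11111100

def add_in_nibbles(a, b):
    a8, b8 = to_twos(a), to_twos(b)
    lo = (a8 & 0xF) + (b8 & 0xF)            # add the four least significant bits
    carry_mid = lo >> 4                     # carry into the high nibble
    hi = (a8 >> 4) + (b8 >> 4) + carry_mid  # then the four most significant bits
    carry_out = hi >> 4                     # carry: a bit fell off the left end
    result = ((hi & 0xF) << 4) | (lo & 0xF)
    # overflow: the operands share a sign but the result's sign differs
    sa, sb, sr = a8 >> 7, b8 >> 7, result >> 7
    overflow = (sa == sb) and (sr != sa)
    signed = result - 256 if sr else result  # reinterpret the 8 bits as signed
    return signed, bool(carry_out), overflow

print(add_in_nibbles(-4, -6))   # (-10, True, False): carry yes, overflow no
```

So for -4 + -6 the result is -10, a carry out of the leftmost bit does occur, and overflow does not (two negatives produced a negative).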
Joined: Mar
06, 2001
Posts: 13459 This still looks like a homework question, and it is still in the wrong forum.
I like...
subject: Can anyone help me regarding
Sample Midterm 2
In order to see a complete worked solution, click on the problem number. To get back to this test from the solution page, click where it says Sample Midterm 2 in the title space.
In problems 1 - 6: find any x and y intercepts; find any asymptotes (horizontal, vertical, or otherwise); find where the function assumes positive values and where it assumes negative values; find the
places where the first derivative is either 0 or does not exist, and where the function is increasing and decreasing; find the places where the second derivative is 0 or doesn't exist, and where the
function is concave up and concave down. Find the local maxima and minima and points of inflection, and sketch the graph.
1. y = x^3 - 3x
2. y = 3x^5 - 5x^3
3. through 6. (functions displayed as images; not recoverable here)
7. A person is going to make a box by taking a square piece of cardboard, which is 12 inches on a side, like the one in the picture to the right, cutting squares out of the corners, and folding the
edges like the other figure in the picture to the right. How big of a square should they cut out of the corners in order to maximize the volume of the resulting box?
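For problem 7, the volume being maximized is V(x) = x(12 - 2x)^2, where x is the side of the cut square. A quick numeric check of the answer (a Python sketch, not the intended calculus work):

```python
# Cut x from each corner of a 12" square; folding gives a box with
# base (12 - 2x) by (12 - 2x) and height x, for 0 < x < 6.
def volume(x):
    return x * (12 - 2 * x) ** 2

# Brute-force scan of the interval in 0.001-inch steps.
best = max((i / 1000 for i in range(1, 6000)), key=volume)
print(best, volume(best))   # 2.0 128.0 -- a 2-inch cut maximizes the volume
```

This agrees with the calculus: V'(x) = (12 - 2x)(12 - 6x) vanishes at x = 2 and x = 6, and only x = 2 gives a box.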
8. A rectangular sheet of metal 20 feet long and 2 feet wide is bent down the middle to form a "V". Two triangular pieces of metal are soldered onto the ends to form a prismatic trough as in the
picture to the right. What does the height of the triangle have to be to maximize the volume of the trough?
9. What are the dimensions of a right circular cylindrical can with a given surface area which will result in the maximum volume?
Hmwk Help Arc length
Find the length of the curve.
8. 3x^(3/2) from x=0 to x= 5/9
9. y = 1/6 x^3 + 1/(2x) from x=1, to x=5
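For problems like 9, the arc length is L = ∫ sqrt(1 + (y')^2) dx, and a numerical estimate is a handy way to check the by-hand answer (a Python sketch; the step counts and tolerances are arbitrary choices of mine):

```python
import math

# Midpoint-rule arc length with a central-difference slope:
# L = integral of sqrt(1 + f'(x)^2) from a to b.
def arc_length(f, a, b, n=100000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        d = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # numerical derivative
        total += math.sqrt(1 + d * d) * h
    return total

# Problem 9: y = x^3/6 + 1/(2x) on [1, 5]
print(arc_length(lambda x: x**3 / 6 + 1 / (2 * x), 1, 5))   # ~21.0667 (= 316/15)
```

For this particular curve 1 + (y')^2 is a perfect square, which is why the exact answer 316/15 comes out cleanly by hand.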
17) The spring of a spring balance is 6.0 in. long when there is no weight on the balance, and it is 9.0 in. long with 8.0 lb hung from the balance. How much work is done in stretching it from
6.0 in. to a length of 14.5 in.?
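For 17, Hooke's law gives F = kx with k determined from the given data, and the work is the integral of F over the stretch. A sketch of the arithmetic (Python; units are inches and pounds):

```python
# 8 lb stretches the spring from 6.0 in. to 9.0 in., so k = 8/3 lb/in.
k = 8.0 / (9.0 - 6.0)

# Stretching to 14.5 in. means a stretch of 8.5 in. beyond natural length.
stretch = 14.5 - 6.0

# W = integral of k*x dx from 0 to stretch = k * stretch^2 / 2
W = k * stretch ** 2 / 2
print(W)   # about 96.33 inch-pounds
```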
Lomita Geometry Tutor
Greetings,My name is Chris and the following is a summary of my educational background, and my tutoring and teaching experience. I earned my MBA in June 2009 from the UCLA Anderson School of
Management (#1 Fully-Employed MBA Program 2007, Business Week). I also have a Master’s of Science in Biomed...
30 Subjects: including geometry, chemistry, calculus, physics
...I have recently taken and passed three college chemistry courses with an "A" and still have the material fresh in my mind. I often use a different approach when tutoring chemistry than I would
with math or physics, because I feel that chemistry is similar to learning a new language. Once you le...
10 Subjects: including geometry, chemistry, physics, calculus
...My students usually raise their grades by one letter after our sessions. I engage my students by thinking about various ways and teaching tricks and approaches in order to start the problem. I
have a vast experience in doing academic research having participated in projects at Loyola Marymount U...
14 Subjects: including geometry, calculus, physics, algebra 1
...I assign students full practice tests to determine their areas of strength and weakness. These test results help determine our study plan. I have 12 years of film production experience,
including writing and producing completed and distributed projects.
29 Subjects: including geometry, chemistry, reading, English
...If you really want to master difficult topics or perform up to your potential on standardized tests like the MCAT, learning how to apply your knowledge under stress is essential. Also, I
actually like teaching so I'll always show up with a smile on my face though I of course individually tailor ...
18 Subjects: including geometry, English, reading, chemistry
3D rotation problem, help please!
3D rotation problem, help please!
Hi all,
I'm writing a 3D space game. I have Vector3 to represent the position, velocity and angular velocity, and a Quaternion to represent the orientation of the spaceship.
Now I want to rotate the ship when I press the directional button, say turn it left when press left arrow key and turn it up when press the up arrow key. I have Vector3 to represent the force,
torque etc.
I also have a method named addForceAtPoint(Vector3 force, Vector3 position) so that I can add a force at particular point, say add it on the right wing to turn it left. The position can be
converted to the global coordinate no problem, but it's the force that bothers me. Say I call this:
//add a force on the z direction(initially the ship is moving in the positive z direction)
// at the point (30,0,0) - a point that is relative to the body, not in the global
// coordinate, it will be converted in the body of the method.
addForceAtPoint(Vector3(0,0,50), Vector3(30,0,0));
Now I can use the same point over and over again because it can be converted to the global coordinate every time, but I can't do the same to the force as it can't always be in the z direction.
The force should always be in the direction of the spaceship. That's what I don't know how to do.
If any of you kind souls ever encountered this problem or have experience on 3D rotation problem, I will much appreciate it if you can share a solution.
Many thanks!
If the force will always be in the direction of the spaceship, use spaceship.velocity (I'm assuming that's what you mean by "in the direction of the spaceship") and be done with it.
It looks like you want to do this from scratch (ie, without using openGL or DirectX or something) because (at least in openGL) this need not be this complex because you can use a matrix stack (I
believe they are there in DirectX too).
You could implement one yourself, the idea is fairly simple: You have a matrix representing the current orientation and location (a 4x4 matrix) of the ship, and one representing the next change
to take place -- but this one is not relative to the ship's position, it's relative to the origin, ie, as if the ship were dead center, with no rotation. So the transformation matrix is simple,
because this is no change:
          --- orientation ---
          x-axis  y-axis  z-axis   pos
   x   |    1       0       0       0  |
   y   |    0       1       0       0  |
   z   |    0       0       1       0  |
       |    0       0       0       1  |   <- need 4th row for multiplication (will always be 0,0,0,1)
Multiplying another 4x4 matrix by this will leave it exactly the same (ie, no matter where the ship is and how it is oriented). Which means if you (for example) change row one column 4, the
product of the multiplication will have a different x-axis position. To rotate something relative to itself, multiply it's current state by a matrix tweaked w/r/t eg. the z value of the x-axis
AND the x value of the z-axis (turn left or right).
That's a bit of a headache, which is why most people would use an API like openGL or DirectX, which includes simple functions like "yRotate(degrees)" that do the matrix math for you.
In any case, then you can keep the "z force" as a simple constant because you would apply it by multiplying by a matrix with a positive value in row 3 column 4.
However, I thought you could do all that with quaternions anyway?
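To make the matrix idea above concrete, here is a plain-Python sketch (no OpenGL; the names are my own) showing that multiplying a point by a transform with a value in the "pos" column moves the point:

```python
# Multiply a 4x4 transform by a homogeneous point (x, y, z, 1).
def apply(m, p):
    return tuple(sum(m[r][c] * p[c] for c in range(4)) for r in range(4))

# Identity orientation, but with +5 in the z "pos" slot (row 3, column 4).
identity_moved = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 5],
    [0, 0, 0, 1],
]

print(apply(identity_moved, (2, 3, 4, 1)))   # (2, 3, 9, 1): z moved forward by 5
```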
Thank you both for the advice. I've decided to just use Quaternion to rotate the ship, i.e. when a key pressed, just rotate the ship by certain amount without adding force.
However it is still a problem to me, as the quaternion of the ship is easy to get, how do I get the new quaternion that will be used to rotate the ship? Could you give me an example please?
Quaternions have the axis and a rotation angle, so presumably if you know how much you want to rotate by, then you're done.
Note that rotations (whether by quaternions or any other method) are not absolute; if you've done a rotation, then further rotations will be done with respect to the new axes, not some
theoretical "absolute x-y-z".
tabstop, I think the OP wants to know what the new axis will be.
Thanks tabstop, would you let me know if this works:
//for example, i want to rotate the ship to the left
Quaternion temp = Quaternion(pi/12, 0, 1, 0);
player->orientation *= temp;
And after the rotation, how do I use the information stored in the quaternion to display the object on screen. I'm using openGL. Obviously glRotated is used, but I'm not sure what are the
parameters to pass to it.
So type "man glRotate", or, if you're not on a *nix system, do the equivalent web search. You'll see:
void glRotated( GLdouble angle,
GLdouble x,
GLdouble y,
GLdouble z )
and I'll bet you can guess what goes where.
I've been fooling around with openGL for a little while and let me guess: this spaceship you are talking about is supposed to be "first person perspective", right? Like, it's not an object we see
zooming around and rotating in the view right? You want to be in the ship, looking out of it?
That is why you want to use quaternions, right? Because if the answer to all these questions is not yes, I would sooner chew my own hands off than go about rotation and transformation than the
way you are doing it.
Well...the spaceship I'm talking about is 3rd person perspective, while the camera follows the ship. I.e. when the ship turns, the camera turns as well, it's kinda like the control of WOW. Sorry
I haven't made that clear earlier.
I'm still new to those stuff, so If you think I'm on the wrong track, would you kindly let me know and possibly show me a way please?
By the way, I still don't know if the code I wrote in post #7 would work, so if you guys don't mind, can you tell me if it is correct?
Many thanks guys.
WELL! That is pretty fancy then isn't it.
I'm new too. I decided to avoid "quarternions" for now. Since I am kind of board with what I am supposed to be doing, maybe I will take a little time to see if what I am thinking of is possible
-- just using the matrix stack and the rotate/transform functions. I will admit thus far I've been working on simpler tasks than the 3rd person perspective flying ship.
Well, assuming somebody at some point has written a class called Quaternion, that has a *= operator and a normalise function that do what they say on the tin.
Thank you both again for helping me.
What I meant by what type of parameters to pass to glRotated is that I know the function looks like glRotate(angle, x, y, z), what I meant to ask is that how do I get the information of 'angle',
'x', 'y' and 'z' out of the orientation quaternion?
Is it as simple as accessing its four elements in order? i.e. orientation.w, orientation.x, etc.
Well ... yes. A quaternion is an angle and an axis, while glRotate expects an angle and an axis.
I just think you should have done some more thinking about glPushMatrix(), glPopMatrix(), and glGetFloatv() -- which with the last one you can extract the current values of the matrix -- before
some hoodwinking hooligan got you onto this 19th century math voyage. Which I'm sure it is interesting, but I will not be riding in your spaceship even if you let me (no offense).
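For what it's worth, here is a language-neutral sketch (plain Python, my own function names) of the two steps discussed in this thread: composing an incremental turn into the orientation quaternion, then extracting the (angle, axis) pair that glRotated expects:

```python
import math

def from_axis_angle(angle, ax, ay, az):      # unit axis assumed
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), ax * s, ay * s, az * s)

def multiply(a, b):                          # Hamilton product a*b
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def to_axis_angle_degrees(q):
    w, x, y, z = q
    s = math.sqrt(x*x + y*y + z*z)
    angle = math.degrees(2 * math.atan2(s, w))
    axis = (x / s, y / s, z / s) if s > 1e-12 else (0.0, 1.0, 0.0)
    return angle, axis

orientation = (1.0, 0.0, 0.0, 0.0)                    # identity: no rotation yet
turn_left = from_axis_angle(math.pi / 12, 0, 1, 0)    # 15 degrees about local y
for _ in range(3):                                    # press "left" three times
    orientation = multiply(orientation, turn_left)

angle, (x, y, z) = to_axis_angle_degrees(orientation)
print(round(angle, 6), x, y, z)   # 45 degrees about (0, 1, 0) -> glRotated(45, 0, 1, 0)
```

The same multiply/extract pattern carries over directly to a C++ Quaternion class with a *= operator.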
Architecture -Triangles
A triangle is a polygon with three sides, three vertices, and three angles; its sides are straight line segments. There are six types of triangles, and here are three of them: the equilateral,
isosceles, and scalene triangles. An equilateral triangle has three equal sides and three equal angles.
An isosceles triangle has at least two congruent sides.
A scalene triangle has no congruent sides and no congruent angles. Equilateral Isosceles Scalene
Right, Obtuse, and Acute triangles have angles that are right, obtuse, and acute angles. When you know right, obtuse, and acute angles, you can spot the triangles easily.
A right triangle has one right angle that measures 90 degrees.
The next triangle is an obtuse triangle and it has an obtuse angle which measures more than 90 degrees, but less than 180 degrees.
The last triangle is an acute triangle; all three of its angles measure less than 90 degrees.
Right Triangle Obtuse Triangle Acute Triangle
The angles of any triangle add up to a total of 180 degrees. You can find triangles all over the place. For example, there may be a triangle on one side of your house. Triangles are featured a lot in architecture
because they are one of the strongest and most stable structures. One example of a structure that has stood the test of time is the Menkaure Pyramid in Giza, Egypt.
Right triangles have legs: a right triangle's legs are its two perpendicular sides.
The hypotenuse is the side of a right triangle opposite the right angle.
The Pythagorean Theorem says that in a right triangle, the area of the square on the hypotenuse is equal to the sum of the areas of the squares on the two legs.
Pythagorean Theorem
Written as: a^2 + b^2 = c^2, where c is the hypotenuse
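The theorem is easy to try out with numbers. As a quick illustration (a small Python sketch, not part of the original page), a triangle with legs 3 and 4 has a hypotenuse of 5:

```python
import math

a, b = 3, 4                  # the two legs of a right triangle
c = math.sqrt(a**2 + b**2)   # the hypotenuse, from a^2 + b^2 = c^2
print(c)  # 5.0
```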
Pythagoras was a famous Greek mathematician born about 569 B.C.
This is a picture of the Pythagorean Theorem
and the Greek mathematician Pythagoras.
converting a decimal to an int
10-12-2009, 08:15 AM
converting a decimal to an int
Hi, good to be here.
I am writing a program that simulates a projectile's trajectory. I have the mathematical equation to do so but I have a problem when I try to implement graphics.
The problem is that when I compute the math, the resulting X and Y coordinates of the projectile end up as long decimals..
for example the X coordinate might be 9.2938 and the Y coordinate might be 17.3726.
The graphical framework I'm using only accepts integers when drawing objects to the screen so I can't use the exact position of the X and Y coordinates. This model does not have to be extremely
accurate so I figure the easy way to solve this problem is to round the number with "Math.round()" function.
This is math equation I'm trying to round...
yPos = Math.round((startHeight + velocity * Math.sin(angle) * time - gravity * time * time/2));
xPos = Math.round((startLength + velocity * Math.cos(angle) * time));
when I try to do this I get an error saying, "Possible loss of precision. found: long, required: int"
All the variables in this equation are type "double", and the xPos and Ypos variables are type int...
Can someone please enlighten me on the situation?
10-12-2009, 08:25 AM
10-12-2009, 08:26 AM
If accuracy is not a problem, why not just
double d = 5.43764826;
int i = (int) d;
Edit: And, if you want it to round up at .5 or higher (as is common in standard mathematics) just do
double d = 5.43764826;
int i = (int) (d + 0.5);
Assuming, of course, that the values are all between Integer.MIN_VALUE and Integer.MAX_VALUE.
10-12-2009, 08:31 AM
10-12-2009, 08:34 AM
Using "your" code
Simply rounding everything down
yPos = (int) (startHeight + velocity * Math.sin(angle) * time - gravity * time * time/2);
xPos = (int) (startLength + velocity * Math.cos(angle) * time);
rounding up at .5 or higher
yPos = (int) ((startHeight + velocity * Math.sin(angle) * time - gravity * time * time/2) + 0.5);
xPos = (int) ((startLength + velocity * Math.cos(angle) * time) + 0.5);
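For context on the original error: Math.round(double) returns a long in Java, so assigning its result to an int needs an explicit cast. Keeping Math.round() also works fine; a minimal self-contained sketch (class and variable names here are illustrative):

```java
public class RoundDemo {
    public static void main(String[] args) {
        double y = 17.3726;  // a computed coordinate
        // Math.round(double) returns long, which is what caused the
        // "possible loss of precision" compile error without a cast.
        int yPos = (int) Math.round(y);
        System.out.println(yPos);  // prints 17
    }
}
```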
10-12-2009, 08:46 AM
Thanks both of you for the advice. I will try casting it as an int. I will post the results when I get it done. Hopefully it works
10-12-2009, 09:05 AM
Worked great!! thanks. Such a simple solution, I'm embarrassed I didn't try that before :p
10-12-2009, 09:26 AM
10-12-2009, 09:32 AM
Huh?? what is a defenestration time?
10-12-2009, 09:41 AM
Defenestration - Wikipedia, the free encyclopedia
Watch out for the lacerations when your turn comes.
solving with elimination non standard form calculations
Related topics: dividing polynomials by monomials worksheet
Cubed Root Function Ti-83
algebra statistics
calculator to solve algebraic
answers to workbook prentice hall algebra
Algebra With Pizzazz Page 55 Answers
online factorer
real ged cheat sheet
how to solve radicals
taks 8th grade math chart
conic section solvers
solve graph problems free
Author Message
ShriZathems Posted: Wednesday 27th of Dec 18:04
Well there are just two people who can guide me at this point in time, either it has to be some math guru or it has to be the Almighty himself. I’m sick and tired of trying to solve
problems on solving with elimination non standard form calculations and some related topics such as rational expressions and factoring polynomials. I have my finals coming up in a week
from now and I don’t know what to do? Is there anyone out there who can actually take out some time and help me with my questions? Any sort of help would be really appreciated.
ameich Posted: Friday 29th of Dec 10:50
I know little in solving with elimination non standard form calculations. But, it’s quite complicated to explain it. I may help you answer it but since the solution is complex, I
doubt you will really understand the whole process of solving it, so it’s recommended that you really have to ask someone to explain it to you in person to make the explaining clearer.
Good thing is that there’s this software that can help you with your problems. It’s called Algebrator and it’s an amazing piece of program because it does not only show the answer but it
also shows the process of solving it. How cool is that?
Bet Posted: Saturday 30th of Dec 17:24
Algebrator will not only help you do your assignments, but it will also provide explanations which will help you understand the concepts.
stkrj Posted: Sunday 31st of Dec 09:25
That’s what I’m looking for! Are you certain this will help me with my problems in math? Well, it doesn’t hurt if I try it. Do you have any details to share that would lead me to the
product details?
Noddzj99 Posted: Sunday 31st of Dec 16:08
I guess you can find what you need here http://www.algebra-cheat.com/foundations-of-advanced-mathematics.html. From what I understand Algebrator comes at a price but it has unconditional
money back guarantee. That’s how I got it. I would advise you to take a look. Don’t think you will want to get your money back.
Vild Posted: Tuesday 02nd of Jan 15:59
Algebrator is a very great product and is certainly worth a try. You will find lot of interesting stuff there. I use it as reference software for my math problems and can say that it has
made learning math much more enjoyable.
Hodge star operator
Hodge star operator
Let $V$ be an $n$-dimensional ($n$ finite) vector space with inner product $g$. The Hodge star operator (denoted by $\ast$) is a linear operator mapping $p$-forms on $V$ to $(n-p)$-forms, i.e., $\ast\colon\Omega^{p}(V)\to\Omega^{n-p}(V)$.
In terms of a basis $\{e_{1},\ldots,e_{n}\}$ for $V$ and the corresponding dual basis $\{e^{1},\ldots,e^{n}\}$ for $V^{*}$ (the star used to denote the dual space is not to be confused with the Hodge
star!), with the inner product being expressed in terms of components as $g=\sum_{{i,j=1}}^{n}g_{{ij}}e^{i}\otimes e^{j}$, the $\ast$-operator is defined as the linear operator that maps the basis
elements of $\Omega^{p}(V)$ as
$\displaystyle\ast(e^{{i_{1}}}\wedge\cdots\wedge e^{{i_{p}}})=\frac{\sqrt{|g|}}{(n-p)!}\,g^{{i_{1}l_{1}}}\cdots g^{{i_{p}l_{p}}}\,\varepsilon_{{l_{1}\cdots l_{p}\,l_{{p+1}}\cdots l_{{n}}}}\,e^{{l_{{p+1}}}}\wedge\cdots\wedge e^{{l_{{n}}}}.$
Here, $|g|=\det g_{{ij}}$, and $\varepsilon$ is the Levi-Civita permutation symbol
This operator may be defined in a coordinate-free manner by the condition
$u\wedge*v=g(u,v)\,\mathop{\bf Vol}(g)$
where the notation $g(u,v)$ denotes the inner product on $p$-forms (in coordinates, $g(u,v)=g_{{i_{1}j_{1}}}\cdots g_{{i_{p}j_{p}}}u^{{i_{1}\ldots i_{p}}}v^{{j_{1}\ldots j_{p}}}$) and $\mathop{\bf
Vol}(g)$ is the unit volume form associated to the metric (in coordinates, $\mathop{\bf Vol}(g)=\sqrt{\operatorname{det}(g)}\,e^{1}\wedge\cdots\wedge e^{n}$).
Generally $\ast\ast=(-1)^{{p(n-p)}}\operatorname{id}$, where $\operatorname{id}$ is the identity operator in $\Omega^{p}(V)$. In three dimensions, $\ast\ast=\operatorname{id}$ for all $p=0,\ldots,3$.
On $\mathbb{R}^{3}$ with Cartesian coordinates, the metric tensor is $g=dx\otimes dx+dy\otimes dy+dz\otimes dz$, and the Hodge star operator is
$\displaystyle\ast dx=dy\wedge dz,\ \ \ \ \ \ \ast dy=dz\wedge dx,\ \ \ \ \ \ % \ast dz=dx\wedge dy.$
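As a quick check of the defining condition (an illustrative verification, not part of the original entry): with this metric, $g(dx,dx)=1$ and

$\displaystyle dx\wedge\ast dx=dx\wedge dy\wedge dz=\mathop{\bf Vol}(g),$

as required; likewise $\ast\ast\,dx=\ast(dy\wedge dz)=dx$, illustrating $\ast\ast=\operatorname{id}$ in three dimensions.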
Hodge operator, star operator
polar and cartesian coordinates
Next: absolute value, phase, modulus Up: Complex number arithmetic Previous: complex numbers in the
Since the time that coordinates were taken up as a way to give an algebraic treatment to geometric ideas, there has been an association between pairs of numbers and points in the plane. In the
traditional representation, the (real, imaginary) pair (x,y) has been seen as a point with x-coordinate x and y coordinate y; in fact the correspondence is so familiar that it seems redundant to try
to describe it.
Besides the rectangular cartesian coordinates, there are many others which can be used to locate points in the plane; one of the most convenient is polar coordinates, wherein $x = r\cos\theta$ and $y = r\sin\theta$.
Familiarity with the power series expansions respectively of the exponential, sine, and cosine functions,

$e^{z}=\sum_{n=0}^{\infty}\frac{z^{n}}{n!}, \qquad \sin\theta=\sum_{n=0}^{\infty}\frac{(-1)^{n}\theta^{2n+1}}{(2n+1)!}, \qquad \cos\theta=\sum_{n=0}^{\infty}\frac{(-1)^{n}\theta^{2n}}{(2n)!},$

suggests combining the two trigonometric series to get Euler's formula

$e^{i\theta}=\cos\theta+i\sin\theta,$

or the more general polar form

$z=re^{i\theta}=r(\cos\theta+i\sin\theta)$

for a complex number.
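Euler's formula is easy to confirm numerically. A quick check with Python's standard cmath module (an illustration, not part of the original page):

```python
import cmath
import math

theta = 0.75  # any angle works
lhs = cmath.exp(1j * theta)                      # e^(i*theta)
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i*sin(theta)
print(abs(lhs - rhs) < 1e-12)  # True
```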
MathematiClub - Tutorials
The files below have samples of code designed to help you learn more about using Mathematica. Some of the links are to PDF files and others are Mathematica notebooks.
• Getting Help in Mathematica
This PDF document outlines simple ways to learn more about using Mathematica within Mathematica.
• Introduction to Mathematica
MathematiClub students wrote this for a math teacher workshop introducing Mathematica 5.2 (PDF).
• Quick Algebra Problem Generator (DRAFT)
Some simple code for making your own algebra problem generator for a variety of types of exercises (PDF).
• Introduction to Lists
This Mathematica notebook introduces the basics of working with lists. It includes brief descriptions and simple examples.
• Introduction to Graphics in Mathematica
This PDF document outlines the basics of using Graphics and two-dimensional primitives with directives.
• Functions and Graphics
This PDF document has some simple sample code illustrating the use of functions with graphics for repetition or Manipulate.
• Tables and Animation
An introduction to Table and creating animations using some simple examples. Includes an example GIF export.
(PDF Version for Mathematica 5.2).
• Control Types for Manipulate
This Mathematica notebook outlines the different types of controls that are available for Manipulate. It includes brief descriptions and simple examples.
Perfect Power Sum
Let $x$ and $y$ be positive whole numbers, and let $p$ be any odd prime.
It is well known that $x^3 + y^3$ is never equal to an odd prime.
But given that $n$ is a positive integer which contains an odd factor greater than one, prove that $x^n +y^n = p$ has no solutions.
Let $V = x^n + y^n$.
As $V$ is a prime greater than two it is clear that $x \ne y$, otherwise the sum would be $2x^n$, which is divisible by 2.
So without loss of generality let $x \gt y \ge 1$; in particular $x \gt 1$, and since $n \gt 1$ we have $x^n \gt x$ and $y^n \ge y$. Therefore $1 \lt x + y \lt V$.
It can be verified for odd values of $n$ that:
$V = (x + y)(x^{n-1} - x^{n-2}y + ... + x^2 y^{n-3} - x y^{n-2} + y^{n-1})$
In other words, $V$ is divisible by $x + y$, which lies between 1 and $V$. Hence for odd values of $n$, $V$ cannot be prime.
Suppose that $n = ab$, where $a = 2^k$ and $b$ is an odd number greater than one.
$\therefore V = x^n + y^n = (x^a)^b + (y^a)^b = (x^a + y^a)\left((x^a)^{b-1} - (x^a)^{b-2}(y^a) + \cdots - (x^a)(y^a)^{b-2} + (y^a)^{b-1}\right)$
In the same way as before, $1 \lt x^a + y^a \lt V$, and as $V$ is divisible by $x^a + y^a$ it cannot be prime.
Hence we prove that $x^n + y^n$ is never equal to an odd prime unless $n$ is of the form $2^k$.
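The factorization above is easy to sanity-check numerically. A small illustrative Python sketch (not part of the original solution): for $n$ with an odd factor greater than one, $x^a + y^a$ divides $x^n + y^n$, where $a$ is the largest power of 2 dividing $n$.

```python
def odd_part_exponent(n):
    # Write n = 2**k * b with b odd; return a = 2**k.
    a = 1
    while n % 2 == 0:
        a *= 2
        n //= 2
    return a

def has_nontrivial_factor(x, y, n):
    # x**a + y**a divides x**n + y**n and lies strictly between 1 and the sum.
    v = x**n + y**n
    a = odd_part_exponent(n)
    d = x**a + y**a
    return v % d == 0 and 1 < d < v

# Every such sum has a factor strictly between 1 and itself,
# so it cannot be prime.
print(all(has_nontrivial_factor(x, y, n)
          for x in range(1, 6) for y in range(1, 6)
          for n in (3, 6, 12) if x != y))  # True
```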
Given that $n = 2^k$, investigate when $V$ is prime.
Problem ID: 307 (20 Jan 2007) Difficulty: 4 Star
Find the location of the single smallest absolute
BTW, as you may have figured out, there is no need to respond to this post. It was posted in error.
"Jeff" wrote in message <kmrfks$raj$1@newscl01ah.mathworks.com>...
> I need to locate the smallest nonzero magnitude entry in a vector. For vectors which don't contain any zeros or any repeated values I use this:
> ICmode = find(abs(D)==min(abs(D)));
> How do I locate the first smallest entry of the absolute value of vector D? (Note that I need the location in the original D, not the value.)
> Before posting this question, it occurred to me I could try
> nonZeroD = D(find(D))
> But with the zeros stripped out, the indexes of nonZeroD are wrong.
> Now I've discovered the 'first' parameter and note that you can use other conditionals in "find". I'm trying
> ICmode = find(abs(D)>0, 1, 'first')
> But this does not find the smallest entry, just the first nonzero entry. I combine that with my first effort
> find((abs(D)>0) && (abs(D)==min(abs(D))), 1, 'first')
> But I get an error saying the operands must be logical scalars (and it is logically wrong, anyway - if there is a zero entry, it will not find anything).
> Changed && to &.
> Trying:
> if ~exist('ICmode','var')
> % ICmode = find(abs(D)==min(abs(D)));
> nonZeroD = D(find(D));
> lambda = nonZeroD(find(abs(nonZeroD)==min(abs(nonZeroD))));
> lambda = max(lambda);
> ICmode = find(D==lambda,1,'first');
> end
> lambda = D(ICmode);
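For reference, the logic the thread settles on can be stated compactly. Here is the same idea as a small illustrative Python sketch (the thread itself is about MATLAB; these names are invented):

```python
def first_smallest_nonzero_index(d):
    # Index (into d) of the entry with the smallest nonzero absolute
    # value; ties go to the earliest index, zeros are ignored entirely.
    candidates = [(abs(v), i) for i, v in enumerate(d) if v != 0]
    return min(candidates)[1] if candidates else None

print(first_smallest_nonzero_index([0.0, 3.0, -1.5, 1.5]))  # 2
```

In MATLAB terms, the same effect comes from setting the zero entries of abs(D) to Inf and taking the second output of min, which returns the first index of the minimum.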
Are there some numerical test to check if a map is a contraction?
Let's say I have a multivariate function $$ f:D \to D, D \subset \mathbb R ^n, D \text{ compact}, $$ for which there is no closed form.
That is, the only way to evaluate the function is to do it numerically.
For example, the price of some exotic insurance contracts: the rule governing the payout are simple, but have no closed formula.
If I suspect my function $f$ is a contraction, is it possible to prove it numerically?
My first thought was to evaluate $f'$ numerically at 1000 points of $D$.
Is this sufficient?
Probably not; has there been any work on such numerical tests?
My second thought was to assume $f'$ follows a normal distribution, then perform a hypothesis test.
Is this reasonable?
My use case is that, I search for a fixed point of $f$ using dynamic programming.
But the state space becomes too large.
If I can prove $f$ is a contraction, then I avoid the curse of dimensionality.
oc.optimization-control na.numerical-analysis
1 Why don't you just do many Piquard iterations and see if the convergence is geometric and the last iteration is nearly a fixed point? If it is, who cares why (you just need to find a fixed point,
right?) if it isn't, $f$ is certainly not contractive enough and you've got to try something else. – fedja Jun 19 '13 at 16:26
1 Interesting... I do not think anything useful can be said in such generality (obviously, in principle there may be small scale wiggles that no particular finite number of checks can catch) but if
you tell exactly how the function is defined, someone may be able to come up with some good idea. – fedja Jun 19 '13 at 18:26
1 Maybe Tischler's "Solving Decidability Problems with Interval Arithmetic" would be of some help? (You can google the paper name to find a free copy.) – Benjamin Dickman Jun 19 '13 at 20:01
1 Did you mean to write $f:D \to D$ rather than $f:D \to \mathbb R$? – Barry Cipra Jun 19 '13 at 20:05
1 You suspect that $f$ is a contraction, so that means you've already selected a metric space...do you want use a specific metric space? – Suvrit Sep 30 '13 at 1:38
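The first comment's suggestion can be made concrete: run Picard (fixed-point) iterations and test whether the step sizes shrink geometrically. A minimal Python sketch (the map, starting point, and tolerances here are all illustrative assumptions; geometric decay is evidence of contractivity near the orbit, not a proof):

```python
import math

def f(x):
    # Stand-in for the black-box map; this example is a
    # contraction on R with |f'(x)| <= 1/2.
    return 0.5 * math.cos(x)

def picard_check(f, x0, iters=40):
    # Iterate x <- f(x) and record the step sizes |x_{k+1} - x_k|.
    x = x0
    steps = []
    for _ in range(iters):
        x_new = f(x)
        steps.append(abs(x_new - x))
        x = x_new
    # Ratios of consecutive steps; all below 1 suggests geometric decay.
    ratios = [b / a for a, b in zip(steps, steps[1:]) if a > 1e-14]
    return (max(ratios) < 1 if ratios else True), x

contractive_looking, x_fix = picard_check(f, 1.0)
print(contractive_looking, abs(f(x_fix) - x_fix) < 1e-10)
```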
The binomial expansion of the Left Hand Side
contains two terms, among many, namely
a^(a*n) and b^(a*n), which appear to make the LHS greater than the RHS,
but when we assign arbitrary values,
say a=10, b=1,000,000,000 and n=100
the LHS is (1,000,000,010)^1000, which would contain 9,001 digits;
the RHS becomes
100 x (100^1,000,000,000) which would contain more than 2 billion digits!
This happened because we assumed b>>n.
Otherwise, the LHS may be greater.
Say, when a=10, b=100, n=1000.
LHS would be 110^10,000 containing 20,414 digits and the RHS would be much smaller, viz. 1000*(1000^100), containing approximately 300 digits!
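These digit counts are easy to confirm with logarithms, since the number of decimal digits of N is floor(log10(N)) + 1. A quick check in Python (illustrative, not part of the original post):

```python
import math

def digits_of_power(base, exp):
    # Number of decimal digits of base**exp, via logarithms.
    return math.floor(exp * math.log10(base)) + 1

print(digits_of_power(1_000_000_010, 1000))  # 9001
print(digits_of_power(110, 10_000))          # 20414
```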
Cryptology ePrint Archive: Report 2008/135
Unbalanced Digit Sets and the Closest Choice Strategy for Minimal Weight Integer Representations

Clemens Heuberger and James A. Muir

Abstract: An online algorithm is presented that produces an optimal radix-2 representation of an input integer $n$ using digits from the set $D_{\ell,u}=\{a\in\mathbb{Z}:\ell\le a\le u\}$, where $\ell \leq 0$ and $u \geq 1$. The algorithm works by scanning the digits of the binary representation of $n$ from left-to-right (i.e., from most-significant to least-significant). The output representation is optimal in the sense that, of all radix-2 representations of $n$ with digits from $D_{\ell,u}$, it has as few nonzero digits as possible (i.e., it has minimal weight). Such representations are useful in the efficient implementation of elliptic curve cryptography. The strategy the algorithm utilizes is to choose an integer of the form $d 2^i$, where $d \in D_{\ell,u}$, that is closest to $n$ with respect to a particular distance function. It is possible to choose values of $\ell$ and $u$ so that the set $D_{\ell,u}$ is unbalanced in the sense that it contains more negative digits than positive digits, or more positive digits than negative digits. Our distance function takes the possible unbalanced nature of $D_{\ell,u}$ into account.

Category / Keywords: implementation / redundant number systems, minimal weight
Date: received 25 Mar 2008, last revised 26 Mar 2008
Contact author: jamuir at cs smu ca
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20080331:140303
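The special case $\ell=-1$, $u=1$ of such minimal-weight representations is the classical non-adjacent form (NAF). As an illustration only (the paper's algorithm scans left-to-right and handles general $\ell,u$; this sketch is the standard right-to-left NAF recoding, not the paper's method):

```python
def naf(n):
    # Non-adjacent form of n >= 0: a radix-2 representation with digits
    # in {-1, 0, 1} and no two adjacent nonzero digits; it has minimal
    # weight among all {-1, 0, 1} radix-2 representations of n.
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)  # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits  # least-significant digit first

print(naf(7))  # [-1, 0, 0, 1], i.e. 7 = -1 + 2**3
```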
Math Digest
Summaries of Media Coverage of Math
Edited by Mike Breen and Annette Emerson, AMS Public Awareness Officers
Contributors: Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson (AMS), Brie Finegold (University of Arizona), Baldur Hedinsson
(2009 AMS Media Fellow), Allyn Jackson (Deputy Editor, Notices of the AMS), and Ben Polletta (Drexel University)
January 2012
Brie Finegold summarizes blogs about the Elsevier boycott.
"Elsevier--my part in its downfall," by Timothy Gowers. Gowers's Weblog, 21 January, 2012;
"Banning Elsevier." The n-Category Café, 26 January 2012;
"The cost of knowledge," by Terence Tao. What's new?, 26 January 2012;
"Elsevier's Publishing Model Might be About to Go Up in Smoke," by Tim Worstall. Forbes Magazine, 28 January 2012;
"Thinking about Elsevier replacements," by David Speyer. Secret Blogging Seminar, 30 January 2012;
"As Journal Boycott Grows, Elsevier Defends Its Practices," by Josh Fischman. The Chronicle of Higher Education, 31 January 2012.
On the heels of SOPA, PIPA, and the lesser known Research Works Act, many high-profile blogging mathematicians including Tim Gowers, Terence Tao, and John Baez have decided to protest the publishing
practices of Elsevier. During the course of just a few weeks, about 2500 scientists and mathematicians have signed a petition at the website thecostofknowledge.com/ declaring their unwillingness to
publish, referee, or do editorial work for the publisher. In 2006, the editorial board of Topology quit because of their disapproval of Elsevier's high prices and practice of bundling subscriptions
to their journals. Elsevier recently supported the Research Works Act, which prohibits open-access mandates from being applied to scientific research that was conducted using federal funding. While
the print media does not yet seem to be taking much notice, a recent article in Forbes references Gower's blog and comments that the protest is likely to spread beyond mathematics and the hard
sciences. Another article in The Chronicle of Higher Education also references Gowers and discusses Elsevier's response to the allegations from concerned researchers. Discussion has already begun on
the Secret Blogging Seminar and other blogs about replacements and alternatives to Elsevier's journals.
--- Brie Finegold
"Wall Street's Sexiest Model", by Emily Lambert. Forbes, 27 January 2012.
"Apologies to anyone who clicked on this story expecting to read about Christie Turlington or the latest Heidi Klum-Seal split news," this article begins. "But this is about Wall Street's sexiest
models---we're talking about math." The article is a question-and-answer interview with mathematics writer George Szpiro, whose latest book is Pricing the Future, a history of the Black-Scholes
equation that today dominates finance. The interviewer does not ask Szpiro much about the history of this equation or its impact. Most of her questions are actually assertions about how Black-Scholes
has caused problems in modern finance, and the aim is to get Szpiro to react to these assertions. For example, the interviewer asks, "How has Black-Scholes failed, or gone awry?" Szpiro explains that
the equation has not failed and is in fact correct but, for example, people sometimes use poor estimates of the equation's parameters, and in such cases the equation do not produce reliable
information. "That sounds like a major failing to me," the interviewer retorts. "If you have a model that assumes data is correct, and it's not, that's a recipe for disaster. To me that's an argument
to rely less on math, more on common sense. Wouldn't you agree?" Szpiro replies that the model did not fail, and of course one needs common sense, but if one puts incorrect data into a model, then
the result will be rubbish. On the whole, the interview does not provide much insight into the Black-Scholes equation or what Szpiro's book is like. Szpiro nevertheless maintains his sense of humor.
Asked if Black-Scholes is Wall Street's sexiest model, he replies that it is indeed very appealing. "I think the Bell Curve is also quite shapely... especially if it does not have fat tails."
--- Allyn Jackson
"A Math Study Provides Hints About the Game’s Gender Gap," by Dylan Loeb McClain. The New York Times, 21 January 2012;
"Gender equity: Doing the math", by Rosalind Barnett and Caryl Rivers. Los Angeles Times, 24 January 2012;
"Math-score differences arise from cultural factors, not innate biological differences, study suggests", by Susan Perry. MinnPost.com, 26 January 2012.
These three articles were inspired by a study that appeared in the January 2012 issue of the AMS Notices. The study, "Debunking Myths about Gender and Mathematics Performance", by Jonathan Kane and
Janet Mertz, examined data from 86 countries and found evidence that social factors, rather than biological ones, account for differences in boys' and girls' performance in mathematics. In particular,
they found that in countries with a high level of gender equity, both boys and girls do well in mathematics. The above-cited New York Times article appeared in the chess column. After a brief
description of the Notices article, McClain notes: "There are many similarities between chess and mathematics as disciplines, so the findings shed light on why women chess players are not as
successful as men." He then segues into a discussion of a chess game in a recent tournament in which a woman chess competitor displayed a particularly aggressive and incisive style.
The Los Angeles Times article is an opinion column written by two academics: Barnett is a senior scientist at the Brandeis Women's Studies Research Center, and Rivers is a Boston University
journalism professor. The pair wrote the book The Truth About Girls and Boys: Challenging Toxic Stereotypes About Our Children. "[I]t's time to call a halt to the endless search for a male math gene,
to the cry for more and more studies because someday we're going to find proof of men's innate superiority, and to the ongoing spate of articles saying that we really know that nature makes boys
better," Barnett and Rivers write. "It's time to move from bickering over data to building new evidence-based public policies that will empower all of our children---and, in turn, society." A
previous Math Digest item reported on the worldwide media coverage that ensued shortly after the publication of the Kane-Mertz article.
--- Allyn Jackson
"Better mathematics boosts image-processing algorithm," by Jacob Aron. New Scientist, 20 January 2012.
Much of modern technology depends on efficient transmission, reception, and storage of signals; an example is the way a smart phone efficiently compresses and stores audio and video files. Signal
processing is an area of research that draws on tools from engineering and mathematics to create algorithms for the efficient handling of all kinds of signals: sounds, images, sensor data,
experimental observations, etc. This brief article discusses an advance in which the combination of two different signal processing algorithms resulted in a new technique that can process certain
signals 10,000 times faster than other methods.
--- Allyn Jackson
"Plot Thickens on Oscar Ballot," by Carl Bialik. The Wall Street Journal, 14 January 2012.
The math formula used to select Oscar finalists is getting a makeover for the first time since 1936. The new formula which gives greater weight to the top vote on every ballot has Oscar enthusiasts
scrambling to figure out how the change will affect which stars get invited to walk the red carpet. Many find the new system perplexing. Steven Brams, a political scientist at New York University,
calls it "totally ad hoc," and adds, "If they want to engage in trial by error every year, I cannot stop them." However there are academics that defend the new. "While the system could be better, it
does a moderately good job of being fair and selecting good films," says Nicolaus Tideman, an economist at Virginia Tech. Donald G. Saari, a mathematician at the University of California, Irvine,
sees difficulties likely to arise. "Can the Academy find ways to get around these difficulties?" says Prof. Saari. "Of course. Many such methods exist." One thing both award analysts and
mathematicians agree on is that movies with passionate support from a relatively small group of people will benefit from the change, while pictures appealing to a wider audience are set to lose out.
--- Baldur Hedinsson
"Spot check." The Economist, 14 January 2012.
A math formula can apply to large things and small. The Economist reports on how a single formula can explain the pattern of tree patches in arid climates and at the same time the arrangement of
stripes and spots on feline coats. Mathematician Bonni Kealy of Washington State University gave a talk on the discovery at the January meeting of the AMS and MAA in Boston. For those curious about
how non-linear partial differential equations can produce such universal patterns, the research paper "A Nonlinear Stability Analysis of Vegetative Turing Pattern Formation for an
Interaction–Diffusion Plant-Surface Water Model System in an Arid Flat Environment" is available online.
--- Baldur Hedinsson
"Pasta Graduates from Alphabet Soup to Advanced Geometry," by Kenneth Chang. The New York Times, 9 January 2012.
This short but tasty article recounts a boil of recent mathematical activity around pasta shapes. The shapes have been plotted in Mathematica and blogged by fluid dynamics graduate student Sander
Huisman. They've appeared on a pop quiz in Christopher Tiee's vector calculus class. And, most impressively, they're the stars of a new 208 page book by architects Marco Guarnieri and George L.
Legendre. (No relation to the eponymous transform's inventor, it seems.) The book classifies 92 types of pasta into an edible cladogram, providing equations and plots of representative surfaces for
each one. The noodles surveyed include a new shape--a twisted Möbius strip named after Legendre's baby daughter Ioli--which remains theoretical due to difficulties gluing the edges together. (It
appears Klein bottle pasta is still in search of a namesake--not to mention a manufacturer.) Legendre says this labor of love is an "amalgamation of mathematics and cooking tips--the profane, the
sacred"--although it's not clear which he considers to be which. The article has links to the blog and the pop quiz, and a selection of figures from the book. See a diagram related to the pasta
geometry. Diagram and image: George L. Legendre, IJP Corporation.
--- Ben Polletta
Return to Top
"Blue Valley West freshman wins $10k in math contest," by Joe Robertson. The Kansas City Star, 10 January 2012.
Shyam Narayanan (at left, holding the check) was the big winner in the 2012 national Who Wants to Be a Mathematician, which took place January 6 at the Joint Mathematics Meetings in Boston. Shyam's
winnings are split 50-50 with the math department at his school, Blue Valley West High School (Overland Park, KS). Robertson writes that in addition to being great in math, Shyam is also a
competitive swimmer and loves to play the piano. His principal, Tony Lake, first knew that Shyam was sharp when Lake was asked to bring an honors high school geometry book to Shyam--who at the time
was in elementary school. Shyam hopes the school will use the money to begin a math club. See Shyam talk about his win.
Nine other students competed in the contest and about half were covered by their local media:
Read and hear more about all 10 contestants in the 2012 national Who Wants to Be a Mathematician (Photo by E. David Luria).
--- Mike Breen
Return to Top
"A Unique Expression Of Love For Math," by Ari Daniel Shapiro. NPR's All Things Considered, 10 January 2012.
Participants at the Joint Mathematics Meetings, held in Boston in early January, spent some time thinking outside the box—the typical mathematical PowerPoint presentation/numbers on a chalkboard
box—with sessions and discussions on the mathematics of crochet, origami, and dancing. Host Ari Shapiro talks to Sarah-Marie Belcastro, a mathematician who crochets objects like Möbius bands to help
visualize complicated mathematical structures. He listens to Thomas Hull, a paper-folding enthusiast who explains that "origami is calculus," and watches a group of mathematicians defeat a computer
at backgammon using probability. He interviews a mathematician who leads participants in a full-body warm-up before delving into math-based "movement phrases." As Williams College math professor
Colin Adams explains, pursuing alternative approaches to mathematics helps teachers maintain student interest long enough to demonstrate the beauty of the subject. (Photo: Laura McHugh, MAA Editorial/
Social Media.)
--- Lisa DeKeukelaere
Return to Top
"Mathematician claims breakthrough in Sudoku puzzle," by Eugenie Samuel Reich. Nature, 6 January 2012.
Using a novel algorithm and more than 7 million hours of computing time, Irish mathematician Gary McGuire has developed a plausible proof that Sudoku puzzles (fill-in-the-blank, 9x9 numeric grids
popularized in Japan) must be initially populated with at least 17 numbers in order for a unique solution to exist. Mathematicians had previously postulated that Sudoku grids with 16 or fewer
"clues" could not be solved uniquely, but checking all possible solutions to all possible 16-clue grids was outside the temporal realm of computational possibility.
McGuire truncated the problem with an algorithm that identified "unavoidable sets"—interchangeable numbers within a puzzle solution—based on the idea that one member of each unavoidable set must be
given as a clue in order to eliminate interchangeability. McGuire’s simplified approach still required such significant computing time that other mathematicians will not be able to verify his work
quickly, but participants at a recent conference indicated the proof is probably valid.
--- Lisa DeKeukelaere
Return to Top
"Supercomputer Wins Jeopardy!," by Leeaundra Keany. Discover, January 2012, page 19.
In Discover magazine, Leeaundra Keany argues that even as computers get ever more powerful and their number-crunching algorithms more sophisticated, humans still possess complicated skills that
computers are far from acquiring. Computers might be able to detect humor or wordplay in a sentence, but they still don't get the joke.
Watson, the computer that won Jeopardy!, was #3 of the top 10 in Discover magazine's Top 100 science stories of 2011. Other top 100 science stories of 2011 related to mathematics: "Computer Model
Mimics Infant Cognition," "The Too-Sure Thing," and "Could Random Airplane Boarding Speed Your Trip?".
(Image: Mathematical Moment "Answering the Question, and Vice Versa".)
--- Baldur Hedinsson
Return to Top
Building a Test
SystemTest tests consist of elements, test vectors, and test variables. You can use each of these entities to create a variety of test scenarios ranging from a simple test that runs a series of
elements once to a full parameter sweep that iterates over the values of test vectors that you define. Test vectors and test variables map data between SystemTest and the model or unit under test.
You can optionally map test variables to results and perform analysis using the Test Results Viewer.
Test elements are discrete actions that run during test execution. You can use the test elements to integrate capabilities from other MathWorks products into SystemTest. SystemTest includes the
following test elements:
• MATLAB, which executes MATLAB code
• Simulink, which connects the test to a Simulink model and uses test vectors to sweep across the model's inputs and parameters
• Limit Check, which specifies pass/fail conditions
• Vector Plot, which plots array or vector data as the test is executing
• Scalar Plot, which plots scalar data for each iteration
• IF, which controls the flow of a test based on a logical condition
• Stop, which stops the test execution
• Subsection, which helps manage complex test structures
Detailed information about additional test elements for SystemTest is available.
Test vectors specify the values that vary during each test iteration. SystemTest lets you use multiple test vectors whose values are arguments (for algorithms) or parameters and input signals (for
models). You can define test vectors using any MATLAB expression or generate random values using probability distributions.
SystemTest includes normal (Gaussian) and uniform probability distribution functions. When used with Statistics Toolbox (available separately), SystemTest also lets you specify exponential, gamma,
lognormal, Student's t, or Weibull probability distributions.
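As a rough illustration of drawing test-vector values from named distributions (a sketch outside SystemTest, whose vectors are defined in its own tool; the function name and parameter spellings below are our own), Python's standard library happens to cover every family listed above except Student's t:

```python
import random

# Hypothetical sketch (not SystemTest's API): build a test vector of n
# random values drawn from one of the distribution families in the text.
def make_test_vector(dist, n, **p):
    draw = {
        "normal":      lambda: random.gauss(p["mu"], p["sigma"]),
        "uniform":     lambda: random.uniform(p["lo"], p["hi"]),
        "exponential": lambda: random.expovariate(p["lambd"]),
        "gamma":       lambda: random.gammavariate(p["alpha"], p["beta"]),
        "lognormal":   lambda: random.lognormvariate(p["mu"], p["sigma"]),
        "weibull":     lambda: random.weibullvariate(p["alpha"], p["beta"]),
    }[dist]
    return [draw() for _ in range(n)]

random.seed(0)  # reproducible sweeps
vec = make_test_vector("uniform", 5, lo=0.0, hi=1.0)
```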
Test variables are used to store data for use by test elements during execution. You can initialize variables in the Pre Test or Main Test with any MATLAB expression.
Next: Executing Tests
MathGroup Archive: December 2001 [00283]
[Date Index] [Thread Index] [Author Index]
Re: Simplify
• To: mathgroup at smc.vnet.net
• Subject: [mg32034] Re: Simplify
• From: "Alan Mason" <swt at austin.rr.com>
• Date: Wed, 19 Dec 2001 04:29:23 -0500 (EST)
• References: <9vmrje$hh7$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
<Matthias.Bode at oppenheim.de> wrote in message
news:9vmrje$hh7$1 at smc.vnet.net...
> Dear Colleagues,
> why should I not expect MATHEMATICA to (Full)Simplify
> (a^b)^(1/b) or a^b^(1/b) to a?
The reason is that they are not equal for general complex-valued a, b, c.
The following notebook illustrates how to make a rule to enforce the desired
simplification (which is correct only for a, b, c real), and also gives a
case where equality fails. z^w is defined as Exp[w Log[z]], and Log has a
branch cut; this complication is the reason for the failure.
test = (a^b)^(1/b)
test2 = (a^b)^c
rule = Power[Power[a_, b_], c_]\[RuleDelayed] Power[a, b c]
(a_^b_)^c_ :> a^(b c)
test /. rule
test2 /. rule
a^(b c)
a = 2 + 3 I; b = 3 + I; c = -2 - 4 I;
4.240023659792295*^-7 - 7.75911100859049*^-7 I
N[a^(b c)]
34864.2 - 63800.3 I
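The same failure can be checked numerically outside Mathematica. Python's complex power also uses the principal branch of the logarithm, so a quick sketch (our own, not part of the original thread) reproduces the disagreement:

```python
a, b, c = 2 + 3j, 3 + 1j, -2 - 4j

lhs = (a**b)**c   # (a^b)^c, evaluated with principal-branch powers
rhs = a**(b*c)    # a^(b c)

# z^w = exp(w log z) with the principal branch of log, so the two
# expressions disagree whenever Im(b log a) falls outside (-pi, pi].
print(abs(lhs), abs(rhs))
```

For these values, |lhs| is on the order of 10^-7 while |rhs| is in the tens of thousands, matching the Mathematica outputs above.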
Oakland, CA Prealgebra Tutor
Find an Oakland, CA Prealgebra Tutor
I am a bilingual tutor in Spanish and English. Having been able to graduate from a distinguished university in engineering shows my capacity with complex subjects. I know what it feels like to be
challenged in an educational and competitive environment.
22 Subjects: including prealgebra, reading, Spanish, writing
...My methods are proven, I believe the solution is inside you...I'll teach you how to use it. I'll teach...Defining WizardPeace My 12 years of tutoring experience includes developing and managing
tutor training and tutorial services with a staff of over 20 math tutors servicing an average 4...
13 Subjects: including prealgebra, calculus, algebra 1, algebra 2
...Some of them were scoring in mid-500, when I started working with them. I bring my full attention and dedication to the students that I work with. Since every person is unique, I personalize
my approach to each student.
14 Subjects: including prealgebra, calculus, algebra 1, algebra 2
...To tutor MATLAB programming I will create a series of exercises designed to show the student the specific tools that they will need for their desired application. I am a trained engineer, with
an M.S. from UC Berkeley, and a B.S. from the University of Illinois at Urbana-Champaign. I have gradu...
15 Subjects: including prealgebra, Spanish, calculus, ESL/ESOL
...While I really enjoyed my grammar and conversational classes my favorite was the academic writing class. In this class I assisted students in writing papers, letters of intention, cover
letters for job applications, vocabulary acquisition, and test preparation for competency tests such as TOEFL and IELTS. I am also a graduate of San Francisco State University.
14 Subjects: including prealgebra, Spanish, reading, English
Related Oakland, CA Tutors
Oakland, CA Accounting Tutors
Oakland, CA ACT Tutors
Oakland, CA Algebra Tutors
Oakland, CA Algebra 2 Tutors
Oakland, CA Calculus Tutors
Oakland, CA Geometry Tutors
Oakland, CA Math Tutors
Oakland, CA Prealgebra Tutors
Oakland, CA Precalculus Tutors
Oakland, CA SAT Tutors
Oakland, CA SAT Math Tutors
Oakland, CA Science Tutors
Oakland, CA Statistics Tutors
Oakland, CA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Alameda prealgebra Tutors
Albany, CA prealgebra Tutors
Berkeley, CA prealgebra Tutors
Concord, CA prealgebra Tutors
Daly City prealgebra Tutors
Emeryville prealgebra Tutors
Fremont, CA prealgebra Tutors
Hayward, CA prealgebra Tutors
Lafayette, CA prealgebra Tutors
Piedmont, CA prealgebra Tutors
Richmond, CA prealgebra Tutors
San Francisco prealgebra Tutors
San Leandro prealgebra Tutors
San Pablo, CA prealgebra Tutors
Walnut Creek, CA prealgebra Tutors
Case Study – Single
Case Study – Single chart dual axis
There are many instances when one wants to show two different measurement scales each of which measures the same data. One such instance would be showing temperature readings in both the Fahrenheit
and the Celsius scales as in Figure 1, which shows the mean soil temperature through the summer months using both temperature measurement systems. For technical reasons discussed below, the best that
is possible is to simulate the effect. We will plot two series but make it look as though the chart contains just one.
Figure 1
What we need to know, obviously, is how the two systems relate to each other. In the case of temperatures, the relationship is C = (F − 32) × 5/9, where C is the temperature in Celsius and F is the temperature in Fahrenheit.
There are two key elements to remember in simulating the effect of a single chart with two axes. First, Excel will create a secondary axis only if we plot at least 2 series in the chart. Second,
since the two measurement scales are different we will have to calculate the temperatures in each of the two systems.
Consequently, we will plot the data as recorded in Fahrenheit and as calculated in Celsius, move the Celsius series to the secondary y-axis and align the two axis (minimum and maximum scale values)
so that the two series overlap so precisely that it appears as if the chart contains just the one series. The key to getting the two series to overlap is aligning both the minimum and the maximum
values on the two axes. Of course, aligning does not mean having the same values but having values that represent the same temperature. For example, if we wanted the minimum and maximum values to be
the freezing and boiling temperatures of water, respectively, we would have 32 and 212 as the minimum and the maximum values on the primary axis with the corresponding values on the secondary axis
being 0 and 100.
Suppose the actual data (observation date and temperature in °F) are in columns A and B. Then, calculate the readings in °C using the formula in the Concepts section. The result is as shown in Figure 2.
Figure 2
Now, use the data in columns A, B, and E to plot a line chart as in Figure 3. The result is shown in Figure 4.
Figure 3
Figure 4
Delete the legend (select the legend and press the Delete key). Hide the y-axis gridlines (select the chart, then Chart | Chart Options… | Gridlines tab; in the Value (Y) Axis section uncheck the
Major Gridlines checkbox).
Change the format of the x-axis to a date format of d-mmm. Before closing the dialog box… Figure 5
…set the scale to a major unit of 1 month. Figure 6
Move the series corresponding to Mean (C) to the secondary axis (double-click the plotted series then select the Axis tab, then select the Plot series on Secondary axis option). Ensure that the
pattern remains a line with no markers.
Figure 7
Note that the shapes of the two series are starting to look alike. The chart has the values in Fahrenheit on the primary y-axis and in Celsius on the secondary y-axis. Clearly, the minimum
temperature, which happens to be in the first few days of April, is a little above freezing, which is 32°F or 0°C. So, we will rescale the primary y-axis to have a minimum value of 32°F as in Figure
8 while ensuring that the secondary y-axis minimum value remains 0°C.
Figure 8
The result is that the two series are starting to overlap. Excel has set the maximum value for the primary y-axis as 92°F. This corresponds to (92 − 32) × 5/9, or a Celsius value of 33.33°C. Hence, we should have
a maximum value of 33.33 on the secondary y-axis (while ensuring the minimum remains 0). As soon as we do that, the two series overlap precisely (Figure 1).
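The axis-bound arithmetic can be checked in a few lines of Python (the worksheet itself would use a formula along the lines of =(B2-32)*5/9; this sketch is ours, not part of the original tutorial):

```python
def f_to_c(f):
    """C = (F - 32) * 5/9 -- the conversion used throughout this example."""
    return (f - 32) * 5 / 9

# Align the secondary (Celsius) axis with the primary (Fahrenheit) axis:
# each Fahrenheit bound maps to the Celsius reading of the same temperature.
primary_min, primary_max = 32, 92        # the example's Fahrenheit bounds
secondary_min = f_to_c(primary_min)      # 0.0  (freezing)
secondary_max = f_to_c(primary_max)      # 33.33...
```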
Digital Logic Gates Part-II
NOT Gate
The NOT gate performs the basic logical function called inversion or complementation. The NOT gate is also called an inverter. The purpose of this gate is to convert one logic level into the opposite
logic level. It has one input and one output. When a HIGH level is applied to an inverter, a LOW level appears on its output and vice versa.
If X is the input, then the output F can be represented mathematically as F = X', where the apostrophe (') denotes the NOT (inversion) operation. There are a couple of other ways to represent
inversion, e.g. F = !X, where ! represents inversion. The truth table and NOT gate symbol are shown in the figure below.
Truth Table
X  F=X'
0  1
1  0
NOT gate using "transistor-resistor" logic is shown in the figure below, where X is the input and F is the output.
When X = 1, the transistor input pin 1 is HIGH; this produces a forward bias across the emitter-base junction, and so the transistor conducts. As the collector current flows, the voltage drop
across RL increases and hence F is LOW.
When X = 0, the transistor input pin 2 is LOW; this produces no bias voltage across the transistor base-emitter junction. Thus the voltage at F is HIGH.
BUF Gate
A buffer (BUF) is also a gate, with the exception that it does not perform any logical operation on its input. Buffers just pass the input to the output. Buffers are used to increase drive strength
or sometimes just to introduce delay. We will look at this in detail later.
If X is the input, then the output F can be represented mathematically as F = X. The truth table and symbol of the buffer gate are shown in the figure below.
Truth Table
X  F=X
0  0
1  1
NAND Gate
A NAND gate is a cascade of an AND gate and a NOT gate, as shown in the figure below. It has two or more inputs and only one output. The output of a NAND gate is HIGH when any one of its inputs is
LOW (i.e. even if one input is LOW, the output will be HIGH).
NAND From AND and NOT
If X and Y are two inputs, then the output F can be represented mathematically as F = (X.Y)', where the dot (.) denotes the AND operation and (') denotes inversion. The truth table and symbol of the
NAND gate are shown in the figure below.
Truth Table
X Y  F=(X.Y)'
0 0  1
0 1  1
1 0  1
1 1  0
NOR Gate
A NOR gate is a cascade of an OR gate and a NOT gate, as shown in the figure below. It has two or more inputs and only one output. The output of a NOR gate is HIGH only when all its inputs are LOW
(i.e. if even one input is HIGH, the output will be LOW).
If X and Y are two inputs, then the output F can be represented mathematically as F = (X+Y)', where the plus (+) denotes the OR operation and (') denotes inversion. The truth table and symbol of the
NOR gate are shown in the figure below.
Truth Table
X Y  F=(X+Y)'
0 0  1
0 1  0
1 0  0
1 1  0
XOR Gate
An Exclusive-OR (XOR) gate is a gate with two or more inputs and one output. The output of a two-input XOR gate assumes a HIGH state if one and only one input assumes a HIGH state. This is
equivalent to saying that the output is HIGH if either input X or input Y is HIGH exclusively, and LOW when both are 1 or 0 simultaneously.
If X and Y are two inputs, then the output F can be represented mathematically as F = X'Y + XY' (often written X XOR Y).
XOR From Simple gates
Truth Table
X Y  F=X'Y+XY'
0 0  0
0 1  1
1 0  1
1 1  0
XNOR Gate
An Exclusive-NOR (XNOR) gate is a gate with two or more inputs and one output. The output of a two-input XNOR gate assumes a HIGH state if both inputs assume the same state. This is equivalent to
saying that the output is HIGH when input X and input Y are both HIGH or both LOW, and LOW when they differ.
If X and Y are two inputs, then the output F can be represented mathematically as F = XY + X'Y' (the complement of XOR).
Truth Table
X Y  F=XY+X'Y'
0 0  1
0 1  0
1 0  0
1 1  1
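The gate definitions above can be sanity-checked with a short sketch (plain Python, with 1 for HIGH and 0 for LOW; the function names are ours):

```python
# Derived gates built from AND/OR/NOT, mirroring the text (1 = HIGH, 0 = LOW).
def NOT(x):     return 1 - x
def AND(x, y):  return x & y
def OR(x, y):   return x | y
def NAND(x, y): return NOT(AND(x, y))
def NOR(x, y):  return NOT(OR(x, y))
def XOR(x, y):  return OR(AND(NOT(x), y), AND(x, NOT(y)))  # X'Y + XY'
def XNOR(x, y): return NOT(XOR(x, y))

# Print the combined truth table for the derived gates.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, NAND(x, y), NOR(x, y), XOR(x, y), XNOR(x, y))
```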
Rosa is training for a track meet. She can run a mile in 6 minutes and 30 seconds. She wants to cut her time down by n seconds. Write an expression for the mile time Rosa is aiming for. (a) Give
the answer in seconds. (b) Give the answer in minutes. I need help, anyone?
390 − n = t (in seconds); 6.5 − n/60 = t (in minutes). No other solutions can be found without more information.
Actually, for n you would substitute a number to get an answer :)
Lukecrayonz gave you the answer already. You are supposed to just find the expression/equation, not solve it.
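For what it's worth, the two requested expressions can be written as one-liners (a sketch of our own, not from the thread):

```python
# n = number of seconds Rosa wants to cut from her 6:30 (= 390 s) mile.
def target_seconds(n):
    return 390 - n

def target_minutes(n):
    return 6.5 - n / 60   # the same target time expressed in minutes
```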
I can't think tonight or something
I kinda like arc, ^-1 kinda makes you want to make it a reciprocal not an inverse.
I don't know about real definitions, but secant, cosecant, and cotangent are reciprocal functions, while arc functions are inverse functions that have nothing to do with switching the numerator and
denominator, but instead the x and y.
Your an absolutely wonderful man, any girl that is yours is lucky and I know that from personal experience. ~KMT
Things me and God disagree about -foreign policy -Justin being funny -arc vs ^-1
cos(arccot(3/4)) (there is a whole series of these, I think there is a trick I am missing, there is no regular radian for 3/4) i dont know if you figured this out yet, but thats not how inverse
functions work
cotangent inverse gives the angle of the cotangent with the cotangent's value as its domain. use that and you can just draw a triangle and get A/H.
I am still confused; not considering cosine yet, the problem is basically cot(x) = 3/4, right? I'm not quite sure how exactly cotangents work, but the only solutions I know for tangent are fractions
on the basic unit circle (1, (sqrt(3)/2)/(1/2), the opposite of that, and 0). Please forgive me if I am completely off, I haven't taken trig anything in over 2 years which is part of my problem.
it's asking for the cosine of the angle that makes cotangent 3/4. cotangent = A/O = 3/4.
do you mean adjacent / opposite yeah uhh I still have no idea, I think I need to do some research on trig, because I couldn't even do this if it was tangent and my book gives no explanation
ok so I am physically looking at a triangle with a radian angle of 3 and a side of 4 uhhhhh do I have to like recreate this triangle, then find the cosine isn't radian 3 over 90 degrees ahhhh
no. the triangle has an angle, it can be either angle other than the 90 it really doesn't matter. its opposite sid is 4 and its adjacent side is 3.
so I am looking at a triangle with sides of 3, 4, and 5 and looking at cos(x) = 3/5 right?
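A quick numeric check of that triangle (our own sketch, not from the thread): cot θ = 3/4 means tan θ = 4/3, and in the 3-4-5 right triangle cos θ = adjacent/hypotenuse = 3/5.

```python
import math

theta = math.cos(0)  # placeholder line removed below; see next statement
theta = math.atan(4 / 3)   # the angle whose cotangent is 3/4
print(math.cos(theta))     # adjacent/hypotenuse = 3/5
```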
I was wondering what arcsin was until I read Rayne's post. Then I was like "Oh, sin^-1 !"
Originally made by LM: ~ I have said nothing because there is nothing I can say that would describe how I feel as perfectly as you deserve it. -- Kyle Schmidt ~ ~Silence is one of the hardest
arguments to refute. -- Josh Billings ~ * dragon_berry**Fallen_Wings*
what is the function that you need to plug 5 into to get the original function back
Block and Tackle Efficiencies
Not long ago a friend was looking to upgrade the mainstay^[1] block and tackle system on his sailboat.
The question was if the proposed system would provide the anticipated reduction in force. It was an interesting question that, while seemingly straight forward, does have a couple of gotchas.
Block (pulley) and tackle (rope) calculations are usually pretty simple, with the mechanical advantage (MA) idealized as:
$MA = \frac{F_B}{F_A} = n$
where F[A] is the input force, F[B] is the load, and n rope sections.
Calculating the total force of multiple non-colocated blocks using the same tackle presented a fun challenge that requires one to take into account the individual locations of the blocks and the
forces transferred.
I did not draw every single link of the block and tackle, but in general the problem looks something like this:
Considering the moment arm:
$F_{L} = (F_{A} \times A) + (F_{B} \times B)$
We devise these relations:
$F_{A} = n_{AD} \times F_{P} \times \cos\left ( \arctan\left ( \frac{\left | A-D \right |}{h}\right )\right )$
$F_{B} = n_{BC} \times F_{P} \times \cos\left ( \arctan\left ( \frac{\left |B-C\right |}{h}\right )\right ) + n_{BD} \times F_{P} \times \cos\left ( \arctan\left ( \frac{\left |B-D\right |}{h}\right )\right )$
where n[xy] is the mechanical advantage coefficient at a given point, x, with respect to another point, y; F[P] is the force exerted on the tackle (which must be uniform throughout!).
Using some assumptions regarding the lengths and relative positions of the blocks greatly simplifies the calculations, and we can plug ‘n chug from there:
$F_{L}=F_{P}\cdot \left ( \frac{n_{AD}}{2} + \frac{n_{BC}}{4} + \frac{n_{BD}}{4}\right )\cdot \cos{(\arctan(\frac{l}{8h}))}$
As expected, there is a loss of useful force due to the ropes not being normal to the boom and that causes the boom height/length ratio to become an interesting variable in these calculations. You
lose 10% of your power with a 1:3.87 ratio, 25% with a 1:7.06 ratio, and 50% with a 1:13.86 ratio.
To model a direct input (with no block and tackle), I assume that the force was applied at a point between the two blocks, A and B, and normal to the boom.
The increase from no block and tackle system to the current system (single boom aft block, A; double boom traveling block, B) is:
$\frac{F_{P}\cdot \left ( \frac{2}{2} + \frac{2}{4} + \frac{2}{4}\right )\cdot \cos{(\arctan(\frac{l}{8h}))}}{\frac{3\cdot F_{P}}{8}} = \frac{16}{3}\cdot \cos{(\arctan(\frac{l}{8h}))}$
…assuming the boom height/length ratio is 1:7, the mechanical advantage is 1:4.01.
Moving from no block and tackle to the proposed system (double boom aft block, A; triple boom traveling block, B) is:
$\frac{F_{P}\cdot \left ( \frac{4}{2} + \frac{3}{4} + \frac{3}{4}\right )\cdot \cos{(\arctan(\frac{l}{8h}))}}{\frac{3\cdot F_{P}}{8}} = \frac{28}{3}\cdot \cos{(\arctan(\frac{l}{8h}))}$
…again, assuming the boom height/length ratio is 1:7, the mechanical advantage is now 1:7.02.
The mechanical advantage from current system to proposed system is: 1:1.75
$\frac{F_{P}\cdot \left ( \frac{4}{2} + \frac{3}{4} + \frac{3}{4}\right )\cdot \cos{(\arctan(\frac{l}{8h}))}}{F_{P}\cdot \left ( \frac{2}{2} + \frac{2}{4} + \frac{2}{4}\right )\cdot \cos{(\arctan(\frac{l}{8h}))}} = \frac{3.5}{2} = 1.75$
It’s not quite the 1:2 advantage originally thought, but it’s close.
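Those numbers are easy to re-derive; here is a small sketch of the arithmetic (our own code, using the 3/8·F_P direct-input baseline and the cos(arctan(l/8h)) loss factor from above):

```python
import math

def advantage_over_direct(block_sum, l_over_h):
    """MA relative to a direct (no block and tackle) pull at the same point.

    block_sum is the weighted sum of rope counts (the n/2 and n/4 terms
    from the moment arms); the direct input contributes 3/8 of F_P.
    """
    loss = math.cos(math.atan(l_over_h / 8))  # rope not normal to the boom
    return block_sum * loss / (3 / 8)

l_over_h = 7                                                  # 1:7 height:length
current  = advantage_over_direct(2/2 + 2/4 + 2/4, l_over_h)   # about 4.01
proposed = advantage_over_direct(4/2 + 3/4 + 3/4, l_over_h)   # about 7.02
ratio    = proposed / current                                 # 3.5/2 = 1.75
```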
The bonus gotcha occurred when said friend opted to install a double boom aft block (A) and a triple deck traveler block (D), but kept the boom traveler block (B) as a double. Essentially, this put
an extra "loop" just between the boom aft block (A) and deck traveler block (D).
The imbalance of tension on the deck traveler block caused it to experience shear stress and bind on the traveler rail in ways it was not designed to — not good.
Converting the boom traveler block (B) to a triple and the deck traveler block (D) to a quad equalized the tension.
Problem solved.
1. rope from the top of the main-mast to the foot of the fore-mast on a sailing ship
United States Patent 3,576,985
Lawrence May 4, 1971
A gravity profile containing gravitational anomalies is utilized and treated to provide information as to depths of the sources of the anomalies. The initial measurements of gravity potential along
the line of exploration making up the gravity profile are corrected for change in elevation and a plurality of averaged gravity profiles are generated. The generated profiles are subtracted from one
another to generate difference profiles in which the several anomalies appear in turn and in greatly enhanced form.
Inventors: Lawrence; Philip L. (Riverside, CT)
Assignee: Mobil Oil Corporation (
Appl. No.: 04/669,314
Filed: September 20, 1967
Current U.S. Class: 702/189 ; 708/813; 708/818
Current International Class: G06G 7/00 (20060101); G01V 7/00 (20060101); G01V 7/06 (20060101); G06G 7/19 (20060101); G06g 007/19 (); G01v 007/36 ()
Field of Search: 235/181,193 340/15.5 (C)/ 340/172.5 324/1
References Cited
U.S. Patent Documents
2794965 June 1957 Yost
2908889 October 1959 Petty
3022005 February 1962 Dickinson
3023966 March 1962 Cox et al.
3166709 January 1965 Doll
3209134 September 1965 Feagin et al.
Primary Examiner:
Morrison; Malcolm A.
Assistant Examiner:
Ruggiero; Joseph F.
Parent Case Text
This is a continuation of Ser. No. 214,973 filed Aug. 6, 1962 and now abandoned.
I claim:
1. The method of treating a gravity profile containing gravitational anomalies extending along a distance scale which comprises the following steps each executed by automatic computing apparatus:
generating and storing machine responsive signals representing the amplitudes of gravity potentials represented by said gravity profile,
algebraically and progressively adding together along said profile the stored machine responsive signals representing amplitudes spaced apart by an interval,
dividing the algebraic sum of said machine responsive signals by a factor proportional to the reciprocal of said interval to produce quotient machine responsive signals, and
recording the quotient machine responsive signals representative of the averaged profile in correlation with the scale of said gravity profile.
2. The method of treating gravity profiles containing gravitation anomalies which comprises the following steps each executed by automatic computing apparatus:
generating and storing machine responsive signals representing said gravity profiles,
generating from a stored machine responsive signal representing a gravity profile a plurality of averaged machine responsive signals representing gravity profiles smoothed by differing smoothing
intervals extending along said gravity profile, and
subtracting said plurality of averaged machine responsive signals one from the other to generate machine responsive signals representing difference profiles thereby to enhance the appearance of the
anomalies on the several difference profiles.
3. The method of claim 2 in which said averaged machine responsive signals are generated by integrating with respect to distance along said gravity profile,
algebraically adding together values from the integrated profile corresponding with
∫g(t+T)dt - ∫g(t-T)dt, where
g(t) = gravity values along said gravity profile,
T = one-half the smoothing interval, and
t = distance along said gravity profile, to produce a resultant sum machine responsive signal, and
modifying said sum machine responsive signal by the factor 1/2T.
4. The method of claim 2 in which said smoothing intervals each differ one from the other by the same order of change in length.
5. The method of claim 3 in which said smoothing intervals each differ one from the other by an octave.
6. The method of claim 5 in which anomalies on said difference profiles are related to zones of depth approximately respectively one-half of the smoothing intervals of two generated machine
responsive signals subtracted one from the other.
7. The method of claim 2 in which each of said difference profiles is recorded and the separation distances at half amplitude of anomalies are recorded in terms of depth of their occurrence.
8. The method of claim 2 in which said gravity profile is extended at and without change from both its initial and final values by amounts equal to the maximum value of said smoothing intervals.
9. The method of treating gravity profiles containing gravitational anomalies which comprises the following steps each executed by automatic computing apparatus recording on a reproducible medium
signals representative of a gravity profile with the initial and final values of the gravity potential of said profile extended a distance along said medium proportional to a distance equal to the
maximum of the smoothing interval defined by the smoothing function hereinafter set forth,
generating from said medium electrical signals,
convolving with said electrical signals representative of said gravity profile g(t) a smoothing function U .sub.T (t) for generating averaged gravity profiles 2T/g(t) where each change of T is of the
same order as the preceding change, and subtracting in succession said profiles one from the other, the longer-smoothed from the shorter-smoothed profiles, for generating difference profiles thereby
to enhance the appearance of the anomalies on the several difference profiles.
10. The method of claim 9 in which said electrical signals representative of said averaged gravity profiles are recorded in succession prior to said subtraction thereof one from the other.
11. The
method of processing gravity profiles representing gravitational potential along a line on the earth to accentuate anomalies therein comprising the following steps each executed by automatic
computing apparatus:
recording said gravity profiles along a length of a reproducible medium corresponding with a distance along said line on the earth,
converting the recording on said reproducible medium to a first electric signal representing said gravitational potential as a function of time,
integrating said first electric signal to produce a second electric signal representing integrated gravity potential as a function of time,
continuously subtracting a first component of said second electric signal from a second component of said second electric signal which is separated in time from said first component by an interval which is
an integer multiple of a sampling interval to generate a third electric signal representing an average gravity profile,
continuously subtracting a third component of said second electric signal from a fourth component of said second electric signal which is separated in time from said third component by an interval
which is a different integer multiple of said sampling interval to generate a fourth electric signal representing an average gravity profile,
continuously subtracting said third electric signal from said fourth electric signal to generate an electric signal representing a difference profile, and
recording said electric signal representing said difference profile on a reproducible medium to produce a record in which the appearance of said
anomalies is enhanced.
12. A system for treating a gravity profile containing gravitational anomalies extending along a distance scale in which system elements are interconnected to comprise:
means for generating electrical signals representing the amplitudes of gravity potentials represented by said gravity profile,
storage means for storing the generated electrical signals,
means for extracting from said storage means a plurality of components of the signals stored therein, each time-shifted one from the other by a different smoothing interval related to said scale,
summation means, pairs of said components being applied to said summation means to produce at the output thereof signals representing the difference between the pairs of components applied to the input thereof,
dividing means, the output of said summation means being applied to said dividing means, said dividing means being set to divide the output of said summation means by the smoothing interval
separating the pair of components applied to the input of said summation means, the output of said divider being an electrical signal representing a gravity profile averaged over a smoothing interval,
means for storing a plurality of the electrical signals each of which represents a gravity profile averaged over a smoothing interval,
difference means,
means for extracting from storage pairs of said electrical signals representative of averaged gravity profiles averaged over different smoothing intervals, said last-named pairs of signals being
applied to said difference means, to generate at the output of said difference means an electrical signal representing a difference gravity profile, and
means for recording the output of said difference means on a reproducible medium to produce a gravitational record in which the appearance of said
anomalies is enhanced.
13. A system for treating a gravity profile containing gravitational anomalies extending along a distance scale in which system elements are interconnected to comprise:
means for generating electrical signals varying in proportion to the amplitudes of gravity potentials represented by said gravity profile,
means for storing said generated signals on a reproducible recording medium on a scale proportional to said distance scale,
pickup means associated with said last-named recording medium respectively spaced one from the other by amounts respectively equal to a plurality of smoothing intervals of increasingly greater length,
an adder for algebraically adding together the signals detected by said pickup means,
voltage dividing means,
means for applying the algebraic sum of said detected signals to said voltage dividing means for modifying the sum by a factor proportional to the reciprocal of the smoothing interval,
means for recording the output of said last-named means as representative of an averaged gravity profile in correlation with said scale of said stored first-named gravity profile, and
signal storing means interposed between said recording means and said voltage dividing means for storing electrical signals, together with means for reproducing the signals stored therein for
application to said recording means, the electrical signals from the output of said voltage
dividing means being applied to said signal storing means.
14. A system for treating a gravity profile containing gravitational anomalies extending along a distance scale in which system elements are
interconnected to comprise:
means for generating electrical signals varying in proportion to the amplitudes of gravity potentials represented by said gravity profile,
means for storing said generated signals on a reproducible recording medium on a scale proportional to said distance scale,
pickup means associated with said last-named recording medium respectively spaced one from the other by amounts respectively equal to a plurality of smoothing intervals of increasingly greater length,
an adder for algebraically adding together the signals detected by said pickup means,
voltage dividing means,
means for applying the algebraic sum of said detected signals to said voltage dividing means for modifying the sum by a factor proportional to the reciprocal of the smoothing interval,
means for recording the output of said last-named means as representative of an averaged gravity profile in correlation with said scale of said stored first-named gravity profile,
said means for recording having a plurality of recording and reproducing means for storing signals representative of a plurality of said averaged profiles,
difference means associated with said reproducing means, and
means for applying pairs of said signals representative of a plurality of said averaged profiles to said difference means, said difference means generating a plurality of output signals each
representing the difference
between different pairs of signals recorded by said recording means.
15. A system for treating a gravity profile containing gravitational anomalies extending along a distance scale in which system
elements are interconnected to comprise:
means for generating electrical signals varying in proportion to the amplitudes of gravity potentials represented by said gravity profile,
means for storing said generated signals on a reproducible recording medium on a scale proportional to said distance scale,
pickup means associated with said last-named recording medium respectively spaced one from the other by amounts respectively equal to a plurality of smoothing intervals of increasingly greater length,
an adder for algebraically adding together the signals detected by said pickup means,
voltage dividing means,
means for applying the algebraic sum of said detected signals to said voltage dividing means for modifying the sum by a factor proportional to the reciprocal of the smoothing interval,
means for recording the output of said last-named means as representative of an averaged gravity profile in correlation with said scale of said stored first-named gravity profile,
said means for recording having a plurality of recording and reproducing means for storing signals representative of a plurality of said averaged profiles,
difference means associated with said reproducing means,
means for applying pairs of said signals representative of a plurality of said averaged profiles to said difference means, said difference means generating a plurality of output signals
each representing the difference between different pairs of signals recorded by said recording means, and
recording means for recording as difference profiles said output signals.
16. The method of processing data representing physical properties of the earth including anomalies extending along a distance scale which comprises the following steps each executed by automatic
computing apparatus:
generating and storing machine responsive signals representing the amplitudes of said physical properties along a profile,
algebraically and progressively adding together along said profile the stored machine responsive signals representing amplitudes spaced apart by an interval,
dividing the algebraic sum of said machine responsive signals by a factor proportional to the reciprocal of said interval, and
recording the quotient of said last-named step as machine responsive signals representative of the averaged profile in correlation with the scale of said physical properties.
This invention relates to methods of and means for transforming gravitational potentials existing along a profile into a form which accentuates anomalies and provides information as to the depths of
the structures giving rise to them.
Measurements of gravity taken along a line of exploration and thus providing a gravity profile have long been used in geophysical exploration for the reason that the gravity potential along the
profile will vary with change in the density of structures located generally below each point of measurement. Gravitational maps and gravity profiles, though having a substantial degree of usefulness,
have nevertheless left much to be desired in providing information as to the depth and character of the masses which give rise to gravitational anomalies in gravity profiles.
In accordance with the present invention, there is utilized a gravity profile containing gravitational anomalies. The gravity profile comprises measured values of gravity or gravity potential along
the line of exploration and is generally available in the form of a graph or in a table of gravity measurements either as recorded in the field or as taken from a gravity map in correlation with
successive distances from a reference point along the profile. After correction of the measurements for change in elevation along and near the profile, there are generated from the resultant gravity
profile a plurality of averaged gravity profiles each differing from the other by reason of the use of a different averaging operator, each successively longer than the last. The generated profiles
are then subtracted one from another to generate difference profiles. On these difference profiles the several anomalies appear in turn and in greatly enhanced form. The enhancement, the distinctive
character of the anomaly and the increased amplitude all appear as functions of the depths of the sources of the anomalies.
The present invention takes advantage of the fact that the special "period" or length of an anomaly as it appears in the gravity profile is, among other factors, a function of the depth of the source
of that anomaly. Accordingly, a reinforced or enhanced peak appearing on one of the difference profiles will be representative of a source located in a depth range which will lie below the earth's
surface no greater than a distance between about one-half of the lengths of the averaging operators applied to the two averaged profiles subtracted one from the other to produce the difference
profile. Advantage is further taken of a known simple amplitude relationship determined in reference to the occurrence of the maximum amplitude of an anomaly. Thus if half the horizontal distance
from the point of maximum amplitude to the point of half amplitude be taken, the amplitude value at that half-distance point will then provide information as to whether the anomaly in question may be
considered due to a concentrated mass at a substantial depth or to a shallower distributed mass. For concentrated (cylindrical or spherical) mass anomalies, the difference curves are a direct
indication of depth. For distributed masses, the difference curves give the maximum possible depth indication, but the actual depth may be less, depending on the assumed shape.
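The half-amplitude measurement described above can be sketched in code. This is a minimal illustration, not part of the patented apparatus; the profile shape and unit sample spacing are invented assumptions.

```python
# Sketch of the half-amplitude measurement: locate the peak of an anomaly
# on a sampled profile, then find the horizontal distance at which its
# amplitude falls to half the maximum. dx is the sample spacing.
def half_amplitude_distance(profile, dx=1.0):
    peak = max(range(len(profile)), key=lambda i: profile[i])
    half = profile[peak] / 2.0
    # walk right from the peak until amplitude drops to half-maximum
    for i in range(peak, len(profile)):
        if profile[i] <= half:
            return (i - peak) * dx
    return None  # anomaly never falls to half amplitude within the profile
```

Half this distance, taken as described in the text, locates the point whose amplitude value distinguishes concentrated from distributed masses.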
The present invention may be practiced both by analog and digital procedures and in each case greatly extends the information available to geophysicists in their study of the nature of subsurface
formations as revealed by gravity profiles.
For further objects and advantages of the invention together with detailed instructions of how to construct apparatus embodying the invention and to practice the methods of the invention, reference
is to be had to the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates in part a plurality of stations at which gravitational measurements are made and the manner in which these readings have in the past been averaged;
FIG. 2 illustrates a gravity profile prepared for further processing in accordance with the present invention and illustrating three averaging intervals in connection therewith;
FIG. 3 is a sketch of an averaging operator labeled in accordance with the terms used in equations utilized in explaining the invention;
FIGS. 4 and 5 respectively are illustrations of unit step, or "Heaviside" functions, one positive and one negative, and both including mathematical expressions therefor;
FIG. 6 presents for ready reference the basic equation, the solution of which is provided in accordance with the present invention;
FIG. 7 diagrammatically illustrates an apparatus and system of the analog type embodying the invention;
FIGS. 8, 9 and 12 are graphs plotted with distance as abscissae and gravity potentials in gravity units gu as ordinates illustrative of the nature and character of data provided in accordance with
the present invention;
FIG. 10 diagrammatically illustrates an apparatus by means of which there may be subtracted one profile from another; and
FIG. 11 illustrates data obtained in accordance with the invention and the manner in which it is utilized to delineate subsurface formations.
Referring now to FIG. 1, there have been illustrated a plurality of stations at which gravity or gravity potential may be measured. It has been conventional to establish a gravity measuring station
at the intersection of vertical lines or columns designated A--K respectively and rows or horizontal lines 1--11. Only selected stations have been illustrated as small rectangles: those needed to
emphasize the manner in which gravity measurements in the past have been averaged. Thus, only on rows 10 and 11 have all of the stations thereof been illustrated and these by open circles.
If it be desired to obtain the residual gravity at station E-5, there will be subtracted from the gravity potential at E-5 the average of the gravity readings along a circle of selected size having
its center at E-5. Thus, readings at stations A-5, B-2, B-8, E-1, E-9, H-2, H-8 and I-5 will be added together and divided by 8. If residual values be desired along a gravity profile such as 5-5, then it
will be understood that the foregoing laborious procedure will have to be carried out for the plurality of stations lying along profile 5-5 and further, that the computations will differ for each
station since the values around the circle with its center successively at the several stations of profile 5-5 will differ. The averaging and difference procedures just described have been utilized
to provide information on anomalies which may appear at a depth approximating the radius of the aforesaid circle. These methods, however, utilize sampling intervals of the order of the periods of
anomalies and inject digital processing noise.
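The prior-art residual computation just described can be sketched as follows. The grid layout mirrors the station arrangement of FIG. 1, but the ring geometry, diagonal offset, and grid values are illustrative assumptions, not data from the specification.

```python
# Classical residual gravity: subtract from the reading at a station the
# average of readings on a circle of eight surrounding stations.
def residual_gravity(grid, row, col, radius):
    """grid[row][col] holds gravity readings on a regular station grid."""
    d = radius
    h = round(d * 0.75)  # diagonal offset roughly approximating r/sqrt(2)
    # eight stations approximating a circle of the given radius:
    ring = [(row, col - d), (row, col + d),
            (row - d, col), (row + d, col),
            (row - h, col - h), (row - h, col + h),
            (row + h, col - h), (row + h, col + h)]
    avg = sum(grid[r][c] for r, c in ring) / len(ring)
    return grid[row][col] - avg
```

As the text notes, this whole computation must be repeated station by station along the profile, with a different set of ring stations each time.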
In accordance with the present invention, there are provided rapid depth estimates for the anomalies present in a gravity profile with enhanced separation of such anomalies from regional effects as
well as separation of basement or sedimentary anomalies from near-surface, erratic density variations and erratic topographic effects. At the same time, processing noise is moved to a frequency well
above anomaly frequencies. The manner in which these objectives are achieved will now be set forth in terms of a typical graph 20, obtained from a profile such as FIG. 2, representing a gravity
profile with gravity potential g (t) as ordinates plotted against distance as abscissae. It is to be noted that the first point 20a on graph or profile 20 appears at a substantial distance from the
origin and that the last point 20n a substantial distance from the terminal end of graph 20. The measurements of the gravity potential at points 20a and 20n have been extended respectively to the
left and to the right and each through a distance equal to, or greater than, one-half the length of the longest averaging operator utilized. By so extending the measured values, there is eliminated
first-order truncation error and there is minimized distortion in the averaging of the data due to the ends of the line. It may here be noted that the data represented by the extensions just described are
eliminated after the computations, later to be described, have been completed.
With the gravity profile 20 available, a sampling interval will be selected which, as shown, may be the interval h corresponding with the horizontal distances between adjacent gravity potentials as
appearing on the profile 20. These may correspond with the separation distances between measuring stations, or the interval h may be selected, as for example, as one-quarter mile. In FIG. 2 these
quarter-mile points have been illustrated and they appear as dots on a smooth curve illustrating the gravity profile. Station values will frequently be obtained at distances other than the desirable
h. In this event, the selected values are obtained from a smooth curve drawn through the observed station values in the manner shown in FIG. 2. This method automatically interpolates gravity values
between the observed values and also takes into account any apparent error in a station value because it is inappropriate in that it falls an appreciable distance from the smooth curve. In this
connection, the interval h is chosen to be less than one-half the length of the shortest fluctuation in the observed gravity potential, i.e., the distance between the beginning and end of the
shortest anomaly as measured along the abscissa. This procedure of selecting h is in accordance with Shannon's sampling theory.
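The rule for selecting h can be expressed compactly. The candidate intervals below are invented for illustration; the criterion itself (h less than one-half the shortest fluctuation) is the one stated in the text.

```python
# Choose the largest sampling interval h satisfying the rule above:
# h must be less than one-half the length of the shortest anomaly.
def choose_sampling_interval(shortest_anomaly_length, candidates):
    valid = [h for h in candidates if h < shortest_anomaly_length / 2]
    return max(valid) if valid else None
```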
Mathematically, the smoothing operator, illustrated in FIG. 3, must be some integer multiple of the sample interval h for discrete operators. If the midpoint of the operator be taken as t = 0, where t
is distance (t= ih, where i = any integer), then the operator may be expressed as U (t). The total length of the operator will be 2T. This concept is particularly helpful since U (t) may be
decomposed into two unit step functions, namely, .+-. U.sub.o (t.+-. T). FIG. 4 illustrates that U.sub.o (t+ T) is a unit step function; similarly, - U.sub.o (t- T) is a unit step function, the
former being positive and the latter negative in sign. Thus these two unit functions, when algebraically added, define the smoothing operator of FIG. 3. Further extensions of this technique will be
later explained.
If a function such as gravity potential g(t) be convolved with a unit step function, the result will be the integral of g(t). Accordingly, there may be written the following equation and for
convenience presented in FIG. 6:
2T/g(t) = 1/2T [∫g(t+T)dt - ∫g(t-T)dt] (1)
In the foregoing equation, the expression 2T/g(t) is the average gravity potential taken over the averaging length 2T. As will be later explained, the function U.sub.T (t) will be utilized for
obtaining running averages of the several values appearing throughout the length of the gravity profile 20. Though an analog method of performing that averaging has been illustrated in FIG. 7, it is
clear that one may utilize instantaneous values of gravity potential along the profile 20, and average these values as the averaging operator is moved along profile 20. This can conveniently be done
by converting the instantaneous values to binary numbers and the averages obtained by a conventional averaging program for a digital computer. In the program for the computer, the foregoing Equation
(1) applies with the substitution of (ih) for the term (t) and an operational summation sign for each of the integral signs. The interval h is substituted for (dt).
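The digital form of the averaging just described can be sketched as a running average computed from cumulative sums, with constant extension of the end values as the text prescribes. Profile values, the half-interval T (in samples), and the helper name are illustrative assumptions.

```python
# Discrete form of Equation (1): the running average over a smoothing
# interval 2T is the difference of two cumulative sums divided by 2T.
def running_average(g, T):
    """g: gravity values sampled every h; T: half-interval in samples."""
    # extend the profile at both ends with its initial and final values
    # to avoid first-order truncation error, as the text prescribes
    padded = [g[0]] * T + list(g) + [g[-1]] * T
    cum = [0.0]
    for v in padded:
        cum.append(cum[-1] + v)
    # average over 2T samples around each original sample; the sample
    # spacing h cancels when the sum is divided by the interval length
    return [(cum[i + 2 * T] - cum[i]) / (2 * T) for i in range(len(g))]
```

A constant profile passes through unchanged, as expected of an averaging operator.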
The result of averaging the gravity values over the profile 20 will be the production of a new gravity profile then representative of average values of g(t) for the first averaging interval 2T which,
as above suggested, may correspond with a distance of one-half mile if T=h and h=one-quarter mile.
The foregoing operations are then repeated with successively different averaging intervals. It is preferred that these intervals each differ from the other by an octave (approximately) or that the
second and each successive averaging interval have approximately twice the length of the preceding one. Thus, if the first smoothing interval 2T corresponds with T=h of FIG. 2, the second smoothing
interval 4T will have twice the length of interval 2T and the third interval 8T will have twice the length of interval 4T. The smoothing intervals will progressively change and have values of 16T,
32T, 64T, 128T and more, if desired.
Although the averaging operators illustrated herein are 2.sup.n T long (where n=1, 2, 3, 4, ... etc.), it is often desirable to use operators (2.sup.n -1) T long, that is, 3T, 7T, 15T, 31T, 63T and
127T length smoothing intervals, since such smoothed curves, in digital operations, are centered about the observed gravity values.
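The two octave progressions of operator lengths described above can be generated as follows; the function name and parameters are illustrative, not terms from the specification.

```python
# Octave-spaced smoothing intervals: each interval is twice the preceding
# one. The odd-length variant (2**n - 1)*T keeps digitally smoothed
# curves centered about the observed gravity values.
def smoothing_intervals(T, n_max=7, centered=False):
    if centered:
        return [(2 ** n - 1) * T for n in range(2, n_max + 1)]  # 3T, 7T, ...
    return [2 ** n * T for n in range(1, n_max + 1)]            # 2T, 4T, ...
```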
The several resultant averaged gravity profiles are then subtracted one from the other to generate difference profiles. The difference profiles will contain enhanced representation of anomalies as a
function of the depths of the masses or sources giving rise to the respective anomalies. These multiple operations may be rapidly performed, thus to provide in conveniently usable form new and
improved data from gravity profiles.
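The overall method described above, averaging with octave-spaced operators and subtracting successive averaged profiles, can be sketched end to end. This is a minimal digital illustration under the assumptions already noted (constant end extension, intervals given as half-lengths in samples); it is not a transcription of the analog apparatus of FIG. 7.

```python
# Average a profile over an interval 2T using cumulative sums, with
# constant extension of the end values to avoid truncation error.
def averaged(g, T):
    padded = [g[0]] * T + list(g) + [g[-1]] * T
    cum = [0.0]
    for v in padded:
        cum.append(cum[-1] + v)
    return [(cum[i + 2 * T] - cum[i]) / (2 * T) for i in range(len(g))]

# Form the difference profiles: the longer-smoothed profile subtracted
# from the shorter-smoothed one, in succession.
def difference_profiles(g, half_intervals):
    avgs = [averaged(g, T) for T in half_intervals]
    return [[a - b for a, b in zip(avgs[k], avgs[k + 1])]
            for k in range(len(avgs) - 1)]
```

On a featureless (constant) profile every difference profile is zero; anomalies of a given spatial length survive only on the difference profiles whose two smoothing intervals bracket that length.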
In the analog embodiment of the invention, as illustrated in FIG. 7, the gravity profile 20 of FIG. 2 has been converted to a reproducible signal 20s as a continuously variable area record on film
30. This film or recording medium 30, though illustrated as of the photographic type, could just as well be a magnetic tape having recorded thereon the changes in gravity potential as shown by curve 20 of FIG. 2 and with an associated pickup head for generating signals representative of continuous gravity profile 20. For the photographically reproducible signals, a light source 31 in conventional manner (lenses and light slits having been omitted to simplify the drawing) directs a plane of light across the film 30. Light
passing through the continuously variable area record is received by a photoresponsive cell 32. The film is driven at a constant speed by motor 33 and has been illustrated as of the endless type for convenient repetitive playback. Since the length of the profile is
related to distance and since it is being driven at constant speed, it will be understood that there will have been established proportionality between distance and time.
The output from the photoresponsive means 32 is amplified by an amplifier 34 and applied to an integrator 36. This integrator is preferably of high quality and may be of the kind illustrated at pages 12--20 in the book
by Korn and Korn entitled "Electronic ANALOG Computers," Second Edition (1956). The integrator 36 performs the integrating steps as called for by Equation (1) as appearing in FIG. 6. The output from
the integrator, ∫g(t)dt, with further amplification if desired, is applied to a magnetic recording head 37 disposed in recording relationship with a magnetizable medium, such as magnetic tape, carried by a drum 38 supported on a shaft 39 driven by a motor 40, or if desired, from the motor 33.
As shown, there are provided in association with the drum 38 a plurality of pickup heads 41--47.
The pickup or reproducing heads 41--47 have respectively associated therewith amplifiers 51--57. Though only two pickup heads 41 and 43 need be utilized with these heads adjustable circumferentially
on the drum 38, in practice, it will be preferred to utilize a multiplicity of pickup heads all circumferentially adjustable on the drum 38. Each of the respective pairs will have separation
distances corresponding with the differing selected smoothing intervals. Thus, if a reference point be chosen on the drum 38, such for example, as the position of the pickup head 42, the purpose of
which will be later explained, then the pickup head 41 will have a distance from the reference position at pickup head 42 corresponding with the interval -T of FIGS. 2 and 3. With clockwise rotation
of drum 38, pickup head 41 will be lagging, that is to say, the integral of the gravity potential arrives at pickup head 41 after the time occurrence t by the interval T. Similarly, it will be seen
that the pickup head 43 is in a leading position, that is, spaced from the reference position of pickup head 42 by the interval T. Thus, the time occurrence t of the events on the profile 20 of FIG.
2 arrive at this pickup head 43 in time advanced by the interval T. In terms of the expressions appearing in FIGS. 4 and 5, pickup head 41 corresponds with the (t-T) expression, and pickup head 43
corresponds with the expression (t+T). The negative sign of the expression of FIG. 5 (-U.sub.o) is taken into account by the polarity of the connection from pickup head 41 to amplifier 51 or, of
course, that polarity can be established at the output of the amplifier.
As the observed gravity profile, after integration and after extension of its initial and final values as described in connection with FIG. 2, is recorded on the drum 38 it then appears at pickup
head 43. This pickup head reproduces the initial value until the drum not only has moved the recorded signal to pickup head 41 but also through a distance corresponding with the greatest sampling
interval 128T. Thus as the first of the fixed values 20a, FIG. 2, arrives at pickup head 43, that value will be combined with successive values appearing at pickup head 43. Finally, as the end of the
record arrives at pickup head 43 corresponding with the last fixed value 20n, FIG. 2, pickup head 43 continues to generate signals corresponding with value 20n until the end of the extended record.
The foregoing, of course, applies equally to the pickup heads having separation distances equal to 128T. It is in this manner that there is avoided the truncation error already described.
From the foregoing analysis, it will now be clear that the pickup head 41 generates at the output of amplifier 51 signals corresponding with the expression [-∫g(t-T)dt] and that the pickup head 43 generates at the output of amplifier 53 signals corresponding with the expression [∫g(t+T)dt]. The negative sign ahead of the first mentioned expression has already been taken care of by the
connection to or from the amplifier 51 opposite to the connection to or from the amplifier 53.
The two expressions within the brackets of Equation (1), FIG. 6, are now algebraically added together by applying the outputs from the amplifiers 51 and 53 through contacts 60a and 60b by way of
summing resistors 61 and 62 to a summing amplifier 63. Thus, the output of amplifier 63 provides a solution for the two terms within the brackets of Equation (1) of FIG. 6. By applying the output of
amplifier 63 to a potentiometer 64 and by suitably setting the movable contact associated with this voltage divider, the output at line 65 will be the output of the amplifier 63 modified by the
expression 1/2T: the reciprocal of the smoothing interval 2T. Thus, the movable contact of potentiometer 64 may be adjusted as the selector switch 60 is operated to change the connections to the
amplifier for solving the equation with a different smoothing interval.
The solution of Equation (1) of FIG. 6 which has thus been provided appears as the output from potentiometer 64 and corresponds with the expression 2T/g(t). It is this signal which may be applied
directly to the amplifier 93 of the recorder. However, and for reasons which will later become apparent, that signal is applied by way of contact 60c of selector switch 60 to a recording head 67 of a
recording drum 38a carrying a recording medium, such as magnetic tape. In practice, the mechanical connection shown by the broken line connection from the movable contact of potentiometer 64 will
ordinarily be omitted, and instead there will be provided summing amplifiers for each pair of pickup heads 41--47, etc., and with several potentiometers corresponding with potentiometer 64 set in terms of the respective smoothing intervals. Thus, as the drum 38 is rotated, there will concurrently be recorded by recording heads 67--73, etc., averaged gravity profiles, each averaged with a
different length interval. Thus, consistent with the assumptions earlier made, the spacing between pickup heads 41 and 43 will be equal to 2T, the spacing between heads 44 and 45 equal to 4T, the
spacing between heads 46 and 47 equal to 8T, and so on until the final smoothing interval, which may be 128T, will have been set by pickup heads associated with drum 38. Since not all of these pickup
heads have been illustrated in association with drum 38, the final recording head 73 carries the label 2^nT/g(t), where n = 1, 2, 3, 4, 5, 6, 7.
Though the subtraction of one averaged gravity profile from another may be accomplished by suitable switching between the plurality of pickup heads 74--81, it is to be understood that all of the
desired difference profiles may be concurrently generated by providing a plurality of summing amplifiers. For purposes of simplicity, only two amplifiers 82 and 83 are shown in FIG. 7. One of them,
the amplifier 82 has its output connected to the associated recording head 90 in recording relationship with a recording medium, such as magnetic tape on a drum 91. In practice, the drum 91 will be
wide and will have recording heads and pickup heads in number respectively equal to the number of difference gravity-profiles selected for an embodiment of the present invention.
The pickup head 75 generates a signal corresponding with the expression 2T_2/g(t), and the pickup head 76 generates a signal corresponding with the expression 4T_4/g(t). Thus, if the connections to pickup head 76 as applied to the input of the summing amplifier 82 be reversed relative to the polarity of the conductors of pickup head 75, it will be seen that the summing amplifier 82 will provide an output corresponding with the difference between the aforesaid averaged gravity profiles, i.e., 2T_2/g(t)-2T_4/g(t). This difference gravity profile is now recorded on
the tape carried by the drum 91. It is reproduced by the pickup head 92, amplified by an amplifier 93, and applied to control the rotation of a motor 94 which through a belt or violin string 95
positions a recording head or pen 96 laterally of a recording medium 98. This recording medium, which can be a recording chart, is driven from shaft 39 which carries the drums 38, 38a and 91.
Recorders of the type referred to are well known to those skilled in the art and are highly suited to the generation of the difference gravity profile as detected by pickup head 92. In practice, a
multiplicity of recorders may be utilized, but for reasons of economy it will sometimes be preferred to utilize switching as between the respective pickup heads 74--81 for successive generation of
the difference gravity profiles by means of which valuable information may be obtained from the original gravity profile illustrated in FIG. 2. As further illustrated in FIG. 7, the second difference
gravity profile may be generated by applying the output from amplifier 83 to the recording head 90. This difference profile, for illustration of the broad principle, has been shown in FIG. 7 as
2T_2/g(t)-2T_n/g(t). Thus, there have been disclosed several embodiments by means of which the invention may be practiced.
Now that there has been explained the manner in which an analog solution has been provided for the equation of FIG. 6, it is again emphasized that the equation may be solved by utilizing the
smoothing operators as set forth above, digitizing the values from the gravity profile 20 with the extensions at the ends as already described and that after averaging with the several length
operators, a computer may be readily programmed to generate binary outputs representative of difference profiles, and such binary outputs may be, through a decoder, applied to the amplifier 93 to generate the difference profiles in the manner just explained.
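The digital alternative just mentioned can be sketched in a few lines. The signal, the value of T, and the reflective end extension are assumptions chosen for illustration; the essential steps are the doubling sequence of boxcar averages and the subtraction of each longer-interval average from the next shorter one:

```python
import numpy as np

def boxcar(x, w):
    """Average x over a window of w samples; reflective padding stands in
    for the end extensions of the profile described in the text."""
    xp = np.pad(x, w, mode="reflect")
    return np.convolve(xp, np.ones(w) / w, mode="same")[w:-w]

# Hypothetical digitized gravity profile (a random walk stands in for g(t)).
rng = np.random.default_rng(0)
profile = np.cumsum(rng.standard_normal(1024))

# Averages with the doubling smoothing intervals 2T, 4T, ..., 128T;
# here T corresponds to 4 samples, an arbitrary illustrative choice.
T = 4
smoothed = [boxcar(profile, 2**n * T) for n in range(1, 8)]

# Difference profiles: each longer-interval average is subtracted from
# the average of the next shorter smoothing interval.
differences = [shorter - longer
               for shorter, longer in zip(smoothed[:-1], smoothed[1:])]
print(len(smoothed), len(differences))  # 7 6
```

Each difference profile isolates the band of anomaly widths that the shorter interval passes and the longer interval removes, which is the function performed by the summing amplifiers 82 and 83.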
Referring now to FIGS. 8 and 9, there will be considered the multiplicity of averaged profiles and difference profiles all generated as described above.
In FIG. 8 the curve 101 has been plotted from observed data, i.e., from gravity measurements taken along a line of exploration to provide the gravity profile. This gravity profile is converted into
the reproducible signal 20s, FIG. 7, integrated at 36 and the smoothing interval 2T_2 established to produce outputs applied to the amplifier 63. After modification by the potentiometer 64 there is produced at conductor 65 an output equal or proportional to the gravity potential of curve 101 smoothed by the operator 2T_2. This is the signal recorded by recording head 67 on drum 38a.
As shown in FIG. 10, the signal recorded by recording head 67 is detected by the pickup or playback head 75 and through a multiple point switch 97 is applied to the amplifier 82 with its output
connected to the recording head 90 of drum 91. The pickup or playback head 92 applies the resultant output to amplifier 93 and thence to the motor 94 which generates on the recording medium 98 the
gravity profile 102 of FIG. 8.
By operating the switch 97 to the successive positions with concurrent change in positions of selector switch 60 of FIG. 7 there will be generated the smoothed gravity potential curves 102--108, FIG.
8, each respectively smoothed with smoothing operators 2T, 4T, 8T, 16T, 32T, 64T and 128T. The result of these smoothing operations is the progressive removal of higher (spatial) frequencies (shorter
length oscillations) present in the original observed gravity profile 101, these high-frequency components being due to gravity anomalies closer to the surface. With the smoothing operator based on samples about a quarter of a mile apart, the 128T interval (T = one-quarter mile) will largely average out anomalies less than about 16 miles across (their periodicities then being less than about 32 miles).
By now taking the differences between the foregoing progressively smoothed curves 101--108, anomalies appearing in the form of wavelets of differing frequency will be accentuated. Thus as explained
in detail in connection with FIG. 7 (and in which there was directly obtained the curves now to be described) the curve or difference gravity potential 112 of FIG. 9 is the result of subtracting the
gravity potential 103 of FIG. 8 from the gravity potential 102. For convenience, each curve of FIG. 9 has in the left-hand margin a reference to the operations which produced it, i.e. 2--4, being
indicative of the manner in which the curve 112 was obtained. Thus the difference profile 113 resulted from the subtraction of curve 104 from 103; difference profile 114 was obtained from subtracting
curve 105 from 104. The difference profile 115 was obtained from subtracting profile 106 from 105. The difference profile 116 was obtained by subtracting profile 107 from 106. The difference profile
117 was obtained by subtracting profile 108 from 107. Thus, the profile of longer smoothing interval is in each case subtracted from a profile of a shorter smoothing interval.
Referring again to the difference profile 112 and with the foregoing assumed selection of samples of one-quarter of a mile, anomalies ranging in width from about one-quarter to about one-half of a mile will be enhanced. Thus there appears on gravity potential 112 a wavelet having a positive-going component 112a and a negative-going component 112b. In accordance with the present invention,
advantage is taken of the fact that the anomaly period or length is a function of the depth of the source of the anomaly. Since the width of the anomaly on say the positive component 112a can have a
width at half maximum amplitude which can range from between 800 ft. to 1500 ft., it is known at once that the wavelets 112a and 112b will be representative of anomalies within or near the foregoing
range. If the width of the positive component 112a at 50 percent of maximum amplitude be measured, it will be found that the depth Z will approximate 1500 feet. It is for this reason that the
difference profile 112 has on it the label ΔZ (the range of depth from 800 feet to 1500 feet). Similarly, the remaining difference profiles 113--117 have been labeled with their respective
depth ranges and also with depths as determined from the width of the wavelets at their halfway points. In this connection it is to be noted that on difference profile 113 the depth of the anomaly is
given as 5500 feet which is beyond the range of ΔZ for this gravity profile. The same disparity exists for the depth Z of 7200 feet for the difference profile 114. In both profiles, however, it
will be noted that the two troughs 113b and 113c indicate that the anomaly 113a has both a positive and negative portion. The negative-going portion can be illustrated by the broken line 113d. Thus
the fact that two anomalies superimpose explains the disparity between the predicted range of depth and that determined by the measurements described above. Accordingly, the negative-going wave can be ascertained as indicated by the broken-line representation 114d.
In the gravity profile 115 "picks" can be made at the 50 percent points for the negative-going component 115b and for the positive-going component 115a. However, knowing that the negative-going
anomaly has been developing as explained in connection with gravity potentials 113 and 114, the difference profile 115 may be considered as lacking in sufficient definiteness to indicate a simple
anomaly and thus attention may be directed to the difference profile 116. Here it will be seen that the negative-going character of the anomaly is now quite pronounced and picks may be readily made
as indicated by the vertical lines intersecting the negative-going component 116b which provide a depth for the anomaly at 16,000 feet. The broken-line portion of the component 116b can be reliably
drawn since it is known that there still remains some effect from the positive-going components which appeared in the earlier difference profiles 112--115. In the final difference profile 117 there
is obtained a very smooth negative component 117b providing even greater reliability in the pick of the 50 percent points for determination of the depth Z approximating 30,000 feet.
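The "picks" at the 50 percent points amount to measuring a wavelet's width at half its maximum amplitude. A sketch of that measurement on a synthetic wavelet follows; the Gaussian shape and its scale are illustrative assumptions, not the anomaly model of the disclosure:

```python
import numpy as np

def width_at_half_max(x, y):
    """Width of a single-lobe signal y at 50 percent of its peak value."""
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    return x[above[-1]] - x[above[0]]

# Synthetic positive-going wavelet: a Gaussian bump on a distance axis
# in feet (shape and scale chosen purely for illustration).
x = np.linspace(-5000.0, 5000.0, 10001)
sigma = 1000.0
y = np.exp(-x**2 / (2.0 * sigma**2))

# For a Gaussian the width at half maximum is 2*sqrt(2*ln 2)*sigma.
fwhm = width_at_half_max(x, y)
print(fwhm)  # 2354.0 on this grid (theory: about 2354.8)
```

In the method described above, this measured width is then read against the calibrated relation between anomaly width and source depth to yield the depth Z.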
Inasmuch as the profiles of FIG. 8 and 9 are all plotted with distance along the line of exploration as abscissae, information as to the location along the profile of the anomaly just described is
also indicated.
Thus, the midpoints of the wavelets of FIG. 9 on which the pick-lines have been illustrated by the vertical lines described above may now be plotted, FIG. 11, with distance for abscissae and depth as
ordinates. The small circles 112m--114m, 116m and 117m are respectively indicative of depths to the center of the respective anomalies determined by the gravity profiles 112--114, 116 and 117 of FIG.
9. It will be observed that there is migration to the left with increase of depth of the center points of the respective anomalies. The information obtained as to the distribution of the anomaly at
each point, since it is gradually increasing in volume, makes it possible to draw, as indicated by the dashed line, the outline of the assumed total anomaly. This dashed curve P shows that the
anomaly comprises a narrow spine extending upwardly toward the surface from one side of a much larger volume, larger with increasing depth. This upwardly rising spine suggests the presence of a salt
dome and thus indicates the desirability of a seismic survey over the region in question. It is in this manner that the present invention greatly extends the information available from gravity
surveys and to a point where complex subsurface anomalies may be identified, such as the salt dome illustrated in FIG. 11.
Further in accordance with the invention, the nature of the residual effects of gravity represents information needed in the interpretation of gravity data. In accordance with the present invention and
particularly the arrangements of FIGS. 7 and 10 a set of smoothed residuals of gravity potential may be readily obtained as illustrated in FIG. 12. Thus, if a switch 97 be in its illustrated
position, FIG. 10, and switch 128 closed with the polarity of the signals from the pickup or playback head 81 reversed from pickup head 75, it will be seen that there will be applied to the summing
amplifier 82 the difference between signals detected by playback head 75 and those from playback head 81. More specifically, signals representative of the smoothed gravity profile 108 of FIG. 8 will
be subtracted from the smoothed gravity profile 102. These, it will be recalled, have sampling intervals respectively equal to 128T and 2T. The result of the described operations is the production on
the recording medium 98 of the smoothed residual gravity potential 122 of FIG. 12. As the selector switch 97 is moved to the left through its switching positions there will be successively generated
the smoothed residual gravity potential curves 123--127 of FIG. 12, each curve of profile having an additional labeling indicating the several subtractions which take place, the final smoothed
gravity potential 108 of FIG. 8 in each case being subtracted in turn from the remaining gravity potentials of FIG. 8.
It is to be noted that in FIG. 8 the potential curves 101--108 have been plotted with each unit of magnitude of ordinates equal to 50 gravity units. In one embodiment of the invention this unit was 1
inch. In FIGS. 9 and 12, for the same embodiment of the invention, the same unit of magnitude for the ordinates is made equal to five gravity units. The distance scale is 10,000 feet per unit. In the present drawings, the scales for the illustrated data have been selected to display the data in much the same form as in the described embodiment.
Now that there has been described an analog system embodying the invention together with the manner in which information can be generated as to depth of anomalies, it will be understood that other
modifications may be made and that certain features of the invention can be utilized without other features. Moreover, additional information can be generated from gravity profiles, particularly
where there is considerable interest in distinguishing between near-surface anomalies of relatively great area and somewhat deeper anomalies of greater density and less area.
It has been found that if the gravity profile 101 of FIG. 8 be differentiated prior to recording on the drum 38, shallow distributed masses will produce components on the resultant gravity curve
having sharp edges which disappear in the later records. Thus one can distinguish between the concentrated deeper masses and the shallow distributed masses. To differentiate signals applied to the
recording head 37, FIG. 7, a differentiator may be included in circuit with the recording head 37. Without such a differentiator, however, it is known that the pickup head 42 reproduces the quantity ∫g(t) dt. By applying this quantity to a differentiator 52a and by closing the switch 52b there will be obtained and applied to the recording head 66 the function g(t). However, since the function g(t) forms the input to the integrator 36, it will be seen that by operating the switch 36a to its uppermost position the integrator 36 will be bypassed and the function g(t) will be directly applied to the recording head 37. Hence, switch 52b may remain open. In this way the differential of the function above described may be utilized in generating smoothed gravity profiles as explained for profiles 102--108 of FIG. 8, and also utilized in the production of difference profiles such as shown in FIG. 9, and also for smoothed residuals of the kind already described in connection with FIG. 12.
Referring again to FIGS. 2--6, it will be remembered that the methods and procedures outlined in connection with FIG. 7 produce a convolution of a function g(t) with the operator of FIG. 3. While in this case the function g(t) is a function of gravity potential, it is to be understood that the method may also be applied to other functions such as x(t) which varies in amplitude along a scale. The convolution will then be with an operator which may comprise at least the two step functions of FIGS. 4 and 5, respectively of positive and negative sign and respectively occurring at different positions along the abscissa scale of the function x(t). This convolution is, according to the right-hand side of the equation of FIG. 6, performed by algebraically adding together at least two functions respectively representing summations of integrals of x(t), these integrals being displaced one from the other along said scale by amounts corresponding with the displacements one from the other of said step functions.
* * * * *
Status Imaging Forum “ADCs for Imagers”
November 15th, 2013
I just want to give the imaging community a quick update on the registration situation for the forum “ADCs for Imagers”.
Because the interest was/is much higher than expected, a second session will be organized (this was already announced earlier), and the number of seats in each session is slightly increased (from 24 to 32).
At this moment registration for the forum is still possible, because :
- for the session on Dec. 16 & 17, 2013, there are still 2 seats left,
- for the session on Dec. 19 & 20, 2013, there are still 3 seats left.
If anyone is still interested in registering, take your chance ! Keep in mind that in 2014 the forum will be organized again, but with a different subject !
Albert, 15-11-2013.
How to Measure Full Well Capacity (2)
November 4th, 2013
In the previous blog, the measurement of the full well capacity (FWC) was explained based on the measurement of the output signal versus the input signal. The input signal was generated by a
constant light source in combination with a varying exposure time. But once all output data is available, not only the average value of the pixels can be calculated but also the temporal noise level
for every pixel. This will be done in this blog.
From the discussion of the photon-transfer curve and its properties, it was learned that when the photon shot noise is the dominant noise source, the following formula can be written down :
n[temp]^2 = k×(S[out] - S[off])
with :
n[temp] : the total temporal noise on pixel level,
k : conversion gain,
S[out] : the average signal on pixel level,
S[off] : offset value, or the average signal for 0 s exposure time.
(Normally I use a sigma-symbol for the noise, but the bloody software does not accept the sigma-symbol.)
So instead of looking at the saturation level of the signal, one can also look at the saturation or the peak level of the noise and try to calculate the FWC based on the noise measurements. The FWC is then defined at the point at which the temporal noise reaches its maximum value.
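By the way, the same relation gives the standard way of estimating the conversion gain k itself : fit a straight line to the variance-versus-mean data in the shot-noise-limited region. A small sketch on synthetic data (all numbers below are made up for illustration) :

```python
import numpy as np

rng = np.random.default_rng(1)
k_true = 3.0     # conversion gain in DN per electron (assumed value)
offset = 2000.0  # dark offset in DN (assumed value)

# Synthetic photon-transfer data: for each mean electron count, simulate
# many pixels, convert to DN, and record the mean and the variance.
means, variances = [], []
for n_e in [100, 300, 1000, 3000, 10000]:
    dn = k_true * rng.poisson(n_e, size=100_000) + offset
    means.append(dn.mean())
    variances.append(dn.var())

# Shot-noise limit: variance = k * (mean - offset), so the slope of the
# variance-versus-mean line recovers the conversion gain k.
slope, intercept = np.polyfit(means, variances, 1)
print(round(slope, 2))  # close to 3.0
```

On real sensor data, only the shot-noise-limited part of the curve should go into the fit; points near saturation must be excluded for exactly the reasons discussed below.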
1) Saturation of the sensor is larger than maximum value of the ADC.
In such a case, most of the time the maximum value of the sensor or camera ADC is set such that the complete ADC range covers the linear part of the sensor’s output response. An example of a camera
in which the ADC defines the maximum output level of the system is shown in Figure 1, where the sensor noise variance is shown as a function of the exposure time (the data already collected in the
previous blog is reused here).
Figure 1 : Noise variance as a function of exposure time, under a constant illumination level.
Shown is the temporal noise variance as a function of the exposure time at a constant illumination level (the exact value of the light input is not important for this measurement, as long as it stays
constant). As can be observed, the transition from a monotonically increasing output value of the variance to zero goes pretty abruptly. This is a clear indication that the ADC defines the
saturation level. Moreover, the peak value of the temporal noise variance is equal to 194,600 DN.
For this example, the definition of the full well capacity is equal to the variance peak value divided by k, minus the offset value of the noise variance at 0 s exposure time divided by k, or
(194,600/k) – (7667/k) = 63,887 – 2517 = 61,370 DN.
Taking into account the conversion gain of the sensor (3.046 DN/electron, for the TIFF format it is 64x larger than what can be measured at the output of the sensor), this results in a FWC = 20,148 electrons.
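For those who want to redo the arithmetic of this noise-based definition, a few lines of Python with the numbers quoted above :

```python
k = 3.046           # conversion gain, DN per electron (from the text)
peak_var = 194_600  # peak of the temporal-noise variance
offset_var = 7_667  # noise variance at 0 s exposure time

fwc_dn = peak_var / k - offset_var / k
fwc_electrons = fwc_dn / k
print(round(fwc_dn), round(fwc_electrons))  # 61370 20148
```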
2) Saturation of the sensor is smaller than maximum value of the ADC.
In this case, the FWC needs a clear definition : is FWC referring to the saturation level of the sensor/camera, or is FWC referring to the maximum linear part of the sensor’s output swing ? The
former can be referred to as FWC[sat], while the latter can be indicated by FWC[lin]. But now the question arises : how to define the linear part of the sensor’s output swing ? In the previous blog,
FWC[lin] was set at the point where the sensor’s output deviated maximum 3 % of the linear behavior. Taking that definition and transferring it to the noise variance measurement, now FWC[lin] will
be defined at the point where the noise variance deviates maximum k×(3 %) = 4.5 %.
An example of a camera in which the ADC maximum output value is larger than the saturation level of the sensor is shown in Figure 2.
Figure 2 : Noise variance as a function of exposure time, under a constant illumination level.
Shown is the temporal noise variance as a function of the exposure time with a constant illumination level (the exact value of the light input is not important for this measurement, as long as it
stays constant). As can be observed, the transition from a monotonically increasing output value to a decrease of the noise variance goes smoothly. This is a clear indication that the ADC is NOT
defining the saturation level of the system.
For this example, the definition of the full well capacity at saturation is equal to the maximum level of the noise variance divided by k, minus the offset of the noise variance measured at 0 s
exposure time and also divided by k, or (41,260/k) – (2389/k) = 27,013 DN. Taking into account the conversion gain of the sensor (1.493 DN/electron), this results in a FWC[sat] = 18,093 electrons.
But as mentioned before, this is only half of the story, because the sensor’s response is very nonlinear close to saturation. For that reason the linearity (INL) of the sensor is characterized and
plotted in Figure 2 as well. At the point where the real output characteristic deviates 4.5 % from its regression line, the FWC[lin] is defined. In this example, the following number can be found :
(39,780/k) – (2389/k) = 25,044 DN, translating in FWC[lin] = 16,774 electrons.
It should be clear that this last number very much depends on the definition of FWC[lin]. If the 4.5 % deviation is changed to 1.5 %, the value for the FWC[lin] will become smaller; if the 4.5 % deviation is changed to 7.5 %, the opposite becomes valid.
Note : the data shown in Figures 1 and 2 are obtained from the same sensor, with the same light input. The difference between the two measurements is a difference in camera setting, such that the
analog gain of the sensor and the reference voltage of the ADC result in an overall camera gain difference of a factor of 2.
Explained in this blog is the measurement of FWC based on noise variance. Again it can be learned that the values obtained for the FWC strongly depend on the exact definition of the full well
capacity. Lesson to take away : if the FWC is specified in an image sensor’s datasheet, first ask yourself “How is the FWC defined ?”.
See you next time !
Albert, 04-11-2013.
Post-Doc Opening at TU Delft
October 13th, 2013
On very short notice, I will start a couple of new projects at TU Delft. The main emphasis of the projects will be ultra-low-noise CMOS image sensors. A post-doc position is open for a project leader for these projects. Preferably, the candidate for this position has a background (= PhD degree) in solid-state image sensing and/or mixed-signal design. Those who might be interested can contact me directly : a.j.p.theuwissen at tudelft.nl, and it would be very helpful if you could send me your resume right away.
Albert, 13-10-2013.
How to Measure Full Well Capacity (1)
September 27th, 2013
The next parameter to be characterized is the full well capacity of the sensor. But before any measurement or characterization can be done, it is important to make clear what the definition of the full well capacity (FWC) is. To come to that point, let’s treat two different situations : the first one in which the ADC sets the saturation of the sensor, and the second one in which the ADC does not set the saturation of the sensor.
1) Saturation of the sensor is larger than maximum value of the ADC. In such a case, most of the time the camera/sensor designer sets the maximum value of the ADC such that the complete ADC
range covers the linear part of the sensor’s output response. An example of a camera in which the ADC defines the maximum output level of the system is shown in Figure 1.
Figure 1 : Sensor output value as a function of exposure time, under a constant illumination level.
Shown is the sensor output as a function of the exposure time with a constant illumination level (at this stage of the discussion, the exact value of the light input is not important for this
measurement, as long as it stays constant, so that various settings of the camera and/or sensor can be compared with each other). On the right axis the integral non-linearity is shown as well. As
can be observed, the transition from a monotonically increasing output value to saturation goes pretty abruptly. This is a clear indication that the ADC defines the saturation level. Moreover, the
value of the saturated output is equal to 2^16 – 1 = 65535 DN (2^16 is coming from the TIFF format).
For this example, the definition of the full well capacity is equal to the saturation level (of the ADC) minus the offset at zero seconds exposure time, or 65535 – 2106 = 63429 DN. Taking into
account the conversion gain of the sensor (3 DN/electron, for the TIFF format it is 64x larger than what can be measured at the output of the sensor), this results in a FWC = 19820 electrons.
2) Saturation of the sensor is smaller than maximum value of the ADC. In this case, the FWC needs a clear definition : is FWC referring to the saturation level of the sensor/camera, or is FWC
referring to the maximum linear part of the sensor’s output swing ? The former can be referred to as FWC[sat], while the latter can be indicated by FWC[lin]. But then the question arises : how to
define the linear part of the sensor’s output swing ? Very often, FWC[lin] is defined at the point where the deviation of the sensor’s output and an ideal straight line is maximum 3 %, or at the
point where the sensor’s output is linear up to 97 % or better. An example of a camera in which the ADC maximum output value is larger than the saturation level of the sensor is shown in Figure 2.
Figure 2 : Sensor output value as a function of exposure time, under a constant illumination level.
Shown on the left vertical axis is the sensor output as a function of the exposure time with a constant illumination level (the exact value of the light input is not important for this measurement,
as long as it stays constant), shown on the right vertical axis is the corresponding integral non-linearity (INL). As can be observed, the transition from a monotonically increasing output value to
saturation goes smoothly. This is a clear indication that the ADC is NOT defining the saturation level of the system.
For this example, the definition of the full well capacity at saturation is equal to the saturation level minus the offset at zero seconds exposure time, or 51880 – 1602 = 50278 DN. Taking into
account the conversion gain of the sensor (1.5 DN/electron), this results in a FWC[sat] = 33518 electrons.
But as mentioned before, this is only half of the story, because the sensor’s response is very nonlinear close to saturation. For that reason the linearity (INL) of the sensor is characterized and
plotted in Figure 2 as well. At the point where the real output characteristic deviates 3 % from its regression line, the FWC[lin] is defined. In this example, the following number can be found :
45860 – 1602 = 44258 DN, translating in FWC[lin] = 29505 electrons.
It should be clear that this last number very much depends on the definition of FWC[lin]. If the 3 % deviation is shifted to 1 %, the value for the FWC[lin] will become smaller; if the 3 % deviation is shifted to 5 %, the opposite becomes valid.
Note : the data shown in Figures 1 and 2 are obtained from the same sensor, with the same light input. The difference between the two measurements is a difference in camera setting, such that the
analog gain of the sensor and the reference voltage of the ADC result in an overall camera gain difference of a factor of 2.
Explained in this blog is the measurement of FWC based on linearity measurements. Again it can be learned that the values obtained for the FWC strongly depend on the exact definition of the full
well capacity. Lesson to take away : if the FWC is specified in an image sensor’s datasheet, first ask yourself “How is the FWC defined ?”.
See you next time !
Albert, 27-09-2013.
Playing Time (3)
September 9th, 2013
Once more thanks for all the reactions.
I checked the reactions again this morning, and it is clear that the right answer/suggestion came from David San Segundo Bello (imec). Already in one of the very first reactions, he mentioned a
possible drift of the LED light source due to an AC variation on top of the DC voltage. Afterwards Guy Meynants (CMOSIS) repeated David’s answer, but also added the method to check it, namely by means of noise measurements. That actually completed the story. So I think it is fair to give both guys a bottle of wine.
Albert, 09-09-2013.
Playing Time (2)
September 6th, 2013
It was surprising to see the amount of reactions on the previous “Playing Time” blog. Thanks for all the remarks, questions, suggestions, also through the www.image-sensors-world.blogspot.com website.
Remember what the issue was : measurements were done with a constant LED light input at two different settings :
- gain = 1, exposure time = 42.24 ms, 100 images, and,
- gain = 4, exposure time = 10.56 ms, 100 images.
A constant switch between the two settings was realized, and in total 21 (times 100 images) measurements were done to check the reproducibility. Based on the numbers shown, one would expect the same
output in all situations.
A first issue, being the offset that does not scale with the gain, was corrected by subtracting the (separately measured) offset for all measurements. A second issue, being the incorrect ratio of the gain setting (theory 1:4, reality 1:3.8), was corrected, and then the final result is shown in Figure 1.
Figure 1 : Sensor output values (corrected for the offset and corrected for incorrect gain setting) as a function of measurement number (each dot represents the average value of a 50×50 ROI of 100 images).
To find the root cause of the fluctuations of the output signal, the temporal noise on pixel level is calculated; this excludes the FPN of the pixels. The result of the measurement is shown in
Figure 2.
Figure 2 : Noise calculations of the measurements performed in Figure 1 (each dot represents the average noise value of a 50×50 ROI of 100 images).
Notice that, although the output signal in both cases is expected to be the same, that is not the case for the noise ! The temporal noise is dominated by photon shot noise and if the cases of “gain
= 1” are set as the references, then the photon shot noise (expressed in electrons) for the cases of “gain = 4” is a factor of 2 less in the charge domain (4 times fewer photons). But with a gain
setting that is 4 times higher, the temporal noise in the digital domain becomes a factor of 4 higher. Compared to the cases of “gain = 1”, the noise in the digital domain of the cases of “gain = 4”
will be factor of 2 higher. This is pretty much the case for the measurements (proving that the measurements and calculations were done right).
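The factor-of-2 argument can be checked with a short numeric sketch (the electron count below is an illustrative assumption, not a value measured on this sensor):

```python
import math

# Illustrative electron count at gain = 1 (assumed, not measured here).
n_e_gain1 = 20000.0                  # electrons at gain = 1, t_exp = 42.24 ms
n_e_gain4 = n_e_gain1 / 4.0          # 4x shorter exposure -> 4x fewer electrons

# Photon shot noise in the charge domain scales as sqrt(N):
shot_e_gain1 = math.sqrt(n_e_gain1)  # electrons
shot_e_gain4 = math.sqrt(n_e_gain4)  # a factor of 2 lower (sqrt of 4)

# The analog gain multiplies the noise on the way to the digital domain:
shot_dn_gain1 = 1.0 * shot_e_gain1
shot_dn_gain4 = 4.0 * shot_e_gain4

print(shot_dn_gain4 / shot_dn_gain1)  # factor of 2 higher, as argued
```

Note that the ratio is independent of the assumed electron count, which is why the pattern shows up so robustly in Figure 2.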
If the noise of the “gain = 4” cases is reduced by a factor of 2, and if then the noise results are plotted together with the output signals of the sensor, Figure 3 is generated.
Figure 3 : Measured output signal (corrected where needed) and calculated temporal noise (adapted where needed) of the measurements performed.
With a bit of imagination, the pattern that was present in the measurements of the output signal can also be found in the calculated temporal noise. There is enough correlation between the two to
conclude that the changes in the output signal were coming from the light source, because the fluctuations are reflected in the (photon) shot noise results as well. (After some experiments, it was
found that the power supply for the LEDs was causing the issues, because its output voltage was not stable over time.)
This exercise illustrates the power of using noise measurements as a diagnostic tool !
Albert, 06-09-2013.
Status Imaging Forum “ADCs for Imagers”
September 3rd, 2013
Due to the large interest in the first Imaging Forum “ADCs for Imagers”, two sessions are now scheduled : the first one on 16 & 17 Dec. 2013, the second one on 19 & 20 Dec. 2013. Unfortunately a
few people had to cancel their pre-registration, so a few seats (for both sessions) are available again for those who are still interested. Please note that a third session will not be organized.
In the year 2014, another forum will be planned, but with another subject !
Albert, 3-9-2013.
Playing Time (1)
August 19th, 2013
While doing all these non-linearity measurements, I came up with a kind of test to check out the reciprocity. What is reported here is not exactly what is meant by the definition of
reciprocity, but it is related to it.
What is done is the following :
- the sensor is illuminated with a fixed DC-powered LED light source,
- at a camera gain setting equal to 1 (minimum value), the exposure time is adjusted such that the output value is about 75 % of saturation. Under the conditions present, the exposure time turned out
to be 42.24 ms,
- the output signal of the sensor as well as its offset value were measured by means of calculating the average values over a 50 x 50 window and using 100 images,
- next the gain of the camera is set to 4 (maximum value), the exposure time is reduced by a factor of 4 to 10.56 ms,
- the output signal of the sensor as well as its offset value were measured again over the same window and again using 100 images,
- the above sequence of switching between minimum gain and maximum gain (with adjusted exposure time) was repeated 10 times.
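If the output is modeled as proportional to gain × exposure time (the proportionality constant below is an arbitrary assumption), the two settings should give identical outputs, and the effective gain of about 3.81 found later in this post predicts a few-percent mismatch:

```python
# Output (in DN) modeled as gain * flux * t_exp; the flux constant is arbitrary.
flux = 1.0                          # illustrative proportionality constant

out_low  = 1.0 * flux * 42.24       # gain = 1, t_exp = 42.24 ms
out_high = 4.0 * flux * 10.56       # gain = 4, t_exp = 10.56 ms
print(out_low, out_high)            # equal if the gain setting is exact

# With the effective gain of ~3.81 reported below, the outputs no longer match:
out_real = 3.81 * flux * 10.56
print(out_real / out_low)           # roughly 0.95, i.e. about 5 % low
```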
The result of this measurement is shown in Figure 1.
Figure 1 : Sensor output value (corrected for the offset) as a function of measurement number, for each measurement the camera gain and exposure time are matched to each other.
On the horizontal axis the measurement number is shown, on the vertical axis the corrected sensor output is shown. A couple of observations can be made :
- the sensor output values obtained at low gain, high exposure time are not equal to each other. Despite the large amount of data that is averaged, quite a bit of noise is still present,
- the sensor output values obtained at high gain, low exposure time are not equal to each other either,
- in principle a change in gain would be perfectly compensated by an inverse change of the exposure time, but this cannot be seen in the measurements either. From other measurements it could be learned
as well that the ratio of sensor outputs does not exactly match the ratio of the camera gain settings. So if the gain = 1 setting indeed corresponds to a gain of 1, the gain = 4 setting does not exactly
equal a gain factor of 4, but comes much closer to 3.81.
The same data as present in figure 1 is repeated in figure 2, but now the regression line is calculated for the two sets of data (low gain and high gain).
Figure 2 : Same data as present in figure 1, but now with the regression lines added.
A more-than-interesting remark can be made now that the regression lines are added : there seems to be a pattern in the deviations of the measured data from the regression lines. Any idea
where this effect is coming from ? You get a bottle of good French wine for the correct explanation !
Albert, 20-08-2013.
Registration for the Imaging Forum “ADCs for Imagers”
July 26th, 2013
The registration for the forum is going very fast. The first session scheduled for Dec. 16 & 17, 2013 is fully booked by now. All new registrations for the forum that come in are automatically
linked to the second session of the forum scheduled for Dec. 19 & 20, 2013. As mentioned earlier, the second session will follow the same agenda as the first one. Location as well as speaker will
also be the same. For those of you still interested in the forum, please note that at this moment a third session is not scheduled.
Albert, 26/7/2013.
First IMAGING FORUM, Dec. 16th-17th, 2013.
July 22nd, 2013
A few months ago I announced the first imaging forum that will focus on “ADCs for Image Sensors.” In the meantime the agenda is fixed, and registration for the forum is open.
Because of the great response after the first announcement, a second session of the forum is scheduled to take place on Dec. 19th-20th, 2013. Same location, same agenda, same speaker.
The agenda as well as the registration information can be found at : www.harvestimaging.com/forum.php
Albert, 22-07-2013.
Re: st: svy commands for random effects model
From: jpitblado@stata.com (Jeff Pitblado, StataCorp LP)
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: svy commands for random effects model
Date: Mon, 20 Nov 2006 15:06:11 -0600
Robert Bozick <rbozick@jhu.edu> wants to fit a random effects model using
survey data:
> I am working on an analysis that will require the use of a random effects
> model on a data set that includes two observations per individual. The
> general form of the code I am using to estimate the model is:
> xtreg y x1 x2 x3, re i(id)
> (id = the case id for each individual)
> The sample uses a stratified cluster design and I therefore have to use the
> svy commands when analyzing this data set.
> I am using Stata 8.0 and could not find any documentation on the application
> of svy commands with estimating a random effects model.
> Is there a svy command or some other procedure available so that I can
> obtain the correct standard errors for this model?
There are no estimation commands (in Stata 9 or earlier) that will fit a
random effects model using the linearized variance estimator for survey data.
If Robert were using Stata 9 and was willing to use -xtreg, mle- to fit the RE
model, he could use the -svy jackknife- prefix command to get design-based
variance estimates via the jackknife.
NOTE: The -xtreg, re- command does not accept weights, but
-xtreg, mle- allows -iweight-s.
If Roberts survey design variables were named swgt (sampling weight), strid
(strata id variable), and psuid (PSU id variable), he would type
. svyset psuid [iw=swgt], strata(strid)
. svy jackknife _b: xtreg y x1 x2 x3, mle i(id)
I'm assuming that -id- is nested within -psuid- (panels are nested within the PSUs).
OCR for page 41
About this PDF file: This new digital representation of the original work has been recomposed from XML files created from the original paper book, not from the original typesetting files. Page breaks are true to the original; line lengths, word breaks, heading styles, and other typesetting-specific formatting, however, cannot be retained, and some typographic errors may have been accidentally inserted. Please use the print version of this publication as the authoritative version for attribution.
APPENDIX B
ANALYSES OF SOUND SPECTROGRAMS OF “HOLD EVERYTHING...”
B-1. TIME AND FREQUENCY ANALYSIS
Several sound spectrograms were made of the first and second halves
of the “hold everything...” expression on Channels I and II. Two of these pairs are given in Figures B-1 and B-2. Although some similar features can be observed in comparing the two channels in
Figures B-1 and B-2, it is difficult to tell if the similar features occur somewhat at random or if corresponding features occur at corresponding times over the entire 3 1/2 second duration of the
message, as must be the case if the corresponding features are associated with the same transmission. For this reason, the following analysis was made of two successive pairs of sound spectrograms
which were butted together, with an overlapping sound spectrogram being used to ensure that the sound spectrograms were combined properly. The result is shown in Figure B-3. Twenty-seven
corresponding features have been marked on Channels I and II in Figure B-3. Since the timings of the corresponding features were to be studied later, two observers were used in the identifications to
diminish the danger that human prejudice on the timing would affect the identification. The first observer, looking at the sound spectrograms of both Channels I and II but making no measurements,
marked on Channel II 27 points which he felt were sufficiently characteristic and sufficiently well reproduced on Channel I to be identifiable there by an independent observer. Then a series of 27
xerographic copies were prepared of different portions of Channel II, extending 1/2 second to each side of the single identified characteristic and with no indication of time scale on any of the
Channel II strips. These strips and the Channel I sound spectrogram were presented to a second observer who was asked to mark on the Channel I sound spectrogram what he considered to be the most
similar characteristic to the one marked on each Channel II strip. He was asked to do so by pattern recognition and not by measurement. His marks located the black dots on the Channel I tape of
Figure B-3. It was found that the
second observer correctly identified 26 out of the 27 characteristics. In the one case of disagreement (characteristic I) the second observer subsequently agreed that
the intended identification was better than the one he selected. Only after all the identifications had been made were the times and frequencies of each characteristic measured and recorded in Table
B-1. These are plotted in Figure 4. It can be seen that the points fall markedly close to a straight line with the only exception being the misidentified characteristic I.
A straight line of the form T'=α+βT"+u was fit to the (T', T") data. Under the copy hypothesis that the signal on Channel I is a noisy copy of that on Channel II, the
values of u are determined by measurement errors in the presence of noise, and there may occasionally be an outlier due to the matching noncorresponding features on the two channels. The robust
linear regression routine RLIN in the Minitab 80.1 interactive statistical package [5] yields the estimated fit T' = −0.0253 + 1.0599T" + u* A sequence of regressions in which outliers are dropped one or two
at a time yields the fit T'= -0.0216+1.0593T"+u. Here, the points 9, 11, 13, 18, and 19 were dropped. The standard deviation of the fitted residuals of the remaining 22 points (adjusted for 20
degrees of freedom) is su=0.0092 and the estimated standard deviations of the two coefficients above are 0.0037 and 0.0016, respectively. The five outliers in the column labeled u=∆T are marked with
a D. All other values of u are less than 0.015 in absolute value. The ratios R=F"/F' of the measured frequencies at the paired points in the two channels were computed. A sequence of averages in
which outliers are dropped eliminates four ratios numbered (1, 5, 8, 13) and yields an average and standard deviation sR=0.0272. is an estimate of β. The standard deviation of is so that is a less
accurate estimate of β than that derived from the regressions above. The values of v = R − 1.064 = ∆R
are listed with the outliers marked by a D. Finally we calculate and list Except for the two outliers, marked D, corresponding to points (1, 5) all of the values of ∆F
are less than 0.09 kHz in absolute value and have a sample mean of −6.92 Hz and standard deviation of 48 Hz. These data are consistent with the copy hypothesis, a probability of about 1/4 or less of
an incorrect match and relatively small measurement errors in the time and frequency measurements. To be more specific, let us suppose (i) that the Channel II markings are precise, (ii) the Channel I
markings may be either wrong or correct, but displaced by an amount due to the noise, and (iii) each measurement has a reading error. For example suppose where t' and t" are the exact times of the
corresponding events, un is the contribution of the distortion due to noise, T' and T" are the observed times, and and are the reading errors. Then and, assuming independence of the residuals, (The
lack of statistical independence between T" and u=∆T raises a technical problem which is minor in the present context and won't be discussed here).
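The regression model T' = α + βT" + u can be illustrated with a small simulation on synthetic times (the 0.01 s noise level matches the residual standard deviation quoted above; the data are generated, not the Committee's measurements):

```python
import random

random.seed(1)

alpha, beta = -0.022, 1.059          # close to the fitted coefficients in the text
t2 = [0.1 * k for k in range(28)]    # synthetic Channel II times, 0 to 2.7 s
# Observed Channel I times: the line plus noise/reading error of about 0.01 s.
t1 = [alpha + beta * t + random.gauss(0.0, 0.01) for t in t2]

# Ordinary least-squares fit of t1 on t2 recovers alpha and beta.
n = len(t2)
mx, my = sum(t2) / n, sum(t1) / n
b = sum((x - mx) * (y - my) for x, y in zip(t2, t1)) / sum((x - mx) ** 2 for x in t2)
a = my - b * mx
print(round(a, 3), round(b, 3))      # close to (-0.022, 1.059)
```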
Because the process of discarding outliers tends to bias the estimated standard deviation downwards, one would expect σu to be about 0.01 which is consistent with and
although other combinations are also plausible considering, the data and the measurement techniques. The five outliers, one of which is much larger than the others, suggest that the probability of
incorrect match may be as large as 1/4. A similar analysis may be applied to the frequencies. If we write where f' and f" are the exact frequencies, vn is the contribution of the distortion due to
noise, F' and F" are the observed frequencies, and and are the reading errors. Then where the probability distribution of ν=∆R can be approximated by one with mean 0 and standard deviation which
averages out to approximately where is the average of the (F')^−1 values. Also ∆F = F' − β^−1F" has mean 0 and standard deviation
the relation is approximately maintained by the estimates. Could the observed coincidences have occurred even if the message on Channel I were not a copy of that on
Channel II? Suppose that as an alternative hypothesis we assume that it was a different message and the time coincidences took place because the features marked maxima, minima, flats, and downward
slopes occur frequently on Channel I and a similar feature could, at random, be close by to one being sought. For example, there are 18 peaks in a 3.6 second interval. Thus at random, peaks would
occur at an average spacing of 0.2 seconds and, according to the Poisson process calculation, the probability of at least one peak within a time of δ seconds from a specified time would be p = 1 − e^(−2δ/0.2). The frequencies of the other features are no greater than that of peaks; hence, the probability of a coincidence within |∆T| ≤ 0.015 is p = 1 − e^(−0.15) = 0.14. We have 22 such coincidences out of 27
trials. Granted that we selected estimates of α and β to increase the number of such coincidences, we may, to be conservative, eliminate two of these coincidences. We then have 20 out of 25
coincidences. Assuming independence, the number of such coincidences has a binomial distribution with mean 25×(0.14) = 3.5 and standard deviation 1.73. Then 20 is 9.51 standard deviations away from the
mean and the probability of getting as many as 20 coincidences is about 2.1×10^−13. Note that 25 of 27 values of ∆F are less than 0.1 kHz in absolute value. If each of these ∆F were uniformly
distributed over a narrow range of ±0.3 kHz, the probability of 25 or more independent absolute values less than 0.1 would be very small (2×10^−10). In fact, this probability would be 0.001 even if
the range of values of ∆F were cut in half to ±0.15 kHz.
The sound spectrograms shown in Figure 4 are similar to those in Figure B-3 except that one recording is slowed down 6.7% to bring the ratio of apparent recorder speeds closer to unity. The black dots indicate the same features as in Figure B-3 except for a few points, such as I, that have been adjusted for a better fit in Figure 4.
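The Poisson and binomial figures in the probability argument above can be reproduced numerically (using the same constants quoted in the text):

```python
import math

# Probability of a random peak within delta = 0.015 s of a given time,
# with peaks spaced 0.2 s apart on average:
delta = 0.015
p = 1.0 - math.exp(-2.0 * delta / 0.2)
print(round(p, 3))                       # 0.139, i.e. about 0.14

# Binomial model: 25 trials, 20 observed coincidences.
n, k = 25, 20
mean = n * 0.14                          # 3.5
sd = math.sqrt(n * 0.14 * 0.86)          # about 1.73
z = (k - mean) / sd
print(round(z, 2))                       # 9.51 standard deviations

# Exact upper-tail probability P(X >= 20):
tail = sum(math.comb(n, j) * 0.14 ** j * 0.86 ** (n - j) for j in range(k, n + 1))
print(tail)                              # on the order of 2e-13
```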
B-2. MEASUREMENTS OF EASILY IDENTIFIED FREQUENCY RATIOS ON SOUND SPECTROGRAMS
A casual inspection of the original sound spectrograms of sections of Channel I and
Channel II recordings for the time interval identified as containing the phrase “hold everything...” show marked similarities, but with the most clearly defined frequencies on Channel I being
somewhat lower than those on corresponding sections of Channel II. Since the analysis of the preceding section shows that the measured times between corresponding events on Channel I are longer than
on Channel II, by about 6%, it seemed worth measuring the frequency ratios of corresponding signals that were particularly well suited for frequency measurement; if the two sound spectrograms really
did originate from a single 3.5 second long signal on Channel II, which was fed by cross talk onto Channel I, then the frequency ratio must depart from unity by that same approximately 6%. This was
our working hypothesis at the time, so the frequency ratio measurements provided a test of the hypothesis—if the frequency ratio was not approximately 1.06 the hypothesis would have been totally
disproved. One of the Committee members, therefore, measured the frequency ratio at five corresponding sections of the records. The sections to be measured were selected by a simple criterion that
can be used by any interested person. The frequency must stay constant (a horizontal band, by visual inspection) for at least 1/30 second, and it must be clearly visible on both channels at
corresponding times. It is not required that the frequency bands originate from speech components of the signals on Channel II. Anyone listening to this section of Channel II will hear, in addition
to the sentence starting with “Hold everything secure...,” a number of tones that are both amplitude and frequency modulated. These tones are as useful as the speech components in proving that a
signal on Channel II was imprinted by cross talk onto Channel I at the time of the conjectured “shots”. The above mentioned criterion was satisfied by five sections of the two tapes, which are
identified by their original times T', on Channel I. They are as follows:
Section 1: centered at T' = 0.67 seconds
Section 2: centered at T' = 2.19 seconds
Section 3: centered at T' = 3.13 seconds
Section 4: centered at T' = 3.31 seconds
Section 5: centered at T' = 3.52 seconds
The measurement of each frequency was made in the following way: an
indentation was made in the surface of the glossy print, near the center of each “band,” with a sharp point. The observer then looked at the mark, to check that it was as nearly centered as possible
in the vertical direction. On the few occasions that it appeared to be above or below the center of the band, a new mark was made, and checked to be adequately centered. Only after the observer was
satisfied that he had placed ten marks correctly— one for each of five bands, on two spectrograms—did the measurements begin. The measurement consisted of a linear interpolation between adjacent
kilohertz lines using a millimeter scale as the measuring device. The following five ratios came out of the measurements just as described:
Section 1: 1.054
Section 2: 1.066
Section 3: 1.065
Section 4: 1.052
Section 5: 1.067
Mean Value: 1.061 ± 0.007
This value is consistent with the time ratio 1.059±0.002 found from the slope of the line relating the time coordinates on the two channels in Figure 5. Another Committee member made independent measurements of the average of the same frequency ratios and found a mean value of 1.063±0.007.
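The quoted mean and uncertainty follow directly from the five measured ratios:

```python
import math

ratios = [1.054, 1.066, 1.065, 1.052, 1.067]    # the five section measurements
n = len(ratios)
mean = sum(ratios) / n
# Sample standard deviation (n - 1 in the denominator).
sd = math.sqrt(sum((r - mean) ** 2 for r in ratios) / (n - 1))
print(round(mean, 3), round(sd, 3))             # 1.061 0.007
```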
In view of the close agreement between this pair of independent measurements, we conclude that the mean frequency ratio is R=1.062±0.007. The excellent agreement between the time-derived and the frequency-derived ratio of tape speeds lends strong support to the hypothesis that the “hold everything...” signals observed on Channel I were imprinted by cross talk from Channel II.
B-3. ALTERNATIVE TIME AND FREQUENCY ANALYSES OF SOUND SPECTROGRAMS
The analyses in Appendixes B-1 and B-2 may be subject to some criticism. A certain amount of
subjectivity derives from the fact that the first observer was looking at the sound spectrograms from both channels while he marked points on Channel I. The strips in Channel II were one second wide,
which is a substantial portion of the entire 3.5 second spectrogram. Consequently the 27 strips had large overlapping parts. To the extent that observer 2 recalled what he did on previous matches or
to the extent that he used the same cues in the overlapping portions, the resulting times were dependent observations. A theory that uses estimates and conclusions based on independence assumptions
may overestimate the significance or accuracy of these conclusions and estimates. However, this experiment was supplemented by several variations that derived similar results. Some of these were more
careful to avoid the subjectivity and to reduce considerably the dependence aspects of the experiment presented here. These are not reported in detail, because they were carried out using xerographic
copies of photographs using several scales, and relatively crude measuring instruments (graph paper in place of rulers). A presentation here would be more complex and the photographs would lack
clarity.
a) Initial experiments
In chronological order, an initial experiment was carried out where 28 pairs of corresponding points were measured on both Channel I and Channel II by an observer who
studied both spectrograms simultaneously for characteristic features. A least squares analysis of these highly subjective data gave the fitted relation T'=−0.0402+1.0673T" and the ratios of the
observed frequencies R=F"/F' averaged to 1.0728.
A “robust” analysis of the pairs (T', T") in the first experiment, where three outliers were discarded, gave the estimated relation T' = −0.0235 + 1.0633T" + u where the
residual u had estimated standard deviation 0.0159 and the estimated standard deviation of the coefficient β of T" was 0.0028. An alternate robust linear regression, implemented on the Minitab-1980
interactive statistical package [5] under the command RLIN, gave T' = −0.0295 + 1.0626T" + u. A second experiment was carried out by an observer who measured the central frequencies of 5 distinct pairs of broad horizontal
sections appearing at comparable times and with relatively high frequencies. The ratios of these central frequencies R averaged and had sample standard deviation 0.0072.
b) A more objective experiment on the timing
At this point a more objective procedure was carried out using xerographic copies of a reduced photograph of the spectrograms. The observer was given the experimenter's
explanation of the theory that messages were broadcast on Channel II and picked up by the stuck microphone located near a receiver of Channel II. The observer was shown copies of Channel I and II for
two other messages that had been well duplicated; Y—“You want...Stemmons” and S—“Says they came from....” It was explained that dark portions meant loud signals and sharp changes that were dark would
probably be well reproduced under the theory. The observer was asked to mark about 20 spots on Channel II that would be likely to be well reproduced. The observer was not given an opportunity to
study Channel I of H—the spectrogram suspected of being “Hold everything secure....” Twenty strips of Channel II of H, each between 0.2 and 0.3 seconds long, were reproduced by xerox with the marked
point in the center. The estimate
was used to locate corresponding points on Channel I. Strips of a xerox of Channel I were cut out. These strips were 3/4 second long and were centered at a point
displaced from by a random quantity uniformly distributed on the interval (−.3, .3) in seconds. Corresponding strips were paired and these pairs were arranged in random order. A second observer was
asked to align the two strips of each pair and to locate on Channel I a vertical mark corresponding to the time of the mark in Channel II. This observer was allowed to use as much context as was
available, in the approximately 0.3 second of Channel II and 0.75 seconds of Channel I in the pair, to help him make the mark. It was not necessary for him to find a feature corresponding to the
point marked. He, too, had the theory explained to him, and he was informed that there might be a consistent difference in the frequencies on the two channels. This experiment requires some balance
in selecting the widths of the strips. If both strips are too narrow, one is bound to get (T', T") points that lie close to the line T'=1.0673T"−0.0404 and a good fit will not be convincing. If the
strip on Channel II is too narrow and that of Channel I is very wide, it will be very easy for the observer to be misled by similar characteristics elsewhere. This would reduce the efficiency and
power of the experiment. If the strip on Channel II is wide, then the different strips will overlap substantially and memory and the cues the observer uses may make results on different strips
dependent. As the experiment was carried out the 20 strips of Channel II had pairs with some overlap, but in the random order of presentation these small strips looked quite distinct. When the times
were measured, the deviations, ∆T=T'−(1.0633T"−0.0235), between the measured time in Channel I and the time anticipated by the robust estimate of the straight line, were calculated. Thirteen of these
were no larger than 0.054 seconds, one was 0.075 seconds, and the remaining 6 were 0.203 seconds or more. The mean and standard deviation of the thirteen smaller deviations were −0.016 seconds and 0.026
seconds. The root mean square deviation was 0.029 seconds. These results are consistent with the copy hypothesis if one anticipates misclassifications about 1/4 of the time and measurement error due
to noise and measurement accuracy of about 0.03 second (about 0.07 inch on the scales used).
Under the randomness alternative hypothesis, that the two messages are unrelated and any matching of features is randomly located, one may estimate the probability
of being within 0.054 seconds of the expected point to be about 0.2.* The number of such coincidences out of 20 independent trials would be binomially distributed with mean 4 and standard deviation
1.79, and 13 successes corresponds to (13−4−.5)/(1.79)=4.75 standard deviations from the mean and is highly unlikely. Moreover, subtracting 2 of the 13 successes to compensate for the choice of the
linear fit would still make this match very unlikely. Then we would have (11−4−.5)/1.79=3.63 standard deviations with P=0.0006. The poor quality of the xerographic copies with which this experiment
was carried out and the low-quality measuring instruments explain in part why the standard deviations of the observed discrepancies were much larger than those observed with the data presented in
Table B-1.
c) A more objective experiment on the frequencies
The experimenter selected 14 dark horizontal bands on a xerox copy of Channel II. The time points T" of these horizontal bands were
measured. Corresponding times on Channel I given by were located. The subject was requested to mark the central frequency of the bands on Channel II. Then the subject was requested to locate bands on
Channel I at the times marked and to mark the central frequency. These central frequencies were measured and labeled F1 and F2 for the two channels. The ratio R=F2/F1 was calculated and ranged from
1.337 to 1.024. Deleting 4 outliers, the average was and the sample standard deviation was sR=0.0116.
*Under the randomness hypothesis, the distribution of the discrepancy corresponds to the sum of the off-center random displacement (uniform from −.3 to .3) and an independent random choice in (−.375, .375) along the Channel I strip. Since this latter choice is almost uniform except for a possible bias toward the center, it was modeled as the sum of two uniforms from (−.3 to .3), which has a symmetric triangular distribution from −.6 to .6. The probability that this sum is between −.054 and +.054 is about 0.2.
These data are consistent with a hypothesis that Channel I is a noisy version of Channel II which leads to a wrong pairing about 1/3 of the time and that when the
correct pairing is made, the noise distortion and measurement error in the individual central frequency readings correspond to about 0.015 kHz, or about 15 Hz. By no stretch of the imagination could these
readings be consistent with a purely random location of horizontal bands theory. Even a much more restrictive hypothesis, assuming that another speech was uttered in a similar cadence with similar
frequencies of vowels and mechanisms yielding strong horizontal bands, was shown to be implausible as long as these bands were allowed to fluctuate at random within narrow ranges determined by the
empirical data.
B-4. DIGITAL CALCULATIONS OF CROSS CORRELATIONS BETWEEN CHANNEL I AND CHANNEL II
If indeed “hold everything...” on Channel II was transmitted to and recorded on Channel
I at the time occupied by the assumed “shots”, then the digital cross-correlation of the short-time acoustic (energy) spectra of the two Channels should show a correlation substantially larger than
that which would be achieved by chance. This was studied by a member of the Committee and two collaborators. The Channel I and Channel II recordings were digitized and the short-term acoustic spectra
were taken and stored in a digital computer. The printouts of these spectra are shown in Figures B-4, B-5 and B-6. These digital spectrograms were computed directly from magnetic tapes and did not
involve the use of the FBI sound spectrogram equipment. An objective measure of similarity of two spectral matches is obtained from the cross correlation coefficient, defined for the functions X
and Y by ccc = (Σ X·Y)/[(Σ X·X)·(Σ Y·Y)]^(1/2). This cross correlation coefficient would be reduced if one of the recordings were played at the wrong speed, or if the recording at one time were compared
with the same or a different recording at a different time. The first cross correlation coefficients were made from the same Channel I and II recorded copies that were used in preparing Figures 3, 4,
B-1, and B-2. It was found that the biggest peak for the cross correlation coefficient occurred for a relative warp (or speed ratio) of 1.06 in agreement with the other two manual approaches for
comparing Channels I and II; a 1% deviation of warp from optimum diminished the peak substantially. Unfortunately, that Channel II copy contains many repeats caused by the Gray Audograph machine in
playback. Accordingly another tape copy was prepared by members of the Committee directly from the original Audograph plastic disk itself and by the use of a standard turntable and tone arm,
thus producing a tape without compensation for the fact that the disk was originally recorded at constant linear track speed. It was this tape that was used in
preparing the sound spectrograms shown in Figures B-4, B-5, and B-6. The Channel II signals are from the 7.5 ips tape recording of the Gray Audograph record played on a turntable (12/9/81). The tape
was played at 3.75 ips when digitized for these experiments; hence, the rate of change of the correction factor was assumed to be half the measured rate of 0.0005 per second. The signals were
digitized at 20000 samples per second, and a 400-pt Fourier transform was computed every 200 samples (10 millisec), using a 400-pt Blackman window. The correlations were performed on portions of the
200-pt spectra, which have a point spacing of 50 Hz. The high frequencies of the Channel I spectra were boosted at a rate of 6 db per 1000 Hz and then normalized to a constant energy in the band of
interest. Figure 6 gives the cross correlation coefficient for the “hold everything...” segments when the relative speed was selected to give the largest peak that occurred when the Channel II signal
was sped up slightly by compressing the time scale by a factor that varied from 0.957 to 0.961 (changing at the rate of 0.00025 per sec). Figure 6 is a plot of the 750 correlation coefficients
obtained by sliding 2.50 secs of Channel I along 10.00 secs of Channel II, 0.01 secs at a time, using frequencies in the band 600 Hz to 3500 Hz. For comparison the cross correlation coefficients of
the unambiguous segment “You want...Stemmons” are plotted in Figure 7 with the time scale of Channel II stretched by a factor that varied from 1.013 to 1.015. The shape of the peak is very similar to
that for the “hold everything...” segment. The background is somewhat smoother simply because there is less noise in Channel I at this time. Channel I, however, in neither case gives a perfect
reproduction of Channel II. It has lost some of the high and low frequencies and, as one would expect, there are tones present on Channel I that are not on Channel II. The marked narrow peaks of the
cross correlation curves clearly show by an objective test that the “hold everything...” segment of Channel II is present on Channel I at the same location as the acoustic impulses.
Inspection of the spectrograms of Figure B-6 shows the presence of a Channel II brief tone beginning at time 32.00 secs and extending to 32.08 secs. It resumes at time
32.24 and disappears once more at 32.43. The Channel II brief tone is clearly visible in the Channel I spectrogram aligned by the relative timing obtained from Figure 6. A strong Channel I heterodyne
is observed to begin at time 32.03 and to end at 32.17 secs. The resumption of the Channel II brief tone in Channel I at 32.24 secs is clearly weak and gradually grows in strength. These observations
can be made more quantitatively from Figures B-7 and B-8, which are “printer plots” of the relevant regions of the Channel II and Channel I spectra. The vertical bars outlining the Channel II brief
tone (and the same time-frequency bins in Channel I) not only guide the eye, but allow the quantitative calculation of the energy between the bars. The digits printed are the “bin energy” in
decibels, each unit corresponding to a 4-db range. By the end of the first Channel II brief tone at time 32.08, it has been suppressed by about 10 db relative to its value before the Channel I
heterodyne appeared at 32.03. When the Channel II brief tone reappears at 30.24 secs, the AGC has suppressed it by approximately 20 db, and it recovers to its original value only at about 32.43 secs,
some 0.26 secs after the end of the Channel I heterodyne at 32.17 secs. That this AGC action is not due to a later recorder or a re-recording is demonstrated by the fact that much stronger Channel II
brief tones are present on the Channel I recording, without showing the drop in intensity which is induced by the Channel I heterodyne. | {"url":"http://books.nap.edu/openbook.php?record_id=10264&page=41","timestamp":"2014-04-18T08:29:14Z","content_type":null,"content_length":"145574","record_id":"<urn:uuid:121d09e2-975a-432b-9155-5fb0a096a7b5>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00559-ip-10-147-4-33.ec2.internal.warc.gz"}
Scala/Higher-order functions 1
From Wikibooks, open books for an open world
Higher-order functions are functions that either take a function as an argument or return a function. By using functions instead of simpler values, higher-order functions become very flexible. A simple example
is testing whether at least one element in a list passes some test. By using an existing higher-order function defined for List, we don't have to write the code that tests each element, but only have
to write a function that contains the test itself. For instance, assume that we want to test whether some list contains the number 4:
def isEqualToFour(a:Int) = a == 4
val list = List(1, 2, 3, 4)
val resultExists4 = list.exists(isEqualToFour)
println(resultExists4) //Prints "true".
In the above example, we first define our function that contains the test (equality to 4). We then define our list, which happens to contain the number 4 (meaning that the final result should be true).
In the third line, we call the method "exists", which takes our function containing the test, applies it to the elements of the list, and returns whether the function was true for at least one of the
elements. Since the list indeed contains 4, the final result is true, which is also what is printed.
If we instead wanted to test whether all the numbers in the list are equal to 4 (which is clearly false), we would use the "forall" method instead. "forall" tests whether the given function is true
for every single element in the list.
val resultForall4 = list.forall(isEqualToFour)
println(resultForall4) //Prints "false".
As expected, the result is false. Note that we didn't have to redefine the function containing the test. By separating the testing into a test function ("isEqualToFour"), and the logic that applies
the test function into higher-order functions ("exists" and "forall"), we avoid a considerable amount of duplication.
Another common higher-order function is "map". Let's say that you have a list of numbers and want to change each number in the list independently, for instance multiplying each number by some
constant. That is exactly what "map" does: it takes a transformation function and applies it to each element independently to create a new list. Let's see it in action:
def multiplyBy42(a:Int) = 42*a
val resultMultiplyBy42 = list.map(multiplyBy42)
println(resultMultiplyBy42) //Prints "List(42, 84, 126, 168)".
By using "map" we avoid having to apply the function to each element and to construct the new list ourselves.
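Note that the transformation passed to "map" (or to "exists"/"forall") does not have to be a named function; Scala also accepts an anonymous function written inline. A small sketch, equivalent to the example above (the value name is ours):

```scala
val resultInline = list.map(a => 42 * a)
println(resultInline) //Prints "List(42, 84, 126, 168)".
```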
There are plenty of other higher-order functions defined not just for List, but for most of the other collections, as well as for other classes in the Scala library. Some of the noteworthy functions
include "reduce" and "foldLeft"/"foldRight".
"reduce" takes a function that takes two elements and combines them somehow into a new element of the same type, and keeps doing that until there is only one resulting element. Examples of uses of
"reduce" include cases such as when you want to find the sum or the total product of some numbers, or want to combine a lot of strings into one string, maybe by putting something like "\n", "," or
";" between subsequent strings.
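The uses of "reduce" just described can be sketched as follows (the sample values are illustrative, not from the original text):

```scala
val numbers = List(1, 2, 3, 4)

// Repeatedly combine two elements into one until a single element remains.
val sum = numbers.reduce((a, b) => a + b)     // 10
val product = numbers.reduce((a, b) => a * b) // 24

// Combine several strings into one, with ";" between subsequent strings.
val joined = List("a", "b", "c").reduce((a, b) => a + ";" + b)
println(sum)    //Prints "10".
println(joined) //Prints "a;b;c".
```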
"foldLeft"/"foldRight" are basically sequential transformations. While "map" takes each element and transforms it independently, the folds go through the collection sequentially, taking each
element and the previous result, and transforming them into a new result (such as a new list or a sum). The folds are more difficult to use than "map" and "reduce", but are more flexible, and can in
fact be used to define both "map" and "reduce" themselves. The "Left" and "Right" refer to which direction the fold goes through the elements. | {"url":"https://en.wikibooks.org/wiki/Scala/Higher-order_functions_1","timestamp":"2014-04-16T22:41:33Z","content_type":null,"content_length":"30927","record_id":"<urn:uuid:6baf63f7-ee7b-4a69-b507-ca8e349eaf10>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Area Under Curve
Find $\int_{1}^{5} f(x)dx$ if $\int_{0}^{1} f(x)dx = -2.6$ and $\int_{0}^{5} f(x)dx = 1.5$
I know for this question we do:
$\int_{0}^{5} f(x)dx = \int_{0}^{1} f(x)dx + \int_{1}^{5} f(x)dx$
Which rearranged turns into:
$\int_{1}^{5} f(x)dx = \int_{0}^{5} f(x)dx - \int_{0}^{1} f(x)dx$
But I do not know what I am to do...
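Plugging the given values into the rearranged equation is the only step left:

$\int_{1}^{5} f(x)dx = \int_{0}^{5} f(x)dx - \int_{0}^{1} f(x)dx = 1.5 - (-2.6) = 4.1$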
Thanks : ) | {"url":"http://mathhelpforum.com/calculus/69302-area-under-curve.html","timestamp":"2014-04-16T07:32:04Z","content_type":null,"content_length":"40745","record_id":"<urn:uuid:227a5359-1ebc-4aba-a8b6-d5497e52f83e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
East Lake, CO Precalculus Tutor
Find an East Lake, CO Precalculus Tutor
...I have plenty of experience with tutoring all subjects in high school math. I have experience tutoring the ASVAB math portion to those that are eager to gain a high score. I have helped those
that are interested and those that have already tested.
17 Subjects: including precalculus, chemistry, calculus, physics
I love math. For this reason I left a successful career of more than twenty years to study mathematics. After receiving a degree in mathematics, I am continuing on towards a PhD in applied
6 Subjects: including precalculus, calculus, geometry, algebra 1
...I love books and I love the stories they tell. I love the written word as a means of communication. I enjoy crafting differing types of essays, poetry, scientific reports, journal entries,
stories, and basically anything that employs the use of the English language in a descriptive or communicative way.
21 Subjects: including precalculus, reading, chemistry, English
...I am currently on my fifth home built computer and have experience troubleshooting hardware and networking issues from Windows 7 back to Windows 98. I took geometry in middle school and
pursued my interest in Mathematics through Math Counts, an extracurricular math program. I took a specific mathematic course for Linear Algebra & Differential equations.
18 Subjects: including precalculus, chemistry, calculus, geometry
...These properties included the Zero-Product Property, Pythagorean Theorem, and the Quadratic Formula. I am proficient in Algebra 2 and a number of its principles. As a student in high school,
I covered its most basic concepts: properties of polynomial, exponential and logarithmic functions, including their graphs and inverses.
15 Subjects: including precalculus, chemistry, calculus, physics
Related East Lake, CO Tutors
East Lake, CO Accounting Tutors
East Lake, CO ACT Tutors
East Lake, CO Algebra Tutors
East Lake, CO Algebra 2 Tutors
East Lake, CO Calculus Tutors
East Lake, CO Geometry Tutors
East Lake, CO Math Tutors
East Lake, CO Prealgebra Tutors
East Lake, CO Precalculus Tutors
East Lake, CO SAT Tutors
East Lake, CO SAT Math Tutors
East Lake, CO Science Tutors
East Lake, CO Statistics Tutors
East Lake, CO Trigonometry Tutors
Nearby Cities With precalculus Tutor
Bow Mar, CO precalculus Tutors
Columbine Valley, CO precalculus Tutors
Commerce City precalculus Tutors
Dacono precalculus Tutors
Eastlake, CO precalculus Tutors
Edgewater, CO precalculus Tutors
Erie, CO precalculus Tutors
Federal Heights, CO precalculus Tutors
Firestone precalculus Tutors
Henderson, CO precalculus Tutors
Lafayette, CO precalculus Tutors
Lakeside, CO precalculus Tutors
Northglenn, CO precalculus Tutors
Thornton, CO precalculus Tutors
Westminster, CO precalculus Tutors | {"url":"http://www.purplemath.com/East_Lake_CO_Precalculus_tutors.php","timestamp":"2014-04-18T08:45:01Z","content_type":null,"content_length":"24263","record_id":"<urn:uuid:523cdaab-4713-47ce-be90-8d2ddfb36650>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00430-ip-10-147-4-33.ec2.internal.warc.gz"} |
400th Post Giveaway!
When I started this blog I had no idea where it would take me or how long I would continue to blog. While my posts aren't as prolific as in the beginning, I've managed to get to 400 and have no
intention of stopping any time soon! It's been a wonderful journey where I've made new and close friends and learned new quilting techniques as well as improved on many of the old! Most of this is in
no small part due to all of you who read my blog and have blogs of your own!
In celebration of this little milestone and a huge thank you to all that have stuck with me thus far, I thought why not have a guessing game for my giveaway? Yup - guess the number of beans in the
jar! The person who comes the closest to the correct number will win a quilt kit! Yes! To help you out somewhat I'll tell you that the total height of the jar is 7.25 inches and it is filled to the
very top with red kidney beans.
At its widest circumference the jar measures 10.5 inches.
Here is the kit I'm giving away. It is for the Bunny Run Kit from Bunny Hill Designs. The pattern is in this copy of the Australian Homespun magazine which is also included.
The kit has everything you need including the backing - how great is that? The finished quilt measures 36 x 40 inches. You can start now and have it finished for Easter next year!
There are only a few rules for this giveaway:
1) You must have a Google account to leave your guess on the bean count. This will not be open to Anonymous commenters. If you don't have a Google account you can go here and sign up for free.
2) Please do not blog about this or post any photos from this on your blog.
3) One comment/guess per person, please.
I'll leave this open through Monday, April 23rd. The winner will be announced on Tuesday, April 24th.
Good Luck!!
38 comments:
Congrats on 400! Obviously, there are 798 beans in that jar. Aren't there?
Congrats on the 400th post. I say there are 923 beans in the jar! Btw, darling quilt kit!
Congratulations on your 400th! I love reading your blog.
I am going to guess 1,400 beans. Looks like a nice number to me.
Happy quilting!
Congrats on your milestone! I'm gonna guess 688 beans! Thanks for such a generous prize! :)
This comment has been removed by the author.
Wow, 400 posts! I think I have read them all! Keep them coming! My guess is 2577!
i'm going to guess 500 and congrats on your 400 post
Congratulations on your 400th post. I have enjoyed reading each blog,and always look forward to the next installment.
Congratulations on 400 posts! I love your blog. =)
My guess is....400 beans, hehe.
I am going to say 760. The kit is lovely!
It's been fun getting to know you. I'm not good at this sort of thing so here is a random number off the top of my head 400
Congrats to you! And such a generous giveaway! My guess is 225...
Congrats on your 400 post ! My guess is 830 Beans
Wonderful news about your 400th post. Congratulations. I know that there are exactly 927 beans in that jar. I am positive. So, pick me, pretty please.
Congratulations on your 400th! I love reading your blog.
I am going to guess 401 beans.
Love your blog. Congratulations on 400th post. I am going to guess 850 beans.
Thanks for the giveaway.
Holy smokes, Candace...that's a lot of posts!!! And I've enjoyed every single one of them. I'm always excited when I see you've written a new post. I can't wait to see what you have to share!!!
Happy 400th, dear friend!!! I am going to guess 499.
I am never very good at guessing how many. I will take a stab and guess 654.
Congrats on the 400 posts! I guess around 388. Lucky number :)
My lucky guess is 367. Thank you and congratulations.
Have enjoyed your posts about quilting, fishing, travels, and your transformation of Squash House. Oh, your gardening, too. My guess is 875.
Happy 400th post! My guess is 444 beans.
Congratulations on your 400th post! I am going to guess there is 525 beans in that jar. Thanks for the chance.
Wow, 400! Congrats to you! Love following your blog. Thanks for the giveaway! I'm sure there are 773 beans in that jar. Positively!
612 is my guess. Congrats on reaching 400 posts!
How about 2123 beans in the jar.
400 hundred posts, huh? Where has the time gone?
I guess 192! I don't know why, just a guess. Congratulations on 400 posts, that's truly amazing!
My guess is 1023 beans, plus one little rock! Three cheers for your 400th post. That is quite an accomplishment. Lucky for all of us that you started blogging!
I am going to guess 400! It would be silly not to *s*
Hi Candace.... Congratulations on your 400th post... Time sure does fly by... My guess on the bean's will be 475... Thanks for the chance to win such lovely things... Hugs :)
Congratulations Candace and thank you for your blog postings! I will guess 770.
Wow! 400 posts! That is wonderful! I consulted with hubs and he said without a doubt there are 875 in that jar! And he was Air Force supply so he should know! lol
Enjoy your contest!
Congrats! I am guessing 510 in honour of my 51st birthday coming up!
532 beans! I have read your blog almost since the beginning and feel you are like a friend to me as well! Congratulations Connie!
I guess 457 beans.
I am going to guess 653!
Congrats on 400 posts! Love your blog.
400th post! Awesome! I just started my blog in January! My guess 1,570!!! THat's a lot of beans!!!
CONGRATS ON 400+ !
I'LL GUESS THERE ARE 822 BEANS!
THANKS FOR SHARING! | {"url":"http://www.squashhousequilts.com/2012/04/400th-post-giveaway.html?showComment=1334913007498","timestamp":"2014-04-17T21:26:06Z","content_type":null,"content_length":"172705","record_id":"<urn:uuid:b74a016b-d359-4507-81f5-6fefef7a4efc>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shortcut Formulas for Ratio and proportion problems-Tricks with questions for exams
Ratio and proportion questions are asked in many exams, including competitive exams like IBPS and SSC, as well as other qualifying exams. These problems take more time to solve if the candidate does not know the shortcut tricks, so it is very important to learn them when preparing for competitive exams. The ratio and proportion problems can be solved easily with the shortcut formulas provided below. Remember them if you want to solve ratio and proportion questions faster and save time in your exams, so that you can attempt more questions.
Here are some very important ratio and proportion shortcuts:
If a number x is divided in the ratio a:b ,then
1^st part will be=ax/(a+b)
2^nd part will be= bx/(a+b)
Or if the number x is divided in three ratios as a:b:c , then
1^st part will be=ax/(a+b+c)
2^nd part will be=bx/(a+b+c)
3^rd part will be= cx/(a+b+c)
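As a quick sanity check of these part formulas, here is a minimal Python sketch (the function name and the sample split of x = 60 in the ratio 1:2:3 are illustrative, not from the text):

```python
def split_in_ratio(x, parts):
    # Divide x in the given ratio: each share is (part * x) / (sum of parts).
    total = sum(parts)
    return [p * x / total for p in parts]

print(split_in_ratio(60, [1, 2, 3]))  # [10.0, 20.0, 30.0]
```

The shares always add back up to x, which is an easy way to check an answer.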
The ratio of milk to water in a mixture is A:B. If P liters of water is added to the mixture so that the milk to water ratio becomes A:C,
then the quantity of milk in the mixture is
=AP/(C-B) liters
And the quantity of water in the mixture is
= BP/(C-B) liters
If a number x is added to a ratio a:b so that the ratio becomes c:d ,
Then x= (ad-bc)/(c-d)
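A quick numeric check of this formula in Python (the function name and the values, 2:3 becoming 3:4, are illustrative only):

```python
def added_number(a, b, c, d):
    # x added to both terms of a:b gives c:d, so x = (ad - bc) / (c - d).
    return (a * d - b * c) / (c - d)

x = added_number(2, 3, 3, 4)
print(x)                           # 1.0
print((2 + x) / (3 + x) == 3 / 4)  # True: adding x reaches the ratio 3:4
```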
If there are two numbers whose sum and difference is a and b respectively,
then the ratio of those numbers will be = (a+b)/(a-b)
if two quantities A and B are in the ratio a:b ,
If two numbers are given in the ratio a:b, and P is added to both numbers so that the ratio becomes c:d, then
1^st number = aP(c-d)/(ad-bc)
2^nd number = bP(c-d)/(ad-bc)
Sum of numbers = [P(a+b)(c-d)]/(ad-bc)
Difference of numbers = [P(a-b)(c-d)]/(ad-bc)
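All four expressions follow from the common multiplier k = P(c-d)/(ad-bc), where the two numbers are ak and bk. A small Python sketch with illustrative values (numbers in the ratio 2:3, P = 4 added, new ratio 4:5; the function name is ours):

```python
def numbers_from_ratio_shift(a, b, c, d, P):
    # Numbers are a*k and b*k; adding P to both gives the ratio c:d,
    # which forces k = P(c - d) / (ad - bc).
    k = P * (c - d) / (a * d - b * c)
    return a * k, b * k

first, second = numbers_from_ratio_shift(2, 3, 4, 5, 4)
print(first, second)                       # 4.0 6.0
print((first + 4) / (second + 4) == 4 / 5) # True
```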
If the ratio of the incomes of two persons is a:b, the ratio of their expenses is c:d, and each person saves a sum of x rupees, then
Income of 1^st person = ax(d-c)/(ad-bc)
Income of 2^nd person = bx(d-c)/(ad-bc)
In a mixture of milk and water, the ratio of milk to water is 5:1. If 5 liters of water is added to the mixture, the ratio becomes 5:2. Determine the quantity of milk in the mixture initially.
In the given question, A=5, B=1, C=2, P=5.
Substituting into the formula given above:
Quantity of milk = AP/(C-B) = (5*5)/(2-1) = 25 liters
So the answer is:
there are 25 liters of milk in the initial mixture of milk and water.
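The worked example can be reproduced with a few lines of Python (a sketch; the helper name is ours):

```python
def milk_and_water(A, B, C, P):
    # Milk:water = A:B; adding P liters of water makes the ratio A:C.
    # Milk = AP/(C-B), water = BP/(C-B).
    return A * P / (C - B), B * P / (C - B)

milk, water = milk_and_water(5, 1, 2, 5)
print(milk, water)  # 25.0 5.0
```

After adding the 5 liters of water, the mixture holds 25 liters of milk to 10 liters of water, which is indeed 5:2.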
Stay tuned for more to come soon. | {"url":"http://www.meriview.in/2013/10/shortcuts-for-ratio-and-proportions.html","timestamp":"2014-04-18T23:16:11Z","content_type":null,"content_length":"96129","record_id":"<urn:uuid:b25cdff6-2e8f-4830-9e44-12c2c6d82c59>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00213-ip-10-147-4-33.ec2.internal.warc.gz"} |
[kal′kyo̵̅o̅ lā′s̸hən, -kyə-]
1. the act or process of calculating
2. something deduced by calculating; estimate; plan
3. careful planning or forethought, esp. with selfish motives
1. a. The act, process, or result of calculating.
b. An estimate based on probabilities.
2. Careful, often cunning estimation and planning of likely outcomes, especially to advance one's own interests.
Related Forms:
(usually uncountable, plural calculations)
1. (mathematics, uncountable) The act or process of calculating.
2. (mathematics, countable) The result of calculating.
3. (countable) Reckoning, estimate.
By my calculation, we should be there by midnight.
4. (countable) An expectation based on circumstances. | {"url":"http://www.yourdictionary.com/calculation","timestamp":"2014-04-19T22:12:16Z","content_type":null,"content_length":"46366","record_id":"<urn:uuid:c78a65d1-c8d0-447b-b6c6-2ea56fadae4b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Prove or Disprove
Quite clear. Thanks so much!
Emakarove, where are you? We need to exterminate this flea about the order of quantifiers.
You are still confusing the order of the quantifiers :P
$\exists x \forall y:xy=1$ - this means that there is a single x, such that for every y that you pick, xy=1. So, logically, how do you disprove such a statement? You simply find one such y, for which $xy \neq 1$.
This is the essence of the proof - you assume by contradiction that there exists such an x. That means that for every y that you pick, xy=1 should be true.
Specifically, it should be true for y=1. That means 1*x = 1 -> x = 1 (remember, one x for all y).
So you have that x*y = 1*y = 1 for all real y. To prove that is not true, simply take any other y - for example $y_2 = 2$. Your assumption implies that $x \cdot y_2 = 1 \cdot y_2 = 1$, but $1 \cdot y_2 = 1 \cdot 2 = 2$. So you get 2=1, which is obviously a contradiction.
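The argument can also be illustrated mechanically: for every candidate x there is a y with xy ≠ 1, so no single x works for all y. A small sketch (not part of the original thread; the function name is assumed):

```python
# Disproving "there exists x such that for every y, x*y == 1":
# for ANY candidate x we can exhibit a y with x*y != 1.
def counterexample_y(x):
    """Return some real y with x*y != 1 (one exists for every x)."""
    return 2 / x if x != 0 else 1   # x*(2/x) == 2 != 1; 0*1 == 0 != 1

for x in [0, 1, 0.5, -3, 7]:
    y = counterexample_y(x)
    assert x * y != 1, (x, y)
print("no single x works for all y")
```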
I plotted for all real number $y$ as a diagonal line on x-y axis and then plotted $x= \frac{1}{y}$. By plotting $x= \frac{1}{y}$, I have made sure none of the real number y is excluded. I found
that all $\forall y \in \mathbb{R}-\{0\}$ is possible on the diagonal line, but for x, I found it a very different story, namely, $x=\frac{1}{y}$ approaches neither of the axes--neither x nor y,
which means that if $xy=1$, neither $x$ nor $y$ can be a zero.
Since the statement says $\sim \exists x \forall y:xy=1$, we choose any $y$ and then find $x$; in other words, choosing any $x$ and then finding $y$ would make no sense.
If you take $y=2$, you need not stick to $x=1$ because it did not say for all $x$. Since $\exists x$ means at least one, I do have one, namely x=1/2.
At any rate, it's very nice of you to help me. I enjoy reading many of you posts. You are quite good at math.
Writing with logic symbols may help sometimes, though in this case it comes really close: the statement $\neg\exists x\,\forall y\, P(x,y)$ is logically equivalent to the statement $\forall x\,\exists y\,\neg P(x,y)$, so the difference, imo, between what it is and what was being understood by some is only the negation of P(x,y) in the last formula...
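This equivalence can be checked mechanically over a finite domain; the sketch below (not from the original thread) uses the thread's predicate P(x, y): xy = 1 and an assumed small sample of reals:

```python
# Check the quantifier equivalence
#   not exists x forall y P(x,y)   <=>   forall x exists y not P(x,y)
# over a small finite domain, with P(x, y): x*y == 1 as in the thread.
D = [-2, -1, -0.5, 0, 0.5, 1, 2]
P = lambda x, y: x * y == 1

lhs = not any(all(P(x, y) for y in D) for x in D)
rhs = all(any(not P(x, y) for y in D) for x in D)
assert lhs == rhs == True   # both sides agree (and are True here)
```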
So why is setting y=0 so bad when trying to prove the statement is TRUE using contradiction?
| {"url":"http://mathhelpforum.com/discrete-math/134726-prove-disprove-4.html","timestamp":"2014-04-18T08:29:30Z","content_type":null,"content_length":"58421","record_id":"<urn:uuid:d0f517fe-0e79-48ce-8444-1aa1ff019b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00591-ip-10-147-4-33.ec2.internal.warc.gz"}
Do or die!
Find the magnetic field at the centre of a circle as shown in the figure: [drawing]
[drawing] Is it like this?
@Ishaan94, you're right, brother.
@apoorvk, @Mani_Jha, @Ishaan94, brothers, please help.
You mean at the centre point of the quadrant, where the two radius lines meet, right?
@apoorvk, yes, you're right, brother. But I think the question is wrong. Let me post another one.
The next question is ready. Be ready to answer it.
Find the magnetic field induction at the center "O".
@apoorvk, @Ishaan94, brothers, please answer me. There are 2 more questions still to come.
[drawing] This one, right?
At the centre point?
Yes, apoorv, your diagram is correct. Please solve it at the earliest.
Because of the straight (linear) sections, B will remain zero at the centre point. Now there are two semicircles, of radius R and R/2. We will find the magnetic field at the centre due to each. Have you studied Ampere's circuital law? Apply that and find out B.
Brother, I don't know which formulae to apply. Please solve it using the formulae, and post all the formulae for finding the magnetic field in different circuits. Please, brother. Also post some sample questions of each type, showing where to apply which formula.
[drawing] But you won't understand it if it is explained like this. Do you have the 12th-class NCERT? If not, download it from the net; there is a short section in the fourth chapter. Read it and the concept will become clear.
This formula means that the net current enclosed by a circular loop of length 'l' produces a magnetic field of magnitude 'B'.
**Rather, the net current passing through (within) the loop.
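For reference, the standard Biot-Savart result for a circular arc is B = μ0·I·θ/(4πR) at the centre, with straight radial segments contributing nothing there. The sketch below applies it to both figures; the configuration of the second figure (two semicircles of radii R and R/2 whose fields add) is an assumption, since the drawings did not survive:

```python
import math

MU0 = 4 * math.pi * 1e-7  # T*m/A, permeability of free space

def arc_field(I, R, theta):
    """Magnitude of B at the centre of a circular arc of radius R
    subtending angle theta (radians) and carrying current I (Biot-Savart):
    B = mu0 * I * theta / (4 * pi * R)."""
    return MU0 * I * theta / (4 * math.pi * R)

# First question (quadrant): quarter circle, theta = pi/2  ->  B = mu0*I/(8R).
I, R = 1.0, 0.1
print(arc_field(I, R, math.pi / 2))   # equals MU0 * I / (8 * R)

# Second figure (assumed: two semicircles of radii R and R/2, fields adding):
B_total = arc_field(I, R, math.pi) + arc_field(I, R / 2, math.pi)
# = mu0*I/(4R) + mu0*I/(2R) = 3*mu0*I/(4R); the straight radial
# segments give zero at the centre (dl parallel to r).
```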
| {"url":"http://openstudy.com/updates/4f910bfae4b000310faf0ee6","timestamp":"2014-04-18T23:57:08Z","content_type":null,"content_length":"179275","record_id":"<urn:uuid:488183f6-324a-473b-a4f8-10b9c6da683a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00094-ip-10-147-4-33.ec2.internal.warc.gz"}
The particle system model of income and wealth more likely to imply an analogue of thermodynamics in social science
Angle, John (2011): The particle system model of income and wealth more likely to imply an analogue of thermodynamics in social science.
The Inequality Process (IP) and the Saved Wealth Model (SW) are particle system models of income distribution. The IP’s social science meta-theory requires its stationary distribution to fit the
distribution of labor income conditioned on education. The Saved Wealth Model (SW) is an ad hoc modification of the particle system model of the Kinetic Theory of Gases (KTG). The KTG implies the
laws of gas thermodynamics. The IP is a particle system similar to the SW and KTG, but less closely related to the KTG than the SW. This paper shows that the IP passes the key empirical test required
of it by its social science meta-theory better than the SW. The IP’s advantage increases as the U.S. labor force becomes more educated. The IP is the more likely of the two particle systems to
underlie an analogue of gas thermodynamics in social science as the KTG underlies gas thermodynamics.
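The paper's particle systems can be sketched in a few lines. The toy simulation below follows the usual description of an Inequality Process-style model (particles meet at random, a fair coin picks the winner, and the loser transfers a fixed share ω of its wealth); the parameter values and function name are illustrative assumptions, not taken from the paper:

```python
import random

def inequality_process(n=1000, steps=200_000, omega=0.3, seed=0):
    """Toy sketch of an Inequality Process-style particle system:
    at each step two particles meet, a fair coin picks the winner,
    and the loser gives a fixed share omega of its wealth to the winner."""
    rng = random.Random(seed)
    w = [1.0] * n                      # equal initial wealth
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        if rng.random() < 0.5:
            i, j = j, i                # particle j loses
        t = omega * w[j]
        w[i] += t
        w[j] -= t
    return w

w = inequality_process()
# Every exchange conserves total wealth, and no wealth goes negative.
assert abs(sum(w) - 1000.0) < 1e-6
assert min(w) >= 0.0
```

Repeated exchanges of this kind drive the initially equal wealths toward a right-skewed stationary distribution, which is the object the paper compares against labor income data.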
Item Type: MPRA Paper
Original Title: The particle system model of income and wealth more likely to imply an analogue of thermodynamics in social science
English Title: The particle system model of income and wealth more likely to imply an analogue of thermodynamics in social science
Language: English
Keywords: Inequality Process; Kinetic Theory of Gases; labor income distribution; particle system; Saved Wealth Model, social science analogue of thermodynamics
Subjects: D - Microeconomics > D0 - General > D03 - Behavioral Economics; Underlying Principles
D - Microeconomics > D3 - Distribution > D31 - Personal Income, Wealth, and Their Distributions
C - Mathematical and Quantitative Methods > C1 - Econometric and Statistical Methods and Methodology: General > C15 - Statistical Simulation Methods: General
Item ID: 28864
Depositing User: John Angle
Date Deposited: 20 Feb 2011 20:22
Last Modified: 20 Feb 2013 00:50
URI: http://mpra.ub.uni-muenchen.de/id/eprint/28864 | {"url":"http://mpra.ub.uni-muenchen.de/28864/","timestamp":"2014-04-20T00:45:07Z","content_type":null,"content_length":"31937","record_id":"<urn:uuid:14feecf9-a26b-47f6-a17f-ab66c8f7a82e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
Below are the first and last pages of uncorrected machine-read text (when available) of this chapter. Because it is uncorrected OCR material, intended chiefly to make the chapter searchable, please consider the following text a useful but insufficient proxy for the authoritative book pages.
OCR for page 270
Biographical Memoirs: Volume 61
SOLOMON LEFSCHETZ
September 3, 1884-October 5, 1972
BY PHILLIP GRIFFITHS, DONALD SPENCER, AND GEORGE WHITEHEAD¹

SOLOMON LEFSCHETZ was a towering figure in the mathematical world owing not only to his original contributions but also to his personal influence. He contributed to at least three mathematical fields, and his work reflects throughout deep geometrical intuition and insight. As man and mathematician, his approach to problems, both in life and in mathematics, was often breathtakingly original and creative.

PERSONAL AND PROFESSIONAL HISTORY

Solomon Lefschetz was born in Moscow on September 3, 1884. He was a son of Alexander Lefschetz, an importer, and his wife, Vera, Turkish citizens. Soon after his birth, his parents left Russia and took him to Paris, where he grew up with five brothers and one sister and received all of his schooling. French was his native language, but he learned Russian and other languages with remarkable facility. From 1902 to 1905, he studied at the École Centrale des Arts et Manufactures, graduating in 1905 with the degree of mechanical engineer, the third youngest in a class of 220. His reasons for entering that institution were complicated, for as he said, he had been "mathematics mad" since he had his first contact with geometry at thirteen.
Since he was not a French citizen, he could neither see nor hope for a career as a pure mathematician. The next best thing was engineering because, as he believed, it
used a lot of mathematics. Upon graduating in 1905, Lefschetz decided to go to the United States, for a time at least, with the general purpose of acquiring practical experience. First, he found a
job at the Baldwin Locomotive Works near Philadelphia. But he was particularly attracted to electrical engineering, which, at that time, was a nonexistent specialty at the École Centrale. In view of
this, in January 1907 he became an engineering apprentice in a regular course at the Westinghouse Electric and Manufacturing Company in Pittsburgh. The course consisted of being shifted from section
to section every few weeks. He wound up in the transformer testing section in the late fall of 1907, and in mid-November of that year, he was the victim of a testing accident, as a consequence of
which he lost both hands.2 After some months of convalescence, he returned to the Westinghouse Company, where, in 1909, he was attached to the engineering department in the section concerned with the
design of alternating-current generators. Meanwhile, Lefschetz had become increasingly dissatisfied with his work there, which seemed to him to be extremely routine. So he resumed, first as a hobby,
his mathematical studies that had been neglected since 1903. After a while he decided to leave engineering altogether and pursue mathematics. He left the Westinghouse Company in the fall of 1910 and
accepted a small fellowship at Clark University, Worcester, Massachusetts, enrolling as a graduate student. The mathematical faculty consisted of three members: William Edward Story, senior professor
(higher plane curves, invariant theory); Henry Taber (complex analysis, hypercomplex number systems); and Joseph de Perott (number
theory). At the École Centrale there were two professors of mathematics, Émile Picard and Paul Appel, and each had written a three-volume treatise: Analysis (Picard)
and Analytical Mechanics (Appel). Lefschetz plunged into these and, with a strong French training in basic mathematics, was all set to attack a research topic suggested by Professor Story, namely, to
find information about the largest number of cusps that a plane curve of given degree may possess. Lefschetz made an original contribution to this problem and obtained his Ph.D. summa cum laude in
1911. In the Record of Candidacy for the Ph.D., it is stated by Henry Taber that it was an "excellent examination, the best ever passed by any candidate in the department," and signed by him under
the date June 5, 1911. Clark University had a fine library with excellent working conditions, and Lefschetz made good use of it. By the summer of 1911 he had vastly improved his acquaintance with
modern mathematics and had laid a foundation for future research in algebraic geometry. He had also become more and more closely associated with another mathematics student at Clark, Alice Berg
Hayes, who became his wife on July 3, 1913, in North Brookfield, Massachusetts. She was to become a pillar of strength for Lefschetz throughout the rest of his life, helping him to rise above his
handicap and encouraging him in his work. Lefschetz' first position after Clark was an assistantship at the University of Nebraska in 1911; the assistantship was soon transformed into a regular
instructorship. In 1913 he moved to the University of Kansas, passing through the ranks to become a full professor in 1923. He remained at the University of Kansas until 1924. Then, in 1924 came the
call to Princeton University, where he was visiting professor (1924-25); associate professor (1925-27); full professor (1927-33); and from 1933 to 1953, Henry Burchard
Fine Research Professor, chairman of the Department of Mathematics 1945-53 and emeritus from 1953. The years in the Midwest were happy and fruitful ones for Lefschetz.
The almost total isolation played in his development "the role of a job in a lighthouse which Einstein would have every young scientist assume so that he may develop his own ideas in his own way."3
His two major ideas came to him at the University of Kansas. The first idea is described by Lefschetz as follows. Soon after his doctorate he began to study intensely the two-volume treatise of
Picard-Simart, Fonctions Algébriques de Deux Variables, and he first tried to extend to several variables the treatment of double integrals of the second kind found in the second volume. He was
unable to do this directly, and it led him to a recasting of the whole theory, especially the topology.4 By attaching a 2-cycle to the algebraic curves on a surface, he was able to establish a new
and unsuspected connection between topology and Severi's theory of the base, constructed in 1906, for curves on a surface. The development of these and related concepts led to a Mémoire, which was
awarded the Bordin Prize by the French Academy of Sciences in 1919. The translated prize paper is given in the Bibliography (1921,3). The first half of the Mémoire, with some complements, is embodied
in a famous monograph (1924,1). The general idea for the second most important contribution also came to Lefschetz in Lawrence, Kansas, and it is the fixed-point theorem which bears his name. Almost
all of Lefschetz' topology arose from his efforts to prove fixed-point theorems. In 1912, L. E. J. Brouwer proved a basic fixed-point theorem, namely, that every continuous transformation of an
n-simplex into itself has at least one fixed point. In a series of papers, Lefschetz obtained a much more general result for any continuous transformation of a topological space X into itself where the restrictions on X were progressively weakened. In 1923, he proved the theorem for compact orientable manifolds
and, by introducing relative homology groups, he extended it in 1927 to manifolds with boundary; his theorem then included Brouwer's. In 1927, he also proved it for any finite complex and, in 1936,
for any locally connected topological space. In the 1920s and 1930s, as a professor at Princeton University, Lefschetz was wholly occupied with topology, and he established many of the basic results
in algebraic topology. For example, he created a theory of intersection of cycles (1925,1; 1926,1), introduced the notion of cocycle (which he called pseudo-cycle) and proved the Lefschetz duality
theorem (see 1949,1 for an exposition of the fixed-point theorem and the duality theorem). His Topology was published in 1930 (1930,1), and his Algebraic Topology was published in 1942 (1942,1). The
former was widely acclaimed and established the name topology in place of the previously used term analysis situs; the latter was less influential but secured the use of the name algebraic topology
as a replacement for combinatorial topology. Lefschetz was an editor of the Annals of Mathematics from 1928 to 1958, and his influence dominated the editorial policy that made the Annals into a
foremost mathematical journal. In 1943 Lefschetz became a consultant for the U.S. Navy at the David Taylor Model Basin near Washington, D.C. There he met and worked with Nicholas Minorsky, who was a
specialist on guidance systems and the stability of ships and who brought to Lefschetz' attention the importance of the applications of the geometric theory of ordinary differential equations to
control theory and nonlinear mechanics. From 1943 to the end of his life, Lefschetz'
main interest was centered around ordinary nonlinear differential equations and their applications to controls and the structural stabilities of systems. Lefschetz was
almost sixty years old when he turned to differential equations, yet he did original work and stimulated research in this field as a gifted scientific administrator. In 1946, the newly established
Office of Naval Research funded a project on ordinary nonlinear differential equations, directed by Lefschetz, at Princeton University. This project continued at Princeton for five years past
Lefschetz' retirement from the university in 1953. Meanwhile, the Research Institute for Advanced Study was formed in Baltimore, Maryland, as a division of the Glenn L. Martin Aircraft Company, and in
1957, Lefschetz established the Mathematics Center under the auspices of the institute and was entrusted with the recruitment of five mathematicians and about ten younger associates. He obtained the
cooperation of Professor Lamberto Cesari of Purdue University and appointed Professor J. P. LaSalle of Notre Dame and Dr. J. K. Hale of Purdue to the group, the former as his second in command. After
some six years it was necessary to transfer the center elsewhere, and the move, carried out by LaSalle, resulted in their becoming part of the Division of Applied Mathematics at Brown University. The
group was later named the Lefschetz Center for Dynamical Systems. LaSalle was director and Lefschetz became a visiting professor, traveling there from Princeton once a week. Lefschetz continued his
work at Brown until 1970, two years before his death. In 1944, Lefschetz joined the Instituto de Matemáticas of the National University of Mexico as a part-time visiting professor, and this
connection continued until 1966. At the Institute, he conducted seminars, gave volunteer courses, and continued his research. He found a number of capable young men there and sent several of them to Princeton University for further advanced training up to the doctorate and beyond. From 1953 to 1966 he spent most of
his winters in Mexico City. Lefschetz received many honors. He served as president of the American Mathematical Society in 1935-36. He received the Bôcher Memorial Prize of the American Mathematical
Society in 1924, and in 1970 he received the first award of the Steele Prize, also of the American Mathematical Society. He received the Antonio Feltrinelli International Prize of the National
Academy of Lincei, Rome, in 1956; the Order of the Aztec Eagle of Mexico in 1964; and the National Medal of Science (U.S.) in 1964. He was awarded honorary degrees by the University of Prague,
Prague, Czechoslavakia; University of Paris, Paris, France; the University of Mexico; and Brown, Clark, and Princeton universities. He was a member of the American Philosophical Society and a foreign
member of the Academie des Sciences of Paris, the Royal Society of London, the Academia Real de Ciencias of Madrid, and the Reale Instituto Lombardo of Milan. A symposium in honor of Lefschetz'
seventieth birthday was held in Princeton in 1954,6 and in 1965 an international conference in differential equations and dynamical systems was dedicated to him at the University of Puerto Rico. The
international Conference on Algebraic Geometry, Algebraic Topology and Differential Equations (Geometric Theory), in celebration of the centenary of Lefschetz' birth, was held at the Centro de Investigación del IPN, Mexico City, in 1984.

LEFSCHETZ AND ALGEBRAIC GEOMETRY

In order to discuss Lefschetz' contributions to algebraic geometry, I shall first describe that field and its evolution
up until the period during which Lefschetz worked. Then I will give a somewhat more detailed description of some of his major accomplishments. I will conclude with a
few observations about the impact of his work in algebraic geometry.

In simplest terms, algebraic geometry is the study of algebraic varieties. These are defined to be the locus of polynomial equations

    P_α(x_1, ..., x_N) = 0.                                          (1)

Here the x_i are coordinates in an affine space and the P_α are polynomials whose coefficients are in any field K. For our purposes, it will be convenient to take K to be the complex numbers, as this was the case in classical algebraic geometry and in almost all of Lefschetz' work. It is worth noting, however, that he was one of the first to consider the case where K is an arbitrary algebraically closed field of characteristic zero. In fact, the so-called Lefschetz principle as expanded in his book Algebraic Geometry (1953, 1) roughly states that any result from the complex case remains valid in this more general situation. In addition to using complex numbers, it is also convenient to add to the above locus the points at infinity. This is accomplished by homogenizing the polynomials P_α and considering the resulting locus V in the complex projective space P^N defined by the homogenized equations. Two algebraic varieties V and V' are to be identified if there is a rational transformation T that takes V to V' and is generically one to one there. These are called birational transformations, and T establishes an isomorphism between the fields K(V') and K(V) of rational functions on V' and V, respectively.

In the nineteenth century the intensive study of algebraic
curves—that is, algebraic varieties of dimension one—was undertaken by Abel, Jacobi, Riemann, and others. On an algebraic curve C given by a single affine equation

    f(x, y) = 0                                                      (2)

in the plane, special objects of interest were the abelian integrals

    ∫ R(x, y) dx,                                                    (3)

where R(x, y) is a rational function. For example, the hyperelliptic integrals are abelian integrals on the hyperelliptic curve y^2 = (x - a_1) ... (x - a_n). In addition to the indefinite integral (3), abelian sums and periods

    ∮_γ R(x, y) dx,

where γ is a closed path on C, were of considerable interest. A major reason for studying abelian integrals and their periods was that these provided an extremely interesting class of transcendental functions, such as the elliptic function ℘(u) defined up to an additive constant by
It was Riemann who emphasized that studying C up to birational equivalence is equivalent to studying the abstract Riemann surface C̃ associated to the curve (2). Assuming that f is irreducible, in modern terms C̃ is a connected, complex manifold of dimension one for which there is a holomorphic mapping π: C̃ → C whose image is C and where π is generically one to one. Viewed as an oriented real two-manifold, the Riemann surface C̃ has a single topological invariant, its genus g, and we have the familiar picture where δ_1, ..., δ_g, γ_1, ..., γ_g form a canonical basis for H_1(C̃, Z). The introduction of C̃ greatly clarifies the study of abelian integrals. For example, in terms of a local holomorphic coordinate z on C̃, the rational differential ω = R(x, y) dx above is given by the expression where
4. Topology can be described as the study of continuous functions, and it is customary to use the word "map" or "mapping" when referring to such functions.
5. F. Nebeker and A. W. Tucker, "Lefschetz, Solomon," in Dictionary of Scientific Biography, Supplement II, 1991.
6. Algebraic Geometry and Topology, a Symposium in Honor of S. Lefschetz, edited by R. H. Fox, D. C. Spencer, and A. W. Tucker, Princeton University Press, 1957, pp. 1-49.
7. Ibid., note 6.
BIBLIOGRAPHY OF S. LEFSCHETZ 1912 Two theorems on conics. Ann. Math. 14:47-50. On the V33 with five nodes of the second species in S4. Bull. Am. Math. Soc. 18:384-86.
Double curves of surfaces projected from space of four dimensions. Bull. Am. Math. Soc. 19:70-74. 1913 On the existence of loci with given singularities. Trans. Am. Math. Soc. 14:23-41. (Doctoral
dissertation, Clark University, 1911.) On some topological properties of plane curves and a theorem of Möbius. Am. J. Math. 35:189-200. 1914 Geometry on ruled surfaces. Am. J. Math. 36:392-94. On
cubic surfaces and their nodes. Kans. Univ. Sci. Bull. 9:69-78. 1915 The equation of Picard-Fuchs for an algebraic surface with arbitrary singularities. Bull. Am. Math. Soc. 21:227-32. Note on the
n-dimensional cycles of an algebraic n-dimensional variety. R. C. Mat. Palermo 40:38-43. 1916 The arithmetic genus of an algebraic manifold immersed in another. Ann. Math. 17:197-212. Direct proof of De Moivre's formula. Am. Math. Mon. 23:366-68. On the residues of double integrals belonging to an algebraic surface. Quart. J. Pure Appl. Math. 47:333-43. 1917 Note on a problem in the theory of
algebraic manifolds. Kans. Univ. Sci. Bull. 10:3-9. Sur certains cycles à deux dimensions des surfaces algébriques. R. C. Accad. Lincei 26:228-34. Sur les intégrales multiples des variétés algébriques. C. R. Acad. Sci. Paris 164:850-53.
Sur les intégrales doubles des variétés algébriques. Annali Mat. 26: 227-60. 1919 Sur l'analysis situs des variétés algébriques. C. R. Acad. Sci. Paris 168:672-74. Sur les variétés abéliennes. C. R. Acad. Sci. Paris 168:758-61. On the real folds of Abelian varieties. Proc. Natl. Acad. Sci. U.S.A. 5:103-6. Real hypersurfaces contained in Abelian varieties. Proc.
Natl. Acad. Sci. U.S.A. 5:296-98. 1920 Algebraic surfaces, their cycles and integrals. Ann. Math. 21:225-28. (Correction, Ann. Math. 23:333.) 1921 Quelques remarques sur la multiplication complexe.
Comptes Rendus du Congrès International des Mathématiciens, Strasbourg, September 1920. Toulouse: É. Privat. Sur le théorème d'existence des fonctions abéliennes. R. C. Accad. Lincei 30:48-50. On
certain numerical invariants of algebraic varieties with application to Abelian varieties. Trans. Am. Math. Soc. 22:327-482. 1923 Continuous transformations of manifolds. Proc. Natl. Acad. Sci.
U.S.A. 9:90-93. Progrès récents dans la théorie des fonctions abéliennes. Bull. Sci. Math. 47:120-28. Sur les intégrales de seconde espèce des variétés algébriques. C. R. Acad. Sci. Paris
176:941-43. Report on curves traced on algebraic surfaces. Bull. Am. Math. Soc. 29:242-58. 1924 L'analysis situs et la géométrie algébrique. Collection de monographies publiée sous la direction de M.
Émile Borel. Paris: Gauthier-Villars. (New edition, 1950.)
Sur les intégrales multiples des variétés algébriques. J. Math. Pure Appl. 3:319-43. 1925 Intersections of complexes on manifolds. Proc. Natl. Acad. Sci. U.S.A.
11:287-89. Continuous transformations of manifolds. Proc. Natl. Acad. Sci. U.S.A. 11:290-92. 1926 Intersections and transformations of complexes and manifolds. Trans. Am. Math. Soc. 28:1-49.
Transformations of manifolds with a boundary. Proc. Natl. Acad. Sci. U.S.A. 12:737-39. 1927 Un théorème sur les fonctions abéliennes. In Memoriam N. I. Lobatschevskii, pp. 186-90. Kazan, USSR:
Glavnauka. Manifolds with a boundary and their transformations. Trans. Am. Math. Soc. 29:429-62, 848. Correspondences between algebraic curves. Ann. Math. 28:342-54. The residual set of a complex on
a manifold and related questions. Proc. Natl. Acad. Sci. U.S.A. 13:614-22, 805-7. On the functional independence of ratios of theta functions. Proc. Natl. Acad. Sci. U.S.A. 13:657-59. 1928
Transcendental theory; singular correspondences between algebraic curves; hyperelliptic surfaces and Abelian varieties. In Selected Topics in Algebraic Geometry, vol. 1, chapters 15-17, pp. 310-95.
Report of the Committee on Rational Transformations of the National Research Council, Washington. NRC Bulletin no. 63. Washington, D.C.: National Academy of Sciences. A theorem on correspondence on
algebraic curves. Am. J. Math. 50:159-66. Closed point sets on a manifold. Ann. Math. 29:232-54. 1929 Géométrie sur les surfaces et les variétés algébriques. Mémorial des Sciences Mathématiques,
Fasc. 40. Paris: Gauthier-Villars.
Duality relations in topology. Proc. Natl. Acad. Sci. U.S.A. 15:367-69. 1930 Topology. Colloquium Publications, vol. 12. New York: American Mathematical Society. Les
transformations continues des ensembles fermés et leurs points fixes. C. R. Acad. Sci. Paris 190:99-100. (With W. W. Flexner.) On the duality theorems for the Betti numbers of topological manifolds.
Proc. Natl. Acad. Sci. U.S.A. 16:530-33. On transformations of closed sets. Ann. Math. 31:271-80. 1931 On compact spaces. Ann. Math. 32:521-38. 1932 On certain properties of separable spaces. Proc.
Natl. Acad. Sci. U.S.A. 18:202-3. On separable spaces. Ann. Math. 33:525-37. Invariance absolue et invariance relative en géométrie algébrique. Rec. Math. (Mat. Sbornik) 39:97-102. 1933 On singular
chains and cycles. Bull. Am. Math. Soc. 39:124-29. (With J. H. C. Whitehead.) On analytical complexes. Trans. Am. Math. Soc. 35:510-17. On generalized manifolds. Am. J. Math. 55:469-504. 1934
Elementary One- and Two-Dimensional Topology. Princeton, N.J.: Princeton University. (Mimeograph.) On locally connected and related sets. Ann. Math. 35:118-29. 1935 Topology. Princeton, N.J.:
Princeton University. (Mimeograph.) Algebraicheskaia geometriia: metody, problemy, tendentsii. In Trudy Vtorogo Vsesoiuznogo Matematischeskogo S''ezda, Leningrad, 24-30 June 1934, vol. 1, pp. 337-49.
Leningrad-Moscow. Chain-deformations in topology. Duke Math. J. 1:1-18. Application of chain-deformations to critical points and extremals. Proc. Natl. Acad. Sci. U.S.A. 21:220-22.
A theorem on extremals. I, II. Proc. Natl. Acad. Sci. U.S.A. 21:272-74. On critical sets. Duke Math. J. 1:392-412. 1936 On locally-connected and related sets (second
paper). Duke Math. J. 2:435-42. Locally connected sets and their applications. Rec. Math. (Mat. Sbornik) 1:715-17. Sur les transformations des complexes en sphères. Fund. Math. 27:94-115.
Matematicheskaia deiatel'nost' v Prinstone. Usp. Mat. Nauk 1:271-73. 1937 Lectures on Algebraic Geometry. Part 1. 1936-37. Princeton, N.J.: Princeton University Press. (Planograph.) Algebraicheskaia
geometriia. Usp. Mat. Nauk 3:63-77. The role of algebra in topology. Bull. Am. Math. Soc. 43:345-59. On the fixed point formula. Ann. Math. 38:819-22. 1938 Lectures on Algebraic Geometry. Part 2.
1937-38. Princeton, N.J.: Princeton University Press. On chains of topological spaces. Ann. Math. 39:383-96. On locally connected sets and retracts. Proc. Natl. Acad. Sci. U.S.A. 24:392-93. Sur les
transformations des complexes en sphères (note complémentaire). Fund. Math. 31:4-14. Singular and continuous complexes, chains and cycles. Rec. Math. (Mat. Sbornik) 3:271-85. 1939 On the mapping of
abstract spaces on polytopes. Proc. Natl. Acad. Sci. U.S.A. 25:49-50. 1941 Abstract complexes. In Lectures in Topology; The University of Michigan Conference of 1940, pp. 1-28. Ann Arbor: University
of Michigan Press. 1942 Algebraic Topology. Colloquium Publications, vol. 27. New York: American Mathematical Society.
Topics in Topology. Annals of Mathematics Studies, no. 10. Princeton, N.J.: Princeton University Press. (A second printing, 1951.) Émile Picard (1856-1941): Obituary.
American Philosophical Society Yearbook 1942, pp. 363-65. 1943 N. Kryloff and N. Bogoliuboff. Introduction to Nonlinear Mechanics. Annals of Mathematics Studies, no. 11. Translation by S. Lefschetz.
Princeton, N.J.: Princeton University Press. Existence of periodic solutions for certain differential equations. Proc. Natl. Acad. Sci. U.S.A. 29:29-32. 1946 Lectures on Differential Equations. Annals
of Mathematics Studies, no. 14. Princeton, N.J.: Princeton University Press. 1949 Introduction to Topology. Princeton Mathematical Series, no. 11. Princeton, N.J.: Princeton University Press. A. A.
Andronow and C. E. Chaikin. Theory of Oscillations. English language edition ed. S. Lefschetz. Princeton, N.J.: Princeton University Press. Scientific research in the U.S.S.R.: Mathematics. Am. Acad.
Polit. Soc. Sci. Ann. 263:139-40. 1950 Contributions to the Theory of Nonlinear Oscillations, ed. S. Lefschetz. Annals of Mathematics Studies, no. 20. Princeton, N.J.: Princeton University Press. The
structure of mathematics. Am. Sci. 38:105-11. 1951 Numerical calculations in nonlinear mechanics. In Problems for the Numerical Analysis of the Future, pp. 10-12. National Bureau of Standards,
Applied Math. Series, no. 15. Washington, D.C.: U.S. Government Printing Office. 1952 Contributions to the Theory of Nonlinear Oscillations, vol. 2, ed. S. Lefschetz. Annals of Mathematics Studies,
no. 29. Princeton, N.J.: Princeton University Press.
Notes on differential equations. In Contributions to the Theory of Nonlinear Oscillations, vol. 2, pp. 67-73. 1953 Algebraic Geometry. Princeton Mathematical Series,
no. 18. Princeton, N.J.: Princeton University Press. Algunos trabajos recientes sobre ecuaciones diferenciales. In Memoria de Congreso Cientifico Mexicana U.N.A.M., Mexico, vol. 1, pp. 122-23. Las
grandes corrientes en las matemáticas del siglo XX. In Memoria de Congreso Cientifico Mexicana U.N.A.M., Mexico, vol. 1, pp. 206-11. 1954 Russian contributions to differential equations. In
Proceedings of the Symposium on Nonlinear Circuit Analysis, New York, 1953, pp. 68-74. New York: Polytechnic Institute of Brooklyn. Complete families of periodic solutions of differential equations. Comment. Math. Helv. 28:341-45. On Liénard's differential equation. In Wave Motion and Vibration Theory, pp. 149-53. American Mathematical Society Proceedings of Symposia in Applied Math., vol. 5. New
York: McGraw-Hill. 1956 On a theorem of Bendixson. Bol. Soc. Mat. Mexicana 1:13-27. Topology, 2nd ed. New York: Chelsea Publishing Company. (Cf. 1930,1.) 1957 On coincidences of transformations. Bol.
Soc. Mat. Mexicana 2:16-25. The ambiguous case in planar differential systems. Bol. Soc. Mat. Mexicana 2:63-74. Witold Hurewicz. In memoriam. Bull. Am. Math. Soc. 63:77-82. Sobre la modernización de la geometría. Rev. Mat. 1:1-11. Differential Equations: Geometric Theory. New York: Interscience. (Cf. 1962,1.) 1958 On the critical points of a class of differential equations. In Contributions to
the Theory of Nonlinear Oscillations, vol. 4, pp. 19-28. Princeton, N.J.: Princeton University Press.
Liapunov and stability in dynamical systems. Bol. Soc. Mat. Mexicana 3:25-39. The Stability Theory of Liapunov. Lecture Series no. 37. College Park, Md.: University
Institute for Fluid Dynamics and Applied Mathematics. 1960 Controls: An application to the direct method of Liapunov. Bol. Soc. Mat. Mexicana 5:139-43. Algunas consideraciones sobre las matemáticas
modernas. Rev. Unión Mat. Argent. 20:7-16. Resultados nuevos sobre casos críticos en ecuaciones diferenciales. Rev. Unión Mat. Argent. 20:122-24. 1961 The critical case in differential equations. Bol.
Soc. Mat. Mexicana 6:5-18. Geometricheskaia Teoriia Differentsial'nykh Uravnenii. Moskva: Izd-vo Inostrannoi Lit-ry. (Translation of 1957,5.) (With J. P. LaSalle.) Stability by Liapunov's Direct
Method. New York: Academic Press. (With J. P. LaSalle.) Recent Soviet contributions to ordinary differential equations and nonlinear mechanics. J. Math. Anal. Appl. 2:467-99. 1962 Differential
Equations: Geometric Theory, 2nd rev. ed. New York: Interscience. (Ed. with J. P. LaSalle.) Recent Soviet Contributions to Mathematics. New York: Macmillan. 1963 On indirect automatic controls. Trudy
Mezhdunarodnogo Simpoziuma po Nelineinym Kolebaniyam, pp. 23-24. Kiev: Izdat. Akad. Ukrain. SSR. Some mathematical considerations on nonlinear automatic controls. In Contributions to Differential
Equations, vol. 1, pp. 1-28. New York: Interscience. Elementos de Topologia. Ciudad de México: Universidad Nacional Autónoma de México.
(Ed. with J. P. LaSalle.) Proceedings of International Symposium on Nonlinear Differential Equations and Nonlinear Mechanics, Colorado Springs, 1961. New York:
Academic Press. 1964 Stability of Nonlinear Automatic Control Systems. New York: Academic Press. 1965 Liapunov stability and controls. SIAM J. Control Ser. A 3:1-6. Planar graphs and related topics.
Proc. Natl. Acad. Sci. U.S.A. 54:1763-65. Recent advances in the stability of nonlinear controls. SIAM Rev. 7:1-12. Some applications of topology to networks. In Proceedings of the Third Annual
Allerton Conference on Circuit and System Theory, pp. 1-6. Urbana: University of Illinois. 1966 Stability in Dynamics. William Pierson Field Engineering Lectures, March 3, 4, 10, 11, 1966. Princeton,
N.J.: Princeton University School of Engineering and Applied Science. 1967 Stability of Nonlinear Automatic Control Systems. Moscow: Izdat. "Mir." (A translation of 1964,1, in Russian.) 1968 On a
theorem of Bendixson. J. Diff. Equations, 4:66-101. A page of mathematical autobiography. Bull. Am. Math. Soc. 74:854-79. 1969 The Lurie problem on nonlinear controls. In Lectures in Differential
Equations, ed. A. K. Aziz, vol. 1, pp. 1-19. New York: Van Nostrand-Reinhold. The early development of algebraic geometry. Am. Math. Mon. 76:451-60. Luther Pfahler Eisenhart, 1876-1965: A
Biographical Memoir. Biographical Memoirs, vol. 40, pp. 69-90. Washington, D.C.: National Academy of Sciences. 1970 Reminiscences of a mathematical immigrant in the United States. Am. Math.
Mon. 77(4). 1971 The early development of algebraic topology. Bol. Soc. Brasileira Mat. 1:1-48. AUXILIARY REFERENCES R. C. Archibald, A Semicentennial History of the American Mathematical Society, I,
New York, 1938, pp. 236-40. Algebraic Geometry and Topology, a Symposium in Honor of S. Lefschetz , edited by R. H. Fox, D. C. Spencer, and A. W. Tucker, Princeton University Press, 1957, pp. 1-49.
Selected Papers by S. Lefschetz, including the book L'analysis situs, Chelsea Publishing Company, Bronx, New York, 1971. Sir William Hodge, "Solomon Lefschetz, 1884-1972," in Biographical Memoirs of
Fellows of the Royal Society, 19, London, 1973; reprinted in Bulletin of the London Mathematical Society, 6 (1974), pp. 198-217 and in The Lefschetz Centennial Conference, I, Mexico 1984, D.
Sunderaraman, ed., published as Contemporary Mathematics, 58.1, American Mathematical Society, 1986, pp. 27-46. J. K. Hale and J. P. La Salle, The Contribution of Solomon Lefschetz to the Study of
Differential Equations, typed manuscript prepared for Hodge in writing the above article. J. P. La Salle, "Memorial to Solomon Lefschetz," in IEEE Transactions on Automatic Control, vol. AC-18 (
1973), pp. 89-90. Lawrence Markus, "Solomon Lefschetz, an Appreciation in Memoriam," Bull. Am. Math. Soc., vol. 79 ( 1973), pp. 663-80. William Hodge, "Solomon Lefschetz, 1884-1972," in Yearbook of
the American Philosophical Society ( 1974), pp. 186-93. "Lefschetz, Solomon," in National Cyclopedia of American Biography, 56 ( 1975), James T. White and Co., Clifton, New Jersey, pp. 503-4. F.
Nebeker and A. W. Tucker, "Lefschetz, Solomon" in Dictionary of Scientific Biography, Supplement II, 1991.
PHYS-375 Quantum Mechanics
The main emphasis is on wave mechanics and its application to atoms and molecules. One-electron atoms are discussed in detail. Additional topics discussed are electronic spin and atomic spectra and
structure. Nuclei, the solid state, and fundamental particles are also considered. Prerequisite: Physics 306 and Mathematics 231. (Concurrent registration in Mathematics 231 is allowed with
permission of the Instructor.) A course including linear algebra is recommended. Not offered 2012-2013.
Original URL: http://www.theregister.co.uk/2013/03/25/lightspeed_variable/
Lightspeed variable say intellectuels français
Vacuum isn't always empty, French boffins argue
Posted in Science, 25th March 2013 22:51 GMT
French researchers have proposed a mechanism by which the observed speed of light might not be the constant we think it is: it could, in fact, vary at the attosecond level.
It's not, however, time to reach for the “the old boffins were wrong!” template, because their reasoning is actually elegant and simple: we know that vacuum isn't really vacuum, therefore the
condition for a universally-constant lightspeed (that is, the speed of light in a vacuum) never actually exists.
The study, published in European Physical Journal D and also available on Arxiv here, also takes a shot at explaining three fundamental constants, the permittivity of vacuum, the permeability of
vacuum, and the speed of light. Far from “escaping any physical explanation”, the study states, these constants “emerge naturally from the quantum theory”.
Let's tackle the speed of light: James Clerk Maxwell first imagined that light-speed in a vacuum is a constant throughout the universe. However, we've known for some time that if you look at vacuum at a quantum level, it doesn't stay a vacuum: pairs of particles and antiparticles (quarks and antiquarks, electrons and positrons) arise spontaneously and annihilate each other.
That "quantum noise" is a spooky property that looks like the Universe is watching us back: it preserves Heisenberg's uncertainty principle, since even the coldest vacuum of space can't be perfectly
described as being empty; and vacuum fluctuations can even interfere with our most sensitive physical instruments.
The study, by Marcel Urban of France's University of Paris-Sud in Orsay, suggests that the energy fluctuations in vacuum could also affect the speed of light, since there's a statistical chance that
light will interact with these “virtual particles”. As noted in publisher Wiley's announcement, “The fluctuations of the photon propagation time are estimated to be on the order of 50 attoseconds per
square root metre* of crossed vacuum, which might be testable with the help of new ultra-fast lasers.”
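The quoted figure implies a simple scaling law: the expected spread in photon propagation time grows with the square root of the distance crossed. A minimal sketch of that arithmetic (the function name and the one-kilometre example are ours; the 50 attoseconds per root-metre figure is the article's, and this is an order-of-magnitude illustration, not an endorsed physical model):

```python
# Estimate the propagation-time jitter implied by the article's figure:
# sigma(L) = 50 attoseconds * sqrt(L / 1 m).
import math

SIGMA_PER_SQRT_METRE = 50e-18  # seconds per sqrt(metre), from the article


def propagation_jitter(distance_m: float) -> float:
    """Return the estimated propagation-time spread (seconds) over a distance in metres."""
    return SIGMA_PER_SQRT_METRE * math.sqrt(distance_m)


# Over one kilometre the spread is 50 as * sqrt(1000), roughly 1.6 femtoseconds --
# around the resolution frontier of ultra-fast laser metrology.
print(propagation_jitter(1000.0))
```

At laboratory scales the effect is tiny, which is why the authors point to new ultra-fast lasers as the plausible test.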
The researchers also say their hypothesis could provide a prediction for the number of ephemeral particles in a given amount of space: the “ground state” of the unperturbed vacuum contains “a finite
density of charged ephemeral fermions antifermions pairs” in their model. ®
Correction: The original version of this story stated 50 attoseconds per square metre, instead of per square root metre. ® | {"url":"http://www.theregister.co.uk/Print/2013/03/25/lightspeed_variable/","timestamp":"2014-04-21T07:05:16Z","content_type":null,"content_length":"8626","record_id":"<urn:uuid:9267d571-8c24-4fa2-bbf7-7fa9934daee1>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Determine the type of boundary line and shading for the graph of the inequality y > 2x + 4 A) Dashed line with shading on the side that includes the origin. B) Solid line with shading on the side
that does not include the origin. C) Dashed line with shading on the side that does not include the origin. D) Solid line with shading on the side that includes the origin.
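The standard way to settle this: substitute the origin into the inequality (0 > 2·0 + 4 is false, so shade the side that does not include the origin), and note that a strict inequality gives a dashed boundary line, i.e. choice C. A small sketch of that test-point check (the helper name is our own illustration, not from the thread):

```python
# Test-point check for the inequality y > m*x + b: if the origin satisfies
# it, shade the side containing the origin; otherwise shade the other side.
def origin_satisfies(m: float, b: float) -> bool:
    """Does the origin (0, 0) satisfy y > m*x + b?"""
    return 0 > m * 0 + b


strict = True  # the inequality uses '>' rather than '>=', so the line is dashed
boundary = "dashed" if strict else "solid"
side = "includes" if origin_satisfies(2, 4) else "does not include"
# prints: dashed line, shaded side does not include the origin
print(f"{boundary} line, shaded side {side} the origin")
```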
Sargent, GA Precalculus Tutor
Find a Sargent, GA Precalculus Tutor
...When in college, I was an editor of the school newspaper. Professionally, I have written many journal articles and user manuals. I would like to help you in any area that you feel needs
27 Subjects: including precalculus, English, physics, French
...I have taught a 6-8th Grade Study Skills class at Fayette Middle School. I have a Master's degree, and am certified in the State of GA for Special Education Needs, as demonstrated on my
certificate. I currently work with students with Asperger's, ADHD, etc.
48 Subjects: including precalculus, English, reading, writing
...I received my master's in science with a focus on educational courses. I am gifted certified by the state of Georgia and am also a certified teacher with 9+ years of successful teaching. I have
taught in the middle/elementary grade levels.
25 Subjects: including precalculus, reading, algebra 1, dyslexia
...I hold myself to a high standard and ask for feedback from students and parents. I never bill for a tutoring session if the student or parent is not completely satisfied. While I have a 24
hour cancellation policy, I often provide make-up sessions.
8 Subjects: including precalculus, statistics, trigonometry, algebra 2
...This enjoyment of helping others inspired me to continue tutoring in my spare time. Recently I have tutored students in various math courses ranging from Basic Math to Algebra 2. I am very
patient and passionate about what I do.
9 Subjects: including precalculus, algebra 1, ACT Math, algebra 2
P. Naumov, M. Protzman, Equilibria Interchangeability in Cellular Games, Proceedings of Eleventh Conference on Logic and the Foundations of Game and Decision Theory (LOFT), Bergen, Norway, July 2014
(to appear)
J. Kane, P. Naumov, The Ryōan-ji Axiom for Common Knowledge on Hypergraphs, Synthese (to appear)
K. Harjes, P. Naumov, Functional Dependence in Strategic Games, Notre Dame Journal of Formal Logic, (to appear)
J. Kane, P. Naumov, Symmetry in Information Flow, Annals of Pure and Applied Logic, 165(1), pp. 253-265, 2014
P. Naumov, I. Simonelli, Strict Equilibria Interchangeability in Multi-Player Zero-Sum Games, Journal of Logic and Computation (to appear)
K. Harjes, P. Naumov, Cellular Games, Nash Equilibria, and Fibonacci Numbers, Proceedings of 4th International Workshop on Logic, Rationality, and Interaction, Hangzhou, China, October 9-12, 2013,
pp. 149-161
J. Kane, P. Naumov, Symmetries and Epistemic Reasoning, Proceedings of 14th International Workshop on Computational Logic in Multi-Agent Systems, Coruña, Spain, September 16-18, 2013, pp. 190-205
P. Naumov, B. Nicholls, On Interchangeability of Nash Equilibria in Multi-Player Strategic Games, Synthese, v. 190, Issue 1 Supplement, pp 57-78, 2013
P. Naumov, B. Nicholls, Rationally Functional Dependence, Journal of Philosophical Logic (to appear)
K. Harjes, P. Naumov, Functional Dependence in Strategic Games, In Fabio Mogavero, Aniello Murano and Moshe Y. Vardi: Proceedings of 1st International Workshop on Strategic Reasoning (SR 2013), Rome,
Italy, March 16-17, 2013, Electronic Proceedings in Theoretical Computer Science 112, pp. 9-15.
J. Kane, P. Naumov, Epistemic Logic for Communication Chains, Proceedings of the 14th Conference on Theoretical Aspects of Rationality and Knowledge (TARK `13), pp. 131-137, Chennai, India, January
P. Naumov, B. Nicholls, R.E. Axiomatization of Conditional Independence, Proceedings of the 14th Conference on Theoretical Aspects of Rationality and Knowledge (TARK `13), pp. 148-155, Chennai,
India, January 2013
P. Naumov, Independence in Information Spaces, Studia Logica, v. 100, pp. 953-973, 2012
S. Holbrook, P. Naumov, Fault Tolerance in Belief Formation Networks, 13th European Conference on Logics in Artificial Intelligence (JELIA `12), September 26-28, 2012, Toulouse, France, pp. 267-280,
Springer 2012
S. Miner More, P. Naumov, Calculus of Cooperation and Game-Based Reasoning about Protocol Privacy, ACM Transactions on Computational Logic, 13(3):22 (2012)
P. Naumov, I. Simonelli, Strict Equilibria Interchangeability in Multi-Player Zero-Sum Games, 10th Conference on Logic and the Foundations of Game and Decision Theory (LOFT `12), June 18-20, 2012,
University of Sevilla, Spain
P. Naumov, B. Nicholls, Rationally Functional Dependence, 10th Conference on Logic and the Foundations of Game and Decision Theory (LOFT `12), June 18-20, 2012, University of Sevilla, Spain
S. Miner More, P. Naumov, Hypergraphs of Multiparty Secrets, Annals of Mathematics and Artificial Intelligence, 62(1-2): 79-101, 2011
S. Miner More, P. Naumov, Logic of Secrets in Collaboration Networks, Annals of Pure and Applied Logic, 162(12):959-969, 2011.
P. Naumov, B. Nicholls, Game Semantics for the Geiger-Paz-Pearl Axioms of Independence, Proceedings of the Third International Workshop on Logic, Rationality and Interaction (LORI III), Guangzhou
(Canton), China, October 2011, LNAI 6953, pp. 220-232, Springer 2011
S. Miner More, P. Naumov, B. Sapp, Concurrency Semantics for the Geiger-Paz-Pearl Axioms of Independence, in 20th Conference on Computer Science Logic (CSL 2011), Bergen, Norway, pp. 443-457,
September 2011
S. Miner More, P. Naumov, Functional Dependence on Hypergraphs of Multiparty Secrets, in 12th International Workshop on Computational Logic in Multi-Agent Systems (CLIMA XII), Barcelona, Spain, July
2011, LNAI 6814, pp. 29-40, Springer 2011
S. Miner More, P. Naumov, B. Nicholls, A. Yang, A Ternary Knowledge Relation on Secrets, in Krzysztof R. Apt (Ed.): Proceedings of the 13th Conference on Theoretical Aspects of Rationality and
Knowledge (TARK-2011), Groningen, The Netherlands, July 2011. ACM 2011, pp. 46-54
M. Donders, S. Miner More, P. Naumov, Information Flow on Directed Acyclic Graphs (full version), in Lev D. Beklemishev, Ruy de Queiroz (Eds.): Logic, Language, Information and Computation - 18th
International Workshop, WoLLIC 2011, Philadelphia, PA, USA, May 2011. Proceedings. LNCS 6642 Springer 2011, pp. 95-109
R. Kelvey, S. Miner More, P. Naumov, and B. Sapp, Independence and Functional Dependence Relations on Secrets, 12th International Conference on the Principles of Knowledge Representation and
Reasoning, (KR '10), Toronto, Canada, May 2010, pp. 528-533
S. Miner More, P. Naumov, Hypergraphs of Multiparty Secrets, 11th International Workshop on Computational Logic in Multi-Agent Systems, CLIMA XI (Lisbon, Portugal), LNAI 6245, pp. 15-32. Springer,
S. Miner More, P. Naumov, An Independence Relation for Sets of Secrets, Studia Logica, v.94(1):73-85, 2010
S. Miner More, P. Naumov, On Interdependence of Secrets in Collaboration Networks, 12th Conference on Theoretical Aspects of Rationality and Knowledge (TARK '09), July 2009, Stanford University, pp.
S. Miner More, P. Naumov, An Independence Relation for Sets of Secrets, 16th Workshop on Logic, Language, Information and Computation (WoLLIC '09), Tokyo, Japan, June 2009, pp. 296-304
Pavel Naumov, On Meta Complexity of Propositional Formulas and Propositional Proofs, Archive for Mathematical Logic, pp. 35-52, v. 47, n. 1, 2008
Pavel Naumov, Upper Bounds on Complexity of Frege Proofs with Limited Use of Certain Schemata, Archive for Mathematical Logic, pp. 432-446, v. 45, 2006
Pavel Naumov, On Modal Logic of Deductive Closure, Annals of Pure and Applied Logic, pp. 218-224, v. 141, n.1-2, 2006
Pavel Naumov, Logic of Subtyping, Theoretical Computer Science, pp. 167-185, v. 357, n.1-3, 2006
Pavel Naumov, On Modal Logics of Partial Recursive Functions, Studia Logica, pp. 295-309, v. 81, 2005
Pavel Naumov, On Modal Logic of Deductive Closure, The Bulletin of Symbolic Logic, v. 11, n. 2, pp. 288-289, 2005
Pavel Naumov, On Modal Logics of Computable Functions, Annual Meeting of the Association for Symbolic Logic, Pittsburgh, May 2004, The Bulletin of Symbolic Logic, v. 11, n. 1, p. 113, 2005
Pavel Naumov, On Modal Logic of Deductive Closure, Logic Colloquium 2004, ASL European Summer Meeting, p. 129, Turin, Italy, July 2004
Pavel Naumov, An extension of the classical propositional logic by type constructors, Winter Meeting of the Association for Symbolic Logic, Baltimore, January 2003, The Bulletin of Symbolic Logic, v.
9, n. 2, pp. 254-255, 2003
P. Naumov, M.-O. Stehr, and J. Meseguer, The HOL/NuPRL Proof Translator: A Practical Approach to Formal Interoperability, The 14th International Conference on Theorem Proving in Higher Order
Logics, Edinburgh, Scotland, September 2001, pp. 329-345, Springer, Lecture Notes in Computer Science
M.-O. Stehr, P. Naumov, and J. Meseguer, A Proof-Theoretic Approach to HOL-Nuprl Connection with Applications to Proof Translation, 15th International Workshop on Algebraic Development Techniques/
General Workshop of the Common Framework Initiative, Genova, Italy, April 2001, pp. 329-345
Pavel Naumov, Formalization of Isabelle Meta Logic in NuPRL, Supplemental proceedings of The 13th International Conference on Theorem Proving in Higher Order Logics, Portland, Oregon, 2000
R. Constable, P. Jackson, P. Naumov, and J. Uribe, Constructively Formalizing Automata Theory, in Proof, Language, and Interaction: Essays in Honour of Robin Milner, MIT Press, 2000
Pavel Naumov, Importing Isabelle Formal Mathematics into NuPRL, Theorem Proving in Higher Order Logics: Emerging Trends 1999, Supplemental proceedings of The 12th International Conference on Theorem
Proving in Higher Order Logics, Nice, France, 1999
Pavel Naumov, Undecidability of Second Order Provability Logic with Witness Comparison, Moscow University Mathematics Bulletin, v. 48, 1993, n. 3, pp. 13-15. Vestnik Moskovskogo Universiteta. Seriya
I. Matematika, Mekhanika, v. 48, n. 3, 1993, pp. 14-17 (Russian)
Pavel Naumov, Undecidability of Goedel-Loeb Logic with Quantifiers over Propositional Variables, Moscow University Mathematics Bulletin, v. 48, 1993, n. 2, pp. 11-13. Vestnik Moskovskogo
Universiteta. Seriya I. Matematika, Mekhanika, v. 48, n. 2, 1993, pp. 13-16 (Russian)
Pavel Naumov, Modal logics that are conservative over intuitionistic predicate calculus, Moscow University Mathematics Bulletin, v. 46, n. 6, 1991, pp. 58-61. Vestnik Moskovskogo Universiteta. Seriya
I. Matematika, Mekhanika, v. 46, n. 6, 1991, pp. 86-90 (Russian)
Waukegan Calculus Tutor
Find a Waukegan Calculus Tutor
...Algebra 2 builds on the skills learned in Algebra 1 and digs further into variable mathematics. Topics include: functions and graphing (linear, quadratic, logarithmic, exponential), complex
numbers, systems of equations and inequalities, and relations. This can also include beginning trigonometry and probability and statistics.
11 Subjects: including calculus, geometry, algebra 1, algebra 2
...I teach math in the summer as well, for students who want to continue / catch up / excel in their math skills, or prepare themselves for ACT Math. I really enjoy teaching math, and that is what
I do best. I have tutored many students in the basic Algebra course. I have worked with students who have difficulty understanding the basic foundations of the course.
11 Subjects: including calculus, geometry, algebra 2, trigonometry
I taught math in high school and college, so I know what is expected of math students, and how to explain it. Math is not so difficult if it is explained well. And I can do that.
9 Subjects: including calculus, physics, geometry, algebra 1
...Many students who hated math started liking it after my tutoring. That is my specialty. I provide guidance based on their attitude interest and ability.
12 Subjects: including calculus, geometry, statistics, algebra 1
...I have tutored differential equations to college students for about 10 years. Taking into account that this mathematical subject serves as a foundation and the powerful analytical tool for many
scientific disciplines including physics, engineering, chemical and biological kinetics, I concentrate...
8 Subjects: including calculus, geometry, algebra 1, algebra 2
Patterns in Haskell revisited
by Alex on March 6, 2012
A while back I came up with this way of representing musical patterns as pure functions in Haskell:
data Pattern a = Pattern {at :: Int -> [a], period :: Int}
These patterns can be composed nicely with pattern combinators, creating strange polyrhythmic structures, see my
earlier post
for info.
This turned out just great for representing acid techno, see for example
this video
of people dancing to Dave and I. I was using Tidal which uses a representation similar to the above (and Dave was using his lovely
However lately I’ve been wanting to make music other than acid techno, in particular in preparation for a
performance with Hester Reeve
, a Live Artist.
After a lot of fiddling about, I seem to be settling on this:
data Pattern a = Atom {event :: a}
               | Arc {pattern :: Pattern a,
                      onset :: Double,
                      duration :: Maybe Double
                     }
               | Cycle {patterns :: [Pattern a]}
               | Signal {at :: Double -> Pattern a}
I’ve got rid of periods, now patterns always have a relative period of 1. However they can be scaled down by being enclosed in an Arc pattern, and given a floating point duration and time phase
offset (which in music parlance is called an onset), which should be less than 1. A Cycle pattern consists of a number of Arcs, which may overlap in time.
The end result is a nice representation of cyclic patterns within patterns, with floating point time so that events don't have to occur within the fixed time grids of the acid techno I've been making.
It is also still possible to represent a pattern as a function, which is what a Signal is in the above.
The Functor definition is straightforward:
instance Functor Pattern where
fmap f p@(Atom {event = a}) = p {event = f a}
fmap f p@(Arc {pattern = p'}) = p {pattern = fmap f p'}
fmap f p@(Cycle {patterns = ps}) = p {patterns = fmap (fmap f) ps}
fmap f p@(Signal _) = p {at = (fmap f) . (at p)}
The Applicative functor definition isn’t so bad either:
instance Applicative Pattern where
pure = Atom
Atom f <*> xs = f <$> xs
fs <*> (Atom x) = fmap (\f -> f x) fs
(Cycle fs) <*> xs = Cycle $ map (<*> xs) fs
fs <*> (Cycle xs) = Cycle $ map (fs <*>) xs
fs@(Arc {onset = o}) <*> s@(Signal {}) = fs <*> (at s o)
fs@(Arc {}) <*> xs@(Arc {}) | isIn fs xs = fs {pattern = (pattern fs) <*> (pattern xs)}
| otherwise = Cycle []
fs@(Signal {}) <*> xs = Signal $ (<*> xs) . (at fs)
fs <*> xs@(Signal {}) = Signal $ (fs <*>) . (at xs)
Here’s how to turn a list into a pattern:
class Patternable p where
toPattern :: p a -> Pattern a
instance Patternable [] where
  toPattern xs = Cycle ps
    where ps = map (\x -> Arc {pattern = Atom $ xs !! x,
                               onset = (fromIntegral x) / (fromIntegral $ length xs),
                               duration = Nothing
                              }
                   ) [0 .. (length xs) - 1]
And here’s how to make a Signal pattern of a sinewave:
-- sinewave from -1 to 1
sinewave :: Pattern Double
sinewave = Signal {at = f}
  where f x = Arc {pattern = Atom $ (sin . (pi * 2 *)) x,
                   onset = mod' x 1,
                   duration = Nothing
                  }
-- sinewave from 0 to 1
sinewave1 :: Pattern Double
sinewave1 = fmap ((/ 2) . (+ 1)) sinewave
Finally, here’s how to multiply a Cycle of discrete events by a Signal, thanks to our Applicative definition:
(*) <$> toPattern [1 .. 16] <*> sinewave
Well this may all be rather trivial, but somehow I find this really exciting, that continuous functions can be multiplied by (potentially) complex discrete, hierarchical patterns with such terseness.
Furthermore that time can be manipulated outside of fixed grids. I've been putting off making sounds from this, to try not to prejudice possibilities, but am really looking forward to experimenting
with it live in performance.
It’s probably not of much use to anyone else at the moment, but the code is over here.
2 thoughts on “Patterns in Haskell revisited”
1. Cool stuff. I’d note that both `length xs` and `xs !! x` are O(n), so that makes `toPattern :: [a] -> Pattern a` O(n^2), but it doesn’t have to be. You can look the length up once, and use
`zipWith` to iterate through your elements
instance Patternable [] where
  toPattern xs = Cycle ps
    where
      n = length xs
      ps = zipWith mkArc xs [0..]
      mkArc x i = Arc (Atom x) ((fromIntegral i) / (fromIntegral n)) Nothing
2. Great tip, thanks Noah!
Predictors, responses and residuals: What really needs to be normally distributed?
February 18, 2013
By petrkeil
Many scientists are concerned about normality or non-normality of variables in statistical analyses. The following and similar sentiments are often expressed, published or taught:
• "If you want to do statistics, then everything needs to be normally distributed."
• "We normalized our data in order to meet the assumption of normality."
• "We log-transformed our data as they had strongly skewed distribution."
• "After we fitted the model we checked homoscedasticity of the residuals."
• "We used non-parametric test as our data did not meet the assumption of normality."
And so on. I know, it is more complex than that, but still, it seems like normal distribution is what people want to see everywhere, and that having things normally distributed opens the door to the
clean and powerful statistics and strong results. Many people that I know regularly check if their data are normally distributed prior to the analysis, and then they either try to "normalize" them by
e.g. log-transformation, or they adjust the statistical technique accordingly based on the frequency distribution of their data. Here I will examine this more closely and I will show that there may
be less assumptions of normality than one might think.
Example of some widely used models
The most general way to find out what needs to be the distribution of your data and residuals is to look at the complete and formal definition of the model that you are fitting - I mean both its
deterministic and its stochastic components. Take the example of Normal linear regression of response y against predictor x (function lm(y~x) or glm(y~x, family="gaussian") in R):

y_i = β0 + β1 * x_i + ε_i,    ε_i ~ Normal(0, σ²)

which can be rewritten to

y_i ~ Normal(β0 + β1 * x_i, σ²),    i = 1 … N

where σ² is the variance, β0 and β1 are the intercept and slope respectively, and N is the number of observations. Strictly speaking, β0, β1 and σ² also have their own distributions (e.g. normal), but let's not go there for now. If you write such a definition of the model, you have it all in one place, all assumptions are written here. And you can see clearly that the only assumption of "normality" in this model is that the residuals (or "errors" ε_i) should be normally distributed. There is no assumption about the distribution of the predictor x or the response variable y. So there is no justification for "normalizing" any of the variables prior to fitting the model. Your predictors and/or response variables can have distributions skewed like hell and yet all is fine as long as the residuals are normal. The same logic is valid for ANOVA - a skewed and apparently non-normal distribution of the response variable does not necessarily imply that one has to use the non-parametric Kruskal-Wallis test. It is only the distribution of ANOVA residuals which needs to meet the assumption of normality.
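To make this concrete, here is a minimal simulation sketch (in Python rather than the post's R, with entirely made-up numbers): the predictor is drawn from a strongly skewed log-normal distribution, the errors are simulated as Normal(0, 1), and an ordinary least-squares fit nevertheless recovers approximately normal residuals.

```python
import random
import statistics

random.seed(0)
N = 10_000

# Strongly skewed predictor: the model makes no assumption about its shape.
x = [random.lognormvariate(0.0, 1.0) for _ in range(N)]

# True model: y = 2 + 3*x + Normal(0, 1) errors.
y = [2.0 + 3.0 * xi + random.gauss(0.0, 1.0) for xi in x]

# Ordinary least squares for simple regression, in closed form.
mx, my = statistics.fmean(x), statistics.fmean(y)
b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
b0 = my - b1 * mx
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]

print(f"slope ~ {b1:.2f}, intercept ~ {b0:.2f}")          # close to 3 and 2
print(f"residual sd ~ {statistics.stdev(residuals):.2f}")  # close to the simulated sd of 1
```

The skewed x does nothing to the residuals; they come out close to Normal(0, 1) because that is how the errors were generated, and that is the only distributional assumption the model makes.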
Now take an example of another widely used model - the log-linear Poisson regression of count observations y against predictor x (function glm(y~x, family="poisson") in R):

y_i ~ Poisson(λ_i),    log(λ_i) = β0 + β1 * x_i,    i = 1 … N

where β0 and β1 are model parameters, and N is the number of observations. This is the complete description of the model. All of its assumptions are laid out here. There is no other hidden assumption somewhere in the literature. And you can see clearly: nothing has to be normally distributed here. Not the predictors, nor the response, nor the residuals.
Finally, let's examine another hugely popular statistical model, the logistic regression of a binary response y (or a proportion) against predictor x (function glm(y~x, family="binomial") in R):

y_i ~ Bernoulli(p_i),    logit(p_i) = log( p_i / (1 - p_i) ) = β0 + β1 * x_i,    i = 1 … N

Again, the complete model is here, with all its assumptions, and there is not a single normally distributed thing.
This is hardly surprising to anyone already familiar with the inner structure of Generalized Linear Models (GLMs) - a category that encompasses all of the above-mentioned models. It is why GLMs were invented in the first place, i.e. in order to cope with non-normal residual ("error") structures. It is well known that Poisson and logistic regressions have heteroscedastic residuals (Kutner et al.
2005). Moreover, it is also known that exactly for this reason it is problematic to use the residuals for model diagnostics or to judge if the model is appropriate (Kutner et al. 2005). It has been
advised to transform the residuals to something called Pearson residuals (in order to normalize them?), but I still do not understand why one would want to do that - as I mentioned, there is no
assumption of normality anywhere in Poisson or logistic regressions. Yet still, there is the R function plot.glm() which one can apply to any fitted glm object and it will pop out the residual
diagnostic plots. What are these good for in the case of logistic and Poisson regressions?
Why do people still normalize data?
Another puzzling issue is why do people still tend to "normalize" their variables (both predictors and responses) prior to model fitting. Why this practice emerged and prevailed even when there is no
assumption that would call for it? I have several theories for this: ignorance, tendency to follow statistical cookbooks, error propagation etc.
Two of the explanations seem more plausible: First, people normalize the data in order to linearize relationships. For example, by log-transforming the predictor one can fit an exponential function
using the ordinary least squares machinery. This may seem sort of ok, but then why not specify the non-linear relationship directly in the model (through an appropriate link function for instance)?
Also, the practice of log-transforming of the response can introduce serious artifacts, e.g. in the case of count data with zero counts (O'Hara & Kotze 2010).
A second plausible reason for the "normalizing" practice was suggested by my colleague Katherine Mertes-Schwartz: Maybe it is because researchers try to resolve the problem that their data were
collected in a highly clumped and uneven way. In other words, it is very common that one works with data that have high number of observations aggregated at particular part of a gradient, while
another part of the gradient is relatively under-represented. This results in skewed distributions. Transforming such distributions then results in a seemingly regular spread of observations along
the gradient and elimination of outliers. This can be actually done with a good intention in mind. However, it is also fundamentally incorrect.
I would like to finish this post with a couple of summarizing recommendations for anyone who is troubled by skewed frequency distributions of the data, by heteroscedastic residuals or by violations of the
assumption of normality.
• If you are unsure about the assumptions of the predefined model that you are just about to use, try to write down its complete formal definition (especially its stochastic part). Try to
understand what the model actually really does, dissect it and have a look at its components.
• Learn your probability distributions. Bolker (2008) offers a brilliant bestiary of the most commonly used distributions from which you can learn what are they good for and in which cases do they
emerge. My opinion is that such knowledge is an alternative and reliable way to judge if the model is appropriate.
• Learn to specify your models in a Bayesian way, e.g. in BUGS language (used by OpenBUGS or JAGS). This will give you very clear insight into how the models work and what are their assumptions.
• Think twice before transforming your data in order to normalize their distributions (e.g. see O'Hara & Kotze 2010). The only semi-valid reason for transforming your data is to linearize the
relationships. However, make sure that you really need to do that as it is usually possible to specify the non-linear relationship directly within the model (e.g. through the log link function in
the Poisson regression).
I am aware that my understanding may be limited. Hence, if you have any comments, opinions or corrections, do not hesitate to express them here. Thank you!
Bolker, B. (2008) Ecological models and data in R. Princeton University Press.
Kutner, M.H. et al. (2005) Applied linear statistical models. Fifth edition. McGraw & Hill.
O'Hara & Kotze (2010) Do not log-transform count data. Methods in Ecology and Evolution, 1, 118-122.
for the author, please follow the link and comment on his blog:
Are you cereal? » R
Compose area formula from image
March 25th 2013, 06:41 AM #1
Jan 2013
Compose area formula from image
Hello everyone,
The prelude to an optimalisation problem with areas is the composition of the area formula. However, I have come across an image from which I just cannot seem to create the formula, however I
have got the feeling that it should be incredibly easy. Could someone please take a look?
Someone places n of these boxes upon one another (n does not have to be an integer) to create a block with volume 1000.
Prove that the area formula is:
Thanks in advance,
Tom Koolen
Re: Compose area formula from image
Area formula for what area? The total observable surface area? Or the surface area that can be seen in your picture?
And what have you done on this? Have, you, for example, calculated what n must be?
Re: Compose area formula from image
The total observable surface area. And I have tried my best on this problem, but there is no work that I can show except for these two equations:
Area = n(10r^2+60r)
Volume = n(25r^2) = 1000
And I honestly have no clue as to what I have to do with two variables in this problem.
Re: Compose area formula from image
For the volume when n blocks are placed one above the other.
Volume V = area of base * height
= 5r^2 * 5n = 25r^2 n
For the lateral surface area = perimeter of the base * height
= 2 ( 5r + r ) * 5n = 60 r n.
The area of the base and the top = 2 ( 5r *r) = 10r^2
Now you can proceed to solve the question
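For reference, here is how the pieces in the replies fit together (assuming, as the replies do, a $5r \times r$ base and a box height of 5 read off the figure; the displayed target formula was lost from the first post, so the result below is an inference, not a quote):

```latex
% Volume constraint for n stacked boxes:
V = (5r \cdot r) \cdot 5n = 25 r^2 n = 1000
  \quad\Longrightarrow\quad n = \frac{40}{r^2}
% Observable surface: base and top once, plus the lateral faces of the stack:
A = 2(5r \cdot r) + 2(5r + r) \cdot 5n = 10 r^2 + 60 r n
  = 10 r^2 + \frac{2400}{r}
```

This single-variable form $A(r) = 10r^2 + 2400/r$ is presumably what the optimisation step is after; note the lateral term is $60rn$, not $60r^2n$.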
beginner cauchy integral equations
all circles positively oriented. Use Cauchy's Integration Formulas to compute each of the following integrals. HINT: use partial fractions where appropriate.
a) $\int_{|z-2|=2} \frac{cosz}{(z-3)^2(z+1)}dz$
b) $\int_{|z+2|=1} \frac{sinz}{z^2(z-1)^2}dz$
c) $\int_{|z+2|=2} \frac{ze^{zt}}{(z+1)^3}dz$ , t>0
The Cauchy integral formula : $\int_C \frac{f(z)dz}{z-z_o}=2\pi i f(z_o)$
NOW FOR c) in the denominator -1 is interior to C so i go as follows,
$\int_{|z+2|=2} \frac{ze^{zt}}{(z+1)^3}dz=\int_{|z+2|=2} \frac{ze^{zt}}{(z-(-1))^{2+1}}dz$ so n=2 in the extension of Cauchy's integral formula given as $\int_C \frac{f(z)dz}{(z-z_0)^{n+1}}=\frac{2\pi i}{n!}f^{(n)}(z_0)$
so i let f(z) = $ze^{zt}$ then f'(z) = $e^{zt}+zte^{zt}$ and f ''(z) = $2te^{zt}+zt^2e^{zt} = e^{zt}(2t+zt^2)$
hence f '' (-1) = $e^{-t}(2t-t^2)$and then by the extended Cauchy integral formula i get
$\int_{|z+2|=2} \frac{ze^{zt}}{(z+1)^3}dz = \frac{2\pi i}{2} e^{-t}(2t-t^2)= \pi i e^{-t}(2t-t^2)$ , is this correct ?
OK, things i don't know :
Now in a) neither 3 nor -1 are interior to C so neither of them are my Zo point. The same goes for b) in the denominator neither 0 nor 1 are interior to C, so where do i find Zo ? Now if the
answer is to use partial fraction decomposition, my attempt :
for a) $\frac{cosz}{(z-3)^2(z+1)}= \frac{A}{z-3}+\frac{B}{(z-3)^2}+\frac{C}{z+1}$
$cosz = A(z-3)(z+1)+B(z+1)+C(z-3)^2$
$cosz=z^2(A+C)+z(-2A+B-6C)+(-3A+B+9C)$ and now what ? How do i equate anything??? I have the same confusion for b). I appreciate any help.
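One standard way past the sticking point (not from the thread, so treat it as a hint rather than the poster's method): the partial-fraction decomposition applies to the rational factor alone, with $\cos z$ kept outside, since $\cos z$ is not a polynomial and cannot be matched coefficient-by-coefficient:

```latex
\frac{1}{(z-3)^2(z+1)} = \frac{A}{z-3} + \frac{B}{(z-3)^2} + \frac{C}{z+1}
\quad\Longrightarrow\quad
1 = A(z-3)(z+1) + B(z+1) + C(z-3)^2
% z = 3  gives  1 = 4B,  so  B = 1/4
% z = -1 gives  1 = 16C, so  C = 1/16
% the z^2 coefficient gives A + C = 0, so A = -1/16
```

Each resulting term is then a Cauchy-formula integral with $f(z) = \cos z$, provided its pole actually lies inside the contour.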
Static vs. Dynamic Arrays, Getting Undefined Behavior
01-17-2012 #1
Registered User
Join Date
Jan 2012
Static vs. Dynamic Arrays, Getting Undefined Behavior
Hello everyone! First post, and pretty new to C++, so I apologize for any blatantly dumb questions that I may ask. Just trying to learn!
I'm working on a Project Euler problem that has me looking for the largest prime factor of a number. (Specifically, a very large number.) My main programming background is in MATLAB, and I have 3
different versions of code to solve the problem there, but MATLAB is just incapable of handling the value I am trying to evaluate. (~600 billion.) Hence, here's me, trying to learn C++, hoping
that it will be efficient enough to handle the problem set forth, as well as furthering my goal of learning C++! hahaha
My question is regarding static vs. dynamic memory allocation for arrays. My code is running as expected, but before it was, I was getting some strange errors, and it brought up something I was
confused about. At this point, I'm shying away from dynamic arrays just because they seem to be a hair too complicated for my noobishness. I'm trying to get my foundation set in C++ first, then
I'll start to tackle some more of the finer points of the language.
In my code, I declare an array "factors" with "halfVal" cells. "halfVal" is defined beforehand, and I don't need it to change later on, so static should be fine, but I'm not sure how to declare
it statically by using the value of halfVal. Is it possible to use the value of a variable, in this case halfVal, to statically define the number of cells in an array? If so, how would I do that?
#include <iostream>
#include <cmath>
using namespace std;
int main (int argc, const char * argv[])
//Declare variables. Initialization may or may not be necessary?
int halfVal = 1, counter = 0;
double value = 0, quotient = 0;
//Declare value to find the largest prime factor of
value = 30; //Goal Value is 600851475143
//Calculate halfVal for efficiency of following for loop
halfVal = (value / 2) + 1;
//Declare array. Can I make this a static array while still using the value of halfVal somehow? I don't need a dynamic allocation of memory.
int factors[halfVal];
//Determine the quotient of the value divided by num (from 2 to halfVal) for logical test
for( int num = 2; num < halfVal ; num++ ){
quotient = (value / num);
//Logical test to determine whether or not num is a factor. If so, store in factors array and update counter
if( quotient == ceil( quotient ) ) {
factors [counter] = num;
counter = counter + 1;
for( int i = 0; i < counter; i++ ) {
//If counter is replaced with halfVal, then I get undefined behavior. Something to do with dynamic array?
cout << "Factor: " << factors[i] <<"\n";
Also, another question I have is if I try to access a value in the factors array that is declared, but not redefined in the first for-if loop, then I get undefined behavior when I try to print
those values. For example, the factors of 30 are 1, 2, 3, 5, 6, 10, 15, 30. That is 8 total values. If halfVal is set to 16, (and therefore the declaration of the factors array should be 16 total
cells) and then I try to print factors[0] through factors[15], after I pass the first 8 cells that have been redefined in the first for-if loop, I get undefined behavior. My output gives me
something like:
Factors: 2
Factors: 3
Factors: 5
Factors: 6
Factors: 10
Factors: 15
Factors: 1510532867
Factors: 5937525205
(The last several values are obviously the undefined behavior.)
My hunch is that since I have a dynamically allocated array, the values aren't initially defined, so when I try to access them, I get undefined behavior. Is it possible to initialize those values
when the array is declared to avoid this type of problem (still keeping it a dynamic array)? If so, how does one go about that. If not, any tips on where I can start to look for a function to
clear out the unused cells? No explanation needed, just a point in the right direction for that one.
Thanks so much, and sorry for the long post!
As factors is an automatic variable (on stack), it's not reset and contains garbage when you declare it. It does contains halfVal items (the size is correct). If you add an output in your program
when you assign factors[counter], you'll see you only setup the first 6 items (from 0 to 5) so everything else is just random values.
factors is a variable length array, which is a non-standard language feature. You could either create a fixed size array that is as large as you will ever need, or use a container like
std::vector<int> factors(halfVal);
This should also zero initialise the elements of factors.
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
As factors is an automatic variable (on stack), it's not reset and contains garbage when you declare it. It does contains halfVal items (the size is correct). If you add an output in your program
when you assign factors[counter], you'll see you only setup the first 6 items (from 0 to 5) so everything else is just random values.
So if I understand correctly, when a variable length array is declared, the initialized "values" of the array are just garbage until I reassign them with something useful. Is that correct?
factors is a variable length array, which is a non-standard language feature. You could either create a fixed size array that is as large as you will ever need, or use a container like
std::vector<int> factors(halfVal);
This should also zero initialise the elements of factors.
I haven't worked with containers like the one mentioned before, so I am unsure how. Any tutorials you could direct me towards so I can start learning them? The standard vs. non-standard language
features concept is still a confusing one for me.
So if I understand correctly, when a variable length array is declared, the initialized "values" of the array are just garbage until I reassign them with something useful. Is that correct?
You should check your compiler's documentation, but yes
I haven't worked with containers like the one mentioned before, so I am unsure how. Any tutorials you could direct me towards so I can start learning them?
Borrow Josuttis' The C++ Standard Library - A Tutorial and Reference from your local library. It is slightly outdated but still useful.
C + C++ Compiler: MinGW port of GCC
Version Control System: Bazaar
Look up a C++ Reference and learn How To Ask Questions The Smart Way
For example, the factors of 30 are 1, 2, 3, 5, 6, 10, 15, 30.
Euler problem 3 is about prime factors, otherwise finding the largest divisor would be a bit too trivial. The problem might look intimidating, but I suppose it can be solved with simple trial
division. You definitely don't need an array that large, and whatever programming language, the memory usage would be too large.
(Tried at ideone.com, Python with trial division by odd integers yields the answer in 0.03 seconds without any array. It only comes down to getting the algorithm right.)
Last edited by anon; 01-17-2012 at 11:18 AM.
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
Euler problem 3 is about prime factors, otherwise finding the largest divisor would be a bit too trivial. The problem might look intimidating, but I suppose it can be solved with simple trial
division. You definitely don't need an array that large, and whatever programming language, the memory usage would be too large.
(Tried at ideone.com, Python with trial division by odd integers yields the answer in 0.03 seconds without any array. It only comes down to getting the algorithm right.)
When I finished this problem on the Project Euler site, a PDF became available describing the best way to tackle this problem. There are a few constraints you can place on the loops which
drastically cut down computation time.
I won't give you the answer, but links to good methods should come up if you just google "testing for primeness" "finding prime factors"
Edit: I posted as a reply to this post because there IS an element of trial division, but you can do it in a clever way which cuts down on iterations. Also, I don't think I used an array (I don't
have the files on this comp). To the OP, if you like math, and I assume you do since you're doing PE problems, there are a few properties of prime and composite numbers that help you out. There
are fancy sounding names for the theorems or methods, but it's something you could likely come up with on your own in the pencil and paper realm, by messing around with some (not complicated)
Last edited by Ocifer; 01-17-2012 at 11:49 AM.
W7, SL6 -- mingw, gcc, g++, code::blocks, emacs, notepad++
Euler problem 3 is about prime factors, otherwise finding the largest divisor would be a bit too trivial.
I know, I said that in the beginning of my post. I was merely just determining factors first, then sorting prime factors from that list. I don't doubt there is a more efficient way to do it,
however with C++ still being very new to me, I decided to go with the "easy approach" and then trim the fat or rework the code for efficiency.
When I finished this problem on the Project Euler site, a PDF became available describing the best way to tackle this problem. There are a few constraints you can place on the loops which
drastically cut down computation time.
I won't give you the answer, but links to good methods should come up if you just google "testing for primeness" "finding prime factors"
Edit: I posted as a reply to this post because there IS an element of trial division, but you can do it in a clever way which cuts down on iterations. Also, I don't think I used an array (I don't
have the files on this comp). To the OP, if you like math, and I assume you do since you're doing PE problems, there are a few properties of prime and composite numbers that help you out. There
are fancy sounding names for the theorems or methods, but it's something you could likely come up with on your own in the pencil and paper realm, by messing around with some (not complicated)
Thanks! I started to optimize slightly in my MATLAB prototypes, however in MATLAB, the factor function returns prime factors of a number, yet the value used in PE was too large for it. (Max value
is 2^32.) At that point, I was pretty confident I had to do it in another language, and C++ was my choice. I will optimize soon, as soon as I have a properly functions version.
I'm not sure this is a matter of optimization, but rather using a rather unsuitable algorithm to begin with. 300 billion trial divisions is way too much work, even if you sort out the memory
problems (the large value in question has just a few prime factors, and hence rather few divisors). You could cut it down a little by taking into account that the largest prime factor of
composite n can't be larger than sqrt(n).
I would also doubt the floating point math. C++11 has 64-bit integers that is enough for this problem.
Last edited by anon; 01-18-2012 at 02:13 AM.
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
But if you insist on finding the divisors, you can add them to a std::vector, which is a dynamically growing array. The reason is that you don't know how many divisors there's going to be (you'd
know that after finding how many prime factors there are).
Last edited by anon; 01-18-2012 at 03:23 AM.
I might be wrong.
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
Yes I'm sorry. I wasn't sure if I would ruin the learning process, but data overflow is one of the intended stumbling blocks of this problem. However, I would be shocked to learn that MATLAB
doesn't support long integers (I don't have experience). You should be able to accommodate larger integers in MATLAB, it's serious software. Look for datatype documentation before you change
languages just to solve this problem...though I do like c++
Also, consider your own mental processes when you try to factorize smaller (more humane) numbers.
Consider 700
What's the first prime? 2.
2 divides 700 leaving a quotient of 350
2 divides 350 leaving a quotient of 175
2 DOES NOT divide 175
What's the next prime? 3
3 DOES NOT divide 175
What's the next prime? 5
5 divides 175 leaving a quotient of 35
5 divides 35 leaving a quotient of 7
5 doesn't divide 7
next prime: 7
7/7 = 1 and you are done
700 = 2^2 * 5^2 * 7
The way I presented is highly suggestive of how one might program a machine to do it, as a hint. Most people would divide out the 7 first and the 100 falls apart into 25 * 4 = 5^2 * 2^2, but you
don't want to complicate this by trying to program intuition. Note I'm not considering even numbers after 2 -- the reason why is not too hard to see.
There are further tweaks which can be done to this basic idea, but this should give you enough efficiency to solve the problem. To really get the most of this problem, make sure to read the PDF
that becomes available.
EDIT: So there is an element of trial division, but we don't implement the division operator per se. We're testing divisibility by n, and this is often done using the modulo operator ( % ). One
can conceptualize this as slowly "shaving off" factors only if doing so leaves an integer quotient. Bahh I've said too much.
Last edited by Ocifer; 01-18-2012 at 11:00 PM.
W7, SL6 -- mingw, gcc, g++, code::blocks, emacs, notepad++
Thanks for all the input and advice everyone! This problem was definitely a multifaceted learning experience for me. Regardless of how one learns the material, the important part is learning it.
I think I learned a lot, even if it may have been down the wrong path, but I also learned a lot by you all guiding me towards the light. Much appreciated, once again! Hopefully I'll be able to
help people out in a similar fashion one day. I'm a bit too green for that currently, though.
Littleton, CO Prealgebra Tutor
Find a Littleton, CO Prealgebra Tutor
I have over ten years of experience teaching and tutoring at the high school and college levels. I received my bachelor's degree in Physics from Lewis and Clark College in Portland, Oregon, and
my master's degree in Physics from the University of Utah. Subjects I have taught include the following:...
11 Subjects: including prealgebra, physics, calculus, geometry
...I have successfully applied my expertise in Economics in my professional career -- including senior-level business leadership positions with IBM and Deloitte. I have a strong practical
background in corporate Finance, gained through over 20 years of senior-level management and operations improve...
40 Subjects: including prealgebra, English, writing, reading
...I have worked primarily with high-school and college age students, in algebra, geometry, trigonometry, pre-calculus, and calculus ABC. I also do test preparation for the SAT, ACT, GRE, and
GMAT tests. References: "It's so refreshing to see him actually feeling good about math again, and for once, he's not dreading the final exam.
20 Subjects: including prealgebra, calculus, geometry, ASVAB
I have a B.S. degree in Mathematics & Computer Science. I'm a Certified Microsoft Office Specialist. I'm currently a Math Fellow at Denver Public School.
9 Subjects: including prealgebra, geometry, ESL/ESOL, algebra 1
...I have also been certified as an ACT tutor by both The Princeton Review and Studypoint, and I'm familiar with many ACT testing strategies that can help students increase their scores. In
addition, I have a 3-year substitute authorization for Colorado public schools and am certified for grades K-12. This certification was issued August 9th, 2012.
30 Subjects: including prealgebra, reading, English, writing
Enhanced radiation of a dipole placed between a metallic surface and a nanoparticle
Geshev, Pavel I and Dickmann, Klaus (2006) Enhanced radiation of a dipole placed between a metallic surface and a nanoparticle. Journal of Optics A: Pure and Applied Optics, 8 (4). S161-S173.
Full text is not hosted in this archive but may be available via the Official URL, or by requesting a copy from the corresponding author.
Official URL: http://stacks.iop.org/1464-4258/8/S161
Enhanced dipole radiation in the presence of a flat metallic surface and a metal nanoparticle is considered on the basis of Maxwell's equations. For the case of axi-symmetrical illumination the
initial problem is reduced to a system of boundary integral equations for the angular component of the magnetic field and its normal derivative. A boundary element method is used to solve the system
of integral equations. The scattering of convergent cylindrical electromagnetic waves from a nanoparticle placed near a surface is calculated. The dipole placed between the nanoparticle and the
surface is excited by the enhanced field in the gap and re-radiates electromagnetic waves of the same frequency into space. This dipole radiation in turn is enhanced by the nanoparticle/surface
system. Two intensity enhancement factors are calculated: (1) the enhancement of the local electric field at the dipole position by the nanoparticle/surface system; and (2) the increase in dipole
radiation due to the presence of a metallic nano-object. For very small gaps (1 nm) between the surface and nanoparticle, these factors reach very large values. At some frequencies the enhancement
factors exhibit large resonance peaks which can be explained as plasmon resonances in the nanoparticle/surface system. For various shapes of metal nanoparticles and for different distances in the
particle/dipole/surface configuration, the total intensity enhancement factor (the product of the two factors described above) is calculated using the developed model. The very large enhancement
factors obtained in our calculations can be considered as a theoretical basis for single molecule Raman spectroscopy.
Item Type: Article
ID Code: 5417
Deposited By: Prof. Alexey Ivanov
Deposited On: 06 Feb 2010 11:11
Last Modified: 06 Feb 2010 11:53
python for loop
Diez B. Roggisch deets at nospam.web.de
Wed Apr 1 09:50:36 CEST 2009
Lada Kugis schrieb:
> On 01 Apr 2009 01:26:41 GMT, Steven D'Aprano
> <steven at REMOVE.THIS.cybersource.com.au> wrote:
>> Why Python (and other languages) count from zero instead of one, and
>> why half-open intervals are better than closed intervals:
>> http://www.johndcook.com/blog/2008/06/26/why-computer-scientists-count-from-zero/
>> http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html
> steven, thanks for answering,
> yes, i saw the second one a little time ago (someone else posted it as
> well in really cute handwriting version :) and the first time just
> now, but the examples which both of them give don't seem to me to be
> that relevant, e.g. the pros don't overcome the cons.
> imho, although both sides (mathematical vs engineer) adress some
> points, none of them give the final decisive argument.
> i understand the math. point of view, but from the practical side it
> is not good. it goes nicely into his tidy theory of everything, but
> practical and intuitive it is not.
You keep throwing around the concept of "intuition" as if it were
something that exists in a globally fixed frame of reference. It is not.
From [1]:
Klein found that under time pressure, high stakes, and changing
parameters, experts used their base of experience to identify similar
situations and intuitively choose feasible solutions.
In other words: your gained *knowledge* (including mathematical
know-how) determines what's intuitive - to *you*.
So all arguing about intuition is rather irrelevant - to me, zero-based
counting is intuitive, and it is so with the same right as one-based
counting is intuitive to you.
OTOH zero-based counting has technical reasons that also reduce the
amount of leaky abstraction problems [2], as it models the underlying
reality of memory locations + indexing.
[1] http://en.wikipedia.org/wiki/Intuition_(knowledge)
[2] http://www.joelonsoftware.com/articles/LeakyAbstractions.html
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2009-April/531285.html","timestamp":"2014-04-17T20:50:13Z","content_type":null,"content_length":"4932","record_id":"<urn:uuid:11577e24-a235-4ed2-9239-5d2be27b10a7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decomposing typed lambda calculus
Decomposing typed lambda calculus into a couple of categorical programming languages
In Proc. 6th International Conference on Category Theory and Computer Science (CTCS'95), Springer LNCS 953 (1995) 200-219
We give two categorical programming languages with variable arrows and associated abstraction/reduction mechanisms, which extend the possibility of categorical programming [1,2] in practice. These
languages are complementary to each other -- one of them provides a first-order programming style whereas the other does higher-order -- and are ``children'' of the simply typed lambda calculus in
the sense that we can decompose typed lambda calculus into them and, conversely, the combination of them is equivalent to typed lambda calculus. This decomposition is a consequence of a semantic
analysis on typed lambda calculus due to C. Hermida and B. Jacobs [3].
Pointers to Related Work
• [1] T. Hagino, A Categorical Programming Language. PhD thesis ECS-LFCS-87-38, University of Edinburgh (1987).
• [2] J.R.B. Cockett and T. Fukushima, About Charity. Technical report 92/480/18, University of Calgary (1992).
• [3] C. Hermida and B. Jacobs, Fibrations with indeterminates: contextual and functional completeness for polymorphic lambda calculi. Mathematical Structures in Computer Science 5(4) (1995)
• A.J. Power and H. Thielecke, Environments, continuation semantics and indexed categories. In Proc. TACS'97, Springer LNCS 1281 (1997) 391-414.
• A.J. Power and H. Thielecke, Closed Freyd- and kappa-categories. In Proc. ICALP'99, Springer LNCS 1644 (1999).
Back to Hassei's Research Page / Home Page | {"url":"http://www.kurims.kyoto-u.ac.jp/~hassei/papers/ctcs95.html","timestamp":"2014-04-16T13:14:13Z","content_type":null,"content_length":"2644","record_id":"<urn:uuid:3febb943-a0b7-46b6-9001-2d074c747484>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
rewriting-0.1: Generic rewriting library for regular datatypes.
Generics.Regular.Rewriting.Base
Portability: non-portable
Stability: experimental
Maintainer: generics@haskell.org
Summary: Base generic functions that are used for generic rewriting.
Functor (fmap)
class GMap f where
class Crush f where
crush :: (a -> b -> b) -> b -> f a -> b
flatten :: Crush f => f a -> [a]
class Zip f where
fzipM :: Monad m => (a -> b -> m c) -> f a -> f b -> m (f c)
fzip :: (Zip f, Monad m) => (a -> b -> c) -> f a -> f b -> m (f c)
fzip' :: Zip f => (a -> b -> c) -> f a -> f b -> f c
geq :: (b ~ PF a, Regular a, Crush b, Zip b) => a -> a -> Bool
class GShow f where
class LRBase a where
class LR f where
left :: (Regular a, LR (PF a)) => a
right :: (Regular a, LR (PF a)) => a
Functorial map function.
Functor (fmap)
Monadic functorial map function.
The GMap class defines a monadic functorial map.
fmapM :: Monad m => (a -> m b) -> f a -> m (f b) Source
Crush functions.
class Crush f where Source
The Crush class defines a crush on functorial values. In fact, crush is a generalized foldr.
crush :: (a -> b -> b) -> b -> f a -> b Source
flatten :: Crush f => f a -> [a] Source
Flatten a structure by collecting all the elements present.
Zip functions.
The Zip class defines a monadic zip on functorial values.
fzipM :: Monad m => (a -> b -> m c) -> f a -> f b -> m (f c) Source
fzip :: (Zip f, Monad m) => (a -> b -> c) -> f a -> f b -> m (f c) Source
Functorial zip with a non-monadic function, resulting in a monadic value.
fzip' :: Zip f => (a -> b -> c) -> f a -> f b -> f c Source
Partial functorial zip with a non-monadic function.
Equality function.
geq :: (b ~ PF a, Regular a, Crush b, Zip b) => a -> a -> Bool Source
Equality on values based on their structural representation.
Show function.
class GShow f where Source
The GShow class defines a show on values.
gshow :: (a -> ShowS) -> f a -> ShowS Source
Functions for generating values that are different on top-level.
class LRBase a where Source
The LRBase class defines two functions, leftb and rightb, which should produce different values.
The LR class defines two functions, leftf and rightf, which should produce different functorial values.
left :: (Regular a, LR (PF a)) => a Source
Produces a value which should be different from the value returned by right.
right :: (Regular a, LR (PF a)) => a Source
Produces a value which should be different from the value returned by left.
Produced by Haddock version 2.4.2
inspired by segway™ - making a self-balancing 2 wheel vehicle
Monday, 29 June 2009
Riding the 95% finished product!
Posted by Macaba at 15:57
11 comments:
1. Nice work.
I'm planing on making one for myself, do you already have a total cost for this? Will you make your arduino .pde available??
Thanks man
2. Hey Romeu,
Total cost was around £500, which is what I aimed for so that went well.
I will post my arduino .pde in the near future (poke me if i don't!).
3. Very nice workmanship and ingenuity. Is it fully operational now? How much ride time do you get out of your batteries?
Congratulations on a great project.
4. Wow that is awesome! Do you have the schematics available online? I cant wait to see the .pde. Great Job!
5. Love the "zzzzrrrrzzzzzrrrr" sounds. Do the motors and chains make that noise, or did you use the old playing card-in-the-spokes trick? (So hoping it's the latter!)
Awesome project. Well done!
6. I wonder if it is possible to use a brushless engine in one of those...
Something like this:
I know that the speed controller will be a problem to find or build... But other than that those motors are efficient and dont wear out as much as brushed.
7. wow me and a partner built one of these for a projects class last semester, we only had the one semester to do it in so it didn't get 100% bout 90 or so... it worked it just didn't turn and
needed to smooth out the code alil but it did balance. It was amazing to see how close ours was to yours (the frame was near i dentical minus the hand positions) but its cool to see what we
coulda accomplished if we had a couple more weeks! heres a link if your interested http://www.chasecooley.blogspot.com/
we had to write often for the grade so alot of it is venting but their is arduino code and such on there but nice job I am really impressed
8. The zzzzzrrrrzzzzzzzzz noise is the chain system, its a nice noise! (sure beats the gearbox sound of conventional gearing).
I've posted the arduino PDE if anyone is interested.
It wouldn't be possible to use a hobby sensorless brushless motor, there are too many issues (0 start torque) and I think would have to be retrofitted with sensors.
9. Dear Macaba
I will like to build a segway but i don't know from where to buy the electronic part. I'll be happy if you have some plan, or some project to show me. Thank you. My email is lupenimih@yahoo.it
10. Is there any advantage of using 'self balancing technique' for riding a vehicle like this?
using a 3rd wheel(freewheel) and a joystick controlled mechanism for riding rite..?
waiting for ur answer...
Hi,
I want you to know this is super work, bravo. But I tried it and it's not working. What's wrong? Please help me.
int xPin = 2; // select the input pin for the potentiometer
int gyroPin = 1;
int steerPin = 3;
int ledPin = 13; // select the pin for the LED
int pwmPinL = 9;
int pwmPinR = 10;
int enPin = 7;
float angle = 0;
float angle_old = 0;
float angle_dydx = 0;
float angle_integral = 0;
float balancetorque = 0;
float rest_angle = 0;
float currentspeed = 0;
int steeringZero = 0;
int steering = 0;
int steeringTemp = 0;
float p = 8; //2
float i = 0; //0.005
float d = 1300; //1000
float gyro_integration = 0;
float xZero = 0;
int gZero = 445; //this is always fixed, hence why no initialisation routine
unsigned long time, oldtime;
int pwmL;
int pwmR;
boolean over_angle = 0;
void setup() {
unsigned int i = 0;
unsigned long j = 0; //maximum possible value of j in routine is 102300 (100*1023)
pinMode(ledPin, OUTPUT); // declare the ledPin as an OUTPUT
TCCR1B = TCCR1B & 0b11111000 | 0x01;
for (i = 0; i < 100; i++) {
j = j + analogRead(xPin);
}
steeringZero = analogRead(steerPin);
xZero = j/100;
oldtime = micros();
}
void loop() {
time = micros();
if (time >= (oldtime+5000)){
oldtime = time;
steering = (analogRead(steerPin) - steeringZero)/(15+(abs(angle)*8));
//-----OVER ANGLE PROTECTION-----
if (angle > 20 || angle < -20) {
digitalWrite(enPin,HIGH);
over_angle = 1;
delay(500);
}
//-----END-----
if (over_angle) { //if over_angle happened, give it a chance to reset when segway is level
if (angle < 1 && angle > -1) {
digitalWrite(enPin, LOW);
over_angle = 0;
else {
//-----calculate rest angle-----
if (currentspeed > 10)
rest_angle = 0;
angle_integral += angle;
balancetorque = ((angle+rest_angle)*p) + (angle_integral*i) + (angle_dydx*d);
angle_dydx = (angle - angle_old)/200; //now in degrees per second
angle_old = angle;
currentspeed += (balancetorque/200);
pwmL = (127 + balancetorque + steering);
if (pwmL < 0)
pwmL = 0;
if (pwmL > 255)
pwmL = 255;
pwmR = (127 - balancetorque + steering);
if (pwmR < 0)
pwmR = 0;
if (pwmR > 255)
pwmR = 255;
analogWrite(pwmPinL, pwmL);
analogWrite(pwmPinR, pwmR);
void calculateAngle() {
//Analogref could be as small as 2.2V to improve step accuracy by ~30%
//uses small angle approximation that sin x = x (in rads). maybe use arcsin x for more accuracy?
//analogref is off the gyro power supply voltage, and routine is calibrated for 3.3V. maybe run acc/gyro/ref off 1 3.3V regulator, an
//accurately measure that.
//routine runs at 200hz because gyro maximum response rate = 200hz
float acc_angle = 0;
float gyro_angle = 0;
acc_angle = ((analogRead(xPin)-xZero)/310.3030)*(-57.2958);
gyro_angle = ((analogRead(gyroPin) - gZero)*4.8099)/200;
gyro_integration = gyro_integration + gyro_angle; //integration of gyro and gyro angle calculation
angle = (gyro_integration * 0.99) + (acc_angle * 0.01); //complementary filter
gyro_integration = angle; //drift correction of gyro integration
Puzzle Playground - Red, White, and Blue Balls Solution
This was a really hard and tricky challenge, and we got a lot of different solutions to this clever puzzle. Among them there were wrong solutions too. The main mistake in them was that some
possible combinations of balls on the balance scales were overlooked, and - as the result - all these solutions don't give the correct and full solution for each of the six balls in any given
Actually there are several different ways how to determine the light and the heavy balls in each of the three pairs. We show some of your solutions to illustrate these ways.
Also we got some solutions which imply that our balance scales can produce different angles (indicated with an arrow) or some distinguishable mutual displacements of the pans, and so this can show
that some combinations of balls are "more unequal" or "less unequal" when balls on different pans have different weights.
In fact these solutions can't be fully accepted since in our picture we show simple balance scales which can indicate just whether the both pans hold the same weights or different.
Still these solutions are interesting, but unfortunately the same mistake - overlooked combinations - was in all but one of them. See the last solution below.
A solver called our attention that "... according to Gardner in his "Mathematical Circus" this problem was originally purposed by Paul Curry..." Thanks a lot for this important remark!
Solution by Kiruthika K.
1. Take 2 balls of same color (say blue).
2. Take one ball each of the other 2 colors; and put each one along with a Blue ball on the trays of the balance. (say Blue & White against Blue & Red)
3. There are 2 possibilities:
i) If the balance is equal, then of the Red and the White balls, one is heavy and the other is light.
ii) Keeping track of which Blue ball was paired with which color, weigh the White and the Red balls against each other. Find which one is heavy. (let us say, White turns out to be the heavier one)
iii) Then the Blue weighed with the White is light and the Blue weighed with Red is heavy.
i) One side is heavier.
ii) The Blue ball on the heavier side is the heavy Blue one.
iii) Put both Blue balls on one side against the White and the Red balls on the other.
iv) If the side with the Blue balls is heavier, then the Red and the White balls are the light ones of their respective colours.
v) If the other side is heavier, then the Red and the White are both heavy balls.
vi) If the balance is equal, the ball that was paired with the heavy Blue ball during the first time is heavy and the other is light.
Red: R
White: W
Blue: B
1st weighing:
B1R1 Vs B2W1
2nd weighing (if the 1st balances):
R1 Vs W1 (say R1 is heavier, which implies W2 is also heavy)
Then B1 is light and B2 is heavy. We know R1 and W2 are heavy, therefore R2 and W1 are light.
If B1R1 is heavier instead, then B1 is the heavy one and B2 the lighter.
2nd weighing:
B1B2 Vs R1W1
If B1B2 is heavier, R1 and W1 are both light, hence R2 and W2 are both heavy.
If R1W1 is heavier, R1 and W1 are both heavy, hence R2 and W2 are light.
If the balance is equal, R1 is heavy and W1 is light, therefore R2 is light and W2 is heavy.
Solution by David Low
Nothing like a long shower to clear out the cobwebs...
Place a red and blue marble on one side, and a red and white marble on the other.
Case 1. The two sides are the same:
Since there are not four heavy marbles or four light marbles, each side must have one light and one heavy marble. Thus the heavy red marble has a light marble with it, and the light red marble has
a heavy marble with it.
For the second massing, just compare the two red marbles. The heavy and light red marbles are directly discovered. Moreover, the marble with the heavy red marble in the first massing is now known
to be light, and the marble with the light red marble in the first massing is now known to be heavy. The untouched blue and white marbles will respectively be the opposite of their known
same-colour partners.
Case 2. One side is heavier.
The heavier side cannot have the light red marble, since such a situation would give at most one heavy marble on the heavier side, and at least one heavy (red) marble on the lighter side, which is
impossible. So the heavier side must have the heavy red marble, and the lighter side has the light red marble. The red marbles are discovered.
For the second massing, place the two red marbles on one side, and the one blue and one white marble from the first massing on the other side. If the blue/white side is heavier than the two reds,
both marbles are heavy. If the blue/white side is lighter, both are light. If the balance is the same, then one marble is light and the other heavy. The marble that was with the heavy red in the
first massing is heavy and the marble with the light red in the first massing is light, since the reverse would have resulted in a tie in the first massing.
As before, the untouched blue and white marbles will respectively be the opposite of their known same-colour partners.
Solution by Gregory Clayborne
This is a great puzzle. The hardest part is giving the solution in simple steps, but here goes...
Let's first label our balls... R1,R2,W1,W2,B1,B2 and then start weighing...
Weighing 1: Lets weigh R1,W1 vs R2,B1
There are two possible results:
The scales will balance or they won't.
If the scales balance: R1,W1 = R2,B1
then we know that there is a Heavy and a Light on each side. We just don't know who's who. We know this because the Reds can't both be Heavy nor can they both be Light. So we have the following
possibilities for R1,W1 = R2,B1:
R1(H),W1(L) = R2(L),B1(H) or
R1(L),W1(H) = R2(H),B1(L)
Notice that this makes W1 the opposite of R1 and B1 the opposite of R2 so.....
Reds opposite Whites R1 = W2, R2 = W1
Reds equal Blues R1 = B1, R2 = B2
For the second weighing we just weigh the Reds against each other and the above equations will finish the results.
If the scales don't balance then it gets a little harder.
We know that R1 can't equal R2 and since the scales didn't balance then we know that
which ever side was Heavy has the Heavy Red ball. Don't believe me do you. Alright.
Let's say that the weighing looked like this:
R1,W1 > R2,B1 thus R1,W1 heavier than R2,B1. If R1 was actually Light (thus making R2 Heavy) then the only values W1 and B1 could have would be Heavy and Light respectively. BUT that would have
given us R1(L),W1(H) > R2(H),B1(L), WHICH WOULD HAVE BALANCED (see L,H = H,L). SO R1 has to be Heavy if its scale went down.
That being proven, let's stick with the assumption that R1(H) and R2(L) just to make the
logic easier to follow...
The first weighing then gives us the results...
W1 and B1 are the same (both Heavy or both Light) OR
W1 opposite B1 with W1 being the same as R1 and B1 being the same as R2.
This leads us to the final weighing...
We just switch R2 and W1 and wind up weighing R1,R2 vs. W1,B1.
If the scales balance then R1(H) R2(L) and W1(H) B1(L)
If the scales don't balance than R1(H) R2(L) and W1 = B1.
But wait, that doesn't tell us if W1 and B1 are Heavy or Light?
Just answer the question. Did the W1,B1 side of the scale go up or down. There's your answer. The final scale would look like one of the following:
R1(H),R2(L) > W1(L),B1(L) OR
R1(H),R2(L) < W1(H),B1(H)
Ta daaaa.
Like I said. This is a great puzzle. Hope you could get through the logic. I really need to draw it out. Gregory
Solution by Shashi
I consider the pair as R1,R2; B1,B2 ; W1,W2
First weighing
R1+W1 against R2+B1
case 1: if it balances then
second weighing,
Weigh R1 and R2 and find out which is heavier, this also tells which of (W1,B1) is heavier.
case 2 if it does not balance
second weighing
weigh R1+R2 against W1+B1
if R1+R2 side goes down then both W1 and B1 are lighter balls of their pairs
if W1+B1 side goes down then both W1 and B1 are heavier balls of their pairs
if it balances
then the ball which was with lighter red ball in the first weighing is lighter i.e., if R1 was lighter then W1 is also lighter
Solution by Marcus Dunstan
6 Balls
White = W1, W2
First weigh: W1+R1 v W2+B1
If balanced then R1 & B1 must be one HEAVY (H) and one LIGHT (L) (because we know W1 and W2 are one HEAVY and one LIGHT)
If balanced, the second weigh is to swap R1 and B1. ie second weigh is W1+B1 v W2+R1
The side of scales that goes down has 2 HEAVY balls, side of scales that goes up has 2 LIGHT balls.
>From this situation weight of all 6 balls is now known.
If first weigh W1+R1 v W2+B1 is not balanced then you know that the side that goes DOWN must contain the HEAVY white ball eg. W1+R1 v W2+B1
Possible combinations: H+H v L+H, H+H v L+L, H+L v L+L then W1 is HEAVY
Possible combinations: L+H v H+H, L+L v H+H, L+L v H+L then W2 is HEAVY
Cannot be combinations: H+L v L+H or L+H v H+L as scales would balance
So at this stage you know the weight of the two white balls only
*** HOWEVER you must also note the colour of the ball on the same side as the LIGHT white ball as this will have bearing depending on the results of the second weigh.
The second weigh would then be W1+W2 v R1+B1
If W1+W2 side goes down then combination must be H+L v L+L or L+H v L+L ie
both R1 and B1are LIGHT
If W1+W2 side goes up then combination must be H+L v H+H or L+H v H+H ie
both R1 and B1 are HEAVY
If W1+W2 balances with R1+B1 then you must have one HEAVY and one LIGHT
ball on the R1+B1 side
The only way this can occur (bearing in mind the results of the first weigh) is if the coloured ball noted above *** was LIGHT
>From this situation weight of all 6 balls is now known.
Solution by Jensen Lai
Label the balls R1, R2, B1, B2, W1 and W2.
Place R1 and B1 on one side of the scale and B2 and W2 on the other side of the scale. There are two possible outcomes. They are equal or they are unequal.
Outcome 1: The two sides are equal.
If two balls on the same side were the same weight, then there would be 4 balls of the same weight. However, there are only 3 heavy balls and 3 light ones. Therefore, the two balls on the same side are of different weights.
In the second weighing, weigh R1 and R2. From this weighing it can be determined which red ball is heavy and which is light. Whichever R1 is, B1 is the opposite since they were on the same side in
the first weighing. B2 is the opposite of B1. W2 is the
opposite of B2 since they were on the same side of the first weighing and W1 is the opposite of W2. So, if the two sides are equal in the first weighing, then R2, B1 and W2 are of the same weight,
and R1, B2 and W1 are of the same weight. The second weighing determines which 3 are heavier and which 3 are lighter.
Outcome 2: The two sides are unequal.
There are four balls on the scales. Two are blue so one of them is lighter and the other one is heavier. Whichever one was on the heavier side must be the heavy blue ball. (The lighter blue ball
could not have been on the heavier side because a light blue ball and a heavy other ball is not heavier than a heavy blue ball and a light other ball). The two remaining balls (R1 and W2) are
either the same weight or they are different.
In the second weighing, weigh R1 against W1. If R1 and W2 are the same weight, R1 and W1 must be different. If R1 and W2 are different weights, R1 and W1 will be equal. So the second weighing can be used to determine whether R1 and W2 are the same or different.
If the second weighing is unequal, R1 and W2 are the same weight, and the second weighing will show whether they are heavy or light. R1 is the same as W2, and R2 and W1 are the same. If the second weighing is balanced, then R1 and W2 are different, and the heavier one is whichever was on the heavier side of the first weighing. (The heavier one could not have been with the lighter blue ball, or else the first weighing would have balanced, and that would be Outcome 1 and not Outcome 2.) Since R2 is the opposite of R1 and W1 is the opposite of W2, it will then be known which balls are heavier and which are lighter.
Solution by Alan Lemm
You have two red (R1, R2), two white (W1, W2) & two blue (B1, B2) balls.
The following are the possibilities for the light and heavy balls (L = light, H = heavy):
B1 R1 W1 B2 R2 W2 CASE #
H H H L L L 1
H H L L L H 2
H L H L H L 3
H L L L H H 4
L H H H L L 5
L H L H L H 6
L L H H H L 7
L L L H H H 8
Suppose you weigh B1 & R1 (LEFT) against B2 & W1 (RIGHT).
The following indicates the result in each case:
CASE # RESULT
1 LEFT HEAVIER
2 LEFT HEAVIER
3 BALANCED
4 LEFT HEAVIER
5 RIGHT HEAVIER
6 BALANCED
7 RIGHT HEAVIER
8 RIGHT HEAVIER
The second weighing depends upon the result of the first weighing. First, the balanced cases (3 & 6):
You will notice that in each case, R1, B2, and W2 weigh the same as each other, so they are either all the light balls or the heavy balls. Therefore, you weigh R1 against R2.
If R1 is heavier, then you have case 6.
If R1 is lighter, then you have case 3.
Now for the cases where the left side is heavier (1, 2, 4):
In each case, B1 is the heavy ball, and B2 is the light ball. Now you have to contend with the red and white balls. If you weigh R1 against W2, the result will be different for each case.
If R1 is heavier, you have case 1.
If R1 is lighter, then you have case 4.
If the two balls balance, then you have case 2.
Finally, the cases where the right side is heavier (5, 7, 8):
In each case, B1 is the light ball, and B2 is the heavy ball. Again, you have to contend with the red and white balls. If you weigh R1 against W2, the result will again be different for each case.
If R1 is heavier, you have case 5.
If R1 is lighter, then you have case 8.
If the two balls balance, then you have case 7.
You have now determined the weight of each ball in two weighings.
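A strategy like the one above is easy to check exhaustively. The following sketch (Python, written for this page and not part of any submitted solution) enumerates the 8 cases and confirms that Alan Lemm's plan, first B1 & R1 vs. B2 & W1, then R1 vs. R2 if balanced and R1 vs. W2 otherwise, produces a distinct pair of outcomes in every case:

```python
from itertools import product

# Build the 8 cases: in each colour pair exactly one ball is heavy (1).
cases = []
for b1, r1, w1 in product((0, 1), repeat=3):
    cases.append({"B1": b1, "B2": 1 - b1, "R1": r1, "R2": 1 - r1,
                  "W1": w1, "W2": 1 - w1})

def weigh(case, left, right):
    """Return 1 if the left pan is heavier, -1 if the right is, 0 if balanced."""
    diff = sum(case[b] for b in left) - sum(case[b] for b in right)
    return (diff > 0) - (diff < 0)

def outcome(case):
    first = weigh(case, ("B1", "R1"), ("B2", "W1"))
    if first == 0:                       # balanced: cases 3 & 6
        second = weigh(case, ("R1",), ("R2",))
    else:                                # unbalanced: weigh R1 against W2
        second = weigh(case, ("R1",), ("W2",))
    return (first, second)

outcomes = [outcome(c) for c in cases]
# Every case produces a different (first, second) pair, so two weighings suffice.
assert len(set(outcomes)) == len(cases) == 8
print("all", len(cases), "cases distinguished")
```

The same harness can verify any of the other strategies on this page by swapping in their first and second weighings.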
Solution by Tim Sanders
Let's name the balls R1, R2, W1, W2, B1, and B2.
Let's assign value 0 to light balls and value 1 to heavy balls.
Let's name the scales X and Y.
NOTE: There are 8 possible combinations of weight values for the 6 balls (see table below), and 3 possible outcomes for each weighing - X>Y, X<Y, and X=Y. In choosing the balls to weigh first, the
trick is to allocate 3 possible combinations to X>Y, 3 to X<Y, and 2 to X=Y.
Weigh R1-W1 on X and W2-B2 on Y (1st weighing).
The following table illustrates the possible combinations for each outcome of the 1st weighing. For instance, if X>Y, then the combinations in rows 1, 2, and 3 are possible.
R1 R2 W1 W2 B1 B2    (X pan holds R1 & W1; Y pan holds W2 & B2)
1 0 1 0 1 0 - X>Y
1 0 1 0 0 1 - X>Y
0 1 1 0 1 0 - X>Y
0 1 0 1 1 0 - X<Y
0 1 0 1 0 1 - X<Y
1 0 0 1 0 1 - X<Y
0 1 1 0 0 1 - X=Y
1 0 0 1 1 0 - X=Y
If X>Y, go to (a).
If X<Y, go to (b).
If X=Y, go to (c).
(a) Weigh R1 on X and B1 on Y (2nd weighing).
R1 R2 W1 W2 B1 B2    (X pan holds R1; Y pan holds B1)
1 0 1 0 1 0 - X=Y
1 0 1 0 0 1 - X>Y
0 1 1 0 1 0 - X<Y
As illustrated above, each possible outcome will indicate the weight value of each ball. For instance, if X=Y, then R1=1, R2=0, W1=1, W2=0, B1=1, and B2=0.
(b) Weigh R1 on X and B1 on Y (2nd weighing).
R1 R2 W1 W2 B1 B2    (X pan holds R1; Y pan holds B1)
0 1 0 1 1 0 - X<Y
0 1 0 1 0 1 - X=Y
1 0 0 1 0 1 - X>Y
As illustrated above, each possible outcome will indicate the weight value of each ball. For instance, if X<Y, then R1=0, R2=1, W1=0, W2=1, B1=1, and B2=0.
(c) Weigh R1 on X and R2 on Y (2nd weighing).
NOTE: Other ball pairs will work too, but let's use R1-R2 for this solution.
R1 R2 W1 W2 B1 B2    (X pan holds R1; Y pan holds R2)
0 1 1 0 0 1 - X<Y
1 0 0 1 1 0 - X>Y
As illustrated above, each possible outcome will indicate the weight value of each ball. For instance, if X<Y, then R1=0, R2=1, W1=1, W2=0, B1=0, and B2=1.
Solution by Roland Vyncke
We label the white, red & blue balls as w1, w2 ; r1, r2 & b1, b2 and distinguish the eight possibilities :
L(ight) H(eavy)
-------- --------
w1 r1 b1 w2 r2 b2 case 1
w1 r1 b2 w2 r2 b1 case 2
w1 r2 b1 w2 r1 b2 case 3
w1 r2 b2 w2 r1 b1 case 4
w2 r1 b1 w1 r2 b2 case 5
w2 r1 b2 w1 r2 b1 case 6
w2 r2 b1 w1 r1 b2 case 7
w2 r2 b2 w1 r1 b1 case 8
1ST WEIGHING : w1 & b1 against w2 & r1
a) if w1+b1=w2+r1 then
2ND WEIGHING : w1 against w2
if w1 < w2 then case 2
if w1 > w2 then case 7
b) if w1+b1<w2+r1 then
2ND WEIGHING : w1 & w2 against b1 & r1
if w1+w2=b1+r1 then case 3
if w1+w2<b1+r1 then case 4
if w1+w2>b1+r1 then case 1
c) if w1+b1>w2+r1 then
2ND WEIGHING : w1 & w2 against b1 & r1
if w1+w2=b1+r1 then case 6
if w1+w2<b1+r1 then case 8
if w1+w2>b1+r1 then case 5
Remark: since b2 & r2 remain untouched during the whole experiment, a NEW puzzle may be formulated using only 4 balls!
Solution by Rob Farley
Ok, suppose we number the balls R1,R2,W1,W2,B1,B2. What we need to do is identify which of the balls are 'L', and which are 'H'.
We have eight possibilities:
1) R1=L,R2=H,B1=L,B2=H,W1=L,W2=H
2) R1=L,R2=H,B1=L,B2=H,W1=H,W2=L
3) R1=L,R2=H,B1=H,B2=L,W1=L,W2=H
4) R1=L,R2=H,B1=H,B2=L,W1=H,W2=L
5) R1=H,R2=L,B1=L,B2=H,W1=L,W2=H
6) R1=H,R2=L,B1=L,B2=H,W1=H,W2=L
7) R1=H,R2=L,B1=H,B2=L,W1=L,W2=H
8) R1=H,R2=L,B1=H,B2=L,W1=H,W2=L
Now... let's do the first weigh.
R1 & B1 against W1 & R2.
If it is even, then we know that we have either case 3 or case 6, and we can distinguish them by weighing R1 against R2 (our second weigh!)
Now, suppose that it went down on the right. Then we have either case 1, 2 or 4, and we can distinguish them by weighing W2 against B1.
Similarly, if it went down on the left, we have either case 5, 7 or 8, which we can also distinguish by weighing W2 against B1.
Solution by William M. Shubert
First, let's call the 6 balls w1, w2, r1, r2, b1, and b2. There are 8 possible combinations:
# Heavy Balls Light Balls
1 w1,r1,b1 w2,r2,b2
2 w1,r1,b2 w2,r2,b1
3 w1,r2,b1 w2,r1,b2
4 w1,r2,b2 w2,r1,b1
5 w2,r1,b1 w1,r2,b2
6 w2,r1,b2 w1,r2,b1
7 w2,r2,b1 w1,r1,b2
8 w2,r2,b2 w1,r1,b1
So the goal is to find out which of these combinations is the right one. We have two weighings; each has 3 possible results (left heavier, right heavier, and equal).
Start by weighing w1+r1 (left) vs. w2+b1 (right). Three possible outcomes:
The first weighing leaves the left heavier. The combinations from our list of 8 that leave the left heavier are 1,2, and 4.
Our second weighing is r1 (left) vs. b2 (right). Three outcomes:
Left heavier. Must be combination 1; the heavy balls are w1, r1, and b1.
Equal weight. Must be combination 2; the heavy balls are w1, r1, and b2.
Right heavier. Must be combination 4; the heavy balls are w1, r2, and b2.
The first weighing leaves equal weight. Must be combination 3 or 6.
Our second weighing is r1 (left) vs. r2 (right). Two outcomes:
Left heavier. Must be combination 6; the heavy balls are w2, r1, and b2.
Right heavier. Must be combination 3; the heavy balls are w1, r2, and b1.
The first weighing leaves the right heavier. Must be combination 5, 7, or 8.
Our second weighing is r1 (left) vs. b2 (right). Three outcomes:
Left heavier. Must be combination 5; the heavy balls are w2, r1, and b1.
Equal weight. Must be combination 7; the heavy balls are w2, r2, and b1.
Right heavier. Must be combination 8; the heavy balls are w2, r2, and b2.
There you have it. In all cases, after two weighings I will know exactly which combination of balls I have and which are the heavy ones. There might be an easier or more elegant solution, but this
one does work!
Do I win? :-)
Solution by Du'c Hoang
1. weigh R1 & W1 vs R2 & B1.
if balanced
2a. weigh R1 vs R2
if left side is heavier:
R1, B1 & W2 are the heavy balls
if right side is heavier:
R2, B2 & W1 are the heavy balls
if left side is heavier
2b. weigh W2 vs B1
if left side is heavier:
R1, W2 & B2 are the heavy balls
if right side is heavier:
R1, B1 & W1 are the heavy balls
if balanced:
R1, B2 & W1 are the heavy balls (if B1 & W2 are heavy, 1 would be balanced)
if right side is heavier
2c. weigh W1 vs B2
if left side is heavier:
R2, W1 & B1 are the heavy balls
if right side is heavier:
R2, B2 & W2 are the heavy balls
if balanced:
R2, B1 & W2 are the heavy balls (if W1 & B2 are heavy, 1 would be balanced)
Solution by Jon Black
Let’s call the balls R1, R2, W1, W2, B1, B2.
Put R1 and W1 on one side of the scale and put R2 and B1 on the other side.
If they are equal, then R1 + W1 = R2 + B1 and hence either (R1 > R2, W1< B1, W1 < W2, and B2 < B1) or (R1 < R2,W1 > B1,W1 > W2, and B2 > B1).
Next remove W1 & B1 and weigh R1 against R2. If R1 > R2 then W1 < W2 and B2 < B1. If R1 < R2 then W1 > W2 and B2 > B1.
If R1 + W1 > R2 + B1 then R1 > R2 and hence either (W1 > W2 , B2 < B1) or (W1 > W2, B2 > B1) or (W1 < W2, B2 > B1)
Next put R1 and B1 on one side of the scale and B2 and W2 on the other side. Since R1 > R2, if:
R1 + B1 > B2 + W2, then (B2 < B1, W1 >W2);
R1 + B1 = B2 + W2, then (B2 >B1, W1 >W2);
R1 + B1 < B2 + W2, then (B2 > B1, W1< W2).
If R1 + W1 < R2 + B1 then R1 < R2 and hence either (W1 < W2 , B2 > B1) or (W1 < W2, B2 < B1) or (W1 > W2, B2 < B1)
Next put R1 and B1 on one side of the scale and B2 and W2 on the other side. Since R1 < R2, if:
R1 + B1 < B2 + W2, then (B2 > B1, W1 < W2);
R1 + B1 = B2 + W2, then (B2 < B1, W1 < W2);
R1 + B1 > B2 + W2, then (B2 < B1, W1 > W2).
Solution by Geoffrey Mayne
We'll call the two sides of the balance left and right.
First, put the first white marble and the first blue marble on the left, and the second white marble and the first red marble on the right.
Second, put the second white marble and the first blue marble on the left, and the second blue marble and the first red marble on the right.
Based on these weighings, there will be a unique solution of heavier and lighter balls for each possibility.
Solution by James Higgs
"There are three pairs of balls - red, white, and blue. In each pair one ball is a little bit heavier than another one."
So we have r (light) & r' (heavy), w & w' and b & b'. And r' = r + x.
"All the heavy balls weigh the same, and all the light balls weigh the same."
So we have r' = w' = b' and r = w = b.
Weighing 1: Take the two white balls and place one on each side of the balance. This establishes w & w'.
Weighing 2: Put the heavy white ball, w', on the left side of the balance with one of the red balls. On the right side of the balance place the other red ball and one of the blue balls. This
produces one of four possible, unique displacements of the balance. If we represent the displacement, D, as being the mass of the left side of the balance minus the right side (D = L - R) then we
have the following combinations:
L R D
w'+ r' b + r 2x (balance displaced twice w' vs w)
w'+ r' b'+ r x (same as w' vs w)
w'+ r b + r' 0 (balance balanced)
w'+ r b'+ r' -x (right side heavier)
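Assuming the balance can report the size of the displacement rather than just its direction (which is what this solution relies on), the four D values are easy to tabulate numerically. The masses below are hypothetical example values (light = 1, x = 0.1), chosen only to illustrate the table above:

```python
# Hypothetical masses: light ball = 1, heavy ball = 1 + x (here x = 0.1).
x = 0.1
light, heavy = 1.0, 1.0 + x

# The four possible second weighings from the table: D = left - right.
combos = {
    "w'+r' vs b+r":  (heavy + heavy) - (light + light),   # expect  2x
    "w'+r' vs b'+r": (heavy + heavy) - (heavy + light),   # expect   x
    "w'+r vs b+r'":  (heavy + light) - (light + heavy),   # expect   0
    "w'+r vs b'+r'": (heavy + light) - (heavy + heavy),   # expect  -x
}
for name, d in combos.items():
    print(f"{name}: D = {d:+.1f}")

# The four displacements are pairwise distinct, so the reading identifies the case.
assert len({round(d, 9) for d in combos.values()}) == 4
```

Note that an ordinary pan balance shows only the sign of D, in which case the 2x and x outcomes would be indistinguishable; this check confirms the table only under the solution's stated displacement assumption.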
Last Updated: August 11, 2005
Arithmetic: Why does everyone like Science more than Maths? - Homework Help - eNotes.com
Why does everyone like Science more than Maths?
I wouldn't say that everyone does; it's been my experience as a history teacher that plenty of people enjoy math, and often they enjoy both science and math. In fact, many of the sciences, particularly the physical sciences, appeal to people precisely because they require so much math.
I think some people have trouble seeing how to apply math concepts to their everyday lives. Some areas of science have topics that can be easily demonstrated so some learners can understand the idea
better and other topics have more relevance in our day to day lives.
I agree with Post #3 here. It's often very hard to see the practical applications of math. But the sciences' practical applications are very easily seen. They help us understand the things we see
everyday in a way that math does not.
I do not think this is necessarily true, but since science is the application of math most people will see it as interesting. Science you can see, and math is theoretical until you apply it.
Science and math both explain our lives and make our lives easier.
As an English teacher and history buff, I never cared much for math when I was in school. But I always found science interesting, especially the historical aspects of the subjects. I was fascinated
by the space programs of the 1960s and 1970s (and particularly the astronauts of the Gemini and Apollo missions) as well as the underwater exploration of the era.
I must agree that a statement regarding the liking of science over math cannot be expected to be accepted universally. Many people love math. The problem many have with math (which makes them not like it) is that it can be a hard subject for some to grasp. I myself always hated both subjects in high school; I am an English teacher now. That being said, I have never disregarded math or science as unimportant.
I think that science has more appeal to students because there are opportunities for hands-on work, and because it's easier to see the connection between one's everyday life and science than it is with math.
Many students have trouble with abstract ideas and need to "do" in order to understand a topic. This is much more easily arranged in science class than in math.
Not all like science more than math, but most do. I agree with the above posts, that people see more use in science than in math and science seems more appealing. However, math has its uses too, in
the simplest skills like telling time, counting money or cutting a cake equally for a group of people. Also, math and science are interlinked, such as physics which requires calculation, and
chemistry which requires the balancing of equations. I don't see why we shouldn't like math nor science... I like both :)
simultaneous equation using elimination method
Can you please solve this equation using the elimination method? $3x+2y=27$, $x-2y=-27$
Okay, so by the elimination method we just add the equations together. This always works because if A=B and C=D, then A+C=B+C, since we are allowed to add the same thing to both sides. But now we can say A+C=B+D because C=D. Looking back at our original equations, we can see that we just added the left side of the first equation to the left side of the second, and the right side of the first to the right side of the second. On to this question: $3x+2y+x-2y=27-27$. The left side is $4x$ and the right side is 0, so $4x=0$; divide by 4 to get $x=0$. Now plug $x=0$ into one of the given equations. I'll use the first one: $3(0)+2y=27$, so $2y=27$, so $y=\frac{27}{2}$.
Just a couple of points to remember as well: you can only eliminate terms if they have coefficients that are the same size. In this case, you can recognise that the y-values both have coefficients of "size" 2. Then you check their signs. If they're the same sign, then you would have to subtract one equation from the other. If they are different signs, then you add them together. In this case, you can see that one is positive and one is negative, so you would have to add the equations together, as artvandalay said. Chookas for the exam!
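For anyone who wants to sanity-check the arithmetic, here is a small sketch in Python (added for illustration; the triple layout is my own, not from the posts above) that performs the same elimination step:

```python
# Each equation ax + by = c is stored as the triple (a, b, c).
eq1 = (3, 2, 27)     # 3x + 2y =  27
eq2 = (1, -2, -27)   #  x - 2y = -27

# The y coefficients are equal in size and opposite in sign, so ADD the equations.
a, b, c = (p + q for p, q in zip(eq1, eq2))   # gives 4x + 0y = 0
x = c / a                                     # 4x = 0  =>  x = 0
y = (eq1[2] - eq1[0] * x) / eq1[1]            # back-substitute into eq1
print(x, y)   # 0.0 13.5
```

If the y coefficients had carried the same sign, the same sketch would subtract the triples instead of adding them, matching the rule in the last post.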
1300 Math Formulas
This handbook is a complete desktop reference for students and engineers. It has everything from high school math to math for advanced undergraduates in engineering, economics, physical sciences, and mathematics. The ebook contains hundreds of formulas, tables, graphs, and figures. 1300 Math Formulas is available in English. The structured table of contents, links, and layout make finding the relevant information quick and painless, so the ebook can be used as an everyday online reference guide.
Contents: Number Sets, Algebra, Geometry, Trigonometry, Matrices and Determinants, Vectors, Analytic Geometry, Differential Calculus, Integral Calculus, Differential Equations, Series, Probability
1300 Math Formulas
Identify an equivalent fraction for 5.655.
Francis had 1 lb of bananas. He gave 0.8 lb of them to his friend Jeff. Identify an equivalent fraction for the decimal representing the number of bananas given to his friend.
The length of the side of a dice is 0.24 in. Identify the length of the side of the dice as a fraction in its simplest form.
Kelsey had 1 lb of corn chips. She gave 0.20 lb of it to her friend Quincy. Write a fraction for the quantity of corn chips Kelsey gave to Quincy.
There are two supporting cables for the Golden Gate Bridge. The diameter of each wire in the two cables is 0.882 in. Find the equivalent fraction of the diameter of each wire in the cables.
Identify the fractional form of the decimal 0.65.
The length of a dragonfly is 2.25 inches. Identify the length as an equivalent fraction.
Francis wants to buy a T-shirt worth $42.45. Write the price of the T-shirt as a fraction.
Jeff ate 2/11 of a pizza. Identify the decimal form for the fraction 2/11.
Valerie used 1/3 cup of milk to prepare ice cream. Express the quantity of milk in decimal form.
Charlie went fishing. He caught a fish 5/8 ft long. Which one represents the length of the fish in decimal form?
Diane used 1/4 of a cup of milk for preparing an ice cream. Express the amount of milk used by Diane in decimal form.
Bill and Jeff went skating. The diameter of the wheels of the skating shoe is 1 1/4 inches. Express the fraction in decimal form.
Jake wants to buy a jacket worth $31.85. Write the price of the jacket as a fraction.
Sunny had 3/4 of a pizza. Write the decimal form for the fraction of the pizza.
A bag has 8 marbles, 4/5 of which are red marbles. Express the fraction as a decimal.
Nina used 1/4 of a cup of milk for preparing an ice cream. Express the amount of milk used by Nina in decimal form.
Ursula used 1/3 cup of milk to prepare coffee. Express the quantity of milk in decimal form.
A bottle has 4.75 pints of milk. Express the decimal as a fraction.
Victor cut a 5.4-inch-long piece of color paper to paste on one side of a cube. Express the decimal as a fraction.
Quincy had 1 lb of popcorn. She gave 0.25 lb of it to her friend Diane. Express the quantity of popcorn Quincy gave to Diane as a fraction.
Write a decimal form for the model.
The diameter of the cup is 3 1/2 inches. Express its diameter in decimal form.
Paula used 1/4 of a cup of milk for preparing an ice cream. Express the amount of milk used by Paula in decimal form.
Identify the fractional form of the decimal 0.95.
Identify the fractional form of the decimal 0.75.
The length of a dragonfly is 1.25 inches. Identify the length as an equivalent fraction.
Tony wants to buy a T-shirt worth $47.25. Write the price of the T-shirt as a fraction.
Francis ate 2/11 of a pizza. Identify the decimal form for the fraction 2/11.
Identify an equivalent fraction for 5.835.
Nathan had 1 lb of cherries. He gave 0.2 lb of them to his friend John. Identify an equivalent fraction for the decimal representing the number of cherries given to his friend.
The length of the side of a dice is 0.28 in. Identify the length of the side of the dice as a fraction in its simplest form.
Rachel had 1 lb of corn chips. She gave 0.10 lb of it to her friend Ashley. Write a fraction for the quantity of corn chips Rachel gave to Ashley.
There are two supporting cables for the Golden Gate Bridge. The diameter of each wire in the two cables is 0.510 in. Find the equivalent fraction of the diameter of each wire in the cables.
George ate 15 hundredths pounds of chocolate chip cookies while walking. What is the equivalent fraction of chocolate chip cookies he ate?
Joe bought stamps for $28. Express the cost of the stamps as a fraction.
Identify the fraction 7/8 as a decimal.
Zelma used 1/3 cup of milk to prepare ice cream. Express the quantity of milk in decimal form.
Dennis went fishing. He caught a fish 5/8 ft long. Which one represents the length of the fish in decimal form?
One bag of popcorn contains 16/11 ounces of it. Express the fraction in decimal form.
Justin and Josh went skating. The diameter of the wheels of the skating shoe is 1 1/4 inches. Express the fraction in decimal form.
Express the mixed number 27 1/2 as a decimal using the long division method.
Express the model in decimal form.
A basket has different fruits. 5/12 of them are apples. Identify the fraction as a decimal.
A bag has 11 marbles, 4/5 of which are red marbles. Express the fraction as a decimal.
A bottle has 5.25 pints of milk. Express the decimal as a fraction.
Brad had 4/5 of an apple. Write the decimal form for the fraction of the apple.
Edward cut a 3.8-inch-long piece of color paper to paste on one side of a cube. Express the decimal as a fraction.
Alice used 1/3 cup of milk to prepare a milkshake. Express the quantity of milk in decimal form.
Edward wants to buy a leather flask worth $23.45. Write the price of the leather flask as a fraction.
Zelma had 1 lb of cornflakes. She gave 0.15 lb of it to her friend Paula. Express the quantity of cornflakes Zelma gave to Paula as a fraction.
William bought postal cards for $28. Express the cost of the postal cards as an equivalent fraction.
Fill in the blank with the suitable symbol: 4.23 _____ 4.25
Compare the pair of fractions 7/9 and 8/9.
Order the fractions 1/7, 1/5, and 1/8 from the least to the greatest.
Order the fractions 3/4, 5/7, and 7/8 from the greatest to the least.
Diane used 1/4 of a cup of milk for preparing an ice cream. Express the amount of milk used by Diane in decimal form.
Ethan went fishing. He caught a fish 5/8 ft long. Which one represents the length of the fish in decimal form?
A basket has different fruits. 5/12 of them are apples. Identify the fraction as a decimal.
A bag has 7 marbles, 4/5 of which are red marbles. Express the fraction as a decimal.
Identify the fraction 7/8 as a decimal.
Identify the fractional form of the decimal 0.25.
Andy ate 2/11 of a pizza. Identify the decimal form for the fraction 2/11.
Identify the fraction 7/8 as a decimal.
Lauren used 1/3 cup of milk to prepare ice cream. Express the quantity of milk in decimal form.
John cut a 4.2-inch-long piece of color paper to paste on one side of a cube. Express the decimal as a fraction.
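Conversions like the ones above can be checked with Python's standard `fractions` module. The examples below are added for illustration and mirror a few of the worksheet questions:

```python
from fractions import Fraction

# Decimal -> fraction in simplest form (e.g. the 0.24 in and 2.25 in questions).
print(Fraction("0.24"))    # 6/25
print(Fraction("0.65"))    # 13/20
print(Fraction("2.25"))    # 9/4

# Fraction -> decimal form (e.g. 5/8 ft, 1/4 cup, 2/11 of a pizza).
print(float(Fraction(5, 8)))               # 0.625
print(float(Fraction(1, 4)))               # 0.25
print(round(float(Fraction(2, 11)), 4))    # 0.1818 (repeating decimal, rounded)
```

Constructing `Fraction` from a decimal string automatically reduces to lowest terms, which is exactly the "simplest form" the worksheet asks for.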
West Boxford SAT Math Tutor
...Math is also used to model and evaluate possible mechanisms of the reaction pathway. Truly, math is the queen of the sciences as well as a necessity in everyday life. I would like to tutor
students at any age and in subjects ranging from pure mathematics to practical applications in chemistry, physics and biology.
12 Subjects: including SAT math, chemistry, prealgebra, study skills
...Algebra 2 is when things start to get interesting. I will demonstrate this in our tutoring sessions. I also enjoy communicating the beauty of math by illustrating how alternative solutions will
all result in the same answer.
13 Subjects: including SAT math, calculus, geometry, ASVAB
...I started tutoring back when I was in high school and tutored students in Geometry. I went on to tutoring my fellow college students in calculus and differential equations. In my senior year of
college I was employed by Mathnasium, which is a tutoring and private instruction company that focuses on teaching math in a way that makes sense to kids.
4 Subjects: including SAT math, geometry, algebra 1, algebra 2
...Taught this to freshmen and sophomores at Reading High School in long-term temp assignments (usually maternity leaves). This subject is taught at the middle school level in Reading, and is an easy subject for me, but can mystify middle schoolers when they first see it. Have taught this at Reading High School in several long-term assignments, where the subject was combined with Algebra II.
...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in
their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including SAT math, French, calculus, algebra 1
How to check beam & slab deflection in ETABS
10-31-2010, 04:48 PM
Post: #1
Semi Senior Engineer
How to check beam & slab deflection in ETABS
How do I check beam & slab deflection in ETABS?
Also, what is the deflection limit?
10-31-2010, 04:54 PM
Post: #2
Senior Engineer
RE: How to check beam & slab deflection in ETABS
You can check the nodal displacement in ETABS (global X, Y and Z direction). I do not recommend ETABS for checking slab deflection. Use SAFE.
The deflection limit depends on your code requirements.
Hope I answered your question.
10-31-2010, 05:01 PM
Post: #3
Semi Senior Engineer
RE: How to check beam & slab deflection in ETABS
Dear Visu
After I run the model, how do I check the nodal displacements? Please explain more explicitly.
By the way, is there a printed output of it? Where do I look for it?
As far as I remember (please correct me if I'm wrong), ACI 318 only covers beam & slab deflection for simply supported members. Then for slabs & beams in a frame or frame-wall system, which formula should I use?
10-31-2010, 06:15 PM
Post: #4
RANA WASEEM
Professional Engineer
RE: How to check beam & slab deflection in ETABS
dear user...
for beam deflection, refer to ACI 318 chapter 9 section "CONTROL OF DEFLECTIONS"...or refer to PCA Notes on ACI 318.
calculate the desired section properties, moment of inertia, cracked moment of inertia, effective moment of inertia etc...and calculate deflection.
Now ETABS does not take cracked section properties in its analysis, that is why you will have to define FRAME PROPERTY MODIFIERS...
Also, in your manual calculations do not calculate Icr (cracked moment of inertia); rather take Ieff (effective moment of inertia) = some factor x Ig (gross moment of inertia).
for example if you have 12 inch by 12 inch rect. section...your Ig is bd^3/12 = 12*12^3 / 12 = 1728 in^4...now take your Ieff = some factor * Ig....
Most common factor for beams = 0.35 so your Ieff = 0.35*Ig = 0.35*1728 = 604.8
Now use this value in your hand calculations and to check it in ETABS do this:-
1) draw a simple beam and apply Uniformly dist. load on it...
2) make its end offsets value 0...by ASSIGN>FRAME LINE>END OFFSETS and select DEFINE and put 0 in x & y.
3) now again ASSIGN>FRAME LINE>FRAME PROPERTY MODIFERS and put 0.35 in mom. of inertia along 3 -axis (major axis) and press OK.
4) define the desired load comb on which you want deflection..e.g. 1.0D+ 1.0L etc
5) Run the analysis
6) Click DISPLAY>SHOW MEMBER FORCES/STRESSES> FRAME/PIER/FORCES and select desired combination and select Moment 3-3 then Ok.
7) Right click on the bending moment diagram and there it's at the last...deflection...don't forget to click the ABSOLUTE option (first one) at the bottom to check deflection...this will be the exact deflection you calculated manually...
if you have further doubts don't hesitate to ask....but for the actual cracked section properties without using this factor of 0.35 etc...USE SAFE...
to check the long term deflection...manually put the long term deflection combination e.g. 3SW+3DL+3LL etc...
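The hand side of this check can be sketched in a few lines. This is illustrative only, not ETABS output: the load, span, and modulus below are made-up example values, and the formula is the standard midspan deflection of a simply supported beam under a uniform load.

```python
def gross_inertia(b, d):
    """Gross moment of inertia of a rectangular section: Ig = b*d^3/12 (in^4)."""
    return b * d**3 / 12.0

def midspan_deflection(w, L, E, I):
    """Midspan deflection of a simply supported beam under a uniform load:
    delta = 5*w*L^4 / (384*E*I)."""
    return 5.0 * w * L**4 / (384.0 * E * I)

# 12 in x 12 in section from the post: Ig = 12*12^3/12 = 1728 in^4
Ig = gross_inertia(12.0, 12.0)
Ieff = 0.35 * Ig                    # common beam factor -> 604.8 in^4

# Hypothetical load case (made-up values): w = 1 kip/ft = 1/12 kip/in,
# L = 20 ft = 240 in, E = 3600 ksi
w, L, E = 1.0 / 12.0, 240.0, 3600.0
delta = midspan_deflection(w, L, E, Ieff)
print(Ig, Ieff, round(delta, 3))
```

With consistent units, this hand value is what the ETABS deflection read off the moment diagram (step 7) should match to within the 1-2% mentioned later in the thread.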
The following 11 users say Thanks to RANA WASEEM for this post:
shaqee, pana2, HungNguyen29, theanhpe, hmwere, engr1900, jaks, cve_jule, Dell_Brett, chigozie, engiman27
11-01-2010, 01:02 AM
Post: #5
Junior Engineer
User ID: 15983 Registration Date: Mar 2010 Posts: 19 Threads: 4 Thanked: 27 in 9 posts
Shout: pana
RE: How to check beam & slab deflection in ETABS
Dear coolestbliss
is the value of 0.35 (or 35%) a recommendation of ACI or PCA?
can you give me an explanation please
The following 1 user says Thanks to pana2 for this post.
11-01-2010, 08:37 AM
Post: #6
RANA WASEEM
Professional Engineer
RE: How to check beam & slab deflection in ETABS
Dear PANA2
1) actually this 0.35 is the ratio (not exactly but nearly) of Mcr/Ma used in deflection calculations of ACI & PCA...(PCA is the same thing...it's the explanation on ACI)....
2) Instead of doing detailed calculations of cracked moment of inertia for deflection calculations, we use reduced moment of inertia by using these factors....this 0.35 for beams is a common practice
for beams in design industry...
3) just take an example of a simply supported rect. beam as I said in my earlier post and calculate both deflections, by hand and by ETABS, using the 0.35 factor...they will give you the same values with less than 1 or 2% error.
The following 6 users say Thanks to RANA WASEEM for this post:
theanhpe, engr1900, jaks, cve_jule, pana2, chigozie
11-01-2010, 12:14 PM
Post: #7
Semi Senior Engineer
RE: How to check beam & slab deflection in ETABS
Dear coolestbliss
Very clear explanation on beam deflection checking ! Thanks
Btw, I didn't find the beam deflection in the ETABS output file. Where can I see it?
Can you explain checking of slab deflection in etabs ?
The following 1 user says Thanks to budis for this post.
11-01-2010, 01:56 PM
Post: #8
RANA WASEEM
Professional Engineer
RE: How to check beam & slab deflection in ETABS
try to learn the ETABS interface first
for slab deflection you can get values @ nodes...I mean you have to mesh the slab manually...better use SAFE
The following 5 users say Thanks to RANA WASEEM for this post:
engr1900, budis, hmwere, chigozie, iceman84
User(s) browsing this thread: 1 Guest(s)
primitive root of unity
April 24th 2010, 09:31 PM
primitive root of unity
Let x = e^(2πi/13), a primitive 13th root of unity.
Find a subfield K of Q(x) with [Q(x):K] = 3.
Find a subfield L of Q(x) with [Q(x):L] = 4.
April 25th 2010, 01:04 AM
Let $\zeta$ be a primitive 13th root of unity. We see that $\text{Gal}(\mathbb{Q}(\zeta)/\mathbb{Q})\cong(\mathbb{Z}/13\mathbb{Z})^\times \cong \mathbb{Z}/12\mathbb{Z}$, where a generator of
this cyclic group is $\sigma:\zeta \mapsto \zeta^2$.
You need to find the subgroups of order 3 and order 4 of the above group and correspond them to subfields of $\mathbb{Q}(\zeta)$.
The subgroup of order 3 for $\text{Gal}(\mathbb{Q}(\zeta), \mathbb{Q})$ is generated by $\sigma^4$.
Thus $K=\mathbb{Q}(\zeta + \sigma^4(\zeta) + \sigma^8(\zeta)) = \mathbb{Q}(\zeta + \zeta^{2^4} + \zeta^{2^8}) = \mathbb{Q}(\zeta + \zeta^3 + \zeta^9)$, where $[\mathbb{Q}(\zeta):K] = 3$.
You can find L in exactly the same way, using the subgroup of order 4 (generated by $\sigma^3$).
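The exponent bookkeeping above is easy to verify numerically (a quick sketch, not part of the original post): 2 is a primitive root mod 13, $\sigma^4$ sends $\zeta$ to $\zeta^{2^4}$, and the orbit of the exponent 1 under multiplication by $2^4 \equiv 3 \pmod{13}$ is exactly $\{1, 3, 9\}$.

```python
p = 13
# 2 is a primitive root mod 13: its powers hit every nonzero residue
powers_of_2 = {pow(2, k, p) for k in range(p - 1)}
print(powers_of_2 == set(range(1, p)))   # True

# sigma^4 acts on exponents by multiplication by 2^4 mod 13
m = pow(2, 4, p)                          # 16 ≡ 3 (mod 13)
orbit, e = set(), 1
while e not in orbit:
    orbit.add(e)
    e = (e * m) % p
print(m, sorted(orbit))                   # 3 [1, 3, 9]
```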
Proving isomorphism to Zn
November 20th 2011, 03:29 PM #1
Nov 2011
I have proven a cyclic group to be a group, but now must prove that a cyclic group of order n is isomorphic to Zn. I realize I must show homomorphism, injection and surjection, but I am struggling with
the initial equivalences. Please help.
Re: Proving isomorphism to Zn
Let $a$ be a generator of your cyclic group $G$. Can you find a function $f:G \to \mathbb{Z}_n$ with $f(a)=1$ such that $f$ is a homomorphism?
Re: Proving isomorphism to Zn
Are you suggesting a concrete f like f(x)=x+5, or a generalized one like f(x)=ix+j?
Re: Proving isomorphism to Zn
Something concrete, in terms of things we know must exist (like generator "a").
Try a^n
Re: Proving isomorphism to Zn
Re: Proving isomorphism to Zn
all elements of the cyclic group
Re: Proving isomorphism to Zn
MAdone, this is really the whole of the idea.
if $f:G \to \mathbb{Z}_n$ is to be a homomorphism, we must have f(a*a) = f(a) + f(a) = 1+1. so there is really only "one" essential way to define f:
$f(a^k) = k$. that f is a homomorphism follows immediately from the laws of exponents.
since G is finite it suffices to show that f is surjective (clearly |G| = $|\mathbb{Z}_n|$ = n), and thus f is an isomorphism.
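This construction can be checked on a concrete cyclic group (a sketch, not part of the original thread; the multiplicative group $(\mathbb{Z}/7\mathbb{Z})^\times = \langle 3\rangle$ of order 6 is an arbitrary example). The map $f(g^k) = k$ is exactly the discrete logarithm base $g$, and the homomorphism property $f(xy) = f(x) + f(y) \pmod n$ can be verified exhaustively:

```python
n, g, p = 6, 3, 7   # G = <3> inside (Z/7Z)^x is cyclic of order 6

# Build f explicitly: f(g^k) = k, the discrete log base g
f = {pow(g, k, p): k for k in range(n)}
print(len(f) == n)   # True: g really has order 6

# f is a homomorphism: f(x*y) = f(x) + f(y) (mod n) for all x, y in G
ok = all(f[(x * y) % p] == (f[x] + f[y]) % n for x in f for y in f)
print(ok)            # True
```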
November 20th 2011, 03:35 PM #2
Super Member
Apr 2009
November 20th 2011, 03:46 PM #3
Nov 2011
November 20th 2011, 03:49 PM #4
November 20th 2011, 03:53 PM #5
Super Member
Apr 2009
November 20th 2011, 04:39 PM #6
Nov 2011
November 20th 2011, 08:04 PM #7
MHF Contributor
Mar 2011
Related Programs
Related Courses
Functions explored from numerical, graphical, and analytic perspectives. Function notation, operations, and inverses. Includes study of polynomial, rational, exponential, logarithmic, and
trigonometric functions. Intended as a preparation for calculus and not open to students who have taken calculus in college. Presumes competency in the content of MATH 120. Fall, Spring.
Difference Between Variance and Covariance
Variance vs Covariance
Variance and covariance are two measures used in statistics. Variance is a measure of the scatter of the data, and covariance indicates the degree to which two random variables change together. Variance
is a rather intuitive concept, but covariance is defined mathematically and is not as intuitive at first.
More about Variance
Variance is a measure of dispersion of the data from the mean value of the distribution. It tells how far the data points lie from the mean of the distribution. It is one of the primary descriptors
of the probability distribution and one of the moments of the distribution. Also, variance is a parameter of the population, and the variance of a sample from the population act as an estimator for
the variance of the population. From one perspective, it is defined as the square of the standard deviation.
In plain language, it can be described as the average of the squares of the distance between each data point and the mean of the distribution. Following formula is used to calculate the variance.
Var(X) = E[(X-µ)^2] for a population, and
s^2 = Σ(x_i - x̄)^2 / (n-1) for a sample.
The population formula can further be simplified to give Var(X) = E[X^2] - (E[X])^2.
Variance has some signature properties, and often used in statistics to make the usage simpler. Variance is non-negative because it is the square of the distances. However, the range of the variance
is not confined and depends on the particular distribution. The variance of a constant random variable is zero, and the variance does not change with respect to a location parameter.
More about Covariance
In statistical theory, covariance is a measure of how much two random variables change together. In other words, covariance is a measure of the strength of the correlation between two random
variables. Also, it can be considered as a generalization of the concept of variance of two random variables.
Covariance of two random variables X and Y, which are jointly distributed with finite second moments, is defined as Cov(X,Y) = E[(X-E[X])(Y-E[Y])]. From this, variance can be seen as a special case of
covariance, where the two variables are the same: Cov(X,X) = Var(X).
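Both identities — Var(X) = E[X^2] - (E[X])^2 and Cov(X,X) = Var(X) — can be confirmed on sample data. The sketch below uses population-style (divide-by-n) averaging and made-up data values:

```python
from statistics import mean, pvariance, pstdev

xs = [1.0, 2.0, 4.0, 7.0]
ys = [2.0, 3.0, 9.0, 13.0]

# Identity check: Var(X) = E[X^2] - (E[X])^2 (population averaging)
var_x = mean(x * x for x in xs) - mean(xs) ** 2
print(abs(var_x - pvariance(xs)) < 1e-12)   # True

def pcov(a, b):
    """Population covariance: E[(A - E[A]) * (B - E[B])]."""
    ma, mb = mean(a), mean(b)
    return mean((u - ma) * (v - mb) for u, v in zip(a, b))

# Special case: Cov(X, X) = Var(X)
print(abs(pcov(xs, xs) - pvariance(xs)) < 1e-12)   # True

# Pearson correlation: rho = Cov(X, Y) / (sigma_X * sigma_Y), always in [-1, 1]
rho = pcov(xs, ys) / (pstdev(xs) * pstdev(ys))
print(-1.0 <= rho <= 1.0)   # True
```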
By normalizing the covariance, the linear correlation coefficient or Pearson's correlation coefficient can be obtained, which is defined as ρ = E[(X-E[X])(Y-E[Y])]/(σ_X σ_Y) = Cov(X,Y)/(σ_X σ_Y).
Graphically, the covariance between a pair of data points can be seen as the area of the rectangle with the data points at opposite vertices. It can be interpreted as a measure of the magnitude of
separation between the two data points. Considering the rectangles for the whole population, the overlapping of the rectangles corresponding to all the data points can be considered as the strength
of the separation, i.e. the covariance of the two variables. Covariance is two-dimensional because there are two variables, but simplifying it to one variable gives the variance of a single variable, the separation in one dimension.
What is the difference between Variance and Covariance?
• Variance is the measure of spread/ dispersion in a population while covariance is considered as a measure of variation of two random variables or the strength of the correlation.
• Variance can be considered as a special case of covariance.
• Variance and covariance are dependent on the magnitude of the data values, and cannot be compared; therefore, they are normalized. Covariance is normalized into the correlation coefficient
(dividing by the product of the standard deviations of the two random variables) and variance is normalized into the standard deviation (by taking the square root)
A Degree-Decreasing Lemma for (MODq-MODp) Circuits
Vince Grolmusz
Consider a (MODq,MODp) circuit, where the inputs of the bottom MODp gates are degree-d polynomials with integer coefficients of the input variables (p, q are different primes). Using our main tool
--- the Degree Decreasing Lemma --- we show that this circuit can be converted to a (MODq,MODp) circuit with linear polynomials on the input-level with the price of increasing the size of the
circuit. This result has numerous consequences: for the Constant Degree Hypothesis of Barrington, Straubing and Thérien, and generalizing the lower bound results of Yan and Parberry, Krause and
Waack, and Krause and Pudlák. Perhaps the most important application is an exponential lower bound for the size of (MODq,MODp) circuits computing the n fan-in AND, where the input of each MODp gate
at the bottom is an arbitrary integer valued function of cn variables (c<1) plus an arbitrary linear function of n input variables.
Derivation of Pythagorean Theorem
Pythagorean Theorem
In any right triangle, the sum of the squares of the two perpendicular sides is equal to the square of the longest side. For a right triangle with legs measuring a and b and hypotenuse c, a^2 + b^2 = c^2.
Proved by Pythagoras
Area of the large square = Area of four triangles + Area of small square
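Filling in the algebra behind that equation: the large square has side $a + b$, each of the four congruent right triangles has area $\tfrac{1}{2}ab$, and the inner (small) square has side $c$, so

```latex
(a + b)^2 = 4 \cdot \tfrac{1}{2}ab + c^2
\;\Longrightarrow\;
a^2 + 2ab + b^2 = 2ab + c^2
\;\Longrightarrow\;
a^2 + b^2 = c^2 .
```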
Proved by Bhaskara
Bhaskara (1114 - 1185) was an Indian mathematician and astronomer.
Area of the large square = Area of four triangles + Area of inner (smaller) square
Proved by U.S. Pres. James Garfield
Area of trapezoid = Area of 3 triangles | {"url":"http://www.mathalino.com/reviewer/derivation-of-formulas/derivation-of-pythagorean-theorem","timestamp":"2014-04-17T04:23:34Z","content_type":null,"content_length":"52921","record_id":"<urn:uuid:49420dec-b538-411f-b602-862e27d62584>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
March 1st 2006, 12:34 AM #1
Sep 2005
please help.
The price p and the demand x are related by the equation x^2 – p^2x =10. Suppose that the demand is increasing at the rate of 20 units/week. At what rate is p changing with respect to time when
the demand is 50 units?
The price p and the demand x are related by the equation x^2 – p^2x =10. Suppose that the demand is increasing at the rate of 20 units/week. At what rate is p changing with respect to time when
the demand is 50 units?
You know that $x^2-p^2x=10$. So take the time derivative of both sides:
please check my work...
The price p and the demand x are related by the equation x^2 – p^2x = 10. Suppose that the demand is increasing at the rate of 20 units/week. At what rate is p changing with respect to time when
the demand is 50 units?
Solution: x^2 – p^2x =10 ---->(1)
differentiating wrt time we get
2x dx/dt -2xp dp/dt -p^2dx/dt =0
we have
substitute x=50 in eq(1)
50^2 -p^2*50=10
p= 7.05
now to find dp/dt
2*50*20 -2*50*7 dp/dt - 7^2 *20
2000 -700 dp/dt -980=0
700dp/dt= 1020
dp/dt= 1020/700
dp/dt= 1.457
please check my work...
The price p and the demand x are related by the equation x^2 – p^2x = 10. Suppose that the demand is increasing at the rate of 20 units/week. At what rate is p changing with respect to time when
the demand is 50 units?
Solution: x^2 – p^2x =10 ---->(1)
differentiating wrt time we get
2x dx/dt -2xp dp/dt -p^2dx/dt =0
we have
substitute x=50 in eq(1)
50^2 -p^2*50=10
p= 7.05
now to find dp/dt
2*50*20 -2*50*7 dp/dt - 7^2 *20
2000 -700 dp/dt -980=0
700dp/dt= 1020
dp/dt= 1020/700
dp/dt= 1.457
please check my work...
This is basically OK except that you round p to 7.05 when it is ~7.0569,
then you round it down again to exactly 7. This loses accuracy unnecessarily
and gives your answer of dp/dt=1.457, when it should be more like 1.423.
PS I have deleted the duplicate post of this question.
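CaptainBlack's correction is easy to confirm numerically by carrying p = √49.8 at full precision (a quick check, not part of the original thread):

```python
from math import sqrt

# Constraint: x^2 - p^2*x = 10, with x = 50 and dx/dt = 20
x, dxdt = 50.0, 20.0
p = sqrt((x**2 - 10.0) / x)                 # p = sqrt(49.8) ≈ 7.0569

# Implicit differentiation: 2x x' - 2xp p' - p^2 x' = 0
# =>  dp/dt = (2x - p^2) * dx/dt / (2xp)
dpdt = (2.0 * x - p**2) * dxdt / (2.0 * x * p)
print(round(p, 4), round(dpdt, 3))          # 7.0569 1.423
```

Keeping p at full precision gives dp/dt ≈ 1.423, versus the 1.457 obtained after rounding p down to exactly 7.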
Last edited by CaptainBlack; March 1st 2006 at 03:02 AM.
March 1st 2006, 12:38 AM #2
March 1st 2006, 02:05 AM #3
Sep 2005
March 1st 2006, 02:48 AM #4
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/business-math/2056-please-help.html","timestamp":"2014-04-18T07:32:06Z","content_type":null,"content_length":"36538","record_id":"<urn:uuid:c7406af5-c7e0-4393-abe7-0165e960ef18>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculate Sales Tax using Java Program
Displaying 1 - 50 of about 30035 Related Tutorials.
Calculate Sales Tax using Java Program In this section, you will learn how to calculate the sales tax and print out the receipt details for the purchased...) Basic sales tax is applicable at a rate
of 10% on all goods, except books,food
Modify the sales tax program to accept an arbitrary number of prices, total them, calculate the sales tax and print the total amount. Modify the sales tax program to accept an arbitrary number of
prices, total them, calculate
will take 132.3 and give the tax amounts from that.... WAP a java program...tax calculation I want you to write a program while take some amount and do the tax calculation Sr.No. Particulars Present
Rate Amount in INR
PHP Tax Calculator In my project i required a tax calculator that can calculate the property tax
how to calculate the employee tax How to calculate the employee tax.Can u please send to me calculation tables
Calculate sum and Average in java How to calculate sum and average in java program? Example:- import java.io.BufferedReader; import.... The given code accepts the numbers from the user using
BufferedReader class. Then using
Retail Point Of sales (using jsp and servlet) Hi, i want to know how to add products using search module, i have a database and created a product module which you can list all the available products
to process sales on new sales
Calculate Company's Sales Using Java In this section, you will learn how to calculate Company's sale. A company sales 3 items at different rates and details of sales ( total amount) of each item are
stored on weekly basis i.e. from Monday
sales detail creation of applet for sales deatils form the applet for Sales Details must include the following: ï?® The CartID, OrderNo, OrderDate... card type Expiry date OK CANCEL Workbook 54
©NIIT � The Sales details should
calculate average Design and write a java program to determine all... and write a program and calculate your final average in this course. The details of the calculating can be found in the course
syllabus. Test your program
with the complete java source code. Description of this program In this program... How to Calculate Area of Rectangle In this section we will learn how to calculate area
Java example to calculate the execution time  ... of a method or program in java. This can be done by subtracting the starting time of the program or method by the ending time of the method. In
our example java
Factorial Examples - Java Factorial Example to calculate factorial of any... program to calculate factorial of any given number. First of all define a class... in java. Here is the code of program:
Calculate the Sum of Three Numbers This is simple java programming tutorial . In this section you will learn how to calculate the sum of three numbers by using three
calculate size of array Is it possible to calculate the size of array using pointers in Java
.style1 { margin-right: 0px; } How to Calculate...; In this section we will learn how to calculate area of triangle. We... in this program we will fine the how to the display massage the area
Calculate factorial Using Recursion  ... by using recursion in jsp. To make a program on factorial, firstly it must... a program on factorial by using recursion in jsp we are going to design a
html page
calculate volume of cube, cylinder and rectangular box using method overloading in java calculate volume of cube, cylinder and rectangular box using method overloading in java calculate volume of
cube, cylinder
Simple java program Hi this is the thing we got to do and i am totally... marks up the prices of its items by 25%. Write a Java program that declares... variable named original_price 3. a double
variable named sales_tax_rate
Travelling Sales MAn Using GA travelling sales man problem using ga first i have randomly select few cities which will be called population size ex 10 cities are there then all the possible
combination will be 10
Calculate process time in Java This section provides the facility to calculate the process time of the completion of the operation through the Java program
Write a program to calculate area and perimeter of a circle... example will teach you the method for preparing a program to calculate the area...; under Java I/O package and define and integer r=o,
which is the radius
Calculate Age using current date and date of birth In this section you will learn how to calculate the age. For this purpose, through the code, we have prompted the user to enter current date and
date of birth in a specified format. Now
of advertisement. Under sales promotional program, sales manager select what should... is responsible for the handling of sales management and leading sales team called sales manager who is the main
force behind the sales management team
how to calculate max and min hye!i'm a new for the java and i want to ask a question . Could anyone help me to teach how to calculate the maximum and minimum using java language. Hi friend, In Math
class having two
write a program to calculate volume of a cube write a program to calculate volume of a cube, cylinder, rectangular box using method overloading  
calculate average Question 2 cont.d Test case c: home works/labs 10 9 9 9 10 10 9 9 10 10 9 9 10 test scores: 65 89 note, the program should work with different numbers of home works/labs
Java Program I want the answer for this question. Question: Write a program to calculate and print the sum of odd numbers and the sum of even...(using for loop). Instructions: *You should surely use
Functions in this question
Sales System.. Need Help!! were going to make a sales system in our subject Database management and that will be our project, we already have a company to use. But the problem is.. our group dont
know how to start, we dont have
Calculate Entropy using C++ # include <iostream> # include <cmath> using namespace std; int main() { float S0,S1,S2,S3; float Hs,Hs3; float
A Java Program by using JSP how to draw lines by using JSP plz show me the solution by using program
RARP program using java hai, how to implement the RARP concept using java
WAP to calculate addition of two distances in feets and inches using objects as functions arguments in java WAP to calculate addition of two distances in feets and inches using objects as functions
arguments in java  
WAP to calculate addition of two distances in feets and inches using objects as functions arguments in java WAP to calculate addition of two distances in feets and inches using objects as functions
arguments in java  
Q, 12 WAP to calculate addition of two distances in feets and inches using objects as functions arguments in java Q, 12 WAP to calculate addition of two distances in feets and inches using objects as
functions arguments in java
WAP to calculate addition of two distances in feets and inches using objects as functions arguments in java WAP to calculate addition of two distances in feets and inches using objects as functions
arguments in java  
java program Create an Employee class which has methods netSalary which would accept salary & tax as arguments & returns the netSalary which is tax deducted from the salary. Also it has a method
grade which would accept
java program Create an Employee class which has methods netSalary which would accept salary & tax as arguments & returns the netSalary which is tax deducted from the salary. Also it has a method
grade which would accept
how to calculate addition of two distances in feets and inches using objects as functions arguments how to calculate addition of two distances in feets and inches using objects as functions arguments
in java
with this program,i'm new to java need some help,thank you! This program should allow..., then calculate the user's final bill, including taxation at 20%. A sample run... = $12.50 Subtotal: $18.50
Tax @ 20%: $3.70 Grand Total: $22.20
program i want a progra in java to print a sentence in alphabetic order, taking the input from the user.the program should writen without using the array for example : if input= this is a cat then
output sould = a cat
Java Program Hi, I'm have complications with this program. I keep getting errors and my coding is off. Can you help me? Write a program called... is clicked, your program should calculate the area of
the office and display the area
java program write a java program to display array list and calculate the average of given array
java program write a java program to display array list and calculate the average of given array
Program hey please help me ... How can write multiple choice question paper using radio button.After submit calculate marks and display our marks... can any body help for me.using servlet,html Here
is a jsp test
. the program should then calculate the face value required in order...how can i calculate loan negotiating a consumer loan is not always... if the consumer needs 1,000. write a program that will
take three inputs: the amount
Java Program The numbers in the following sequence are called the fibonacci numbers . 0 , 1 , 1 , 2, 3 , 5 , 8 , 13 , ����.. Write a program using do�..while loop to calculate and print the
first m Fibonacci numbers
Code to calculate hair density Hello, Would you please give me a code to calculate the hair density on scalp from image in java
Java program? In order for an object to escape planet's.... The escape velocity varies from planet to planet.Create a Java program which calculates the escape velocity for the planet. Your program
should first prompt
Write a program that makes use of a class called Employee that will calculate an employee's weekly paycheque. Write a program that makes use of a class called Employee that will calculate an
employee's weekly paycheque. Design | {"url":"http://www.roseindia.net/software-tutorials/detail/647","timestamp":"2014-04-19T14:43:05Z","content_type":null,"content_length":"56463","record_id":"<urn:uuid:a5f40b7e-d1a5-41aa-a4ae-57a18144b46c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is it true or false that as the tails of the normal distribution curve are infinitely long, the total area under the curve is also infinite?
Because the tails of the normal distribution curve are infinitely long, the total area under the curve is also infinite.
The statement is false.
Do not confuse normal with normalized.
A normalized distribution curve has an area under the curve of exactly 1, by definition.
This is because the probability of all possible events is always exactly 1.
In this case, the shape of the curve does not matter.
In probability theory, the normal (or Gaussian ) distribution is a continuous probability distribution that has a bell-shaped probability density function, known as the Gaussian function or
informally the bell curve.
A normal distribution curve is a graphical representation of the probability of a continuous variable which has a probability defined by a Gaussian function with the highest concentration near values
that lie at the mean.
The total probability of any variable is equal to 1 and can never exceed 1.
In a normal distribution curve, the tails of the curve are infinitely long, but the area under them decreases at a very fast rate as the value of the variable deviates from the mean. The total area
under the entire curve, including the infinitely long tails, adds up to exactly 1.
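This is easy to verify numerically: the tails decay like e^(-x^2/2), so truncating the integral at ±10 loses less than 10^(-23) of the area. A simple trapezoidal check (an illustrative sketch using only the standard library):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian probability density function."""
    z = (x - mu) / sigma
    return exp(-0.5 * z * z) / (sigma * sqrt(2.0 * pi))

def trapezoid(f, a, b, n=20000):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

# The tails are infinitely long, but the total area still converges to 1
area = trapezoid(normal_pdf, -10.0, 10.0)
print(round(area, 8))   # 1.0
```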
2005; 197 pp; hardcover
ISBN-10: 0-8218-3841-5
ISBN-13: 978-0-8218-3841-9
List Price: US$43
Member Price: US$34.40
Order Code: REALAPP
Real Analysis and Applications starts with a streamlined, but complete, approach to real analysis. It finishes with a wide variety of applications in Fourier series and the calculus of variations,
including minimal surfaces, physics, economics, Riemannian geometry, and general relativity. The basic theory includes all the standard topics: limits of sequences, topology, compactness, the Cantor
set and fractals, calculus with the Riemann integral, a chapter on the Lebesgue theory, sequences of functions, infinite series, and the exponential and Gamma functions. The applications conclude
with a computation of the relativistic precession of Mercury's orbit, which Einstein called "convincing proof of the correctness of the theory [of General Relativity]."
The text not only provides clear, logical proofs, but also shows the student how to derive them. The excellent exercises come with select solutions in the back. This is a text that makes it possible
to do the full theory and significant applications in one semester.
Frank Morgan is the author of six books and over one hundred articles on mathematics. He is an inaugural recipient of the Mathematical Association of America's national Haimo award for excellence in
teaching. With this applied version of his Real Analysis text, Morgan brings his famous direct style to the growing numbers of potential mathematics majors who want to see applications along with the
The book is suitable for undergraduates interested in real analysis.
Request an examination or desk copy.
Undergraduate students interested in real analysis and its applications.
Part I: Real numbers and limits
• Numbers and logic
• Infinity
• Sequences
• Subsequences
• Functions and limits
• Composition of functions
Part II: Topology
• Open and closed sets
• Compactness
• Existence of maximum
• Uniform continuity
• Connected sets and the intermediate value theorem
• The Cantor set and fractals
Part III: Calculus
• The derivative and the mean value theorem
• The Riemann integral
• The fundamental theorem of calculus
• Sequences of functions
• The Lebesgue theory
• Infinite series \(\sum_{n=1}^\infty a_n\)
• Absolute convergence
• Power series
• The exponential function
• Volumes of \(n\)-balls and the gamma function
Part IV: Fourier series
• Fourier series
• Strings and springs
• Convergence of Fourier series
Part V: The calculus of variations
• Euler's equation
• First integrals and the Brachistochrone problem
• Geodesics and great circles
• Variational notation, higher order equations
• Harmonic functions
• Minimal surfaces
• Hamilton's action and Lagrange's equations
• Optimal economic strategies
• Utility of consumption
• Riemannian geometry
• Noneuclidean geometry
• General relativity
• Partial solutions to exercises
• Greek letters
• Index | {"url":"http://ams.org/bookstore?fn=20&arg1=tb-an&ikey=REALAPP","timestamp":"2014-04-20T20:32:32Z","content_type":null,"content_length":"18019","record_id":"<urn:uuid:59939f43-3b29-4940-b3e6-288bf2152ee5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
A term appropriated from physics by Espen Aarseth to describe a type of literature. Derived from the Greek words ergon and hodos, which mean "work" and "path," respectively. Ergodic literature
requires work to traverse a text.
The ergodicity of a physical or computational process is a statement about initial and final states of that process.
What defines a state? It depends on your particular problem. If we think of a classical gas of atoms, then the positions and velocities of those atoms at a given instant in time would define the
state of the gas. The state of a computer's memory could be represented by a sequence of 0's and 1's.
Let's use the letter S(t) to represent the state of a given system at a given time, labeled by t. And let's say t=0 is the initial time and t=1 is the final time. Let's also use an arrow to denote
the process under study. So under our process, S(0) → S(1), that is the process starts from initial state S(0) at t=0 and ends up at S(1).
Now imagine any 2 possible states. Let one of them be the initial state of the system, and the other the final. A process is defined to be ergodic if there is a nonzero probability for the process to
evolve S(0) to S(1), that is, if Prob( S(0) → S(1) ) > 0 no matter which 2 states you picked. Another way of phrasing the definition is: a process is ergodic if every point in state space will be
visited at least once if the process is applied for an infinite number of steps.
If you have a nasty multi-dimensional integral, Monte Carlo methods are a very efficient way to evaluate the integral numerically. Let's say the integrand depends on a field. Think of the Ising
model: a 2 dimensional lattice of points, where at each point the field is equal to +1 or -1; quantities like the magnetization and internal energy are functions of the field configuration, or state,
of the spins. To calculate thermal averages of these quantities, integrals must be done over all possible spin configurations, weighted by a Boltzmann factor, exp(-E/T). (E=energy, T=temperature).
Due to the decaying exponential, most field configurations will give exponentially small contributions to the thermal average, so it is tremendously efficient to use an algorithmic updating process
to generate just the states which dominate the integral. In order not to skew the sample, that is, in order to find the important configurations with the right weight, the updating process must be
able to, in principle, visit every single state. That is, the updating process must be ergodic in order to have an unbiased calculation. (The process must also tend to the canonical distribution
after repeated application, see below.)
Nontrivial, nonpathological processes which are ergodic are called Markov chains. If a process is ergodic, and if repeated applications of the process will take any distribution of states toward the
canonical distribution (Boltzmann distribution), then the process will (eventually) yield a sample of states which can be used as a realistic sample. The sample average, which you can calculate using
your generated chain of states, will tend toward the desired thermal average (more generally, the ensemble average), the answer you're looking for. Detailed balance is a sufficient, but not generally
necessary, condition for tending to the canonical distribution.
For references, see any text on Monte Carlo methods. The one I have handy right now is Quantum Fields on a Lattice by I. Montvay and G. Münster (Cambridge Univ. Press, 1994).
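The updating process described above can be made concrete in a few lines. Below is a toy single-spin-flip Metropolis sketch for a 1D Ising chain (pure Python; the coupling J = 1, periodic boundaries, and the parameter values are my choices, not from the writeup). It also checks the ergodicity requirement directly on a tiny chain by recording every configuration visited:

```python
import math
import random

def metropolis_chain(n, T, steps, seed=0):
    """Single-spin-flip Metropolis updating for a 1D Ising chain of n spins
    (J = 1, periodic boundaries; toy choices, not from the writeup above).
    Returns the set of distinct spin configurations visited."""
    rng = random.Random(seed)
    spins = [1] * n
    visited = {tuple(spins)}
    for _ in range(steps):
        i = rng.randrange(n)
        # Energy change from flipping spin i, with E = -sum_j s_j * s_{j+1}
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
        # Accept with probability min(1, exp(-dE/T)), which is strictly
        # positive for T > 0; that positivity is what makes the update ergodic.
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i] = -spins[i]
        visited.add(tuple(spins))
    return visited

# On a tiny chain at moderate temperature, every one of the 2^3 = 8
# configurations gets visited, as ergodicity requires.
states = metropolis_chain(n=3, T=5.0, steps=20000, seed=1)
assert len(states) == 2 ** 3
```

Because exp(-dE/T) > 0 for every move at T > 0, any configuration can in principle be reached from any other, which is exactly the ergodicity condition demanded above for an unbiased Monte Carlo sample.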
| {"url":"http://everything2.com/title/ergodic","timestamp":"2014-04-18T16:22:45Z","content_type":null,"content_length":"28024","record_id":"<urn:uuid:1a39f884-0db3-4489-a3f0-6ad66c28d795>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Olivier Bernardi
Assistant Professor of Mathematics
University of Bordeaux III, Ph.D.
Ecole Normale Superieure, M.S.
Ecole Normale Superieure, B.S.
Combinatorics and Probability
My main research interests are in combinatorics and probability.
In particular, I am interested in "discrete surfaces" (a.k.a. maps), that is, surfaces obtained by gluing polygons along their edges. The applications range from the concrete (graph drawing algorithms, encoding of computer images) to the theoretical (representation theory, random matrices, random surfaces appearing in quantum mechanics and string theory). I am particularly interested in the bijections between maps and trees which are at the core of recent advances in our understanding of discrete surfaces.
Courses Taught
MATH 8a Introduction to Probability and Statistics
MATH 35a Advanced Calculus
MATH 39a Introduction to Combinatorics
MATH 131a Algebra I
MATH 131b Algebra II
MATH 180b Topics in Combinatorics
Awards and Honors
NSF research grant: "Maps: beyond boundaries" (2013)
Olivier Bernardi and Alejandro H. Morales. "Counting trees using symmetries." Journal of Combinatorial Theory, Series A 123. 1 (2014): 104–122.
Olivier Bernardi, Rosena R.X. Du, Alejandro H. Morales and Richard P. Stanley. "Separation probabilities for products of permutations." Combinatorics, Probability and Computing 23. 2 (2014).
Olivier Bernardi, Gwendal Collet and Eric Fusy. "A bijection for plane graphs and its applications." ANALCO, Portland USA. January 2014.
Bernardi, Olivier. "A Short proof of Rayleigh's Theorem with extensions.." The American Mathematical Monthly 120. 4 (2013): 362-364.
Olivier Bernardi and Alejandro Morales. "Bijections and symmetries for the factorizations of the long cycle." Advances in Applied Mathematics 50. (2013): 702-722.
Olivier Bernardi and Eric Fusy. "Schnyder decompositions for regular plane graphs and application to drawing." Algorithmica 62. 3 (2012): 1159-1197.
Olivier Bernardi and Eric Fusy. "Unified bijections for maps with prescribed degrees and girth." Journal of Combinatorial Theory - Series A 119. 6 (2012): 1351-1387.
Bernardi, Olivier, Juanjo Rue. "Enumerating simplicial decompositions of surfaces with boundaries." European Journal of Combinatorics 33. 4 (2012): 302-325. | {"url":"http://www.brandeis.edu/facultyguide/person.html?emplid=483b85abfb8ca29e3bc3b8021d054cfbebcb7d9f","timestamp":"2014-04-16T22:05:44Z","content_type":null,"content_length":"11484","record_id":"<urn:uuid:f276fbe1-3754-4425-8f4e-ad3003956d24>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |
i need to solve equation
use equation i cant understand plz
Here, $$\lambda$$ is a repeated eigenvalue for the linear system $$\frac{d\,\mathbf{X}}{dt} = A\mathbf{X}$$ so if $$\mathbf{X}_1(t) = (\mathbf{w} + t\mathbf{v})e^{\lambda t}$$ satisfies the linear system of equations, then \[ \frac{d\, \mathbf{X}_1}{d t} = \frac{d\, }{dt} \left( (\mathbf{w} + t\mathbf{v})e^{\lambda t} \right) = \frac{d\, }{dt} \left( \mathbf{w}e^{\lambda t} + t\mathbf{v}e^{\lambda t} \right) = \lambda\mathbf{w}e^{\lambda t} + \mathbf{v}e^{\lambda t} + \lambda t \mathbf{v}e^{\lambda t} = A\mathbf{X}_1 \] So, by evaluating the matrix-vector multiplication on the far right-hand side, and factoring, we find \[ e^{\lambda t}(\lambda \mathbf{w} + \mathbf{v}) + te^{\lambda t}(\lambda \mathbf{v}) = A(\mathbf{w} + t\mathbf{v})e^{\lambda t} = e^{\lambda t}(A\mathbf{w}) + te^{\lambda t}(A\mathbf{v}). \] By equating like terms, \[ \lambda \mathbf{w} + \mathbf{v} = A\mathbf{w} \text{ and } \lambda \mathbf{v} = A\mathbf{v}.\] From the first equation we can solve for the desired \[ \mathbf{v} = A\mathbf{w} - \lambda \mathbf{w} = (A - \lambda I)\mathbf{w}, \] so \[ \mathbf{v} = (A - \lambda I)\mathbf{w} \] as desired. I hope this helps!
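The relation v = (A − λI)w can be sanity-checked numerically. A small sketch (pure Python; the defective matrix below is a made-up example, not from the question):

```python
def matvec(M, x):
    """2x2 matrix times 2-vector."""
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

lam = 2.0
A = [[2.0, 1.0],          # a defective matrix: eigenvalue 2 repeated,
     [0.0, 2.0]]          # but only a one-dimensional eigenspace

# Take a generalized eigenvector w and form v = (A - lam*I) w.
w = [0.0, 1.0]
A_minus = [[A[0][0] - lam, A[0][1]],
           [A[1][0], A[1][1] - lam]]
v = matvec(A_minus, w)    # gives v = [1.0, 0.0]

# v is a genuine eigenvector: (A - lam*I) v = 0 ...
assert matvec(A_minus, v) == [0.0, 0.0]
# ... and w satisfies A w = lam*w + v, the first "equating like terms" relation.
assert matvec(A, w) == [lam * w[0] + v[0], lam * w[1] + v[1]]
```

Both identities from the derivation hold exactly for this pair (w, v), which is the check that X1(t) = (w + tv)e^{λt} really solves the system.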
| {"url":"http://openstudy.com/updates/5086d7a9e4b0d317738c9059","timestamp":"2014-04-17T07:12:35Z","content_type":null,"content_length":"63477","record_id":"<urn:uuid:f68f904c-1f2d-4b3b-b5ae-c122ec0a44b7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Atomic, Molecular and Laser Physics
Instructor: Dr. [Sabieh Anwar] Office hours: will be announced shortly.
Teaching Fellow: [Shahid Sattar] Office hours: Tuesday, Thursday (11:00 am to 1:30 pm)
Textbooks: There is no single prescribed textbook. Use lecture notes and the supplementary books provided in the course outline below.
Click here for the course outline.
Here is the weblink for the same course I taught in Fall 2010.
• Spin spectroscopy
• Zeeman Hamiltonian
• NMR and ESR spectra
• Spin dynamics inside a magnetic field
• Bloch sphere and evolution of spin states
• Rotation operator and Rabi flopping
• Rotating wave approximation
• On-resonance and off-resonance effects
• Quadrature detection
Suggested Reading: Spin Dynamics by M. Levitt, Ch. 1-5
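The Rabi flopping and rotating-frame ideas in the list above can be explored numerically. A minimal sketch (pure Python; on-resonance driving, ħ = 1, and the particular Rabi frequency are my choices for illustration) integrates the two-level Schrödinger equation and compares against the analytic result sin²(Ωt/2):

```python
import math

def excited_population(omega, t, steps=4000):
    """Integrate the on-resonance rotating-frame Schrodinger equation
    i dc1/dt = (omega/2) c2,  i dc2/dt = (omega/2) c1   (hbar = 1)
    with c = (1, 0) at t = 0, using RK4; return |c2(t)|^2."""
    def deriv(a, b):
        return -0.5j * omega * b, -0.5j * omega * a
    c1, c2 = 1.0 + 0j, 0.0 + 0j
    h = t / steps
    for _ in range(steps):
        k1a, k1b = deriv(c1, c2)
        k2a, k2b = deriv(c1 + 0.5 * h * k1a, c2 + 0.5 * h * k1b)
        k3a, k3b = deriv(c1 + 0.5 * h * k2a, c2 + 0.5 * h * k2b)
        k4a, k4b = deriv(c1 + h * k3a, c2 + h * k3b)
        c1 += h / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        c2 += h / 6 * (k1b + 2 * k2b + 2 * k3b + k4b)
    return abs(c2) ** 2

# Rabi flopping: the excited-state population follows sin^2(omega*t/2).
omega = 2 * math.pi          # hypothetical Rabi frequency
for t in (0.25, 0.5, 1.0):
    assert abs(excited_population(omega, t) - math.sin(omega * t / 2) ** 2) < 1e-6
```

The numerical populations match the analytic Rabi formula: full inversion at Ωt = π and complete return to the ground state after a full 2π flop.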
Homework No. 1
Spin Dynamics
HW Solution
Interaction of radiation and matter
• Using time-dependent perturbation theory to obtain transition rates between levels
• Concepts of: Rabi flopping, transition matrix elements, electric dipole operator, rotating wave approximation
• Fermi's golden rule
• Interaction of matter with thermal radiation and comparison with coherent Rabi oscillations
• Einstein's A and B coefficients
• Lineshapes and broadening mechanisms
• Quantum statistics and indistinguishability
• Bose-Einstein condensation
• Laser cooling
• Magneto optical trap (MOT)
Suggested Reading: Introduction to Quantum Mechanics by Griffiths, Ch. 9, Atomic Physics by C.J. Foot, Ch 7
Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles by Eisberg and Resnick, Ch. 11
Practice questions
Transitions between energy levels
Homework No. 2
Transitions between energy levels, Due date: Oct. 29, 2011
Homework No. 3
Bose-Einstein condensation, Due date: Nov. 11, 2011
HW Solution
Midterm Examination
Question paper
Formula sheet
Solution set
Atomic Spectroscopy: fine structure of hydrogen atom
• Fine structure: relativistic correction
• Fine structure: spin-orbit interaction. Review articles for self-study: "On the classical analysis of spin-orbit coupling in hydrogenlike atoms" (Am. J. Phys. 78 (4), 2010); "The Thomas precession factor in spin-orbit interaction" (Am. J. Phys. 72 (1), 2004)
• Good quantum numbers
• The quantum number j and fine structure of the energy levels
• Selection rules for the magnetic quantum number
• Selection rules for the orbital quantum number
• Grotrian diagram
• Angular momentum and helicity of the photon. Review article for self-study, a justification of selection rules: "Spectroscopic selection rules: The role of photon states" (J. Chem. Ed., Vol. 76, No. 9, 1999)
Suggested Reading: Introduction to Quantum Mechanics by Griffiths, Section 6.3.
Homework No. 4
Atomic spectroscopy and fine structure of hydrogen, Due date: Nov. 23, 2011
HW Solution
Atoms inside magnetic fields and role of nucleus in the spectrum
• Zeeman effect: strong and weak field cases
• Using degenerate perturbation theory for intermediate magnetic fields
• Hyperfine interaction
• Here is a chart showing Clebsch-Gordan coefficients. Useful for the application of degenerate perturbation theory for determining the Zeeman shifted energy levels.
• Determination of Zeeman shifted states for n=2: PDF version of a Mathematica file. Email me to obtain the Mathematica notebook.
Suggested Reading: Introduction to Quantum Mechanics by Griffiths, Sections 6.4 and 6.5.
Homework No. 5
Zeeman effect and hyperfine interaction
HW Solution
Molecular spectroscopy
• Rotational spectroscopy
• Vibrational spectroscopy
• Raman scattering
• Fluorescence
Final Examination from the year 2010 (Practice questions)
The first four questions are relevant to this year.
Question paper
Solution set | {"url":"http://physlab.lums.edu.pk/index.php/Atomic%2C_Molecular_and_Laser_Physics","timestamp":"2014-04-19T19:43:29Z","content_type":null,"content_length":"20326","record_id":"<urn:uuid:af2b6c5c-eac4-42ee-b240-12127b5220ab>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lawndale, CA Precalculus Tutor
Find a Lawndale, CA Precalculus Tutor
...I began programming in high school, so the first advanced math that I did was discrete math (using Knuth's book called Discrete Mathematics). I have also participated in high school math
competitions (ie AIME) and a college math competition (the Putnam) for several years, and in both cases the ma...
28 Subjects: including precalculus, chemistry, Spanish, physics
...I also help students understand practical applications of the material so they understand its importance and want to learn it. I am a well-rounded tutor - I understand technical details AND
can explain them. I love to see students succeed and appreciate that I can be part of the process.
18 Subjects: including precalculus, chemistry, calculus, algebra 2
...I am a graduate of both the Massachusetts Institute of Technology and University of Southern California. My passion for learning and teaching so many subjects comes from my love of
nanotechnology which is multi-disciplinary by nature. I have taken the SAT, SAT II, GRE, and several AP exams and am very familiar with standardized testing.
42 Subjects: including precalculus, reading, Spanish, chemistry
...When learning from me however, the time is yours and we can focus on nothing but your weak points, getting you caught up fast and prepared for your next lecture or test. I have a degree in
Mathematics from UCLA, but that is not really important. I understand that being good at math and being good at TEACHING math are two different things.
14 Subjects: including precalculus, calculus, geometry, statistics
...As a lifelong learner with a distinct passion for teaching and a great interest in helping students succeed, I believe that I will be an effective tutor for you or your child. I recently moved
to Los Angeles from my hometown of Ann Arbor, Michigan, and I am currently pursuing a master’s degree a...
28 Subjects: including precalculus, reading, calculus, English
Lawndale, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Lawndale_CA_Precalculus_tutors.php","timestamp":"2014-04-18T23:47:45Z","content_type":null,"content_length":"24456","record_id":"<urn:uuid:fd6415b1-a207-4090-9ce8-0bf96d755aac>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
math 342 assignments
Formal Assignments:
1. Midterm
Informal Assignments:
1. Build your own hyperbolic Plane! You can use any one of the four methods explained in appendix B of Experiencing Geometry. The crocheted hyperbolic planes are by far the nicest, but the hardest
to make. This assignment will be graded on the quality of your finished product.
2. Straightness & Euclid's Postulates on the Hyperbolic Plane
3. Paper Folding Proof & Problem 8.2
4. Henderson Problem 7.2
5. Computer Lab 1, electronic component | {"url":"http://www.unco.edu/nhs/mathsci/ClassSites/hoppercourse/math342/assign.html","timestamp":"2014-04-21T15:10:14Z","content_type":null,"content_length":"2169","record_id":"<urn:uuid:db9df766-c9d7-4bfe-9148-3bf5fb889609>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum: Teacher2Teacher - Q&A #10489
From: Chris <bell.christine54@googlemail.com>
To: Teacher2Teacher Public Discussion
Date: 2010-09-23 17:26:04
Subject: Discrete vs continuous data
Hi There!
I ended up on this site by pure coincidence. Actually, I was looking
for something else.
Anyway, I'm a bit baffled by the explanation. I wonder why amounts of things on bar charts (cyclists cycling away - how many left?) and miles vs time are treated the same. Seems like apples and pears to me. I've been out of
school for a while, but my work involves a bit of maths.
Let's say you have two systems recording music.
Discrete: your system samples the music. That means it takes sort of
snapshots. For simplicity, let's say every second. That means you have
a distinct value for every second. Don't join the dots. (saying that,
in further maths you might want to do just that ... but let's leave that for now.)
| |
| | |
| | |
---------------------------> t in seconds
Continuous: your system is continuously recording the music. In this
case you would have a curve as you have a value for every moment ..
err all the time ... or let's say you have no gaps.
In this case discrete is digital and continuous is analog. There are
all sorts of other topics in discrete maths, e.g. set theory.
I don't understand why amounts of things on a bar charts are mixed
with something vs time. Surely that must be confusing for the kids.
Wouldn't it make sense to teach that separately from anything on
cartesian coordinates?
Understanding the whole concept is quite important when you move to
calculus. I didn't go to school in the UK or US and charts with
amounts in bars (you know, like amount of kids in school on Monday,
Tuesday,...) were never taught with cartesian coordinates (x and y or
something vs time). They were two very different things and I never
confused them.
Actually, another thing I don't get is why 'discrete calculus' isn't
taught before continuous. Once you work with discrete values, the
concept of integrating and differentiating really is a doddle.
A great way to teach discrete maths could be to use programming since you can only input discrete values. Once I had to model/program a real life maths problem, the whole seemingly abstract thing became so much clearer.
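The music-sampling picture translates directly into code. A sketch (pure Python; the frequency and sampling interval are arbitrary choices) that also shows the "discrete calculus" point, with a forward difference standing in for the derivative:

```python
import math

f = 1.0      # signal frequency in Hz (arbitrary)
dt = 0.01    # sampling interval in seconds (arbitrary)

# Discrete "snapshots" of a continuous signal: distinct values, no dots joined.
samples = [math.sin(2 * math.pi * f * k * dt) for k in range(101)]

# Discrete calculus: the forward difference (x[k+1] - x[k]) / dt approximates
# the continuous derivative, which for sin(2*pi*f*t) is 2*pi*f*cos(2*pi*f*t).
diffs = [(samples[k + 1] - samples[k]) / dt for k in range(100)]
assert abs(diffs[0] - 2 * math.pi * f) < 0.05
```

The list of samples is the discrete record (a bar per time step), while the underlying sine is the continuous signal; shrinking dt moves the forward difference toward the true derivative.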
Ask Teacher2Teacher a new question | {"url":"http://mathforum.org/t2t/discuss/message.taco?thread=10489&n=10","timestamp":"2014-04-17T04:20:00Z","content_type":null,"content_length":"6635","record_id":"<urn:uuid:f218b05a-76d7-4dbb-bede-83895901ded3>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00283-ip-10-147-4-33.ec2.internal.warc.gz"} |
Simonton Calculus Tutor
...I'm a very patient person when it comes to teaching others, and I am motivated for others to learn the material. I'm a very motivated and driven individual with a positive outlook on life. I
love teaching others subjects that I have an in depth understanding in.
9 Subjects: including calculus, physics, geometry, algebra 1
...I tutor on your time. Tutoring is a full time thing for me. This is what I do and the only thing I do.
24 Subjects: including calculus, chemistry, physics, geometry
...Those problems that appear complicated are just a series of simple concepts woven together. I deconstruct the problem for students so they can understand the simple problems woven in to what
"appears" to be a complicated problem. Typically, if math is complicated it's because the method of instruction is complicated.
16 Subjects: including calculus, reading, GRE, algebra 1
...I was one of those typical procrastinators when I first started high school, but after my freshman year I was able to organize myself and do well in every class, but Spanish. As I started
tutoring in senior year I found out how fun it was and how well I got along with others. So I decided whenever I had spare time I would try and be a tutor.
32 Subjects: including calculus, English, chemistry, reading
...I have been fortunate to work with students from a broad variety of countries and cultures. I have worked with international students from the age of five up to seventy as a teacher of English
as a Second Language. I have taught test preparation for both college and graduate school admissions exams, including ACT, SAT and TOEFL preparation.
22 Subjects: including calculus, reading, English, grammar | {"url":"http://www.purplemath.com/simonton_calculus_tutors.php","timestamp":"2014-04-19T02:25:41Z","content_type":null,"content_length":"23664","record_id":"<urn:uuid:164bb426-42d6-4674-8afc-dab14c4444cc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milwaukee, WI Algebra Tutor
Find a Milwaukee, WI Algebra Tutor
...Got an 11 on physical science section of MCAT. Scored a 96 when I took it 6 years ago after high school. I tutor all the biology courses at UW-Milwaukee.
46 Subjects: including algebra 2, algebra 1, chemistry, statistics
...I, myself, am a visual learner. If I don't see a math problem worked out step-by-step, it's hard for me to understand how to navigate from the question to the correct answer. My greatest
strength, as a tutor, is that I'm able to relate to the student; I'm able to get down to their level and understand what it is they are not understanding.
32 Subjects: including algebra 1, algebra 2, English, reading
...I am a new WyzAnt tutor, and do not have an office to work out of. The best way to reach me would be through phone and email, however I would also be willing to travel to your school if need
be. I have a 3 hour cancellation policy, and make an effort to be flexible with students.
2 Subjects: including algebra 1, physical science
...I hold a Bachelor in Science with majors in physics and chemistry, and have taught and tutored physics in a high school setting for 13 years. I have a Bachelor in Science with major in Physics
and Chemistry. I have 13 years in teaching and tutoring physics and Chemistry in High School.
12 Subjects: including algebra 1, physics, algebra 2, chemistry
...Upon my return to Milwaukee four years ago, I worked at Luther Burbank in Milwaukee as a teacher's aide, assisting all grades from K-5 to 8th grade with Math and English. I am still involved
with my church's children and youth programs. I love sports and have been involved in it my entire life.
20 Subjects: including algebra 2, algebra 1, writing, reading
Related Milwaukee, WI Tutors
Milwaukee, WI Accounting Tutors
Milwaukee, WI ACT Tutors
Milwaukee, WI Algebra Tutors
Milwaukee, WI Algebra 2 Tutors
Milwaukee, WI Calculus Tutors
Milwaukee, WI Geometry Tutors
Milwaukee, WI Math Tutors
Milwaukee, WI Prealgebra Tutors
Milwaukee, WI Precalculus Tutors
Milwaukee, WI SAT Tutors
Milwaukee, WI SAT Math Tutors
Milwaukee, WI Science Tutors
Milwaukee, WI Statistics Tutors
Milwaukee, WI Trigonometry Tutors
Nearby Cities With algebra Tutor
Brookfield, WI algebra Tutors
Brown Deer, WI algebra Tutors
Cudahy algebra Tutors
Glendale, WI algebra Tutors
Greenfield, WI algebra Tutors
Menomonee Falls algebra Tutors
New Berlin, WI algebra Tutors
Racine, WI algebra Tutors
River Hills, WI algebra Tutors
Saint Francis, WI algebra Tutors
Shorewood, WI algebra Tutors
Wauwatosa, WI algebra Tutors
West Allis, WI algebra Tutors
West Milwaukee, WI algebra Tutors
Whitefish Bay, WI algebra Tutors | {"url":"http://www.purplemath.com/milwaukee_wi_algebra_tutors.php","timestamp":"2014-04-21T05:02:06Z","content_type":null,"content_length":"23919","record_id":"<urn:uuid:33c71772-f473-4d17-80cb-cc671162af6e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Properties of Equality
So how do we really know when two things are equal? We rely primarily upon properties, and we aren't talking real estate. We mean the properties of reflexivity, symmetry, and transitivity. These play
a special role in geometry, so they are extra-important to remember.
The reflexive property states that A = A. That's some deep stuff, man.
Think of the reflexive property as the reflexible property. Take a contortionist, for example. He'll be a contortionist if he's standing up or if he's sitting on his own head. It doesn't matter how
flexible he is, he'll still be a contortionist.
The reflexive property may seem obvious (and it is) but it's still useful, especially when dealing with geometrical figures. And it's way easier than folding yourself into a ball.
The symmetric property states that if A = B, then B = A. (Did you notice that it's a conditional statement, too?) This is mainly useful for reorganizing our expressions on the page, since it lets us
flip statements across the equal sign.
The transitive property states that if A = B and B = C, then A = C. This ranks up there with reflexivity in how often it's used in geometry. To remember it, just build a little train that looks like
A = B = C and conclude that the engine equals the caboose.
Okay, so that doesn't happen in real life, but it's a good way to remember the transitive property. In fact, you can build even longer trains like A = B = C = D, and conclude that A = D. Don't
believe us? We can prove it (with a proof, no less).
Sample Problem
Claim: If A = B and B = C and C = D, then A = D.
Proof: If we know A = B and B = C, we can conclude by the transitive property that A = C. If we also know C = D, then we have both A = C and C = D. One more use of the transitive property will
finally give us A = D.
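That little chain argument is exactly the kind of thing a proof assistant can check. A sketch in Lean 4 (the variable names are hypothetical), chaining `Eq.trans` twice:

```lean
example (A B C D : Nat) (h1 : A = B) (h2 : B = C) (h3 : C = D) : A = D :=
  h1.trans (h2.trans h3)
```

Each `trans` is one use of the transitive property, matching the two steps in the proof above.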
There's also the substitution property of equality. It says that if you know two things are equal, you can substitute one for another. Simple enough, right?
We used this property a lot in algebra. Sometimes we would solve for x, and then go back and substitute that number for x to figure out y. Remember that? The substitution property deserves a big
thank you card.
Here's a handy list. (We know all these properties have ridiculously technical sounding names, but it's what they're called and we're stuck with it.)
• Reflexive Property: A = A.
• Symmetric Property: if A = B, then B = A.
• Transitive Property: if A = B and B = C, then A = C.
• Substitution Property: if A = B and p(A) is true, then p(B) is true. Here, p(A) is just any statement that has A in it, and p(B) is what you get when you replace A with B. | {"url":"http://www.shmoop.com/logic-proof/equality-properties.html","timestamp":"2014-04-17T21:52:36Z","content_type":null,"content_length":"37801","record_id":"<urn:uuid:97a8a026-69f9-49e9-8ca5-983682ec0400>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Showing a subset does not form a subgroup (explanation wanted)
October 10th 2009, 08:57 AM #1
Showing a subset does not form a subgroup (explanation wanted)
I'm not sure how to show that a subset does not form a subgroup. Here is the example:
Let $G=GL_2(<b>R</b>)$. Show that the subset S of G defined by
$<br /> S={[<br /> a b<br /> c d<br /> ]|b=c}<br />$
of symmetric 2x2 matrices does not form a subgroup of G.
I'm not sure where to start. I understand the three conditions:
i)ab is an element of S
ii) e is an element of S
iii)a^-1 is an element of S
But i'm not used to physically working with matrices to prove these. How do I show that at least one of these is not satisfied for S?
You can't use HTML tags inside LaTex. Use [ math ]G= GL_2(\bold{R})[ /math ] (without the spaces) to get $G= GL_2(\bold{R})$.
Show that the subset S of G defined by
$<br /> S={[<br /> a b<br /> c d<br /> ]|b=c}<br />$
use "[ math ]\begin{bmatrix}a & b \\ c & d\end{bmatrix}[ math ]" to get " $\begin{bmatrix}a & b \\ c & d\end{bmatrix}$"
of symmetric 2x2 matrices deos not form a subgroup of G.
I'm not sure where to start. I understand the three conditions:
i)ab is an element of S
ii) e is an element of S
iii)a^-1 is an element of S
But i'm not used to physically working with matrices to prove these. How do I show that at least one of these is not satisfied for S?
As for (ii), the identity is $\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}$ wjhich is a symmetric matrix.
As for (iii), the inverse of $\begin{bmatrix} a & b \\ b & c\end{bmatrix}$, a "general" symmetric matrix, can be shown to be $\frac{1}{ac - b^2}\begin{bmatrix}c & -b \\ -b & a\end{bmatrix}$, also
a symmetric matrix.
So I recommend you focus on (i). Is the product of two symmetric matrices symmetric? What is $\begin{bmatrix} a & b \\ b & c\end{bmatrix}\begin{bmatrix}d & e \\ e & f\end{bmatrix}$?
Pardon me, I was away all day.
To answer your i),
$\begin{bmatrix}da+eb & ea+fb \\ db+ec & eb+fc\end{bmatrix}$
Can I say that this does not satisfy i) since the product of two symmetric matrices is not a symmetric matrix?
Does the question imply that my symmetric matrices should be abbd and accd? Since b=c, wouldn't this imply a symmetric product matrix? Or am I completely on the wrong page on this one...
Last edited by elninio; October 10th 2009 at 07:04 PM.
Yes, that's the whole point. Since, in general, $ea+fb \ne db+ec$, the set of symmetric matrices is not closed under multiplication.
Does the question imply that my symmetric matrices should be abbd and accd? Since b=c, wouldnt this imply a symetric product matrix? Or am I completely on the wrong page on this one...
I'm not sure I understand your question. Being a symmetric matrix only means that the number in "first column, second row" is the same as the number in "second column, first row" for that
particular matrix. It does NOT imply that all symmetric matrices have those same numbers.
More specifically, both $\begin{bmatrix}2 & 3 \\ 3 & 1\end{bmatrix}$ and $\begin{bmatrix}3 & 2 \\ 2 & 1\end{bmatrix}$ are symmetric matrices and their product is $\begin{bmatrix}2 & 3 \\ 3 & 1\end{bmatrix}\begin{bmatrix}3 & 2 \\ 2 & 1\end{bmatrix}= \begin{bmatrix}12 & 7 \\ 11 & 7\end{bmatrix}$ which is NOT a symmetric matrix.
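That counterexample can be multiplied out mechanically to confirm the failure of closure. A quick sketch (pure Python, hand-rolled 2x2 multiply):

```python
def matmul2(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 3], [3, 1]]   # symmetric
B = [[3, 2], [2, 1]]   # symmetric
P = matmul2(A, B)
assert P == [[12, 7], [11, 7]]
# The product fails the symmetry condition b = c of the set S:
assert P[0][1] != P[1][0]
```

Since P is not symmetric, condition (i) fails, so S is not a subgroup of G.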
October 10th 2009, 09:59 AM #2
MHF Contributor
Apr 2005
October 10th 2009, 06:46 PM #3
October 11th 2009, 07:03 AM #4
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/advanced-algebra/107172-showing-subset-does-not-form-subgroup-explaination-wanted.html","timestamp":"2014-04-20T13:35:57Z","content_type":null,"content_length":"48604","record_id":"<urn:uuid:b2026445-664e-4c63-b50c-ef4237ebce2d>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Salary calculation
I want to develop a salary calculator based of basic salary, length of service & fixed annual increment.
I have a candidate holding an MBA degree & the basic salary for MBA holders is USD 2000. If he has 5 years' experience, I want a formula that calculates the basic salary for him, considering the annual increment to be 12%. To make it easier for you:
attracting candidate has 1 year experience
= 2000 + 12%(2000) = USD 2240
attracting candidate has 2 years experience
= 2240 + 12%(2240) = USD 2508.8
attracting candidate has 3 years experience
= 2508.8 + 12%(2508.8) = USD 2809.85
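The pattern in those worked examples is just compound growth, basic × 1.12^years. A quick sketch (Python; the function name is mine):

```python
def basic_salary(base, years, increment=0.12):
    """Offer for a candidate with `years` of experience: the fixed annual
    increment compounds on the growing base, as in the worked examples."""
    return base * (1 + increment) ** years

# Matches the hand-worked figures above (USD, MBA base 2000):
assert round(basic_salary(2000, 1), 2) == 2240.00
assert round(basic_salary(2000, 2), 2) == 2508.80
assert abs(basic_salary(2000, 3) - 2809.856) < 0.001
```

In a spreadsheet, the same idea would be a formula along the lines of =2000*1.12^A1 with the years of experience in A1 (assuming the USD 2000 base and 12% increment).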
Is there any solution for this so I can enter the number of years & the basic salary only?
Thanks in advance. | {"url":"http://www.knowexcel.com/view/1411258-salary-calculation.html","timestamp":"2014-04-16T13:06:34Z","content_type":null,"content_length":"63877","record_id":"<urn:uuid:d9bbbfe0-2e62-4c8f-87f7-09ead171976f>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Could someone please check my math on this question involving equivalent rates?
August 20th 2012, 08:40 PM #1
Aug 2012
**Funny side note: While typing this up, as a question, half way through I think I figured it out, which is why this is now a "check my math" post**
I'm currently getting my grade 11 college math (Functions and Applications, MCF3M) through a learn from home program. Unfortunately I ran into a little trouble with this problem, which had me stumped.
By the way, sorry if this is not a question involving equivalent rates but it's the title of the section this is in, in my book, and I wasn't sure what it is considered. :P
Nader plans to buy a used car. He can afford to pay $280 at the end of each month for three years. The best interest rate he can find is 9.8%/a, compounded monthly. For this interest rate, the
most he could spend on a vehicle is $8702.85.
Determine the amount he could spend on the purchase of a car if the interest rate is 9.8%/compounded annually.
First of all, I understand the first part about him being able to spend $8702.85. I know this can be figured out using the present value formula PV = R(1 - (1 + i)^(-n))/i, with R = 280, i = 0.098/12, and n = 36, which, rounded, gives you $8702.85.
And I think this is how you figure out the question (though like I said I'm unsure):
1) Find the (monthly?) interest rate (of the compounded annually plan?)
2)Use this new interest rate in the present value formula used earlier
Which, rounded, gives you $9954.64
Therefore Nader could spend $9954.64 on the purchase of a car.
*NOTE: I am also unsure whether I should be using the future value formula in step 2 instead of the present value formula.*
Like I said, if someone could look this over and tell me if I did it right or what I did wrong I would be EXTREMELY thankful!
- Lliam
Re: Could someone please check my math on this question involving equivalent rates?
This question has been solved here: Could someone please check my math?
- Lliam
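For anyone checking the arithmetic later: the standard present value of an ordinary annuity formula reproduces the book's $8702.85, and converting the annually-compounded 9.8% into an equivalent monthly rate via (1 + i)^12 = 1.098 gives the second figure below. (The second number is just what this method yields; it differs from the $9954.64 in the post, so compare against the linked solution.)

```python
def present_value(payment, i, n):
    """Present value of an ordinary annuity: n payments of `payment`,
    periodic interest rate i."""
    return payment * (1 - (1 + i) ** -n) / i

# 9.8%/a compounded monthly: the monthly rate is simply 0.098/12.
pv_monthly = present_value(280, 0.098 / 12, 36)
print(round(pv_monthly, 2))  # 8702.85

# 9.8%/a compounded annually: equivalent monthly rate i with (1+i)**12 = 1.098.
i_eq = 1.098 ** (1 / 12) - 1
pv_equiv = present_value(280, i_eq, 36)
print(round(pv_equiv, 2))  # noticeably less than the $9954.64 in the post
```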
Cerebellum Corporation Teaching Systems Physics Module 2: Vectors and Velocity DVD
Teaching Systems Physics Module 2: Vectors and Velocity is a state-standards, curriculum-based, complete lesson plan in a box designed to capture your students' attention. It includes DVD video content, a teacher's guide, quizzes, handouts, and classroom materials. Cerebellum introduces physics students to vectors and one-dimensional kinematics, including displacement, velocity, and acceleration. An understanding of vectors (an arrow that expresses both a quantity and a direction) is critical to the understanding of physics and engineering topics in general. Dot-product and cross-product multiplication of vectors is explained. This module also covers kinematics, the study of motion in a straight line, including velocity and acceleration. These are familiar concepts to most students, as we are used to spending time in cars. This module expands that intuitive knowledge into more rigorous physics thinking so as to build a sound foundation for the work to come. This DVD should be considered a prerequisite to modules 3, 4, and 5. Vectors can be an intimidating topic, yet it is presented in a student-friendly fashion.
Physics Module 2: Vectors and Velocity covers:
Vector multiplication: dot-product, cross-product
One-dimensional kinematics
Displacement and velocity
Grade Level: 11-12. 26 minutes.
Counting Unique Rational Numbers
Date: 11/16/2008 at 14:32:50
From: Jonathan
Subject: unique rationals p/q with p and q <= n
I'm stumped on a question in number theory that has turned out to be
more difficult than I'd expected. Here it is:
How many unique simple forms of rational numbers are there of the
form p/q, where p and q are non-zero whole numbers less than or equal
to n? For example, 1/2 and 2/4 have the same simple form, so they are
not considered unique. The answer should be a function of n.
It's obviously connected to factorization and hence prime numbers.
That makes it difficult.
A similar type of question is: how many primes are less than or equal
to n? A standard approximation is n/ln(n).
Starting from this, one guess is that the answer is approximately n *
n/ln(n). But I think I've shown that it's bigger than that.
Another intriguing finding is that the answer can be written as the sum
k1*n/1 + k2*n/2 + k3*n/3 + k4*n/4 + k5*n/5 + ... + kn*n/n
This is the harmonic series with each term multiplied by a unique
constant shown here as k1, k2, etc. But the constants are a bizarre
sequence related to prime factorization, and hence they're very
difficult to predict.
Each k sub i is equal to i minus the number of whole numbers less
than or equal to i that share common factors with i. E.g., for i=6,
the numbers 2,3,4 and 6 share common factors, hence k sub 6 = 6-4 = 2.
One way to approach it would be to write a computer program (or use
Mathematica) to compute the answer for the first, say, ten thousand
values of n. Then try to plot it and compare to some possible
solutions and see if it seems to converge on something.
Another reasonable guess might be:
n * (n - n/ln(n)) = n^2 - n^2/ln(n)
Has this been done before? If you can find a reference, I would be
eternally grateful. This problem has been obsessing me.
Date: 11/17/2008 at 15:39:18
From: Doctor Vogler
Subject: Re: unique rationals p/q with p and q <= n
Hi Jonathan,
Thanks for writing to Dr. Math. That's a very interesting question.
I would suggest analyzing it as follows:
There are, of course, n^2 pairs of positive integers (p, q) with p and
q not larger than n. Of those floor(n/2)^2 of them will have both p
and q divisible by 2. So we should subtract that from the initial
n^2. (The floor function, also known as the greatest integer
function, means to round DOWN to the nearest integer.)
Wikipedia: Floor Function
Similarly, there are floor(n/3)^2 pairs with p and q both divisible by
3. We should similarly count floor(n/r)^2 for every prime r. But
notice that we have also double-counted the pairs with p and q both
divisible by 6, so we should add those back in, so as to only subtract
them once. Continuing this analysis, we end up with the result of the
Inclusion-Exclusion Principle
Wikipedia: Inclusion-Exclusion Principle
namely that the number you want is
f(n) = n^2 - sum floor(n/r)^2 + sum floor(n/rs)^2
- sum floor(n/rst)^2 + ...,
where the first sum is over all primes r, the second over all pairs of
distinct primes (r, s), and so on. In fact, you can write this much
more succinctly using the Mobius mu function
Wikipedia: Moebius Function
f(n) = sum mu(k) * floor(n/k)^2
where the sum is over all positive integers k (from 1 to infinity, as usual).
When n is not too large, you can compute this sum explicitly by
computing the value of mu(k) (or looking it up in a table) for all k
up to n. (For k > n, floor(n/k) = 0, so all terms with k > n are zero.)
When n is large, so that an exact computation is not very feasible, a
simple approximation pays big: If you remove the floor function that
rounds to an integer (which causes the mu(k) = 1 terms to get larger
by a fraction and the mu(k) = -1 terms to get smaller by a fraction),
then you get the approximation
sum_{k=1}^{oo} mu(k) * (n/k)^2 = n^2 * product_{primes p} (1 - p^-2) = n^2 * 6/pi^2
where the last product is over all primes p.
See also the last section of
Wikipedia: Coprime
In fact, we can also bound the difference. We can prove that (writing
abs for absolute value and int for an integral)
abs(f(n) - n^2 * 6/pi^2)

= abs( sum_{k=1}^{oo} mu(k)*floor(n/k)^2 - sum_{k=1}^{oo} mu(k)*(n/k)^2 )

<= sum_{k=1}^{oo} [ (n/k)^2 - floor(n/k)^2 ]

< sum_{k=1}^{n} 2n/k + sum_{k=n+1}^{oo} (n/k)^2

(for k <= n, (n/k)^2 - floor(n/k)^2 = (n/k - floor(n/k))*(n/k + floor(n/k)) < 1*(2n/k), and for k > n the floor term is 0)

< 2n*(1 + ln n) + int_{n}^{oo} (n/x)^2 dx

(since sum_{k=1}^{n} 1/k < 1 + ln n, and sum_{k=n+1}^{oo} (n/k)^2 < int_{n}^{oo} (n/x)^2 dx)

= 2n*(1 + ln n) + n.

This means that f(n) equals n^2 * 6/pi^2 plus an error which is smaller than about 2n*ln n. (Note that when n is large, n*ln n is a lot smaller than n^2.) So that means that f(n) is approximately 60.8% of n^2.
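The exact formula is easy to test numerically. Below is a sketch; the linear sieve used to tabulate the Möbius function is a standard construction, not something from the thread:

```python
from math import pi

def mobius_upto(n):
    """mu(0..n) via a linear sieve on smallest prime factors."""
    mu = [1] * (n + 1)
    spf = [0] * (n + 1)          # smallest prime factor; 0 = not yet marked
    primes = []
    for i in range(2, n + 1):
        if spf[i] == 0:          # i is prime
            spf[i] = i
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if p > spf[i] or i * p > n:
                break
            spf[i * p] = p
            # a repeated prime factor forces mu = 0; otherwise flip the sign
            mu[i * p] = 0 if i % p == 0 else -mu[i]
    return mu

def f(n):
    """Count pairs (p, q) with 1 <= p, q <= n and gcd(p, q) = 1,
    via f(n) = sum_k mu(k) * floor(n/k)^2."""
    mu = mobius_upto(n)
    return sum(mu[k] * (n // k) ** 2 for k in range(1, n + 1))

print(f(4))                                      # 11 (easy to check by hand)
print(f(1000), round(6 / pi ** 2 * 1000 ** 2))   # the two agree to within ~0.1%
```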
If you have any questions about this or need more help, please write
back and show me what you have been able to do, and I will try to
offer further suggestions.
- Doctor Vogler, The Math Forum
Date: 11/17/2008 at 17:54:03
From: Jonathan
Subject: Thank you (unique rationals p/q with p and q <= n)
Thanks for pointing me to the definition of coprime. I understand
Embeddings of spaces of holomorphic functions of bounded type
Ansemil, José María M. and Aron, Richard M. and Ponte, Socorro (1992) Embeddings of spaces of holomorphic functions of bounded type. Journal of the London Mathematical Society. Second Series, 46 (3).
pp. 482-490. ISSN 0024-6107
Official URL: http://jlms.oxfordjournals.org/content/s2-46/3/482.full.pdf+html
Let U be an open subset of a complex locally convex space E, let F be a closed subspace of E, and let π : E → E/F be the canonical quotient mapping. In this paper we study the induced mapping π*, taking f ∈ H_b(π(U)) to f ∘ π ∈ H_b(U), where H_b(V) denotes the space of holomorphic functions of bounded type on an open set V. We prove that this mapping is an embedding when E is a Fréchet-Schwartz space, and that it is not an embedding for certain subspaces F of every Fréchet-Montel, not Schwartz, space. We provide several examples in the case where E is a Banach space to illustrate the sharpness of our results.
Item Type: Article
Uncontrolled Keywords: space of holomorphic functions of bounded type on an open set; embedding; Fréchet-Schwartz space
Subjects: Sciences > Mathematics > Topology
ID Code: 16817
Deposited On: 23 Oct 2012 08:20
Last Modified: 09 Dec 2013 17:45
Baseball Stats
To explore data sets and statistics in baseball.
At the middle-school level, students can build on previous experience to delve into statistics in greater detail. Students should be focused on the entire process, including formulating key
questions; collecting and organizing data; representing the data using graphs and summary statistics; analyzing the data; making conjectures; and communicating statistical information in a meaningful
and convincing way.
In this lesson, students will use baseball data available on the Internet to develop an understanding of the different ways in which data can be analyzed. First, they will practice selecting data to
perform calculations in response to pre-formulated questions. Then they will use the data to formulate and answer their own questions.
Distribute a copy of the 500 Homer-Club chart.
Then ask these questions:
• Which player took the most number of at bats to hit 500 home runs?
• Which player took the least number of at bats to hit 500 home runs?
Review the concepts of mean, median, and mode. Then ask each student to calculate the mean, median, and mode using the CNN/SI data in the following columns: total number of home runs, age of player,
and total number of at bats.
Have students use their Baseball Stats student esheet to go to MLB No-Hitters, on the ESPN website. Tell them that they will use information from this site to answer questions about baseball
statistics. They should use the data from 1970 to the present.
Students should answer these questions:
• What is the average score of the no-hitters pitched since 1970?
• Which scores appear most often in the data?
• What is the mean, median, and mode?
• Which year had most no hitters?
• Which League has had most no hitters?
• What is the ratio of perfect games to no-hitters?
After students have answered the questions, have the class discuss the data. Ask students to draw some conclusions from the data table and have the group provide feedback as to whether or not the
conclusion can be supported by the data. Ask if they can make any predictions about future no-hitters based on the data.
Divide the class into groups and assign one of the problems below to each group. These problems are based on statistical information about Minor League teams. Tell each group that they will report
their findings to the rest of the class, including the method used to solve the problem. Pass out the Baseball Stats student sheet for each student to fill out as he/she is working with the group.
Students will use data found in Stats on the MiLB.com site. You can have students copy the data from the website, or you can print out the charts from the page and give copies to each group. The
student esheet will direct students to the site.
Have groups answer these questions:
• Which Pacific Coast League team had the best batting average in 1998?
• Which Pacific Coast League team had the best batting average in 1997?
• Which Pacific Coast League team had the best batting average in 1996?
• Which International League team had the best batting average in 1998?
• Which International League team had the best batting average in 1997?
• Which International League team had the best batting average in 1996?
• Which American Association League team had the best batting average in 1997?
• Which American Association League team had the best batting average in 1996?
There are at least two methods that the groups might use to figure out the batting averages for the teams. One method is to add all of the batting averages in the first column and divide by the
number of players. Another method would be to add all of the hits (in the 4th column) and all of the at-bats (in the 2nd column) and divide the total number of hits by the total number of at-bats.
After students have calculated their answers and presented the information, have each group select one team and calculate the team batting average using the other method. Then, compare the results.
Ask the class to discuss which method of computing the average would be most accurate and why. Also, ask the class to discuss whether the data from previous years would still be applicable.
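The difference between the two methods is easy to see with made-up numbers (the three players below are invented purely for illustration):

```python
# Hypothetical players as (hits, at_bats) pairs.
players = [(50, 100), (10, 20), (30, 200)]

# Method 1: average the individual batting averages.
avg_of_avgs = sum(h / ab for h, ab in players) / len(players)

# Method 2: total hits divided by total at-bats.
team_avg = sum(h for h, _ in players) / sum(ab for _, ab in players)

print(round(avg_of_avgs, 3))  # 0.383
print(round(team_avg, 3))     # 0.281
```

Method 2 gives every at-bat equal weight, which is why it is usually treated as the team average; Method 1 overweights players with few at-bats.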
Finally, have students use their student esheet to go to the Minor League Archives site again. Working in groups again, students will select data from the website and devise questions that they think
can be answered using the data. Each group should then solve their problems and write the answers on a separate sheet of paper. Then, the groups should exchange and solve each other's problems.
First, have the original groups compare their solutions to the problems to those of the other group. Each group should evaluate the solution, and if it is different from their own, explain if it is a
better answer or if it is merely a different form of the same answer.
Second, have each student write a one paragraph response to the following problem: If a batter has a .400 batting average and gets a hit the first time at bat in a game, what are the odds of his
getting a hit in his second at bat? His third? His fourth? Will this change depending on which part of the season it is?
The Chance Database website contains materials designed help teach a college-level Chance course or a more standard introductory probability or statistics course. This site contains interesting
background information for teachers, but also contains links to datasets that you can adapt for middle school lessons.
The Data and Story Library is an online library of data files and stories that illustrate the use of basic statistics methods.
ESPN.com is a good source for more statistical data about a variety of sports including football, tennis, golf, soccer, and so on.
Census Bureau Population Topics and Household Economic Topics contains surveys, reports, and tables on a wide variety of topics.
HELPPP ME STUDY! solve algebraically for a and b and check: 3a+5b=4 4a+3b=-2
You can use Matrix method for solving these two equations simultaneously
i only know substitution, graphing, and elimination method and those are the only methods i can use.
try eliminating either a or b
If you're looking for the point of intersection or the solution to those two then that's helpful if not, sorry, i have no clue what to do next ..
i need to find the value of a and b, not find the solution. like get a or b by itself.
how about let a = 0 or something like that...
not working :(
Here's the plan; try to "get a in terms of b" (or "b in terms of a"). Really, I mean try to solve for a, so that it equals something that involves b (or solve for b so that it equals something that involves a).
yeah, i know. but how do i do that?
You can look at either equation, and solve for either variable. It's up to you!
It's all fair game in this problem. So pick an equation and pick a variable to solve for!
ok, 3a+5b=4. let's solve for a.
Okay, you start! I'll watch.
Unless you want to workon it together now; we can do that.
work on*
yeah, let's work on it together.
Alright. When equations aren't too complex, a great starting point is getting the terms containing the variable you're solving for, a, on the opposite side of all the other terms. So look at 3a+5b=4. Which term (containing a) do we want on its own side?
3a= -5b+4
I agree! And it's with 5b, which we need to get rid of. And it appears you know how to do that! Good! So now a is still stuck with something; it's being multiplied by 3! So can you fix that, too?
\[a= \frac{ 5b+4 }{ 3 }\]
close! You forgot your sign of 5b! You had it right before! I'm guessing you were distracted by Open Study's fancy equation thing, but I don't know.
so a=(-5b+4)/3
oopssss. i meant -5b*
\[a=\frac{-5b+4}{3}\] Haha, that's fine! So we solved for a in that one equation, 3a+5b=4. a is always (-5b+4)/3. So we can substitute it into another a's place. Try it in the other equation, 4a+3b=-2. The a will be replaced and the only variable in the equation will be b. Then we can solve for b!
So, start by substitution!
Make sure you type (-20b+16)/3 + 3b = -2 ! I see you factored correctly, but the way you wrote it it didn't look like it was all divided by 3, though it really was! :)
So\[\frac{-20b+16}{3}+3b = -2\]
multiply by 3?
Yeah, I'd do that!
Just so that left-most b isn't divided by it.
Common mistake! Don't worry! You just have to remember to multiply each entire side by 3. That way you're doing the same thing to each side, so that the two new sides will still be equal.
i multiplied by 3 to -20b + 16 so that canceled the 3 out, and then i multiplied the -2 by 3, giving -6. what i do wrong?
You have to multiply 3b by 3 as well!
it becomes 9b!
So -20b+16+9b=-6 :)
I'm sure you'll see it soon.
ohhh. wow, yeah. b=2?
-20b+16+9b=-6 -20b+9b=-22 -11b=-22 22=11b 22/11=b 2=b Yeah!
I had to work it out to check, but we got the same thing.
yay :D thanks eric :D yur awesome :D
Sorry for the wait, Open Study glitched :P Recap, we solved for "a in terms of b" and we solved for b. Now we know b and can now solve for a!
My pleasure! Thank you for doing your part! You did great!
\[a=\frac{-5b+4}{3}=\frac{-5(2)+4}{3}\] and compute... finale!
thanks :D
i just hope i do good on the quiz tomorrow...
a=-10+4/3 a=-6/3 a=-2
My pleasure. Did you get a = 2?
Best of luck! Tag me in any other questions you have, or you can ask them here. But the most important thing in these and many other math problems is planning! Knowing what you will do. This technique we used is good for this situation.
Haha!! Caught me! -2 ! :D
Serves me right for trying to do it in my head.
My responses are out of order. Open Study is scrambled, it seems.
yeah. open study is being really laggy today. if i can just finish up my science homework, ill bring up some more questions to study with. thanks again :D and bye for now.
I'm glad it's not just me! No problem. Good luck with everything!
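The whole solution from the thread, condensed into a few lines (exact fractions avoid any rounding worries):

```python
from fractions import Fraction

# Solve 3a + 5b = 4 and 4a + 3b = -2 by substitution, as in the discussion.
# From the first equation: a = (-5b + 4)/3.
# Substituting into the second and multiplying both sides by 3:
#   -20b + 16 + 9b = -6  ->  -11b = -22  ->  b = 2.
b = Fraction(-22, -11)
a = (-5 * b + 4) / 3

print(a, b)  # -2 2
# Check both original equations:
assert 3 * a + 5 * b == 4
assert 4 * a + 3 * b == -2
```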
separable space
Separable spaces
A topological space is separable if it has a countable dense subset.
To be explicit, $X$ is separable if there exists an infinite sequence $a\colon \mathbb{N} \to X$ such that, given any point $b$ in $X$ and any neighbourhood $U$ of $b$, we have $a_i \in U$ for some
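A standard illustrative example (our addition, not part of the page above): the real line is separable.

```latex
% \mathbb{R} with its usual topology is separable:
% \mathbb{Q} is countable, and it is dense, since any neighbourhood U of
% b \in \mathbb{R} contains an interval (b - \epsilon, b + \epsilon),
% which contains a rational number.
\mathbb{Q} \subset \mathbb{R} \text{ is countable and dense}
\;\Longrightarrow\; \mathbb{R} \text{ is separable.}
```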
A second-countable space is separable and first-countable, but the converse need not hold (see Steen & Seebach, Example 51).
Many results in analysis are easiest for separable spaces. This is particularly true if one wishes to avoid using strong forms of the axiom of choice or to be predicative over the natural numbers.
Bayesian Inverse Reinforcement Learning
Deepak Ramachandran, Eyal Amir
Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is
motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert). In this paper
we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find
solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous
heuristic-based approaches.
URL: http://magma.cs.uiuc.edu/deepak/IJCAI-RamachandranD1739.pdf
An Analysis of the First Proofs of the Heine-Borel Theorem - Lebesgue's Proof
Lebesgue's Proof
In 1904, Lebesgue published his version of the theorem [14], which he said was due to Borel.
To compare the two numbers m[e], m[i], we will use a theorem attributed to M. Borel:
If one has a family of intervals Δ such that any point on an interval (a,b), including a and b, is interior to at least one of Δ, there exists a family formed of a finite number of intervals Δ
and that has the same property [any point of (a,b) is interior to one of them].
Note that where Lebesgue wrote (a,b) for a closed and bounded interval, we would write [a,b]. Unlike his predecessors, Lebesgue assumed the least upper bound property as his characterization of completeness.
In the passage shown below, Lebesgue started by presenting a new definition - that if [a,x] can be covered by a finite number of subintervals, then x is reached. In his notation, if x is reached,
then so are all points between a and x. If x is not reached, then neither are any of the points between x and b (because if there were a y between x and b that was reached, then [a,y] would be
covered by a finite number of subintervals, and so would [a,x]).
Let (á,β) be one of the intervals Δ containing a, the property to demonstrate is evident for the interval (a,x), if x is contained between á and β; I want to say that this interval may be covered
with the help of a finite number of intervals Δ, which I express in saying that the point x is reached. It must be demonstrated that b is reached. If x is reached, all the points of (a,x) are
[reached]; if x is not reached, none of the points of (x,b) are [reached].
He assumed that b is not reached (else the proof is done), then defined x[0] to be the “first point not reached” or the “last point reached”. In modern notation, he defined x[0] to be the greatest
lower bound of the set \[X=\{x\in\left[a,b\right]\,\vert\,x\,\,{\rm is}\,\,{\rm not}\,\,{\rm reached}\}.\] This set is nonempty and bounded, and therefore has a greatest lower bound.
Now x[0] is contained in some interval, which he called (α[1],β[1]). In the following passage, he then chose two points x[1] and x[2] satisfying α[1] < x[1] < x[0] < x[2] < β[1]. By the definition of x[0] he saw that x[1] is reached and x[2] is not reached. Because x[1] is reached, [a, x[1]] is covered by a finite number of intervals. If we take that collection and append the interval (α[1],β[1]) we get a finite collection that covers x[2]. This is a contradiction. Therefore b must have been reached.
Let x[1] be a point of (α[1],x[0]), x[2] a point of (x[0],β[1]); x[1] is reached by assumption, and the intervals Δ, finite in number, which are used to reach it, plus the interval (α[1], β[1]), allow x[2] > x[0] to be reached; x[0] is neither the last point reached, nor the last not reached; therefore b is reached (^1).
In his footnote, Lebesgue explained Borel’s contributions. He mentioned that Borel required that the covering be countable, and noted that this may sometimes be adequate. However, he felt that the
general theorem would be more useful.
(1) M. Borel gave, in his Thesis and in his Lessons on the theory of functions, two demonstrations of this theorem. These demonstrations essentially suppose that the set of intervals Δ are
countable; this suffices in some applications; there is however interest in demonstrating the theorem of the text. For example, for the applications that I made in my Thesis of M. Borel’s
theorem, it was necessary that he demonstrated for a set of intervals Δ having the power of the continuum.
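For reference, the whole argument compresses into a few lines of modern notation (this restatement is ours, not Lebesgue's; it uses the supremum of the reached points, which equals his greatest lower bound of the unreached ones):

```latex
\text{Let } x_0 = \sup\{\, x \in [a,b] : [a,x] \text{ admits a finite subcover} \,\}.
\text{Choose a covering interval } (\alpha_1, \beta_1) \ni x_0
\text{ and points } \alpha_1 < x_1 < x_0 < x_2 < \beta_1 \quad (\text{possible if } x_0 < b).
\text{A finite subcover of } [a, x_1] \text{ together with } (\alpha_1, \beta_1)
\text{ covers } [a, x_2], \text{ contradicting } x_2 > x_0;
\text{hence } x_0 = b, \text{ and the same construction shows } [a,b]
\text{ itself has a finite subcover.}
```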
Finally we give our overview of Lebesgue’s proof.
• Completeness in the form of existence of the supremum for every non-empty bounded set will be required to carry out this proof.
• This proof is very short and is particularly easy to follow. In fact, starting the proof may lead to an “ah-ha!” moment where the students can complete the necessary steps.
• It appears that those textbooks that don’t use the “divide and conquer” technique of Cousin do use Lebesgue’s method. This proof may integrate particularly nicely into those courses.
• This technique of proof is very useful, and appears, for example, in the intermediate value theorem. If students have not already seen it, it is likely that they will.
• This proof works just as well for countable as uncountable covers.
• The proof is non-constructive. There is no way that we can ascertain the finite covering by working through the proof.
This is the one! The proof is thoroughly modern and simple to follow. In comparison, all previous arguments are cumbersome and overly complicated. It is no wonder that many people choose to attach
Lebesgue’s name to Borel’s when referencing the theorem. Certainly this proof should be presented in any real analysis course, and probably in many others!
Henri Lebesgue (1875-1941) (Convergence Portrait Gallery) | {"url":"http://www.maa.org/publications/periodicals/convergence/an-analysis-of-the-first-proofs-of-the-heine-borel-theorem-lebesgues-proof","timestamp":"2014-04-16T10:56:55Z","content_type":null,"content_length":"106261","record_id":"<urn:uuid:fcac52ef-2741-4135-ba49-1eda4e46ac7b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Practical Foundations of Mathematics
by Paul Taylor
Publisher: Cambridge University Press 1999
ISBN/ASIN: 0521631076
ISBN-13: 9780521631075
Number of pages: 588
Practical Foundations of Mathematics explains the basis of mathematical reasoning both in pure mathematics itself (algebra and topology in particular) and in computer science. In addition to the
formal logic, this volume examines the relationship between computer languages and "plain English" mathematical proofs. The book introduces the reader to discrete mathematics, reasoning, and
categorical logic.
| {"url":"http://www.e-booksdirectory.com/details.php?ebook=1897","timestamp":"2014-04-16T21:52:34Z","content_type":null,"content_length":"8515","record_id":"<urn:uuid:96b15877-2ae6-4e8b-a194-017a18fb099d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00200-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vector Question
May 7th 2010, 08:10 PM #1
For any vectors v and w, consider the following function of scalar t:
q(t) = (v + t w) · (v + t w).
(1) Explain why q(t) is greater than or equal to 0 for all real t.
(2) Expand q(t) as a quadratic polynomial in t using the dot product properties.
Should this look like this?
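Not part of the original thread, but both parts of the question can be sanity-checked numerically. The sketch below (random vectors chosen just for illustration) confirms that q(t) is a squared length, hence nonnegative, and that it matches the quadratic expansion q(t) = v·v + 2t(v·w) + t²(w·w):

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.normal(size=3), rng.normal(size=3)

def q(t):
    # q(t) = (v + t w) . (v + t w), i.e. the squared length |v + t w|^2
    u = v + t * w
    return u @ u

for t in (-2.0, 0.0, 1.5):
    # Bilinearity of the dot product gives the quadratic expansion in t.
    expanded = v @ v + 2 * t * (v @ w) + t**2 * (w @ w)
    assert q(t) >= 0            # part (1): a squared length is nonnegative
    assert np.isclose(q(t), expanded)  # part (2): expansion agrees
```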
| {"url":"http://mathhelpforum.com/calculus/143627-vector-question.html","timestamp":"2014-04-17T11:58:12Z","content_type":null,"content_length":"31803","record_id":"<urn:uuid:be55589e-175a-4147-8a25-3649a7c793d2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00275-ip-10-147-4-33.ec2.internal.warc.gz"} |
Belgian wins Norway's $1 million Abel math prize
[Photo caption: Undated handout picture, made available by the Norwegian Academy of Science on Wednesday, March 20, 2013, of Pierre Deligne, winner of the 2013 Abel Prize. (AP Photo/Cliff Moore, The Norwegian Academy of Science / NTB scanpix)]
Belgian-born Pierre Deligne has won this year's $1-million Abel Prize in mathematics for his contributions to algebraic geometry and their "transformative impact on number theory, representation
theory and related fields."
The Norwegian mathematics award committee says the 68-year-old professor of mathematics has excelled in finding connections between various fields of mathematics, leading to several important discoveries.
Deligne is professor emeritus at the Institute of Advanced Study in Princeton, New Jersey. He arrived in Princeton from the Institut des Hautes Études Scientifiques at Bures-sur-Yvette near Paris,
where he was appointed its youngest ever permanent member in 1970.
Deligne, who has several mathematical concepts named after him, has written some 100 papers on mathematics. He is an honorary member of the Moscow Mathematical Society and the London Mathematical Society.
More information: www.abelprize.no/ | {"url":"http://phys.org/news/2013-03-belgian-norway-million-abel-math.html","timestamp":"2014-04-17T04:46:34Z","content_type":null,"content_length":"64171","record_id":"<urn:uuid:5f8d7864-33c7-4ff4-a1e9-5e9119680240>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] shipman's challenge: the best defense
catarina dutilh cdutilhnovaes at yahoo.com
Sun Jan 13 03:51:23 EST 2008
From: Vladimir Sazonov:
"Adequate formalization" is not the same as
"adequate translation" from one natural language to another (or from a
natural language to formal) as it might be thought.
I would like to hear more on why it is so (I'm not intrinsically opposed to this view, but I think arguments are required to back this claim).
If I understand Vladimir's views correctly, informal mathematical intuitions are not mathematics properly speaking. Therefore, for him there is no such thing as 'formalizing' ordinary mathematics, since ordinary mathematics, as he understands it, is already thoroughly formalized. In other words, there is nothing to be formalized in the first place. This is on the one hand an empirical claim (which can be assessed by simply looking at mathematical textbooks and other sources of canonical, 'normal' mathematics in the Kuhnian sense), but on the other hand it is a conceptual issue of demarcation. What is to count as mathematics? I am not a practicing mathematician myself, but I know many mathematicians who are much more lax in their understanding of what is to count as mathematics. So while Vladimir's is an interesting view of mathematics as an enterprise, it is a contentious view that is (I think) far from being unanimously accepted among mathematicians as well
as philosophers of mathematics. His considerations on the Formalization Thesis (basically that it is trivially true) all hinge on this particular view of mathematics.
Vladimir, if it's not too much to ask, I would appreciate if you could comment on the following passage from a very interesting paper by Wang (in Mind 1955):
"Consider, for example, an oral sketch of a newly discovered proof, an abstract designed to communicate just the basic idea of the proof, an article presenting the proof to people working on related problems, a textbook formulation of the same, and a presentation of it after the manner of Principia Mathematica. The proof gets more and more thoroughly formalized as we go from an earlier version to a later. […] Each step of it should be easier to follow since it involves no jumps." (Wang 1955, 227)
One could take the formalization even further, and, from the presentation of the proof ‘after the manner of Principia Mathematica’, construct an even more detailed presentation of it using lambda-calculus, for example.
The question is then: at which point does the proof (argument) in question become a *mathematical* proof as Vladimir understands them?
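[As an aside, not part of the original post: Wang's spectrum is easy to exhibit in a modern proof assistant such as Lean, where the same statement can be given as a tactic script close to textbook style, or as the fully explicit lambda-calculus term that sits at the far end of the progression he describes.]

```lean
-- Textbook-style presentation: a tactic script that leaves the
-- low-level inferences to the elaborator.
example (m n : Nat) : m + n = n + m := by
  exact Nat.add_comm m n

-- "Principia-style" presentation: the same proof as an explicit
-- lambda-calculus term, with no jumps left to fill in.
example : ∀ (m n : Nat), m + n = n + m :=
  fun m n => Nat.add_comm m n
```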
| {"url":"http://www.cs.nyu.edu/pipermail/fom/2008-January/012502.html","timestamp":"2014-04-20T23:27:31Z","content_type":null,"content_length":"5214","record_id":"<urn:uuid:7cda57d1-c4aa-4eda-b492-a00245ee9cd0>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |